Posted to commits@heron.apache.org by GitBox <gi...@apache.org> on 2021/09/09 15:51:22 UTC

[GitHub] [incubator-heron] surahman opened a new pull request #3710: [HERON-3707] ConfigMap Pod Template Support

surahman opened a new pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710


   **_Issue #3707:_**
   
   A preliminary WIP PR to add Pod Template support to Heron, similar to how it is handled in [Spark](https://spark.apache.org/docs/latest/running-on-kubernetes.html#pod-template). The `ConfigMap` name should be passed in via the CLI flag `--config-property`.
   
   The solution is following what has been done by [Spark](https://github.com/apache/spark/blob/de59e01aa4853ef951da080c0d1908d53d133ebe/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/PodTemplateConfigMapStep.scala#L39-L54) to produce the following YAML nodes:
   ```YAML
     volumes:
       - name: pod-template-name  # from <POD_TEMPLATE_VOLUME>.
         configMap:
           name: configmap-name  # from <configmapName>.
           items:
           - key: pod-template-key  # from <POD_TEMPLATE_KEY>.
             path: executor-pod-spec-template-file-name # from <EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME>.
   ```
   
   The first phase of this PR is generating the above basic YAML nodes in a `V1PodSpec`. The objective is to call the `createConfigMapVolumeMount` method from within `createStatefulSet` to add the `V1PodTemplateSpec` somewhere here:
   https://github.com/apache/incubator-heron/blob/b2a4e828569c0445a758ca9f3780600a0855e665/heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java#L386-L394
   
   Remaining on the TODO list: getting the `configmapName` from the CLI, and determining when an actual Pod Template has been supplied. The check for this will happen somewhere in the code block above.
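   The real implementation would build these nodes with the Kubernetes client-java models (`V1Volume`, `V1ConfigMapVolumeSource`, `V1KeyToPath`). As a library-free sketch of the node structure being targeted, with the constant values standing in for `<POD_TEMPLATE_VOLUME>`, `<POD_TEMPLATE_KEY>`, and `<EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME>` (all names here are illustrative, not the PR's actual code):
   
   ```java
   import java.util.ArrayList;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;
   
   public final class PodTemplateVolumeSketch {
     // Hypothetical constants standing in for <POD_TEMPLATE_VOLUME>,
     // <POD_TEMPLATE_KEY>, and <EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME>.
     static final String POD_TEMPLATE_VOLUME = "pod-template-name";
     static final String POD_TEMPLATE_KEY = "pod-template-key";
     static final String EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME =
         "executor-pod-spec-template-file-name";
   
     /** Builds a map mirroring the `volumes:` YAML node for a given ConfigMap name. */
     static Map<String, Object> configMapVolume(String configMapName) {
       Map<String, Object> item = new LinkedHashMap<>();
       item.put("key", POD_TEMPLATE_KEY);
       item.put("path", EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME);
   
       Map<String, Object> configMap = new LinkedHashMap<>();
       configMap.put("name", configMapName);
       List<Map<String, Object>> items = new ArrayList<>();
       items.add(item);
       configMap.put("items", items);
   
       Map<String, Object> volume = new LinkedHashMap<>();
       volume.put("name", POD_TEMPLATE_VOLUME);
       volume.put("configMap", configMap);
       return volume;
     }
   
     public static void main(String[] args) {
       System.out.println(configMapVolume("configmap-name"));
     }
   }
   ```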


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922278498


   I have completed the [logic](https://github.com/surahman/incubator-heron/commit/e3d72c9a81b7c359794c3c1a7c121ad24646e234). I have linked the API references below. The API reference for `listNamespacedConfigMap` states that only the namespace is a required parameter, but I am running into issues when providing just the namespace. Any guidance on what to set the other parameters to would be greatly appreciated; I am working on figuring them out.
   
   We need to iterate over the `ConfigMap`s in `KubernetesConstants.DEFAULT_NAMESPACE` and probe each for a key matching the specified `podTemplateConfigMapName`. The template data is stored as a String, which the V1 YAML parser must convert into a `V1PodTemplateSpec`. The routine returns a default-constructed (empty) `V1PodTemplateSpec` if no Pod Template name is set via `--config-property`.
   
   [`listNamespacedConfigMap`](https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/apis/CoreV1Api.html#listNamespacedConfigMap-java.lang.String-java.lang.String-java.lang.Boolean-java.lang.String-java.lang.String-java.lang.String-java.lang.Integer-java.lang.String-java.lang.String-java.lang.Integer-java.lang.Boolean-) API.
   
   [`V1ConfigMap`](https://javadoc.io/static/io.kubernetes/client-java-api/7.0.0/io/kubernetes/client/openapi/models/V1ConfigMap.html) API.
   
   **Edit:**  I have merged the dev branch to bring the code changes into view for review. Testing private methods is rather complicated and contrived. We may need to change the access level of methods that require testing from private to protected.
   
   Running a style check locally prior to commits is not flagging any issues. We need style files that are compatible with the [CheckStyle](https://github.com/checkstyle/checkstyle) linter and its IDE plugins:
   `tools/java/src/org/apache/bazel/checkstyle/heron_coding_style.xml`
   `tools/java/src/org/apache/bazel/checkstyle/apache_coding_style.xml`
   
   





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-941290266


   Dangling references to topologies that fail to launch are now removed from the `Topology Manager` and `Scheduler`. I am updating the `docs` as well. I think we have an RC here.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-917798710


   @joshfischer1108 I have added some tests for the exposed code but the routines in the `V1Controller` are scoped to private. Testing these routines would mean having to make them protected and then exposing them using an accessor testing class that extends the `V1Controller`. I am not sure if changing the access level for the methods in V1Controller makes sense.
   
   I have also added tests for the new Constants to ensure complete patch code coverage. I feel testing them is redundant, and that testing should be limited to routines and generated objects.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-956449189


   Thank you @joshfischer1108 I really appreciate you looking these changes over!





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932331781


   From the error, this is undoubtedly a permissions issue, and I am wondering if the `verb` actions in the role are adequate. I believe `configMapRef`s are resolved by the `kubelet`, which may have permission to access the ConfigMaps. Pods presumably do not, since they are in the concluding stages of bootstrapping, assuming the principle of least privilege. With all that said, if you have tested by manually applying a role via `kubectl`, then there might be a deeper issue.
   
   The call to `listNamespacedConfigMap` does not deviate far from the defaults, so it is not doing anything that should require special permissions. Given that the Heron API Server is able to submit a topology to K8s using the V1 API, I think it is safe to assume that it has a valid/accepted bearer token and credentials.
   
   My testing capabilities are severely restricted by system resources.
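   
   For reference, read access to ConfigMaps is granted with the `get`/`list` verbs on the `configmaps` resource. A sketch of such a Role and RoleBinding follows; all names are hypothetical, and the `default` service account is an assumption about the deployment, not what Heron actually ships:
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     namespace: default
     name: heron-apiserver-configmap-reader   # hypothetical name
   rules:
     - apiGroups: [""]
       resources: ["configmaps"]
       verbs: ["get", "list"]
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     namespace: default
     name: heron-apiserver-configmap-reader-binding  # hypothetical name
   subjects:
     - kind: ServiceAccount
       name: default   # assumption: the account the Heron API Server runs under
       namespace: default
   roleRef:
     kind: Role
     name: heron-apiserver-configmap-reader
     apiGroup: rbac.authorization.k8s.io
   ```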





[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r714944117



##########
File path: heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java
##########
@@ -386,7 +386,7 @@ private V1StatefulSet createStatefulSet(Resource containerResource, int numberOf
     statefulSetSpec.selector(selector);
 
     // create a pod template
-    final V1PodTemplateSpec podTemplateSpec = loadPodFromTemplate();
+    final V1PodTemplateSpec podTemplateSpec = new V1PodTemplateSpec();

Review comment:
       Hi @nwangtw, thank you! I have spun up a build/test Ubuntu18.04 Docker container on my machine and I get the following output with:
   
   ```bash
   INFO: Elapsed time: 198.476s, Critical Path: 9.83s
   INFO: 855 processes: 191 internal, 664 local.
   INFO: Build completed successfully, 855 total actions
   PASSED: //heron/stmgr/tests/cpp/server:stmgr_unittest (see /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/testlogs/heron/stmgr/tests/cpp/server/stmgr_unittest/test.log)
   INFO: From Testing //heron/stmgr/tests/cpp/server:stmgr_unittest
   ==================== Test output for //heron/stmgr/tests/cpp/server:stmgr_unittest:
   Current working directory (to find stmgr logs) /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/bin/heron/stmgr/tests/cpp/server/stmgr_unittest.runfiles/org_apache_heron
   Using config file heron/config/src/yaml/conf/test/test_heron_internals.yaml
   [==========] Running 11 tests from 1 test case.
   [----------] Global test environment set-up.
   [----------] 11 tests from StMgr
   [ RUN      ] StMgr.test_pplan_decode
   [       OK ] StMgr.test_pplan_decode (2011 ms)
   [ RUN      ] StMgr.test_tuple_route
   [       OK ] StMgr.test_tuple_route (2009 ms)
   [ RUN      ] StMgr.test_custom_grouping_route
   [       OK ] StMgr.test_custom_grouping_route (2008 ms)
   [ RUN      ] StMgr.test_back_pressure_instance
   [       OK ] StMgr.test_back_pressure_instance (4010 ms)
   [ RUN      ] StMgr.test_spout_death_under_backpressure
   [       OK ] StMgr.test_spout_death_under_backpressure (6149 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr
   [       OK ] StMgr.test_back_pressure_stmgr (5004 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr_reconnect
   [       OK ] StMgr.test_back_pressure_stmgr_reconnect (5033 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_new_address
   [       OK ] StMgr.test_tmanager_restart_on_new_address (5014 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_same_address
   [       OK ] StMgr.test_tmanager_restart_on_same_address (4016 ms)
   [ RUN      ] StMgr.test_metricsmgr_reconnect
   [       OK ] StMgr.test_metricsmgr_reconnect (3005 ms)
   [ RUN      ] StMgr.test_PatchPhysicalPlanWithHydratedTopology
   [       OK ] StMgr.test_PatchPhysicalPlanWithHydratedTopology (0 ms)
   [----------] 11 tests from StMgr (38260 ms total)
   [----------] Global test environment tear-down
   [==========] 11 tests from 1 test case ran. (38260 ms total)
   [  PASSED  ] 11 tests.
   ================================================================================
   //heron/stmgr/tests/cpp/server:stmgr_unittest                   (cached) PASSED in 38.3s
   
   INFO: Build completed successfully, 855 total actions
   ```
   
   Re-running with the whole suite yields the following:
   
   ```bash
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest (1/3 cached) FLAKY, failed in 2 out of 3 in 5.1s
     Stats over 3 runs: max = 5.1s, min = 0.0s, avg = 1.7s, dev = 2.4s
   ```
   
   What I can discern from the test logs is that all tests up to and including `test_custom_grouping_route` pass, which points to a timeout in `test_back_pressure_instance`:
   https://github.com/apache/incubator-heron/blob/396f2b848da0f56dcfda1d917928358133851cf5/heron/stmgr/tests/cpp/server/stmgr_unittest.cpp#L960
   
   I have refactored the code a little on the `dev` branch to make it more readable, but that should not be causing this test to fail. I shall clean up the Git commit tree and merge the changes on `dev` into the main feature branch to see if it now passes on the Travis CI build 🤞🏼.







[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-924531062


   Hi @nicknezis, could you please clarify whether this is what a `ConfigMap` with a `Pod Template` would look like, and if not could you please provide an example?
   
   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: some-config-map-name
   data:
     heron.kubernetes.pod.template.configmap.name: |
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: heron-tracker
         namespace: default
       spec:
         selector:
           matchLabels:
             app: heron-tracker
         template:
           metadata:
             labels:
               app: heron-tracker
           spec:
             containers:
               - name: heron-tracker
                 image: apache/heron:latest
                 ports:
                   - containerPort: 8888
                     name: api-port
                 command: ["sh", "-c"]
                 args:
                   - >-
                     heron-tracker
                     --type=zookeeper
                     --name=kubernetes
                     --hostport=zookeeper:2181
                     --rootpath="/heron"
                 resources:
                   requests:
                     cpu: "100m"
                     memory: "200M"
                   limits:
                     cpu: "400m"
                     memory: "512M"
   ```
   
   I am in the final stages of testing the code and need to make sure I have the correct format for the `ConfigMap`s in case I need to make tweaks. `heron.kubernetes.pod.template.configmap.name` is the key variable in the Java code, but in the actual document it will be the value associated with the key.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926950066


   It seems that Spark has a [hardcoded key](https://github.com/apache/spark/blob/ff3f3c45668364da9bd10992791b5ee9a46fea21/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Constants.scala#L86) for the ConfigMap, but I think this is because they create the ConfigMap and mount it. I like the flexibility of your suggested modification to the config value.
   
   I'm sorry that I've been a bit busy lately. I will try to review the code and test a bit tonight.
   
   Thanks again for helping add this feature. I'm already excited to add more subtle improvements to the Kubernetes scheduler. :)





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-928631544


   Build and all tests passing locally. It should be safe to ignore the flaky Travis CI build & test check. There must be a more practical way to trigger a recheck than a `git push`, although I do not believe there is any point in triggering one here.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-934007453


   I suspect what I had in the ConfigMap was being replaced with `getPodSpec()` in the code. We may need to merge the code configured aspects with anything loaded from the PodTemplate.
   
   For example from the Spark PodTemplate documentation:
   
   ```
   It is important to note that Spark is opinionated about certain pod configurations so there are values in the pod template that will always be overwritten by Spark. Therefore, users of this feature should note that specifying the pod template file only lets Spark start with a template pod instead of an empty pod during the pod-building process. For details, see the full list of pod template values that will be overwritten by spark.
   
   Pod template files can also define multiple containers. In such cases, you can use the spark properties spark.kubernetes.driver.podTemplateContainerName and spark.kubernetes.executor.podTemplateContainerName to indicate which container should be used as a basis for the driver or executor. If not specified, or if the container name is not valid, Spark will assume that the first container in the list will be the driver or executor container.
   ```
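   
   The merge semantics described above (start from the user-supplied template, then let code-configured values win) can be sketched with plain maps. The keys here are illustrative, not Heron's actual pod fields:
   
   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;
   
   public final class PodTemplateMergeSketch {
     /**
      * Starts from the user-supplied template values and overlays the
      * code-configured values, so scheduler-managed settings always win.
      */
     static Map<String, String> merge(Map<String, String> template,
                                      Map<String, String> codeConfigured) {
       Map<String, String> merged = new LinkedHashMap<>(template);
       merged.putAll(codeConfigured);  // code-configured aspects overwrite the template
       return merged;
     }
   
     public static void main(String[] args) {
       Map<String, String> template = new LinkedHashMap<>();
       template.put("image", "user/custom:1.0");   // illustrative keys
       template.put("restartPolicy", "Never");
   
       Map<String, String> heron = new LinkedHashMap<>();
       heron.put("image", "apache/heron:latest");  // the scheduler insists on its own image
   
       System.out.println(merge(template, heron));
     }
   }
   ```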









[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922278498


   I have begun the cleanup and initial [stubbing](https://github.com/surahman/incubator-heron/commit/be879d1277a2c1e0dae42fcc655efd204b8165bb). I have linked the API references below. The API reference for `listNamespacedConfigMap` states that only the namespace is a required parameter, but I am running into issues when providing just the namespace. Any guidance on what to set the other parameters to would be greatly appreciated; I am working on figuring them out.
   
   We will need to iterate over the `ConfigMap`s in `KubernetesConstants.DEFAULT_NAMESPACE` and probe each for a key matching the specified `podTemplateConfigMapName`. The template data is stored as a String, which the V1 YAML parser must convert into a `V1PodTemplateSpec`. The routine returns a default-constructed (empty) `V1PodTemplateSpec` if no Pod Template name is set via `--config-property`.
   
   [`listNamespacedConfigMap`](https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/apis/CoreV1Api.html#listNamespacedConfigMap-java.lang.String-java.lang.String-java.lang.Boolean-java.lang.String-java.lang.String-java.lang.String-java.lang.Integer-java.lang.String-java.lang.String-java.lang.Integer-java.lang.Boolean-) API.
   
   [`V1ConfigMap`](https://javadoc.io/static/io.kubernetes/client-java-api/7.0.0/io/kubernetes/client/openapi/models/V1ConfigMap.html) API.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-924531062


   Hi @nicknezis, could you please clarify whether this is what a `ConfigMap` with a `Pod Template` would look like, and if not could you please provide an example?
   
   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: some-config-map-name
   data:
     configmap-name: |
       apiVersion: apps/v1
       kind: PodTemplate
       metadata:
         name: heron-tracker
         namespace: default
       template:
         metadata:
           labels:
             app: heron-tracker
         spec:
           containers:
             - name: heron-tracker
               image: apache/heron:latest
               ports:
                 - containerPort: 8888
                   name: api-port
               resources:
                 requests:
                   cpu: "100m"
                   memory: "200M"
                 limits:
                   cpu: "400m"
                   memory: "512M"
   ```
   
   I am in the final stages of testing the code and need to make sure I have the correct format for the `ConfigMap`s in case I need to make tweaks. `heron.kubernetes.pod.template.configmap.name` is the key variable in the Java code, but in the actual document it will be the value associated with the key.





[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-917699893


   @surahman Is this ready for review?  





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926946450


   I uncovered an issue: if Pod Templates with the same target name exist in multiple `ConfigMap`s, the first one encountered will be loaded. This has the potential to be a silent error. As an alternative, I propose we implement the following protocol for naming and referencing the Pod Templates:
   `CONFIGMAP-NAME.POD-TEMPLATE-NAME`.
   
   The `CONFIGMAP-NAME` would be the name of the ConfigMap where the Pod Template, named `POD-TEMPLATE-NAME` is located. The user would concatenate the two names with a `.`. We can safely rely on `kubectl` to ensure the ConfigMap and Pod Template names are unique.
   
   This would mean creating a new protected method in `V1Controller` called `getPodTemplateLocation` which will return a `Pair` <`CONFIGMAP-NAME`, `POD-TEMPLATE-NAME`>. The string would be split on the first occurrence of a `.`, and if it is invalid an exception will be thrown.
   
   I will update the documentation being written up as soon as I have the new method completed, tested, and wired into `loadPodFromTemplate` with updated tests.
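   
   A stdlib-only sketch of the proposed `getPodTemplateLocation` parsing follows; the class wrapper and error messages are illustrative, and the real method would be a protected member of `V1Controller` returning a `Pair`:
   
   ```java
   import java.util.AbstractMap.SimpleImmutableEntry;
   import java.util.Map;
   
   public final class PodTemplateLocationSketch {
     /**
      * Splits "CONFIGMAP-NAME.POD-TEMPLATE-NAME" on the first '.' and returns
      * the pair, throwing if either half is missing. Splitting on the first
      * occurrence means a Pod Template name may itself contain dots.
      */
     static Map.Entry<String, String> getPodTemplateLocation(String configValue) {
       if (configValue == null) {
         throw new IllegalArgumentException("No Pod Template location provided");
       }
       int split = configValue.indexOf('.');
       if (split <= 0 || split == configValue.length() - 1) {
         throw new IllegalArgumentException(
             "Expected CONFIGMAP-NAME.POD-TEMPLATE-NAME, got: " + configValue);
       }
       return new SimpleImmutableEntry<>(
           configValue.substring(0, split), configValue.substring(split + 1));
     }
   
     public static void main(String[] args) {
       Map.Entry<String, String> location =
           getPodTemplateLocation("my-configmap.my-pod-template");
       System.out.println(location.getKey() + " / " + location.getValue());
     }
   }
   ```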





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-921992801


   Okay, so I am going to endeavour to break this down - please bear with me as I am still relatively new to the codebase and K8s API...
   
   ---
   
   Starting with the reference code in [Spark](https://github.com/apache/spark/blob/bbb33af2e4c90d679542298f56073a71de001fb7/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala#L96-L113), and keeping in mind that the Spark architecture is different from Heron's:
   
   This simply gets the Hadoop config from the driver/coordinator:
   
   ```scala
   val hadoopConf = SparkHadoopUtil.get.newConfiguration(conf)
   ```
   
   These two lines are going to download the remote file from the driver/coordinator and make it local to the machine. The first line [downloads](https://github.com/joan38/kubernetes-client/blob/6fbf23b7a997e572456256c4714222ea734bd845/kubernetes-client/src/com/goyeau/kubernetes/client/api/PodsApi.scala#L119-L148) the file, assuming I am looking at the correct Scala K8s API, and the second retrieves a file descriptor/handle on the downloaded file:
   
   ```scala
   val localFile = downloadFile(templateFileName, Utils.createTempDir(), conf, hadoopConf)
   val templateFile = new File(new java.net.URI(localFile).getPath)
   ```
   
   This third line then does the heavy lifting of reading the Pod Template into a Pod Config from the newly copied local file:
   
   ```scala
   val pod = kubernetesClient.pods().load(templateFile).get()
   ```
   
   The final line sets up the Spark container with the Pod Template and specified name:
   
   ```scala
   selectSparkContainer(pod, containerName)
   ```
   
   ---
   
   Moving on to what we need to do on the Heron side:
   
   1. Read the `ConfigMap` name from the `--config-property` option. I set the key for this to `heron.kubernetes.pod.template.configmap.name` with the value being the file name.
   2. Read the YAML `ConfigMap` and extract the YAML node tree which contains the Pod Template. For this, we will either need a YAML parser or a utility in the K8s Java API to do the job. I think the K8s Java API should include a utility for this; a lack thereof would be a significant oversight on their part.
   3. Create a `V1PodTemplateSpec` object using the results from step 2.
   4. Iron out permission issues during testing, should they arise.
   
   I am not familiar with the K8s API but will start digging around for a YAML config to V1 object parser, if anyone is aware of where it is please let me know. There are some suggestions [here](https://github.com/kubernetes-client/java/issues/170) and the YAML reader within K8s is found [here](https://javadoc.io/static/io.kubernetes/client-java/6.0.1/io/kubernetes/client/util/Yaml.html).
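   
   Step 1 reduces to a config lookup with an empty-template fallback, which can be sketched with stdlib types only (the class and method names are illustrative, not the PR's actual code; only the config key is from the discussion above):
   
   ```java
   import java.util.Map;
   import java.util.Optional;
   
   public final class PodTemplateLookupSketch {
     // The config key from step 1; it is supplied through --config-property.
     static final String POD_TEMPLATE_CONFIGMAP_KEY =
         "heron.kubernetes.pod.template.configmap.name";
   
     /**
      * Reads the ConfigMap name out of the submitted config properties.
      * Returns empty when the property was not supplied, in which case the
      * caller falls back to a default-constructed (empty) pod template.
      */
     static Optional<String> podTemplateConfigMapName(Map<String, String> configProperties) {
       return Optional.ofNullable(configProperties.get(POD_TEMPLATE_CONFIGMAP_KEY));
     }
   
     public static void main(String[] args) {
       System.out.println(podTemplateConfigMapName(
           Map.of(POD_TEMPLATE_CONFIGMAP_KEY, "my-pod-template-configmap")));
     }
   }
   ```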





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922103249


   My confusion seems to stem from when the K8s cluster is provided with the K8s executor pod configs. Judging from [this](https://github.com/apache/incubator-heron/blob/master/website2/docs/schedulers-k8s-by-hand.md) a config file is not submitted to the cluster for the executor Pods, but a default one is created by `createStatefulSet` and [submitted](https://github.com/apache/incubator-heron/blob/396f2b848da0f56dcfda1d917928358133851cf5/heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java#L126) with the job.
   
   If that is the case (and that has been my assumption thus far), then we would need to read in the [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) provided as a parameter to `--config-property`. We would then do a simple check around [here](https://github.com/apache/incubator-heron/blob/396f2b848da0f56dcfda1d917928358133851cf5/heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java#L386) and insert the contents of the ConfigMap into the `PodTemplateSpec`. If this is correct, would it not be simpler to declare a Pod Template file for the executor Pods, provide that as a parameter to `--config-property`, read it in using the K8s V1 YAML parser, and place it into the `PodTemplateSpec` in the `StatefulSet`?
   
   If there is a K8s config submitted (I do not have any experience with Heron on K8s), then we could locate the configs in the `Config` object, linearly probe the `V1ConfigMapList` for the name provided by `--config-property`, and insert that.
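   A minimal sketch of that linear probe, under the assumption above. `NamedConfigMap` here is a stand-in for the client's `V1ConfigMap` (whose name is reached via `getMetadata().getName()` on a `V1ConfigMapList`), so this is illustrative rather than the real API:

   ```java
   import java.util.List;
   import java.util.Optional;

   public class ConfigMapProbe {
     // Stand-in for io.kubernetes.client.openapi.models.V1ConfigMap; the real
     // type exposes its name via getMetadata().getName().
     public static final class NamedConfigMap {
       final String name;
       final String podTemplateYaml;
       public NamedConfigMap(String name, String podTemplateYaml) {
         this.name = name;
         this.podTemplateYaml = podTemplateYaml;
       }
     }

     // Linearly probe the list for the ConfigMap whose name matches the
     // value supplied via --config-property.
     public static Optional<NamedConfigMap> findByName(List<NamedConfigMap> maps, String target) {
       for (NamedConfigMap m : maps) {
         if (m.name.equals(target)) {
           return Optional.of(m);
         }
       }
       return Optional.empty();
     }

     public static void main(String[] args) {
       List<NamedConfigMap> maps = List.of(
           new NamedConfigMap("configmap-pod-template", "apiVersion: v1 ..."),
           new NamedConfigMap("unrelated-configmap", ""));
       System.out.println(findByName(maps, "configmap-pod-template").isPresent()); // true
     }
   }
   ```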





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-939633719


   I have finalised the feature. I am moving on to tracing where the `submit` command originates within `scheduler-core`, to locate where topologies should be deleted in the event of a failed submission to K8s. I shall update the documentation shortly. Input on what follows is genuinely appreciated, and we need broader testing.
   
   <details><summary>Pod Template</summary>
   
   ```yaml
   apiVersion: v1
   kind: PodTemplate
   metadata:
     name: pod-template-example
     namespace: default
   template:
     metadata:
       name: acking-pod-template-example
     spec:
       containers:
         # Executor container
         - name: executor
           securityContext:
             allowPrivilegeEscalation: false
           env:
           - name: var_one
             value: "variable one"
           - name: var_two
             value: "variable two"
           - name: var_three
             value: "variable three"
           - name: POD_NAME
             value: "MUST BE OVERWRITTEN"
           - name: HOST
             value: "REPLACED WITH ACTUAL HOST"
           ports:
           - name: overwritten
             protocol: TCP
             containerPort: 6001
           - name: tcp-port-kept
             protocol: TCP
             containerPort: 5555
           - name: udp-port-kept
             protocol: UDP
             containerPort: 5556
           volumeMounts:
           - name: shared-volume
             mountPath: /shared_volume
   
         # Sidecar container
         - name: sidecar-container
           image: alpine
           volumeMounts:
           - name: shared-volume
             mountPath: /shared_volume
   
       # Volumes
       volumes:
       - name: shared-volume
         emptyDir: {}
   ```
   
   </details>
   
   <details><summary>describe pod acking-0</summary>
   
   ```bash
   Name:           acking-0
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-748f986d6f
                   statefulset.kubernetes.io/pod-name=acking-0
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       5555/TCP, 5556/UDP, 6001/TCP, 6002/TCP, 6003/TCP, 6004/TCP, 6005/TCP, 6006/TCP, 6007/TCP, 6008/TCP, 6009/TCP
       Host Ports:  0/TCP, 0/UDP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
      ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0-5579618957728031586.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking6276c0d9-866f-4ac4-b8af-d54a9d51b3f9 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-binary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:        (v1:status.podIP)
         POD_NAME:   acking-0 (v1:metadata.name)
         var_one:    variable one
         var_three:  variable three
         var_two:    variable two
       Mounts:
         /shared_volume from shared-volume (rw)
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-csb95 (ro)
     sidecar-container:
       Image:        alpine
       Port:         <none>
       Host Port:    <none>
       Environment:  <none>
       Mounts:
         /shared_volume from shared-volume (rw)
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-csb95 (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     shared-volume:
       Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
       Medium:     
       SizeLimit:  <unset>
     kube-api-access-csb95:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Burstable
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason            Age   From               Message
     ----     ------            ----  ----               -------
     Warning  FailedScheduling  17s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   ```
   
   </details>
   
   ## Heron Configured Items in Pod Templates
   
   Heron will locate the container named `executor` in the Pod Template and customize it as outlined below. All other containers within the Pod Template will remain unchanged.
   
   ### Executor Container
   
   All metadata for the `executor` container will be overwritten by Heron. In some other cases, values from the Pod Template for the `executor` will be overwritten by Heron, as outlined below.
   
   | Name | Description | Policy |
   |---|---|---|
   | `image` | The `executor` container's image. | Overwritten by Heron using values from the config.
   | `env` | Environment variables made available within the container. The `HOST` and `POD_NAME` keys are required by Heron and are thus reserved. | Merged, with Heron's values taking precedence. Deduplication is based on `name`.
   | `ports` | Port numbers opened within the container. Some of these port numbers are required by Heron and are thus reserved; the reserved ports are defined in Heron's constants as [`6001`-`6010`]. | Merged, with Heron's values taking precedence. Deduplication is based on the `containerPort` value.
   | `limits` | Heron will attempt to load values for `cpu` and `memory` from its configs. If these values are not provided in the container's specs, Heron will insert the values from its configs. | User input takes precedence over Heron's values. This allows for per-job custom resource limits.
   | `volumeMounts` | The mount points within the `executor` container for the `volumes` available in the Pod. | Merged, with Heron's values taking precedence. Deduplication is based on the `name` value.
   | Annotation: `prometheus.io/scrape` | Flag indicating whether Prometheus logs can be scraped; set to `true`. | Values are overridden by Heron. |
   | Annotation: `prometheus.io/port` | Port address for Prometheus log scraping; set to `8080`. | Values are overridden by Heron.
   | Annotation: Pod | Pod's revision/version hash. | Automatically set.
   | Annotation: Service | Labels services can use to attach to the Pod. | Automatically set.
   | Label: `app` | Name of the application launching the Pod; set to `heron`. | Values are overridden by Heron.
   | Label: `topology` | The name of the topology provided when submitting. | User defined and supplied on the CLI.
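   The merge-with-precedence policy in the table can be sketched in plain Java. This is an illustration, not Heron's actual implementation; simple `String` maps stand in for the client's `V1EnvVar` objects, keyed by the deduplication field:

   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;

   public class EnvMergeSketch {
     // Merge user-supplied env vars with Heron's, deduplicating on the
     // variable name; Heron's values win on conflict (e.g. HOST, POD_NAME).
     public static Map<String, String> merge(Map<String, String> user, Map<String, String> heron) {
       Map<String, String> merged = new LinkedHashMap<>(user);
       merged.putAll(heron); // Heron's reserved keys overwrite user entries
       return merged;
     }

     public static void main(String[] args) {
       Map<String, String> user = new LinkedHashMap<>();
       user.put("var_one", "variable one");
       user.put("POD_NAME", "MUST BE OVERWRITTEN");

       Map<String, String> heron = new LinkedHashMap<>();
       heron.put("POD_NAME", "acking-0");
       heron.put("HOST", "10.0.0.1");

       // user's var_one survives; POD_NAME is replaced by Heron's value
       System.out.println(merge(user, heron));
     }
   }
   ```

   The same shape applies to `ports` (keyed on `containerPort`) and `volumeMounts` (keyed on `name`).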
   
   ### Pod
   
   The following items will be set in the Pod Template's `spec` by Heron.
   
   | Name | Description | Policy |
   |---|---|---|
   | `terminationGracePeriodSeconds` | Grace period to wait before shutting down the Pod after a `SIGTERM` signal; set to `0` seconds. | Values are overridden by Heron.
   | `tolerations` | Allows the Pod to remain bound to nodes with matching `taints` for the specified period before eviction. <br>  Keys:<br>`node.kubernetes.io/not-ready` <br> `node.alpha.kubernetes.io/notReady` <br> `node.alpha.kubernetes.io/unreachable` <br> Values (common):<br> `operator: "Exists"`<br> `effect: NoExecute`<br> `tolerationSeconds: 10L` | Values are overridden by Heron.
   | `containers` | Containers to be run in the executor Pods. | All `containers`, excluding the `executor`, are loaded as-is.
   | `volumes` | Volumes to be made available to the entire Pod. | Merged, with Heron's values taking precedence. Deduplication is based on the `name` value.
   | `secretVolumes` | Secrets to be mounted as volumes within the Pod. | Loaded from the Heron configs if present.





[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r728231302



##########
File path: heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/KubernetesContext.java
##########
@@ -172,6 +179,15 @@ static String getContainerVolumeMountPath(Config config) {
     return config.getStringValue(KUBERNETES_CONTAINER_VOLUME_MOUNT_PATH);
   }
 
+  public static String getPodTemplateConfigMapName(Config config) {
+    return config.getStringValue(KUBERNETES_POD_TEMPLATE_CONFIGMAP_NAME);
+  }
+
+  public static boolean getPodTemplateConfigMapDisabled(Config config) {
+    final String disabled = config.getStringValue(KUBERNETES_POD_TEMPLATE_CONFIGMAP_DISABLED);
+    return disabled != null && disabled.toLowerCase(Locale.ROOT).equals("true");

Review comment:
       Good catch, I shall effect this change.
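   For reference, a minimal sketch of the simplification, assuming the suggested change is to lean on `Boolean.parseBoolean`, which is case-insensitive and handles `null` (returning `false`), so the explicit `Locale` handling goes away:

   ```java
   public class FlagParseSketch {
     // Boolean.parseBoolean is null-safe (null -> false) and
     // case-insensitive ("TRUE", "True", "true" -> true).
     public static boolean isDisabled(String configValue) {
       return Boolean.parseBoolean(configValue);
     }

     public static void main(String[] args) {
       System.out.println(isDisabled("TRUE")); // true
       System.out.println(isDisabled(null));   // false
       System.out.println(isDisabled("no"));   // false
     }
   }
   ```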

##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));

Review comment:
       You are right, I will make the update.

##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));
+      }
+
+      try {
+        // Clear state from the Scheduler via RPC.
+        Scheduler.KillTopologyRequest killTopologyRequest = Scheduler.KillTopologyRequest
+            .newBuilder()
+            .setTopologyName(topologyName).build();
+
+        ISchedulerClient schedulerClient = new SchedulerClientFactory(config, runtime)
+            .getSchedulerClient();
+        if (!schedulerClient.killTopology(killTopologyRequest)) {
+          final String logMessage =
+              String.format("Failed to remove topology '%s' from scheduler after failed submit. "
+                  + "Please re-try the kill command.", topologyName);
+          errorMessage.append(String.format("%n%s", logMessage));
+          LOG.log(Level.SEVERE, logMessage);
+        }
+      // SUPPRESS CHECKSTYLE IllegalCatch
+      } catch (Exception ignored){
+        // The above call to clear the Scheduler may fail. This situation can be ignored.

Review comment:
       I was reluctant to add a log message because of the rather dense logging on the API server. If an error occurs during this shutdown attempt, it will be because the Scheduler was unreachable. What I can do is log a message at the `FINER` or `INFO` level.
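   A minimal sketch of what low-severity logging could look like with `java.util.logging` (names are illustrative; `FINE` sits below the default `INFO` threshold, so these messages stay out of normal logs unless verbose logging is enabled):

   ```java
   import java.util.logging.Level;
   import java.util.logging.Logger;

   public class QuietFailureLog {
     private static final Logger LOG = Logger.getLogger(QuietFailureLog.class.getName());

     public static void logIgnoredCleanupFailure(String topologyName, Exception e) {
       // FINE is below the default INFO threshold, so this is suppressed
       // under the default logging configuration.
       LOG.log(Level.FINE,
           String.format("Ignored failure clearing scheduler state for '%s'", topologyName), e);
     }

     public static boolean fineEnabled() {
       return LOG.isLoggable(Level.FINE);
     }

     public static void main(String[] args) {
       logIgnoredCleanupFailure("acking", new RuntimeException("scheduler unreachable"));
       System.out.println(fineEnabled());
     }
   }
   ```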







[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932636387


   I am not sure if you tried this but think we need to set up a [`Service Account`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) and assign it to the Heron API Server Pod. We then bind the Role to the `Service Account` [like so](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects).
   
   From [Stack Overflow](https://stackoverflow.com/questions/52995962/kubernetes-namespace-default-service-account):
   
   > 5. The default permissions for a service account don't allow it to list or modify any resources. The default service account isn't allowed to view cluster state let alone modify it in any way.
   
   **_Edit:_**
   It appears as though the `ClusterRoles` and `ServiceAccount` are in the K8s configs for the Heron API Server:
   
   * [General](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/general/apiserver.yaml)
   * [Minikube](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/minikube/apiserver.yaml)
   
   This makes life a lot easier with only the following being additionally required:
   
   <details>
     <summary>Role</summary>
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: heron-apiserver-configmap-role
     namespace: default
   rules:
   - apiGroups:
     - ""
     resources:
     - configmaps
     verbs:
     - get
     - watch
     - list
   ```
   </details>
   
   <details>
     <summary>RoleBinding</summary>
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: heron-apiserver-configmap-rolebinding
     namespace: default
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: Role
     name: heron-apiserver-configmap-role
   subjects:
   - kind: ServiceAccount
     name: heron-apiserver
     namespace: default
   ```
   </details>
   
   I think it would be safe to add these to the Heron API Server K8s configs because they are adequately restrictive. I am not sure if both a `ClusterRole` and a `Role` can be assigned at the same time; if not, we would need to aggregate into the `ClusterRole`. The `ClusterRole` has a reference to `cluster-admin`, and I believe this is why it can submit topologies. The `Role` might need to be a `ClusterRole` to support aggregation.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-929833982


   I'm slowly finding time to test. So far here are a couple of minor tweaks we will need to make:
   1. Add configmaps k8s role permissions
   ```
   rules:
   - apiGroups: 
     - ""
     resources: 
     - configmaps
     verbs: 
     - get
     - watch
     - list
   ```
   2. The `listConfigMaps` call is looking in the default namespace, but it should be using `getNamespace()` instead for that value.
   
   I'm making the edits and testing locally. Will report back with any other findings.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-957601846


   Thank you @joshfischer1108 🎆 ! There is another PR incoming sometime today or tomorrow for the CLI PVC support 😄 





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-933896528


   I suspect it might be the API version. It should be:
   ```yaml
   apiVersion: v1
   ```
   and not:
   
   ```yaml
   apiVersion: apps/v1
   ```
   
   **_Edit:_**
   
   Verifying the Pod Template using the `metadata`, `annotations`, and `labels` will be misleading:
   https://github.com/apache/incubator-heron/blob/2190502da0ad723db86a13216f5d9acd0b4c6474/heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java#L388-L394
   
   I also find [kubeval](https://kubeval.instrumenta.dev/) helpful.
   
   There are some code modifications that will be coming in soon. I need to perform the check for the Pod Template earlier, in `submit` rather than in `createStatefulSet`, to prevent orphaned/failed topologies from persisting.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-945160111


   > If my Pod Template has tolerations, will the logic currently wipe that out?
   
   The current logic will discard all user-provided `tolerations` and replace them with the Heron default values. Should we instead replace the Heron-provided defaults with the values from the Pod Template, or merge the two with Heron's values taking precedence?





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-933893882


   Hi @nicknezis, I have spun up Heron on K8s locally, but I only have 4 cores and 4 GB I can allocate. As such, the `acking` topology remains in a `Pending` state. I will test further later. The topology appears to be launching just fine, but I am unsure if the template is being used. No additional `Roles` or `RoleBindings` were required:
   
   <details>
     <summary>Config Map of Pod Template</summary>
   
   ```bash
   minikube kubectl -- get configmaps configmap-pod-template -o yaml
   ```
   
   ```yaml
   apiVersion: v1
   data:
     pod_template.yaml: |
       apiVersion: v1
       kind: PodTemplate
       metadata:
         name: pod-template-example
         namespace: default
       template:
         metadata:
           name: acking-pod-template-example
   kind: ConfigMap
   metadata:
     creationTimestamp: "2021-10-04T21:49:11Z"
     name: configmap-pod-template
     namespace: default
     resourceVersion: "1021"
     uid: da578cac-27cc-4378-8cf3-664a208fcd96
   ```
   </details>
   
   <details>
     <summary>Heron Submit</summary>
   
   ```bash
    kubernetes ~/.heron/examples/heron-api-examples.jar \
   --verbose \
   --config-property heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml \
   org.apache.heron.examples.api.AckingTopology acking
   ```
   
   ```bash
   [2021-10-04 17:54:40 -0400] [DEBUG]: Input Command Line Args: {'cluster/[role]/[env]': 'kubernetes', 'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '', 'service_url': '', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Input Command Line Args: {'cluster/[role]/[env]': 'kubernetes', 'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '', 'service_url': '', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Using cluster definition from file /home/saad/.config/heron/kubernetes/cli.yaml
   [2021-10-04 17:54:40 -0400] [DEBUG]: Processed Command Line Args: {'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '', 'service_url': 'http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit', 'cluster': 'kubernetes', 'role': 'saad', 'environ': 'default', 'submit_user': 'saad', 'deploy_mode': 'server'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Submit Args {'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '', 'service_url': 'http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit', 'cluster': 'kubernetes', 'role': 'saad', 'environ': 'default', 'submit_user': 'saad', 'deploy_mode': 'server'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Invoking class using command: `/usr/bin/java -client -Xmx1g -cp '/home/saad/.heron/examples/heron-api-examples.jar:/home/saad/.heron/lib/third_party/*' org.apache.heron.examples.api.AckingTopology acking`
   [2021-10-04 17:54:40 -0400] [DEBUG]: Heron options: {cmdline.topologydefn.tmpdirectory=/tmp/tmpvj0pjzzc,cmdline.topology.initial.state=RUNNING,cmdline.topology.role=saad,cmdline.topology.environment=default,cmdline.topology.cluster=kubernetes,cmdline.topology.file_name=/home/saad/.heron/examples/heron-api-examples.jar,cmdline.topology.class_name=org.apache.heron.examples.api.AckingTopology,cmdline.topology.submit_user=saad}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Topology config: kvs {
     key: "topology.component.rammap"
     value: "word:1073741824,exclaim1:1073741824"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.team.environment"
     serialized_value: "\254\355\000\005t\000\007default"
     type: JAVA_SERIALIZED_VALUE
   }
   kvs {
     key: "topology.container.disk"
     value: "2147483648"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.container.ram"
     value: "4294967296"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.enable.message.timeouts"
     value: "true"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.serializer.classname"
     value: "org.apache.heron.api.serializer.KryoSerializer"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.debug"
     value: "true"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.max.spout.pending"
     value: "1000000000"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.container.cpu"
     value: "3.0"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.stateful.spill.state"
     value: "false"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.name"
     value: "acking"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.team.name"
     value: "saad"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.stateful.spill.state.location"
     value: "./spilled-state/"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.component.parallelism"
     value: "1"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.stmgrs"
     value: "2"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.worker.childopts"
     value: "-XX:+HeapDumpOnOutOfMemoryError"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.reliability.mode"
     value: "ATLEAST_ONCE"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.message.timeout.secs"
     value: "10"
     type: STRING_VALUE
   }
   
   [2021-10-04 17:54:40 -0400] [DEBUG]: Component config:
   [2021-10-04 17:54:40 -0400] [DEBUG]: word => kvs {
     key: "topology.component.parallelism"
     value: "2"
     type: STRING_VALUE
   }
   
   [2021-10-04 17:54:40 -0400] [DEBUG]: exclaim1 => kvs {
     key: "topology.component.parallelism"
     value: "2"
     type: STRING_VALUE
   }
   
   [2021-10-04 17:54:40 -0400] [INFO]: Launching topology: 'acking'
   [2021-10-04 17:54:40 -0400] [INFO]: {'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '/home/saad/.heron/release.yaml', 'service_url': 'http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit', 'cluster': 'kubernetes', 'role': 'saad', 'environ': 'default', 'submit_user': 'saad', 'deploy_mode': 'server'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Starting new HTTP connection (1): localhost
   [2021-10-04 17:54:43 -0400] [DEBUG]: http://localhost:8001 "POST /api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/topologies HTTP/1.1" 201 78
   [2021-10-04 17:54:43 -0400] [INFO]: Successfully launched topology 'acking' 
   [2021-10-04 17:54:43 -0400] [DEBUG]: Elapsed time: 2.732s.
   ```
   </details>
   
   <details>
     <summary>Describe Pods</summary>
   
   ```bash
   Name:           acking-0
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-566bc767b6
                   statefulset.kubernetes.io/pod-name=acking-0
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       6002/TCP, 6003/TCP, 6001/TCP, 6009/TCP, 6008/TCP, 6004/TCP, 6006/TCP, 6007/TCP, 6005/TCP
       Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
         ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--3038063642361095900.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking56b40818-10c0-4cda-ac3a-2c27303fef23 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-b
 inary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --sh
 ell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:       (v1:status.podIP)
         POD_NAME:  acking-0 (v1:metadata.name)
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xfcf2 (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     kube-api-access-xfcf2:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Guaranteed
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason            Age   From               Message
     ----     ------            ----  ----               -------
     Warning  FailedScheduling  58s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   
   
   Name:           acking-1
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-566bc767b6
                   statefulset.kubernetes.io/pod-name=acking-1
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       6002/TCP, 6003/TCP, 6001/TCP, 6009/TCP, 6008/TCP, 6004/TCP, 6006/TCP, 6007/TCP, 6005/TCP
       Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
         ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--3038063642361095900.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking56b40818-10c0-4cda-ac3a-2c27303fef23 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-b
 inary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --sh
 ell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:       (v1:status.podIP)
         POD_NAME:  acking-1 (v1:metadata.name)
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c7vwm (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     kube-api-access-c7vwm:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Guaranteed
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason            Age   From               Message
     ----     ------            ----  ----               -------
     Warning  FailedScheduling  58s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   
   
   Name:           acking-2
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-566bc767b6
                   statefulset.kubernetes.io/pod-name=acking-2
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       6002/TCP, 6003/TCP, 6001/TCP, 6009/TCP, 6008/TCP, 6004/TCP, 6006/TCP, 6007/TCP, 6005/TCP
       Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
         ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--3038063642361095900.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking56b40818-10c0-4cda-ac3a-2c27303fef23 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-b
 inary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --sh
 ell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:       (v1:status.podIP)
         POD_NAME:  acking-2 (v1:metadata.name)
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-snvvp (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     kube-api-access-snvvp:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Guaranteed
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason            Age   From               Message
     ----     ------            ----  ----               -------
     Warning  FailedScheduling  58s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   ```
   
   </details>
   
   <details>
   <summary>Get Pods</summary>
   
   ```bash
   NAME   READY   STATUS    RESTARTS   AGE
   zk-0   0/1     Pending   0          0s
   zk-0   0/1     Pending   0          0s
   zk-0   0/1     ContainerCreating   0          0s
   zk-0   0/1     Running             0          1s
   bookie-m8p4c   0/1     Pending             0          0s
   bookie-m8p4c   0/1     Pending             0          0s
   bookie-m8p4c   0/1     Init:0/1            0          0s
   zk-0           1/1     Running             0          10s
   bookie-m8p4c   0/1     Init:0/1            0          36s
   bookie-m8p4c   0/1     PodInitializing     0          38s
   bookie-m8p4c   1/1     Running             0          39s
   heron-tracker-7559f9cb58-fr2w5   0/2     Pending             0          0s
   heron-tracker-7559f9cb58-fr2w5   0/2     Pending             0          0s
   heron-tracker-7559f9cb58-fr2w5   0/2     ContainerCreating   0          0s
   heron-tracker-7559f9cb58-fr2w5   2/2     Running             0          1s
   heron-apiserver-76cf46fc94-fdwbs   0/1     Pending             0          0s
   heron-apiserver-76cf46fc94-fdwbs   0/1     Pending             0          0s
   heron-apiserver-76cf46fc94-fdwbs   0/1     Init:0/1            0          0s
   heron-apiserver-76cf46fc94-fdwbs   0/1     PodInitializing     0          2s
   heron-apiserver-76cf46fc94-fdwbs   1/1     Running             0          3s
   acking-0                           0/1     Pending             0          0s
   acking-1                           0/1     Pending             0          0s
   acking-0                           0/1     Pending             0          0s
   acking-2                           0/1     Pending             0          0s
   acking-1                           0/1     Pending             0          0s
   acking-2                           0/1     Pending             0          0s
   ```
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-933896528


   I suspect it might be the API version. It should be:
   ```yaml
   apiVersion: v1
   ```
   and not:
   
   ```yaml
   apiVersion: apps/v1
   ```
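   
   For reference, a minimal Pod Template document that parses cleanly looks something like the following (all names here are illustrative, not from the PR):
   
   ```yaml
   apiVersion: v1
   kind: PodTemplate
   metadata:
     name: pod-template-example
   template:
     metadata:
       labels:
         example: value
     spec:
       containers:
         - name: example-container
           image: apache/heron:testbuild
   ```
   
   This is the shape of the document stored under the ConfigMap key and parsed into a `V1PodTemplateSpec`.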
   
   **_Edit:_**
   
   Verifying the Pod Template in the `metadata`, `annotations`, and `labels` will be misleading:
   https://github.com/apache/incubator-heron/blob/2190502da0ad723db86a13216f5d9acd0b4c6474/heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java#L388-L394
   
   I also find [kubeval](https://kubeval.instrumenta.dev/) helpful.
   
   Some code modifications will be coming in soon. I need to perform the check for the Pod Template earlier, in `submit` rather than in `createStatefulSet`, to prevent orphaned/failed topologies from persisting.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-925185069


   The test suite is complete. The builds keep failing with a timeout on one specific test, and I am unsure how to adjust it:
   ```bash
   //heron/stmgr/tests/cpp/server:stmgr_unittest                           TIMEOUT in 3 out of 3 in 315.0s
     Stats over 3 runs: max = 315.0s, min = 315.0s, avg = 315.0s, dev = 0.0s
   ```
   I think the test timer is too aggressive for the Travis CI build machines. The build did pass at one point earlier, but there tend to be random failures of various kinds during the builds.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-939338054


   Allowing for other containers might be good. It seems like one of the key benefits of the Pod Template feature: it allows running sidecar containers.
   
   The image name being replaced is good. We already have that exposed as a config item on the Heron API server.





[GitHub] [incubator-heron] nicknezis commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r728385601



##########
File path: deploy/kubernetes/general/apiserver.yaml
##########
@@ -95,6 +95,7 @@ spec:
               -D heron.uploader.dlog.topologies.namespace.uri=distributedlog://zookeeper:2181/heron
               -D heron.statefulstorage.classname=org.apache.heron.statefulstorage.dlog.DlogStorage
               -D heron.statefulstorage.dlog.namespace.uri=distributedlog://zookeeper:2181/heron
+            # -D heron.kubernetes.pod.template.configmap.disabled=true

Review comment:
       Perhaps we keep it, but choose a default value and uncomment it? My vote would be for `configmap.disabled=false`, as this is the default behavior in Spark. But I agree it is good to keep it present so that an admin is aware of the toggle.
   
   Another thought I just had: we might want to update any other Kubernetes deployment YAMLs for the API Server. For example, there is an equivalent file in the Helm chart. This would ideally be templatized to expose this toggle.







[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-943445404


   > I didn't realize I had the ability to push to your branch.
   
   These permissions are part of the GitHub PR workflow and apply only to the PR branch; you will not be able to push to the `dev` branch. GitHub grants itself permission on the PR branch to commit inline code suggestions, and grants the same access to assignees.
   
   > I made the Helm chart update so that should be good.
    
   Thank you 😄 
   
   > We should also make `getPodTemplateConfigMapDisabled(Config config)` return `false` by default when it isn't set so that it is consistent with the yamls.
   
   It should already do this, and the tests in `KubernetesContextTest` confirm it:
   
   ```java
       Assert.assertFalse(KubernetesContext.getPodTemplateConfigMapDisabled(config));
       Assert.assertFalse(KubernetesContext
        .getPodTemplateConfigMapDisabled(configWithPodTemplateConfigMap));
    ```
   
   





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-930428115


   A very quick attempt at setting up RBAC for K8s. We need to get the K8s API key, and I am unsure whether it is already in the `configuration` object. There is a `setSecretKeyRefs`, but that sets up environment variables for the containers.
   
   I have not wired `configureRBAC` into the `V1Controller` constructor yet; I first need to get the K8s API key set up in the routine.





[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926714865


   @surahman Would you be willing to write the documentation on how to use your contribution to Heron? E.g. add a page or two to our current Heron site explaining use cases, why people would use this, etc.?





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922278498


   I have completed the [logic](https://github.com/surahman/incubator-heron/commit/e3d72c9a81b7c359794c3c1a7c121ad24646e234) and linked the API references below. The API reference for `listNamespacedConfigMap` states that the namespace is the only required parameter, but I am running into issues when providing just the namespace. If you have any guidance on what to set the remaining parameters to, it would be greatly appreciated; I am working on figuring them out.
   
   We need to iterate over the `ConfigMap`s in the `KubernetesConstants.DEFAULT_NAMESPACE` and probe each for a key with the specified `podTemplateConfigMapName`. This data is stored as a String, which the V1 YAML parser will need to convert to a `V1PodTemplateSpec`. The routine will return a default-constructed (empty) `V1PodTemplateSpec` if no Pod Template name is set via `--config-property`.
   
   [`listNamespacedConfigMap`](https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/apis/CoreV1Api.html#listNamespacedConfigMap-java.lang.String-java.lang.String-java.lang.Boolean-java.lang.String-java.lang.String-java.lang.String-java.lang.Integer-java.lang.String-java.lang.String-java.lang.Integer-java.lang.Boolean-) API.
   
   [`V1ConfigMap`](https://javadoc.io/static/io.kubernetes/client-java-api/7.0.0/io/kubernetes/client/openapi/models/V1ConfigMap.html) API.
   
   **Edit:**  I have merged the dev branch to bring the code changes into view for review. Testing private methods is rather complicated and contrived; we might need to change the access level of methods that require testing to protected instead.
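   
   As a sketch of the lookup described above (the class and method names here are hypothetical, not from the PR), the scan over the ConfigMaps' data maps reduces to:
   
   ```java
   import java.util.List;
   import java.util.Map;

   // Hypothetical helper: given the data maps of the ConfigMaps returned by
   // listNamespacedConfigMap, return the raw YAML stored under the Pod Template
   // key, or null when no ConfigMap carries it.
   public class PodTemplateLookup {
     public static String findPodTemplate(List<Map<String, String>> configMapData,
                                          String podTemplateKey) {
       for (Map<String, String> data : configMapData) {
         if (data != null && data.containsKey(podTemplateKey)) {
           return data.get(podTemplateKey);
         }
       }
       // No match: the caller falls back to an empty V1PodTemplateSpec.
       return null;
     }
   }
   ```
   
   The returned String would then be handed to the V1 YAML parser.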





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-938223306


   I have made some enhancements to allow merging the default executor container configuration with one provided in the Pod Template. The Heron defaults will overwrite anything a user provides in the Pod Template. This enhancement allows some tweaking of the executor container specs within constraints.
   
   Only a single container is permitted per executor. This is important to prevent additional containers from being launched within a Pod.
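   
   As an illustrative sketch of that merge policy (treating the configurations as simple key/value maps; none of these names are from the PR), the overlay is:
   
   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Hypothetical sketch: start from the user's Pod Template settings, then
   // overlay Heron's defaults so Heron wins on any conflicting key.
   public class ExecutorConfigMerge {
     public static Map<String, String> merge(Map<String, String> fromPodTemplate,
                                             Map<String, String> heronDefaults) {
       Map<String, String> merged = new HashMap<>(fromPodTemplate);
       merged.putAll(heronDefaults);  // Heron defaults overwrite template entries
       return merged;
     }
   }
   ```
   
   Anything the template sets that Heron does not manage survives the merge untouched.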
   
   What are your thoughts? Should we be allowing the executor container to be modified?





[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r714944117



##########
File path: heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java
##########
@@ -386,7 +386,7 @@ private V1StatefulSet createStatefulSet(Resource containerResource, int numberOf
     statefulSetSpec.selector(selector);
 
     // create a pod template
-    final V1PodTemplateSpec podTemplateSpec = loadPodFromTemplate();
+    final V1PodTemplateSpec podTemplateSpec = new V1PodTemplateSpec();

Review comment:
       Hi @nwangtw, thank you! I have spun up a build/test Ubuntu 18.04 Docker container on my machine and get the following output:
   
   ```bash
   INFO: Elapsed time: 198.476s, Critical Path: 9.83s
   INFO: 855 processes: 191 internal, 664 local.
   INFO: Build completed successfully, 855 total actions
   PASSED: //heron/stmgr/tests/cpp/server:stmgr_unittest (see /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/testlogs/heron/stmgr/tests/cpp/server/stmgr_unittest/test.log)
   INFO: From Testing //heron/stmgr/tests/cpp/server:stmgr_unittest
   ==================== Test output for //heron/stmgr/tests/cpp/server:stmgr_unittest:
   Current working directory (to find stmgr logs) /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/bin/heron/stmgr/tests/cpp/server/stmgr_unittest.runfiles/org_apache_heron
   Using config file heron/config/src/yaml/conf/test/test_heron_internals.yaml
   [==========] Running 11 tests from 1 test case.
   [----------] Global test environment set-up.
   [----------] 11 tests from StMgr
   [ RUN      ] StMgr.test_pplan_decode
   [       OK ] StMgr.test_pplan_decode (2011 ms)
   [ RUN      ] StMgr.test_tuple_route
   [       OK ] StMgr.test_tuple_route (2009 ms)
   [ RUN      ] StMgr.test_custom_grouping_route
   [       OK ] StMgr.test_custom_grouping_route (2008 ms)
   [ RUN      ] StMgr.test_back_pressure_instance
   [       OK ] StMgr.test_back_pressure_instance (4010 ms)
   [ RUN      ] StMgr.test_spout_death_under_backpressure
   [       OK ] StMgr.test_spout_death_under_backpressure (6149 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr
   [       OK ] StMgr.test_back_pressure_stmgr (5004 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr_reconnect
   [       OK ] StMgr.test_back_pressure_stmgr_reconnect (5033 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_new_address
   [       OK ] StMgr.test_tmanager_restart_on_new_address (5014 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_same_address
   [       OK ] StMgr.test_tmanager_restart_on_same_address (4016 ms)
   [ RUN      ] StMgr.test_metricsmgr_reconnect[       OK ] StMgr.test_metricsmgr_reconnect (3005 ms)
   [ RUN      ] StMgr.test_PatchPhysicalPlanWithHydratedTopology
   [       OK ] StMgr.test_PatchPhysicalPlanWithHydratedTopology (0 ms)
   [----------] 11 tests from StMgr (38260 ms total)
   [----------] Global test environment tear-down
   [==========] 11 tests from 1 test case ran. (38260 ms total)
   [  PASSED  ] 11 tests.
   ================================================================================
   //heron/stmgr/tests/cpp/server:stmgr_unittest                   (cached) PASSED in 38.3s
   
   INFO: Build completed successfully, 855 total actions
   ```
   
   Re-running with the whole suite yields the following:
   
   ```bash
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest (1/3 cached) FLAKY, failed in 2 out of 3 in 5.1s
     Stats over 3 runs: max = 5.1s, min = 0.0s, avg = 1.7s, dev = 2.4s
   ```
   
   What I can discern from the test logs is that all tests up to and including `test_custom_grouping_route` pass, which points to a timeout in `test_back_pressure_instance`:
   https://github.com/apache/incubator-heron/blob/396f2b848da0f56dcfda1d917928358133851cf5/heron/stmgr/tests/cpp/server/stmgr_unittest.cpp#L960
   
   I have refactored the code a little on the `dev` branch to make it more readable, but that should not affect whether this test passes locally. I shall clean up the Git commit tree and merge the changes on `dev` into the main feature branch to see if it now passes on the Travis CI build.







[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926265633


   > ```shell
   > docker/scripts/dev-env-create.sh heron-dev
   > ```
   
   Thank you for steps to reproduce.  Will look more into this tonight.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932808466


   Is there an issue parsing the Pod Template or is it at the K8s deployment level? Could it be a permission issue? Either issue will arise again even if you read directly from a template file.





[GitHub] [incubator-heron] joshfischer1108 merged pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 merged pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710


   





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-917798710


   @joshfischer1108 I have added some tests for the exposed code, but the routines in the `V1Controller` are private. Testing them would mean making them protected and then exposing them through an accessor test class that extends `V1Controller`. I am not sure changing the access level of the `V1Controller` methods makes sense.
   
   I have also added tests for the newly added Constants to ensure complete patch code coverage, though I feel testing them is redundant: testing should be limited to routines and generated objects.
   
   Edit: I have set up object reflection in the testing base of `V1ControllerTest` to gain access to the private routines.
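   As a minimal, self-contained sketch of that reflection pattern (the `Controller` class and `greet` method here are hypothetical stand-ins, not Heron code):
   
   ```java
   import java.lang.reflect.Method;
   
   public class ReflectionAccessExample {
   
     // Hypothetical stand-in for a class with a private routine under test.
     static class Controller {
       private String greet(String name) {
         return "Hello, " + name;
       }
     }
   
     public static void main(String[] args) throws Exception {
       Controller controller = new Controller();
       // Look up the private method by name and parameter types...
       Method greet = Controller.class.getDeclaredMethod("greet", String.class);
       // ...then lift the access check so the test can invoke it directly.
       greet.setAccessible(true);
       System.out.println(greet.invoke(controller, "world"));  // prints "Hello, world"
     }
   }
   ```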








[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-919134335


   > @joshfischer1108 I have added some tests for the exposed code, but the routines in the `V1Controller` are private. Testing them would mean making them protected and then exposing them through an accessor test class that extends `V1Controller`. I am not sure changing the access level of the `V1Controller` methods makes sense.
   > 
   > I have also added tests for the newly added Constants to ensure complete patch code coverage, though I feel testing them is redundant: testing should be limited to routines and generated objects.
   > 
   > Edit: I have set up object reflection in the testing base of `V1ControllerTest` to gain access to the private routines.
   
   Well done.  Thank you.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922011806


   I think that's correct, but step 2 might be somewhat simpler than you describe. I'll try to find a better example, but I think step 2 should be something like [this example](https://github.com/fabric8io/kubernetes-client/blob/04e0b2e2530751c90aeb9c0465ad3219c46a636f/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/ConfigMapExample.java#L52). It is the Fabric8 API, but there should be a similar `get` method in the official k8s API.
   
   Once you look up the ConfigMap object, you should be able to call [`getData()`](https://github.com/kubernetes-client/java/blob/69cc44bac6764a93d9d8e384c326554a21bb2d89/kubernetes/src/main/java/io/kubernetes/client/openapi/models/V1ConfigMap.java#L145) to retrieve the ConfigMap's data, which should contain the `template` that is passed into `kubernetesClient.pods().load(template).get()`.
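   The extraction step can be sketched without a live cluster. Below, a plain `Map` stands in for what `V1ConfigMap.getData()` would return, and the key name `pod-template-key` is just a placeholder:
   
   ```java
   import java.util.Map;
   
   public class ConfigMapTemplateLookup {
   
     // Stand-in for V1ConfigMap.getData(): the ConfigMap's key/value pairs,
     // where one key is expected to hold the Pod Template YAML.
     static String extractTemplate(Map<String, String> configMapData, String podTemplateKey) {
       String template = configMapData.get(podTemplateKey);
       if (template == null) {
         // Fail hard rather than silently falling back to a default template.
         throw new IllegalArgumentException("ConfigMap does not contain key: " + podTemplateKey);
       }
       return template;
     }
   
     public static void main(String[] args) {
       Map<String, String> data =
           Map.of("pod-template-key", "apiVersion: v1\nkind: PodTemplate");
       // The extracted value is what would be fed to the pod-template loader.
       System.out.println(extractTemplate(data, "pod-template-key").startsWith("apiVersion: v1"));
     }
   }
   ```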





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-955088288


   Finally able to do some testing. I think we're still missing the `Role` update to grant `get` and `list` on the `configmaps` resource.
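   A sketch of what that `Role` update might look like (the metadata name and namespace below are placeholders, not the actual Heron manifests):
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: heron-apiserver  # placeholder name
     namespace: default     # placeholder namespace
   rules:
     - apiGroups: [""]            # "" is the core API group, which holds configmaps
       resources: ["configmaps"]
       verbs: ["get", "list"]
   ```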





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-923142393


   I am not sure why the following tests keep timing out:
   ```bash
   //heron/stmgr/tests/cpp/server:stmgr_unittest                           TIMEOUT in 3 out of 3 in 315.0s
     Stats over 3 runs: max = 315.0s, min = 315.0s, avg = 315.0s, dev = 0.0s
   Test cases: finished with 6 passing and 1 failing out of 7 test cases
   ```
   No details are provided, and my code does nothing that deviates from generating the default Pod Template when no parameter is set via `--config-property`.
   
   **_Edit:_**
   I have extracted the `getConfigMaps` routine for stubbing purposes to simplify testing. There is a problem verifying exception details through Java reflection because the exception ultimately thrown is a `NoSuchMethodException`. Updates are on the parallel `dev` branch.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926720725


   @joshfischer1108 sure, but we still need to run some deployment testing, and we need @nicknezis's feedback when he has time. I shall write up the basics, but this is really Nick's feature and he is more knowledgeable about K8s, so I would ask him to add to it and sign off on the documentation.
   
   I have tried to be very careful with the code from a security standpoint: hard failures, nothing silent or byzantine. We do not want a situation where a Pod Template is set but a path in the code allows a bad configuration through. We need very critical and careful reviewing.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-930428115


   A very quick attempt at setting up `RBAC` for K8s. We need to get the K8s API key, and I am unsure whether it is already in the `configuration` object. There is a `setSecretKeyRefs`, but that sets up environment variables for the containers.
   
   I have not wired `configureRBAC` into the `V1Controller` constructor yet, and will not until I can get the K8s API key set up in the routine.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-934004555


   Thank you for testing things out. I have reverted my changes to where the checks for a valid Pod Template occur, as they were not resolving the issue. An issue arises when submitting a topology with a bad ConfigMap/Pod Template: the topology persists but Pods are not created. For some reason this does not do what it is supposed to:
   
   https://github.com/apache/incubator-heron/blob/2190502da0ad723db86a13216f5d9acd0b4c6474/heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java#L172-L178
   
   If there is an issue with state persisted in the Pods submitted to K8s, it might lie further downstream. The test suite validates that `loadPodFromTemplate` creates the `V1PodTemplateSpec` from its input. It would be great to have a second set of eyes on this.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-933896528


   I suspect it might be the API version. It should be:
   ```yaml
   apiVersion: v1
   ```
   and not:
   
   ```yaml
   apiVersion: apps/v1
   ```
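   That is, a Pod Template supplied via the ConfigMap would start along these lines (a rough sketch; the names, labels, and image below are placeholders):
   
   ```yaml
   apiVersion: v1
   kind: PodTemplate
   metadata:
     name: heron-pod-template   # placeholder
   template:
     metadata:
       labels:
         app: heron             # placeholder label
     spec:
       containers:
         - name: heron-container      # placeholder
           image: apache/heron:latest # placeholder
   ```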





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926304481


   > Thank you for steps to reproduce. Will look more into this tonight.
   
   Thank you @joshfischer1108, Travis CI seems to be passing all tests now. I think we are well positioned to review and test, to see if any adjustments need to be made.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-917709147


   @joshfischer1108, yes please. I think it would be beneficial at this point to get some people who are more experienced with the codebase to look things over. We can iterate from there as needed.
   
   We also need someone who can deploy and test the usage of Pod Templates in a `ConfigMap`.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-918745197


   There are some issues with testing the `createStatefulSet` method in the `V1Controller`, namely its private scope and an inability to mock it. It also reads files from disk when setting up the Topology config, which would require mocking of the disk reads, and there is no such functionality in place within the `TopologyUtilsTest`s.





[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r714969520



##########
File path: heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java
##########
@@ -386,7 +386,7 @@ private V1StatefulSet createStatefulSet(Resource containerResource, int numberOf
     statefulSetSpec.selector(selector);
 
     // create a pod template
-    final V1PodTemplateSpec podTemplateSpec = loadPodFromTemplate();
+    final V1PodTemplateSpec podTemplateSpec = new V1PodTemplateSpec();

Review comment:
       The following test-timer warnings are being output; the full log is below:
   
   ```bash
     WARNING: //heron/api/tests/scala:api-scala-test: Test execution time (0.9s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
     WARNING: //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest: Test execution time (1.2s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
     WARNING: //heron/stmgr/tests/cpp/server:stateful-restorer_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
     WARNING: //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   ```
   
   <details><summary>Test output</summary>
   <p>
   
   ```bash
   INFO: Options provided by the client:
     Inherited 'common' options: --isatty=1 --terminal_columns=100
   INFO: Reading rc options for 'test' from /heron/tools/bazel.rc:
     Inherited 'build' options: --genrule_strategy=standalone --host_force_python=PY3 --ignore_unsupported_sandboxing --spawn_strategy=standalone --workspace_status_command scripts/release/status.sh
   INFO: Reading rc options for 'test' from /root/.bazelrc:
     Inherited 'build' options: --verbose_failures --host_force_python=PY3 --spawn_strategy=standalone --genrule_strategy=standalone --local_ram_resources=4096 --local_cpu_resources=2 --announce_rc
   INFO: Reading rc options for 'test' from /root/.bazelrc:
     'test' options: --test_strategy=standalone
   INFO: Found applicable config definition build:ubuntu in file /heron/tools/bazel.rc: --experimental_action_listener=tools/java:compile_java --experimental_action_listener=tools/cpp:compile_cpp --experimental_action_listener=tools/python:compile_python --genrule_strategy=standalone --ignore_unsupported_sandboxing --linkopt -lm --linkopt -lpthread --linkopt -lrt --spawn_strategy=standalone --workspace_status_command scripts/release/status.sh --copt=-O3
   WARNING: /heron/heron/healthmgr/tests/java/BUILD:52:13: in srcs attribute of java_library rule //heron/healthmgr/tests/java:healthmgr-tests: please do not import '//heron/healthmgr/src/java:org/apache/heron/healthmgr/HealthManager.java' directly. You should either move the file to this package or depend on an appropriate rule there. Since this rule was created by the macro 'java_library', the error might have been caused by the macro implementation
   INFO: Analyzed 612 targets (0 packages loaded, 0 targets configured).
   INFO: Found 379 targets and 233 test targets...
   INFO: Elapsed time: 148.802s, Critical Path: 49.52s
   INFO: 234 processes: 1 internal, 233 local.
   INFO: Build completed successfully, 234 total actions
   //heron/api/tests/cpp:serialization_unittest                             PASSED in 0.0s
   //heron/api/tests/java:BaseWindowedBoltTest                              PASSED in 0.3s
   //heron/api/tests/java:ConfigTest                                        PASSED in 0.4s
   //heron/api/tests/java:CountStatAndMetricTest                            PASSED in 0.2s
   //heron/api/tests/java:GeneralReduceByKeyAndWindowOperatorTest           PASSED in 0.4s
   //heron/api/tests/java:HeronSubmitterTest                                PASSED in 2.0s
   //heron/api/tests/java:JoinOperatorTest                                  PASSED in 0.6s
   //heron/api/tests/java:KVStreamletShadowTest                             PASSED in 0.4s
   //heron/api/tests/java:KeyByOperatorTest                                 PASSED in 0.5s
   //heron/api/tests/java:LatencyStatAndMetricTest                          PASSED in 0.4s
   //heron/api/tests/java:ReduceByKeyAndWindowOperatorTest                  PASSED in 0.5s
   //heron/api/tests/java:StreamletImplTest                                 PASSED in 0.4s
   //heron/api/tests/java:StreamletShadowTest                               PASSED in 0.4s
   //heron/api/tests/java:StreamletUtilsTest                                PASSED in 0.3s
   //heron/api/tests/java:UtilsTest                                         PASSED in 0.3s
   //heron/api/tests/java:WaterMarkEventGeneratorTest                       PASSED in 0.3s
   //heron/api/tests/java:WindowManagerTest                                 PASSED in 0.3s
   //heron/api/tests/java:WindowedBoltExecutorTest                          PASSED in 0.5s
   //heron/api/tests/scala:api-scala-test                                   PASSED in 0.9s
     WARNING: //heron/api/tests/scala:api-scala-test: Test execution time (0.9s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/ckptmgr/tests/java:CheckpointManagerServerTest                   PASSED in 0.7s
   //heron/common/tests/cpp/basics:fileutils_unittest                       PASSED in 0.0s
   //heron/common/tests/cpp/basics:rid_unittest                             PASSED in 0.0s
   //heron/common/tests/cpp/basics:strutils_unittest                        PASSED in 0.0s
   //heron/common/tests/cpp/basics:utils_unittest                           PASSED in 0.0s
   //heron/common/tests/cpp/config:topology-config-helper_unittest          PASSED in 0.0s
   //heron/common/tests/cpp/errors:errors_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:module_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:syserrs_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/metrics:count-metric_unittest                   PASSED in 0.0s
   //heron/common/tests/cpp/metrics:mean-metric_unittest                    PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-count-metric_unittest             PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-mean-metric_unittest              PASSED in 0.0s
   //heron/common/tests/cpp/metrics:time-spent-metric_unittest              PASSED in 1.3s
   //heron/common/tests/cpp/network:http_unittest                           PASSED in 0.1s
   //heron/common/tests/cpp/network:order_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/network:packet_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/network:piper_unittest                          PASSED in 2.0s
   //heron/common/tests/cpp/network:rate_limit_unittest                     PASSED in 4.0s
   //heron/common/tests/cpp/network:switch_unittest                         PASSED in 0.2s
   //heron/common/tests/cpp/threads:spcountdownlatch_unittest               PASSED in 2.0s
   //heron/common/tests/java:ByteAmountTest                                 PASSED in 0.3s
   //heron/common/tests/java:CommunicatorTest                               PASSED in 0.3s
   //heron/common/tests/java:ConfigReaderTest                               PASSED in 0.5s
   //heron/common/tests/java:EchoTest                                       PASSED in 0.8s
   //heron/common/tests/java:FileUtilsTest                                  PASSED in 1.0s
   //heron/common/tests/java:HeronServerTest                                PASSED in 1.6s
   //heron/common/tests/java:PackageTypeTest                                PASSED in 0.3s
   //heron/common/tests/java:SysUtilsTest                                   PASSED in 5.1s
   //heron/common/tests/java:SystemConfigTest                               PASSED in 0.3s
   //heron/common/tests/java:TopologyUtilsTest                              PASSED in 0.4s
   //heron/common/tests/java:WakeableLooperTest                             PASSED in 1.3s
   //heron/common/tests/python/pex_loader:pex_loader_unittest               PASSED in 0.7s
   //heron/downloaders/tests/java:DLDownloaderTest                          PASSED in 0.8s
   //heron/downloaders/tests/java:ExtractorTests                            PASSED in 0.3s
   //heron/downloaders/tests/java:RegistryTest                              PASSED in 0.5s
   //heron/executor/tests/python:executor_unittest                          PASSED in 1.1s
   //heron/healthmgr/tests/java:BackPressureDetectorTest                    PASSED in 0.5s
   //heron/healthmgr/tests/java:BackPressureSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:BufferSizeSensorTest                        PASSED in 0.6s
   //heron/healthmgr/tests/java:DataSkewDiagnoserTest                       PASSED in 0.6s
   //heron/healthmgr/tests/java:ExecuteCountSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:GrowingWaitQueueDetectorTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:HealthManagerTest                           PASSED in 0.8s
   //heron/healthmgr/tests/java:HealthPolicyConfigReaderTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:LargeWaitQueueDetectorTest                  PASSED in 0.5s
   //heron/healthmgr/tests/java:MetricsCacheMetricsProviderTest             PASSED in 0.6s
   //heron/healthmgr/tests/java:PackingPlanProviderTest                     PASSED in 0.6s
   //heron/healthmgr/tests/java:ProcessingRateSkewDetectorTest              PASSED in 0.6s
   //heron/healthmgr/tests/java:ScaleUpResolverTest                         PASSED in 0.7s
   //heron/healthmgr/tests/java:SlowInstanceDiagnoserTest                   PASSED in 0.7s
   //heron/healthmgr/tests/java:UnderProvisioningDiagnoserTest              PASSED in 0.6s
   //heron/healthmgr/tests/java:WaitQueueSkewDetectorTest                   PASSED in 0.7s
   //heron/instance/tests/java:ActivateDeactivateTest                       PASSED in 0.6s
   //heron/instance/tests/java:BoltInstanceTest                             PASSED in 0.4s
   //heron/instance/tests/java:BoltStatefulInstanceTest                     PASSED in 2.6s
   //heron/instance/tests/java:ConnectTest                                  PASSED in 0.7s
   //heron/instance/tests/java:CustomGroupingTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectBoltTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectSpoutTest                          PASSED in 0.7s
   //heron/instance/tests/java:GlobalMetricsTest                            PASSED in 0.3s
   //heron/instance/tests/java:HandleReadTest                               PASSED in 0.8s
   //heron/instance/tests/java:HandleWriteTest                              PASSED in 5.5s
   //heron/instance/tests/java:MultiAssignableMetricTest                    PASSED in 0.3s
   //heron/instance/tests/java:SpoutInstanceTest                            PASSED in 2.6s
   //heron/instance/tests/java:SpoutStatefulInstanceTest                    PASSED in 2.5s
   //heron/instance/tests/python/network:event_looper_unittest              PASSED in 3.3s
   //heron/instance/tests/python/network:gateway_looper_unittest            PASSED in 10.8s
   //heron/instance/tests/python/network:heron_client_unittest              PASSED in 1.0s
   //heron/instance/tests/python/network:metricsmgr_client_unittest         PASSED in 1.4s
   //heron/instance/tests/python/network:protocol_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/network:st_stmgrcli_unittest               PASSED in 0.9s
   //heron/instance/tests/python/utils:communicator_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:custom_grouping_unittest             PASSED in 0.9s
   //heron/instance/tests/python/utils:global_metrics_unittest              PASSED in 0.9s
   //heron/instance/tests/python/utils:log_unittest                         PASSED in 0.8s
   //heron/instance/tests/python/utils:metrics_helper_unittest              PASSED in 0.9s
   //heron/instance/tests/python/utils:outgoing_tuple_helper_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:pplan_helper_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:py_metrics_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/utils:topology_context_impl_unittest       PASSED in 1.1s
   //heron/instance/tests/python/utils:tuple_helper_unittest                PASSED in 1.4s
   //heron/io/dlog/tests/java:DLInputStreamTest                             PASSED in 0.5s
   //heron/io/dlog/tests/java:DLOutputStreamTest                            PASSED in 0.6s
   //heron/metricscachemgr/tests/java:CacheCoreTest                         PASSED in 0.4s
   //heron/metricscachemgr/tests/java:MetricsCacheQueryUtilsTest            PASSED in 0.4s
   //heron/metricscachemgr/tests/java:MetricsCacheTest                      PASSED in 0.4s
   //heron/metricsmgr/tests/java:FileSinkTest                               PASSED in 0.4s
   //heron/metricsmgr/tests/java:HandleTManagerLocationTest                 PASSED in 0.5s
   //heron/metricsmgr/tests/java:MetricsCacheSinkTest                       PASSED in 9.5s
   //heron/metricsmgr/tests/java:MetricsManagerServerTest                   PASSED in 0.6s
   //heron/metricsmgr/tests/java:MetricsUtilTests                           PASSED in 0.3s
   //heron/metricsmgr/tests/java:PrometheusSinkTests                        PASSED in 0.4s
   //heron/metricsmgr/tests/java:SinkExecutorTest                           PASSED in 0.5s
   //heron/metricsmgr/tests/java:TManagerSinkTest                           PASSED in 9.5s
   //heron/metricsmgr/tests/java:WebSinkTest                                PASSED in 0.4s
   //heron/packing/tests/java:FirstFitDecreasingPackingTest                 PASSED in 0.8s
   //heron/packing/tests/java:PackingPlanBuilderTest                        PASSED in 0.5s
   //heron/packing/tests/java:PackingUtilsTest                              PASSED in 0.4s
   //heron/packing/tests/java:ResourceCompliantRRPackingTest                PASSED in 0.7s
   //heron/packing/tests/java:RoundRobinPackingTest                         PASSED in 0.6s
   //heron/packing/tests/java:ScorerTest                                    PASSED in 0.2s
   //heron/scheduler-core/tests/java:HttpServiceSchedulerClientTest         PASSED in 1.1s
   //heron/scheduler-core/tests/java:JsonFormatterUtilsTest                 PASSED in 0.4s
   //heron/scheduler-core/tests/java:LaunchRunnerTest                       PASSED in 1.1s
   //heron/scheduler-core/tests/java:LauncherUtilsTest                      PASSED in 2.0s
   //heron/scheduler-core/tests/java:LibrarySchedulerClientTest             PASSED in 0.4s
   //heron/scheduler-core/tests/java:RuntimeManagerMainTest                 PASSED in 2.5s
   //heron/scheduler-core/tests/java:RuntimeManagerRunnerTest               PASSED in 2.3s
   //heron/scheduler-core/tests/java:SchedulerClientFactoryTest             PASSED in 1.2s
   //heron/scheduler-core/tests/java:SchedulerMainTest                      PASSED in 3.3s
   //heron/scheduler-core/tests/java:SchedulerServerTest                    PASSED in 0.4s
   //heron/scheduler-core/tests/java:SchedulerUtilsTest                     PASSED in 1.3s
   //heron/scheduler-core/tests/java:SubmitDryRunRenderTest                 PASSED in 1.5s
   //heron/scheduler-core/tests/java:SubmitterMainTest                      PASSED in 1.5s
   //heron/scheduler-core/tests/java:UpdateDryRunRenderTest                 PASSED in 1.7s
   //heron/scheduler-core/tests/java:UpdateTopologyManagerTest              PASSED in 12.1s
   //heron/schedulers/tests/java:AuroraCLIControllerTest                    PASSED in 0.4s
   //heron/schedulers/tests/java:AuroraContextTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:AuroraLauncherTest                         PASSED in 0.7s
   //heron/schedulers/tests/java:AuroraSchedulerTest                        PASSED in 2.6s
   //heron/schedulers/tests/java:HeronExecutorTaskTest                      PASSED in 1.6s
   //heron/schedulers/tests/java:HeronMasterDriverTest                      PASSED in 1.7s
   //heron/schedulers/tests/java:KubernetesContextTest                      PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesControllerTest                   PASSED in 0.4s
   //heron/schedulers/tests/java:KubernetesLauncherTest                     PASSED in 0.9s
   //heron/schedulers/tests/java:KubernetesSchedulerTest                    PASSED in 0.7s
   //heron/schedulers/tests/java:LaunchableTaskTest                         PASSED in 0.5s
   //heron/schedulers/tests/java:LocalLauncherTest                          PASSED in 1.0s
   //heron/schedulers/tests/java:LocalSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MarathonControllerTest                     PASSED in 1.3s
   //heron/schedulers/tests/java:MarathonLauncherTest                       PASSED in 0.7s
   //heron/schedulers/tests/java:MarathonSchedulerTest                      PASSED in 0.5s
   //heron/schedulers/tests/java:MesosFrameworkTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MesosLauncherTest                          PASSED in 0.4s
   //heron/schedulers/tests/java:MesosSchedulerTest                         PASSED in 0.9s
   //heron/schedulers/tests/java:NomadSchedulerTest                         PASSED in 2.5s
   //heron/schedulers/tests/java:SlurmControllerTest                        PASSED in 1.3s
   //heron/schedulers/tests/java:SlurmLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:SlurmSchedulerTest                         PASSED in 1.4s
   //heron/schedulers/tests/java:TaskResourcesTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:TaskUtilsTest                              PASSED in 0.3s
   //heron/schedulers/tests/java:V1ControllerTest                           PASSED in 1.4s
   //heron/schedulers/tests/java:VolumesTests                               PASSED in 0.3s
   //heron/schedulers/tests/java:YarnLauncherTest                           PASSED in 0.6s
   //heron/schedulers/tests/java:YarnSchedulerTest                          PASSED in 0.5s
   //heron/simulator/tests/java:AllGroupingTest                             PASSED in 0.3s
   //heron/simulator/tests/java:CustomGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:FieldsGroupingTest                          PASSED in 0.8s
   //heron/simulator/tests/java:InstanceExecutorTest                        PASSED in 0.4s
   //heron/simulator/tests/java:LowestGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:RotatingMapTest                             PASSED in 0.3s
   //heron/simulator/tests/java:ShuffleGroupingTest                         PASSED in 0.3s
   //heron/simulator/tests/java:SimulatorTest                               PASSED in 0.4s
   //heron/simulator/tests/java:TopologyManagerTest                         PASSED in 0.5s
   //heron/simulator/tests/java:TupleCacheTest                              PASSED in 0.4s
   //heron/simulator/tests/java:XORManagerTest                              PASSED in 0.4s
   //heron/spi/tests/java:ConfigLoaderTest                                  PASSED in 1.2s
   //heron/spi/tests/java:ConfigTest                                        PASSED in 0.9s
   //heron/spi/tests/java:ContextTest                                       PASSED in 0.3s
   //heron/spi/tests/java:ExceptionInfoTest                                 PASSED in 0.2s
   //heron/spi/tests/java:KeysTest                                          PASSED in 0.3s
   //heron/spi/tests/java:MetricsInfoTest                                   PASSED in 0.3s
   //heron/spi/tests/java:MetricsRecordTest                                 PASSED in 0.2s
   //heron/spi/tests/java:NetworkUtilsTest                                  PASSED in 1.3s
   //heron/spi/tests/java:PackingPlanTest                                   PASSED in 0.3s
   //heron/spi/tests/java:ResourceTest                                      PASSED in 0.3s
   //heron/spi/tests/java:ShellUtilsTest                                    PASSED in 1.9s
   //heron/spi/tests/java:TokenSubTest                                      PASSED in 0.3s
   //heron/spi/tests/java:UploaderUtilsTest                                 PASSED in 0.5s
   //heron/statefulstorages/tests/java:DlogStorageTest                      PASSED in 2.1s
   //heron/statefulstorages/tests/java:HDFSStorageTest                      PASSED in 1.8s
   //heron/statefulstorages/tests/java:LocalFileSystemStorageTest           PASSED in 1.1s
   //heron/statemgrs/tests/cpp:zk-statemgr_unittest                         PASSED in 0.0s
   //heron/statemgrs/tests/java:CuratorStateManagerTest                     PASSED in 0.7s
   //heron/statemgrs/tests/java:LocalFileSystemStateManagerTest             PASSED in 1.3s
   //heron/statemgrs/tests/java:ZkUtilsTest                                 PASSED in 1.2s
   //heron/statemgrs/tests/python:configloader_unittest                     PASSED in 0.9s
   //heron/statemgrs/tests/python:statemanagerfactory_unittest              PASSED in 1.4s
   //heron/statemgrs/tests/python:zkstatemanager_unittest                   PASSED in 0.8s
   //heron/stmgr/tests/cpp/grouping:all-grouping_unittest                   PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:custom-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:fields-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:lowest-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:shuffle-grouping_unittest               PASSED in 0.0s
   //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest               PASSED in 1.2s
     WARNING: //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest: Test execution time (1.2s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stateful-restorer_unittest                PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/server:stateful-restorer_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 40.2s
   //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest               PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/util:rotating-map_unittest                       PASSED in 0.0s
   //heron/stmgr/tests/cpp/util:tuple-cache_unittest                        PASSED in 3.7s
   //heron/stmgr/tests/cpp/util:xor-manager_unittest                        PASSED in 4.0s
   //heron/tmanager/tests/cpp/server:stateful_checkpointer_unittest         PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest             PASSED in 5.0s
   //heron/tmanager/tests/cpp/server:tcontroller_unittest                   PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:tmanager_unittest                      PASSED in 27.0s
   //heron/tools/apiserver/tests/java:ConfigUtilsTests                      PASSED in 0.4s
   //heron/tools/apiserver/tests/java:TopologyResourceTests                 PASSED in 0.8s
   //heron/tools/cli/tests/python:client_command_unittest                   PASSED in 1.0s
   //heron/tools/cli/tests/python:opts_unittest                             PASSED in 0.8s
   //heron/tools/explorer/tests/python:explorer_unittest                    PASSED in 1.0s
   //heron/tools/tracker/tests/python:query_operator_unittest               PASSED in 1.4s
   //heron/tools/tracker/tests/python:query_unittest                        PASSED in 1.3s
   //heron/tools/tracker/tests/python:topology_unittest                     PASSED in 1.2s
   //heron/tools/tracker/tests/python:tracker_unittest                      PASSED in 1.2s
   //heron/uploaders/tests/java:DlogUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:GcsUploaderTests                            PASSED in 0.7s
   //heron/uploaders/tests/java:HdfsUploaderTest                            PASSED in 0.4s
   //heron/uploaders/tests/java:HttpUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:LocalFileSystemConfigTest                   PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemContextTest                  PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemUploaderTest                 PASSED in 0.3s
   //heron/uploaders/tests/java:S3UploaderTest                              PASSED in 0.9s
   //heron/uploaders/tests/java:ScpUploaderTest                             PASSED in 0.4s
   
   INFO: Build completed successfully, 234 total actions
   ```
   
   </p>
   </details>




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926128468


   This is a perplexing issue. The tests appear to pass in a fresh Ubuntu 18.04 Docker container on a better spec'd machine, but fail on a lower spec'd machine. The commands to set up and execute the build and tests are as follows:
   
   ```bash
   cd heron_root_dir
   docker/scripts/dev-env-create.sh heron-dev
   
   ./bazel_configure.py
   bazel build --config=ubuntu heron/...
   
   bazel test --config=ubuntu --test_output=errors --cache_test_results=no --test_verbose_timeout_warnings heron/...
   bazel test --config=ubuntu --test_output=all --cache_test_results=no --test_verbose_timeout_warnings heron/stmgr/tests/cpp/server:stmgr_unittest
   ```
   
   When I `cat` the `V1Controller.java` file in the container's terminal, `loadPodFromTemplate` is as it should be and is wired up correctly in `createStatefulSet` on line 387. Could someone else please clone the repo and test whether the build passes? @joshfischer1108, @nicknezis, and anyone else who has some time: I could use an extra set of objective eyes on this. I am sure it is just something silly that got overlooked.
   
   I have also cleaned out the `loadPodFromTemplate` function and set it up to simply return a new/empty `V1PodTemplateSpec`, to no avail on the more constrained system:
   
   ```java
     private V1PodTemplateSpec loadPodFromTemplate() {
       return new V1PodTemplateSpec();
     }
   ```
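   For reference, a fuller version of this stub might first check for the template at its mount point before falling back to the default generated spec. The sketch below is an assumption, not the PR's implementation: it uses only the standard library, the directory and file names are hypothetical placeholders (the real values would come from the Pod Template volume and key constants), and deserializing the raw YAML into a `V1PodTemplateSpec` (e.g. via the Kubernetes client's YAML helper) is left as a comment:

   ```java
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;
   import java.nio.file.Paths;

   public class PodTemplateLoader {
     // Hypothetical mount point and file name; the real values would come from
     // the <POD_TEMPLATE_VOLUME> and <EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME>
     // constants described in the PR.
     static final String POD_TEMPLATE_DIR = "/opt/heron/pod-templates";
     static final String POD_TEMPLATE_FILE = "pod-template.yaml";

     /** Resolves the expected location of the mounted Pod Template file. */
     static Path podTemplatePath() {
       return Paths.get(POD_TEMPLATE_DIR, POD_TEMPLATE_FILE);
     }

     /**
      * Reads the raw template text if a readable file is mounted at the given
      * path, or returns null so the caller can fall back to the default,
      * generated Pod spec. Deserialization into a V1PodTemplateSpec would be
      * the next step and is intentionally omitted here.
      */
     static String readTemplateOrNull(Path path) {
       try {
         if (!Files.isReadable(path)) {
           return null;
         }
         return new String(Files.readAllBytes(path));
       } catch (IOException e) {
         return null;
       }
     }

     public static void main(String[] args) {
       System.out.println(podTemplatePath());
       System.out.println(readTemplateOrNull(podTemplatePath()) == null
           ? "no template mounted" : "template found");
     }
   }
   ```

   This keeps the "no template supplied" case cheap to detect, which matters since the check will run on every `createStatefulSet` call.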
   
   <details><summary>Complete test output - higher spec</summary>
   <p>
   
   ```bash
   INFO: Elapsed time: 459.454s, Critical Path: 42.93s
   INFO: 1374 processes: 331 internal, 1043 local.
   INFO: Build completed successfully, 1374 total actions
   //heron/api/tests/cpp:serialization_unittest                             PASSED in 0.0s
   //heron/api/tests/java:BaseWindowedBoltTest                              PASSED in 0.3s
   //heron/api/tests/java:ConfigTest                                        PASSED in 0.2s
   //heron/api/tests/java:CountStatAndMetricTest                            PASSED in 0.3s
   //heron/api/tests/java:GeneralReduceByKeyAndWindowOperatorTest           PASSED in 0.4s
   //heron/api/tests/java:HeronSubmitterTest                                PASSED in 1.9s
   //heron/api/tests/java:JoinOperatorTest                                  PASSED in 0.4s
   //heron/api/tests/java:KVStreamletShadowTest                             PASSED in 0.5s
   //heron/api/tests/java:KeyByOperatorTest                                 PASSED in 0.5s
   //heron/api/tests/java:LatencyStatAndMetricTest                          PASSED in 0.3s
   //heron/api/tests/java:ReduceByKeyAndWindowOperatorTest                  PASSED in 0.4s
   //heron/api/tests/java:StreamletImplTest                                 PASSED in 0.4s
   //heron/api/tests/java:StreamletShadowTest                               PASSED in 0.4s
   //heron/api/tests/java:StreamletUtilsTest                                PASSED in 0.3s
   //heron/api/tests/java:UtilsTest                                         PASSED in 0.3s
   //heron/api/tests/java:WaterMarkEventGeneratorTest                       PASSED in 0.4s
   //heron/api/tests/java:WindowManagerTest                                 PASSED in 0.3s
   //heron/api/tests/java:WindowedBoltExecutorTest                          PASSED in 0.5s
   //heron/api/tests/scala:api-scala-test                                   PASSED in 0.9s
     WARNING: //heron/api/tests/scala:api-scala-test: Test execution time (0.9s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/ckptmgr/tests/java:CheckpointManagerServerTest                   PASSED in 0.6s
   //heron/common/tests/cpp/basics:fileutils_unittest                       PASSED in 0.0s
   //heron/common/tests/cpp/basics:rid_unittest                             PASSED in 0.0s
   //heron/common/tests/cpp/basics:strutils_unittest                        PASSED in 0.0s
   //heron/common/tests/cpp/basics:utils_unittest                           PASSED in 0.0s
   //heron/common/tests/cpp/config:topology-config-helper_unittest          PASSED in 0.0s
   //heron/common/tests/cpp/errors:errors_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:module_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:syserrs_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/metrics:count-metric_unittest                   PASSED in 0.0s
   //heron/common/tests/cpp/metrics:mean-metric_unittest                    PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-count-metric_unittest             PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-mean-metric_unittest              PASSED in 0.0s
   //heron/common/tests/cpp/metrics:time-spent-metric_unittest              PASSED in 1.3s
   //heron/common/tests/cpp/network:http_unittest                           PASSED in 0.1s
   //heron/common/tests/cpp/network:order_unittest                          PASSED in 0.1s
   //heron/common/tests/cpp/network:packet_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/network:piper_unittest                          PASSED in 2.0s
   //heron/common/tests/cpp/network:rate_limit_unittest                     PASSED in 4.1s
   //heron/common/tests/cpp/network:switch_unittest                         PASSED in 0.2s
   //heron/common/tests/cpp/threads:spcountdownlatch_unittest               PASSED in 2.0s
   //heron/common/tests/java:ByteAmountTest                                 PASSED in 0.3s
   //heron/common/tests/java:CommunicatorTest                               PASSED in 0.3s
   //heron/common/tests/java:ConfigReaderTest                               PASSED in 0.3s
   //heron/common/tests/java:EchoTest                                       PASSED in 0.6s
   //heron/common/tests/java:FileUtilsTest                                  PASSED in 1.0s
   //heron/common/tests/java:HeronServerTest                                PASSED in 1.5s
   //heron/common/tests/java:PackageTypeTest                                PASSED in 0.3s
   //heron/common/tests/java:SysUtilsTest                                   PASSED in 4.9s
   //heron/common/tests/java:SystemConfigTest                               PASSED in 0.4s
   //heron/common/tests/java:TopologyUtilsTest                              PASSED in 0.4s
   //heron/common/tests/java:WakeableLooperTest                             PASSED in 1.3s
   //heron/common/tests/python/pex_loader:pex_loader_unittest               PASSED in 0.7s
   //heron/downloaders/tests/java:DLDownloaderTest                          PASSED in 0.8s
   //heron/downloaders/tests/java:ExtractorTests                            PASSED in 0.4s
   //heron/downloaders/tests/java:RegistryTest                              PASSED in 0.4s
   //heron/executor/tests/python:executor_unittest                          PASSED in 1.0s
   //heron/healthmgr/tests/java:BackPressureDetectorTest                    PASSED in 0.6s
   //heron/healthmgr/tests/java:BackPressureSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:BufferSizeSensorTest                        PASSED in 0.5s
   //heron/healthmgr/tests/java:DataSkewDiagnoserTest                       PASSED in 0.5s
   //heron/healthmgr/tests/java:ExecuteCountSensorTest                      PASSED in 0.5s
   //heron/healthmgr/tests/java:GrowingWaitQueueDetectorTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:HealthManagerTest                           PASSED in 0.9s
   //heron/healthmgr/tests/java:HealthPolicyConfigReaderTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:LargeWaitQueueDetectorTest                  PASSED in 0.7s
   //heron/healthmgr/tests/java:MetricsCacheMetricsProviderTest             PASSED in 0.6s
   //heron/healthmgr/tests/java:PackingPlanProviderTest                     PASSED in 0.6s
   //heron/healthmgr/tests/java:ProcessingRateSkewDetectorTest              PASSED in 0.6s
   //heron/healthmgr/tests/java:ScaleUpResolverTest                         PASSED in 0.7s
   //heron/healthmgr/tests/java:SlowInstanceDiagnoserTest                   PASSED in 0.6s
   //heron/healthmgr/tests/java:UnderProvisioningDiagnoserTest              PASSED in 0.5s
   //heron/healthmgr/tests/java:WaitQueueSkewDetectorTest                   PASSED in 0.5s
   //heron/instance/tests/java:ActivateDeactivateTest                       PASSED in 0.5s
   //heron/instance/tests/java:BoltInstanceTest                             PASSED in 0.5s
   //heron/instance/tests/java:BoltStatefulInstanceTest                     PASSED in 2.5s
   //heron/instance/tests/java:ConnectTest                                  PASSED in 0.6s
   //heron/instance/tests/java:CustomGroupingTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectBoltTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectSpoutTest                          PASSED in 0.8s
   //heron/instance/tests/java:GlobalMetricsTest                            PASSED in 0.3s
   //heron/instance/tests/java:HandleReadTest                               PASSED in 0.7s
   //heron/instance/tests/java:HandleWriteTest                              PASSED in 5.8s
   //heron/instance/tests/java:MultiAssignableMetricTest                    PASSED in 0.3s
   //heron/instance/tests/java:SpoutInstanceTest                            PASSED in 2.6s
   //heron/instance/tests/java:SpoutStatefulInstanceTest                    PASSED in 2.5s
   //heron/instance/tests/python/network:event_looper_unittest              PASSED in 2.9s
   //heron/instance/tests/python/network:gateway_looper_unittest            PASSED in 10.8s
   //heron/instance/tests/python/network:heron_client_unittest              PASSED in 0.9s
   //heron/instance/tests/python/network:metricsmgr_client_unittest         PASSED in 1.0s
   //heron/instance/tests/python/network:protocol_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/network:st_stmgrcli_unittest               PASSED in 0.9s
   //heron/instance/tests/python/utils:communicator_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:custom_grouping_unittest             PASSED in 0.9s
   //heron/instance/tests/python/utils:global_metrics_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:log_unittest                         PASSED in 0.8s
   //heron/instance/tests/python/utils:metrics_helper_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:outgoing_tuple_helper_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:pplan_helper_unittest                PASSED in 1.0s
   //heron/instance/tests/python/utils:py_metrics_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/utils:topology_context_impl_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:tuple_helper_unittest                PASSED in 0.9s
   //heron/io/dlog/tests/java:DLInputStreamTest                             PASSED in 0.5s
   //heron/io/dlog/tests/java:DLOutputStreamTest                            PASSED in 0.5s
   //heron/metricscachemgr/tests/java:CacheCoreTest                         PASSED in 0.5s
   //heron/metricscachemgr/tests/java:MetricsCacheQueryUtilsTest            PASSED in 0.4s
   //heron/metricscachemgr/tests/java:MetricsCacheTest                      PASSED in 0.4s
   //heron/metricsmgr/tests/java:FileSinkTest                               PASSED in 0.5s
   //heron/metricsmgr/tests/java:HandleTManagerLocationTest                 PASSED in 0.5s
   //heron/metricsmgr/tests/java:MetricsCacheSinkTest                       PASSED in 9.4s
   //heron/metricsmgr/tests/java:MetricsManagerServerTest                   PASSED in 0.6s
   //heron/metricsmgr/tests/java:MetricsUtilTests                           PASSED in 0.4s
   //heron/metricsmgr/tests/java:PrometheusSinkTests                        PASSED in 0.4s
   //heron/metricsmgr/tests/java:SinkExecutorTest                           PASSED in 0.5s
   //heron/metricsmgr/tests/java:TManagerSinkTest                           PASSED in 9.4s
   //heron/metricsmgr/tests/java:WebSinkTest                                PASSED in 0.5s
   //heron/packing/tests/java:FirstFitDecreasingPackingTest                 PASSED in 0.6s
   //heron/packing/tests/java:PackingPlanBuilderTest                        PASSED in 0.4s
   //heron/packing/tests/java:PackingUtilsTest                              PASSED in 0.4s
   //heron/packing/tests/java:ResourceCompliantRRPackingTest                PASSED in 0.8s
   //heron/packing/tests/java:RoundRobinPackingTest                         PASSED in 0.6s
   //heron/packing/tests/java:ScorerTest                                    PASSED in 0.2s
   //heron/scheduler-core/tests/java:HttpServiceSchedulerClientTest         PASSED in 1.0s
   //heron/scheduler-core/tests/java:JsonFormatterUtilsTest                 PASSED in 0.4s
   //heron/scheduler-core/tests/java:LaunchRunnerTest                       PASSED in 1.1s
   //heron/scheduler-core/tests/java:LauncherUtilsTest                      PASSED in 1.7s
   //heron/scheduler-core/tests/java:LibrarySchedulerClientTest             PASSED in 0.5s
   //heron/scheduler-core/tests/java:RuntimeManagerMainTest                 PASSED in 2.3s
   //heron/scheduler-core/tests/java:RuntimeManagerRunnerTest               PASSED in 2.1s
   //heron/scheduler-core/tests/java:SchedulerClientFactoryTest             PASSED in 1.1s
   //heron/scheduler-core/tests/java:SchedulerMainTest                      PASSED in 3.0s
   //heron/scheduler-core/tests/java:SchedulerServerTest                    PASSED in 0.5s
   //heron/scheduler-core/tests/java:SchedulerUtilsTest                     PASSED in 1.2s
   //heron/scheduler-core/tests/java:SubmitDryRunRenderTest                 PASSED in 1.6s
   //heron/scheduler-core/tests/java:SubmitterMainTest                      PASSED in 1.1s
   //heron/scheduler-core/tests/java:UpdateDryRunRenderTest                 PASSED in 1.5s
   //heron/scheduler-core/tests/java:UpdateTopologyManagerTest              PASSED in 11.9s
   //heron/schedulers/tests/java:AuroraCLIControllerTest                    PASSED in 0.5s
   //heron/schedulers/tests/java:AuroraContextTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:AuroraLauncherTest                         PASSED in 0.8s
   //heron/schedulers/tests/java:AuroraSchedulerTest                        PASSED in 2.9s
   //heron/schedulers/tests/java:HeronExecutorTaskTest                      PASSED in 1.5s
   //heron/schedulers/tests/java:HeronMasterDriverTest                      PASSED in 1.7s
   //heron/schedulers/tests/java:KubernetesContextTest                      PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesControllerTest                   PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesLauncherTest                     PASSED in 0.7s
   //heron/schedulers/tests/java:KubernetesSchedulerTest                    PASSED in 0.7s
   //heron/schedulers/tests/java:LaunchableTaskTest                         PASSED in 0.5s
   //heron/schedulers/tests/java:LocalLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:LocalSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MarathonControllerTest                     PASSED in 1.0s
   //heron/schedulers/tests/java:MarathonLauncherTest                       PASSED in 0.7s
   //heron/schedulers/tests/java:MarathonSchedulerTest                      PASSED in 0.4s
   //heron/schedulers/tests/java:MesosFrameworkTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MesosLauncherTest                          PASSED in 0.5s
   //heron/schedulers/tests/java:MesosSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:NomadSchedulerTest                         PASSED in 1.9s
   //heron/schedulers/tests/java:SlurmControllerTest                        PASSED in 1.1s
   //heron/schedulers/tests/java:SlurmLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:SlurmSchedulerTest                         PASSED in 1.1s
   //heron/schedulers/tests/java:TaskResourcesTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:TaskUtilsTest                              PASSED in 0.3s
   //heron/schedulers/tests/java:V1ControllerTest                           PASSED in 1.6s
   //heron/schedulers/tests/java:VolumesTests                               PASSED in 0.3s
   //heron/schedulers/tests/java:YarnLauncherTest                           PASSED in 0.6s
   //heron/schedulers/tests/java:YarnSchedulerTest                          PASSED in 0.4s
   //heron/simulator/tests/java:AllGroupingTest                             PASSED in 0.3s
   //heron/simulator/tests/java:CustomGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:FieldsGroupingTest                          PASSED in 0.7s
   //heron/simulator/tests/java:InstanceExecutorTest                        PASSED in 0.5s
   //heron/simulator/tests/java:LowestGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:RotatingMapTest                             PASSED in 0.3s
   //heron/simulator/tests/java:ShuffleGroupingTest                         PASSED in 0.3s
   //heron/simulator/tests/java:SimulatorTest                               PASSED in 0.4s
   //heron/simulator/tests/java:TopologyManagerTest                         PASSED in 0.4s
   //heron/simulator/tests/java:TupleCacheTest                              PASSED in 0.3s
   //heron/simulator/tests/java:XORManagerTest                              PASSED in 0.4s
   //heron/spi/tests/java:ConfigLoaderTest                                  PASSED in 1.3s
   //heron/spi/tests/java:ConfigTest                                        PASSED in 0.9s
   //heron/spi/tests/java:ContextTest                                       PASSED in 0.3s
   //heron/spi/tests/java:ExceptionInfoTest                                 PASSED in 0.2s
   //heron/spi/tests/java:KeysTest                                          PASSED in 0.2s
   //heron/spi/tests/java:MetricsInfoTest                                   PASSED in 0.2s
   //heron/spi/tests/java:MetricsRecordTest                                 PASSED in 0.2s
   //heron/spi/tests/java:NetworkUtilsTest                                  PASSED in 1.5s
   //heron/spi/tests/java:PackingPlanTest                                   PASSED in 0.3s
   //heron/spi/tests/java:ResourceTest                                      PASSED in 0.3s
   //heron/spi/tests/java:ShellUtilsTest                                    PASSED in 2.2s
   //heron/spi/tests/java:TokenSubTest                                      PASSED in 0.4s
   //heron/spi/tests/java:UploaderUtilsTest                                 PASSED in 0.5s
   //heron/statefulstorages/tests/java:DlogStorageTest                      PASSED in 1.9s
   //heron/statefulstorages/tests/java:HDFSStorageTest                      PASSED in 2.0s
   //heron/statefulstorages/tests/java:LocalFileSystemStorageTest           PASSED in 0.9s
   //heron/statemgrs/tests/cpp:zk-statemgr_unittest                         PASSED in 0.0s
   //heron/statemgrs/tests/java:CuratorStateManagerTest                     PASSED in 0.6s
   //heron/statemgrs/tests/java:LocalFileSystemStateManagerTest             PASSED in 1.2s
   //heron/statemgrs/tests/java:ZkUtilsTest                                 PASSED in 1.0s
   //heron/statemgrs/tests/python:configloader_unittest                     PASSED in 1.0s
   //heron/statemgrs/tests/python:statemanagerfactory_unittest              PASSED in 0.8s
   //heron/statemgrs/tests/python:zkstatemanager_unittest                   PASSED in 0.9s
   //heron/stmgr/tests/cpp/grouping:all-grouping_unittest                   PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:custom-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:fields-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:lowest-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:shuffle-grouping_unittest               PASSED in 0.0s
   //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest               PASSED in 1.2s
     WARNING: //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest: Test execution time (1.2s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stateful-restorer_unittest                PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/server:stateful-restorer_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest               PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/util:rotating-map_unittest                       PASSED in 0.0s
   //heron/stmgr/tests/cpp/util:tuple-cache_unittest                        PASSED in 3.6s
   //heron/stmgr/tests/cpp/util:xor-manager_unittest                        PASSED in 4.0s
   //heron/tmanager/tests/cpp/server:stateful_checkpointer_unittest         PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest             PASSED in 5.0s
   //heron/tmanager/tests/cpp/server:tcontroller_unittest                   PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:tmanager_unittest                      PASSED in 26.1s
   //heron/tools/apiserver/tests/java:ConfigUtilsTests                      PASSED in 0.5s
   //heron/tools/apiserver/tests/java:TopologyResourceTests                 PASSED in 1.1s
   //heron/tools/cli/tests/python:client_command_unittest                   PASSED in 1.1s
   //heron/tools/cli/tests/python:opts_unittest                             PASSED in 0.8s
   //heron/tools/explorer/tests/python:explorer_unittest                    PASSED in 1.1s
   //heron/tools/tracker/tests/python:query_operator_unittest               PASSED in 1.4s
   //heron/tools/tracker/tests/python:query_unittest                        PASSED in 1.3s
   //heron/tools/tracker/tests/python:topology_unittest                     PASSED in 1.2s
   //heron/tools/tracker/tests/python:tracker_unittest                      PASSED in 1.4s
   //heron/uploaders/tests/java:DlogUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:GcsUploaderTests                            PASSED in 0.7s
   //heron/uploaders/tests/java:HdfsUploaderTest                            PASSED in 0.4s
   //heron/uploaders/tests/java:HttpUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:LocalFileSystemConfigTest                   PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemContextTest                  PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemUploaderTest                 PASSED in 0.4s
   //heron/uploaders/tests/java:S3UploaderTest                              PASSED in 0.9s
   //heron/uploaders/tests/java:ScpUploaderTest                             PASSED in 0.5s
   
   INFO: Build completed successfully, 1374 total actions
   ```
   
   </p>
   </details>
   
   <details><summary>//heron/stmgr/tests/cpp/server:stmgr_unittest - higher spec</summary>
   <p>
   
   ```bash
   ==================== Test output for //heron/stmgr/tests/cpp/server:stmgr_unittest:
   Current working directory (to find stmgr logs) /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/bin/heron/stmgr/tests/cpp/server/stmgr_unittest.runfiles/org_apache_heron
   Using config file heron/config/src/yaml/conf/test/test_heron_internals.yaml
   [==========] Running 11 tests from 1 test case.
   [----------] Global test environment set-up.
   [----------] 11 tests from StMgr
   [ RUN      ] StMgr.test_pplan_decode
   [warn] Added a signal to event base 0x55c0c05e0840 with signals already added to event_base 0x55c0c05e0580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    ... (repeated libevent "Added a signal to event base" warnings elided)
   [       OK ] StMgr.test_pplan_decode (2013 ms)
   [ RUN      ] StMgr.test_tuple_route
   [warn] Added a signal to event base 0x55c0c072e100 with signals already added to event_base (nil).  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    ... (repeated libevent "Added a signal to event base" warnings elided)
   [       OK ] StMgr.test_tuple_route (2006 ms)
   [ RUN      ] StMgr.test_custom_grouping_route
   [warn] Added a signal to event base 0x55c0c072c2c0 with signals already added to event_base 0x55c0c05e39c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    ... (repeated libevent "Added a signal to event base" warnings elided)
   [       OK ] StMgr.test_custom_grouping_route (2009 ms)
   [ RUN      ] StMgr.test_back_pressure_instance
   [warn] Added a signal to event base 0x55c0c05e2c00 with signals already added to event_base 0x55c0c05e2680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    ... (repeated libevent "Added a signal to event base" warnings elided)
   [       OK ] StMgr.test_back_pressure_instance (4002 ms)
   [ RUN      ] StMgr.test_spout_death_under_backpressure
   [warn] Added a signal to event base 0x55c0c05e2100 with signals already added to event_base 0x55c0c05e1b80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    ... (repeated libevent "Added a signal to event base" warnings elided)
   [       OK ] StMgr.test_spout_death_under_backpressure (6144 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr
   [warn] Added a signal to event base 0x55c0c05e2940 with signals already added to event_base 0x55c0c072cdc0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    ... (repeated libevent "Added a signal to event base" warnings elided)
   [       OK ] StMgr.test_back_pressure_stmgr (5005 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr_reconnect
   [warn] Added a signal to event base 0x55c0c05e2ec0 with signals already added to event_base 0x55c0c05e3440.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    ... (repeated libevent "Added a signal to event base" warnings elided)
   [       OK ] StMgr.test_back_pressure_stmgr_reconnect (4035 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_new_address
   [warn] Added a signal to event base 0x55c0c072e940 with signals already added to event_base 0x55c0c072ec00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    ... (repeated libevent "Added a signal to event base" warnings elided)
   [       OK ] StMgr.test_tmanager_restart_on_new_address (5015 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_same_address
   [warn] Added a signal to event base 0x55c0c072d340 with signals already added to event_base 0x55c0c072f9c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d080 with signals already added to event_base 0x55c0c072d340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c580 with signals already added to event_base 0x55c0c072d080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072eec0 with signals already added to event_base 0x55c0c072c580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c000 with signals already added to event_base 0x55c0c072eec0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072fc80 with signals already added to event_base 0x55c0c072c000.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca000 with signals already added to event_base 0x55c0c072fc80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca2c0 with signals already added to event_base 0x55c0c1cca000.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca580 with signals already added to event_base 0x55c0c1cca2c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca840 with signals already added to event_base 0x55c0c1cca580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_tmanager_restart_on_same_address (4008 ms)
   [ RUN      ] StMgr.test_metricsmgr_reconnect
   [warn] Added a signal to event base 0x55c0c1cccc00 with signals already added to event_base 0x55c0c1cca840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc940 with signals already added to event_base 0x55c0c1cccc00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc680 with signals already added to event_base 0x55c0c1ccc940.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc3c0 with signals already added to event_base 0x55c0c1ccc680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc100 with signals already added to event_base 0x55c0c1ccc3c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccbb80 with signals already added to event_base 0x55c0c1ccc100.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccb600 with signals already added to event_base 0x55c0c1ccbb80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccb340 with signals already added to event_base 0x55c0c1ccb600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccb080 with signals already added to event_base 0x55c0c1ccb340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_metricsmgr_reconnect (4004 ms)
   [ RUN      ] StMgr.test_PatchPhysicalPlanWithHydratedTopology
   [       OK ] StMgr.test_PatchPhysicalPlanWithHydratedTopology (0 ms)
   [----------] 11 tests from StMgr (38241 ms total)
   
   [----------] Global test environment tear-down
   [==========] 11 tests from 1 test case ran. (38241 ms total)
   [  PASSED  ] 11 tests.
   ================================================================================
   Target //heron/stmgr/tests/cpp/server:stmgr_unittest up-to-date:
     bazel-bin/heron/stmgr/tests/cpp/server/stmgr_unittest
   INFO: Elapsed time: 39.218s, Critical Path: 38.29s
   INFO: 2 processes: 1 internal, 1 local.
   INFO: Build completed successfully, 2 total actions
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   
   INFO: Build completed successfully, 2 total actions
   ```
   </p>
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-919251511


   I will write up what I have found on the issue with testing `createStatefulSet`. It is achievable, but it will be rather involved and will require some serious digging in the codebase. I think your judgement not to conflate this issue with the other is sound.
   
   On a side note, I think it is time to switch over from Travis CI to Github Actions. From personal experience, I feel that GH Actions are faster to run than Travis CI and it might speed things up to not rely on a third-party service.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-930300053


   Thank you @nicknezis, I have begun to effect changes on `dev`.
   
   > 1. Add configmaps k8s role permissions
   > 
   > ```
   > rules:
   > - apiGroups: 
   >   - ""
   >   resources: 
   >   - configmaps
   >   verbs: 
   >   - get
   >   - watch
   >   - list
   > ```
   
   I have looked into K8s `RBAC` on the v11 API and found [`createNamespacedRole`](https://github.com/kubernetes-client/java/blob/release-11/kubernetes/docs/RbacAuthorizationV1Api.md#createNamespacedRole) within the [`RbacAuthorizationV1Api`](https://github.com/kubernetes-client/java/blob/release-11/kubernetes/docs/RbacAuthorizationV1Api.md). It seems reasonably straightforward to get the above role into a [`V1Role`](https://github.com/kubernetes-client/java/blob/release-11/kubernetes/docs/V1Role.md) but it does require an API key. This can be set up in the `V1Controller` constructor.
   
   > 2. The `listConfigMaps` call is looking in the default namespace, but it should be using `getNamespace()` instead for that value.
   
   Good point; I have switched from accessing the value directly to using the getter routine. Resolved.
   





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922278498


   I have completed the [logic](https://github.com/surahman/incubator-heron/commit/e3d72c9a81b7c359794c3c1a7c121ad24646e234). I have linked the API references below. The API reference for `listNamespacedConfigMap` states that only the namespace is a required parameter, but I am running into issues when providing just the namespace. If you have any guidance on what to set the remaining parameters to, it would be greatly appreciated; I am working on figuring them out.
   
   We need to iterate over the `ConfigMap`s in the `KubernetesConstants.DEFAULT_NAMESPACE` and probe each for a key with the specified `podTemplateConfigMapName`. This data is stored as a String which will require the V1 YAML parser to convert it to the `V1PodTemplateSpec`. The routine will return the default constructed (empty) `V1PodTemplateSpec` if no Pod Template name is set via `--config-property`.
   
   [`listNamespacedConfigMap`](https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/apis/CoreV1Api.html#listNamespacedConfigMap-java.lang.String-java.lang.String-java.lang.Boolean-java.lang.String-java.lang.String-java.lang.String-java.lang.Integer-java.lang.String-java.lang.String-java.lang.Integer-java.lang.Boolean-) API.
   
   [`V1ConfigMap`](https://javadoc.io/static/io.kubernetes/client-java-api/7.0.0/io/kubernetes/client/openapi/models/V1ConfigMap.html) API.
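   The lookup described above can be sketched without the Kubernetes client. The names below (`PodTemplateLookup`, `findPodTemplate`, and the map-of-maps model of ConfigMap `data`) are illustrative only; the real implementation would obtain the list via `listNamespacedConfigMap` and feed the matching entry to the client's YAML parser to build a `V1PodTemplateSpec`:

   ```java
   import java.util.Map;

   // Illustrative sketch: scan every ConfigMap in the namespace (modeled here
   // as a map from ConfigMap name to its "data" entries) and return the Pod
   // Template stored under the requested key, or null if none is configured.
   public class PodTemplateLookup {
     public static String findPodTemplate(
         Map<String, Map<String, String>> configMapsByName, String templateKey) {
       for (Map.Entry<String, Map<String, String>> entry : configMapsByName.entrySet()) {
         String template = entry.getValue().get(templateKey);
         if (template != null) {
           return template;  // raw YAML string; the caller parses it into a pod template
         }
       }
       return null;  // no template configured: caller falls back to an empty spec
     }
   }
   ```

   With this shape, the "return the default constructed (empty) `V1PodTemplateSpec`" case maps to the `null` branch.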





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922017460


   Doing a bit of research, I see a `listNamespacedConfigMap` method that returns `V1ConfigMapList`. That class has a `getItems` [method](https://github.com/kubernetes-client/java/blob/f20788272291c0e79a8c831d8d5a7dd94d96d2de/kubernetes/src/main/java/io/kubernetes/client/openapi/models/V1ConfigMapList.java#L91) that returns a `List<V1ConfigMap>`. Not sure if there is a more direct way to get the specifically named `ConfigMap`, but this seems to match the logic in that Fabric8 example I previously linked to.





[GitHub] [incubator-heron] nwangtw commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nwangtw commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r727736026



##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));

Review comment:
       "\n%s"? Maybe `append("\n"); append(e.getMessage());` is more efficient.

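   The distinction behind this suggestion is easy to demonstrate: `String.format("%n")` expands to the platform line separator, whereas a plain `append('\n')` always emits exactly one newline character and skips format-string parsing entirely. A small self-contained check (class and method names are illustrative):

   ```java
   public class AppendNewline {
     // Mirrors the original snippet: "%n" expands to the platform line
     // separator, so the output can differ between platforms.
     public static String withFormat(String msg) {
       return new StringBuilder("prefix").append(String.format("%n%s", msg)).toString();
     }

     // The reviewer's alternative: two plain appends, always a bare '\n',
     // with no format-string parsing cost.
     public static String withAppend(String msg) {
       return new StringBuilder("prefix").append('\n').append(msg).toString();
     }
   }
   ```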
##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));
+      }
+
+      try {
+        // Clear state from the Scheduler via RPC.
+        Scheduler.KillTopologyRequest killTopologyRequest = Scheduler.KillTopologyRequest
+            .newBuilder()
+            .setTopologyName(topologyName).build();
+
+        ISchedulerClient schedulerClient = new SchedulerClientFactory(config, runtime)
+            .getSchedulerClient();
+        if (!schedulerClient.killTopology(killTopologyRequest)) {
+          final String logMessage =
+              String.format("Failed to remove topology '%s' from scheduler after failed submit. "
+                  + "Please re-try the kill command.", topologyName);
+          errorMessage.append(String.format("%n%s", logMessage));

Review comment:
       "\n%s"? Maybe `append("\n"); append(e.getMessage());` is more efficient.

##########
File path: heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/KubernetesContext.java
##########
@@ -172,6 +179,15 @@ static String getContainerVolumeMountPath(Config config) {
     return config.getStringValue(KUBERNETES_CONTAINER_VOLUME_MOUNT_PATH);
   }
 
+  public static String getPodTemplateConfigMapName(Config config) {
+    return config.getStringValue(KUBERNETES_POD_TEMPLATE_CONFIGMAP_NAME);
+  }
+
+  public static boolean getPodTemplateConfigMapDisabled(Config config) {
+    final String disabled = config.getStringValue(KUBERNETES_POD_TEMPLATE_CONFIGMAP_DISABLED);
+    return disabled != null && disabled.toLowerCase(Locale.ROOT).equals("true");

Review comment:
       I am wondering if `equalsIgnoreCase()` is easier to maintain.
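   A minimal illustration of the two spellings (method names here are hypothetical, not the PR's); note that `"true".equalsIgnoreCase(value)` is also null-safe, which makes the explicit null check unnecessary:

   ```java
   import java.util.Locale;

   // Two equivalent ways to treat a config value as a case-insensitive "true".
   public class FlagParsing {
     // The original form: explicit null check plus locale-aware lowercasing.
     public static boolean disabledViaLowerCase(String value) {
       return value != null && value.toLowerCase(Locale.ROOT).equals("true");
     }

     // The suggested form: equalsIgnoreCase on a literal receiver is
     // null-safe, so no separate null check is needed.
     public static boolean disabledViaIgnoreCase(String value) {
       return "true".equalsIgnoreCase(value);
     }
   }
   ```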

##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));
+      }
+
+      try {
+        // Clear state from the Scheduler via RPC.
+        Scheduler.KillTopologyRequest killTopologyRequest = Scheduler.KillTopologyRequest
+            .newBuilder()
+            .setTopologyName(topologyName).build();
+
+        ISchedulerClient schedulerClient = new SchedulerClientFactory(config, runtime)
+            .getSchedulerClient();
+        if (!schedulerClient.killTopology(killTopologyRequest)) {
+          final String logMessage =
+              String.format("Failed to remove topology '%s' from scheduler after failed submit. "
+                  + "Please re-try the kill command.", topologyName);
+          errorMessage.append(String.format("%n%s", logMessage));
+          LOG.log(Level.SEVERE, logMessage);
+        }
+      // SUPPRESS CHECKSTYLE IllegalCatch
+      } catch (Exception ignored){
+        // The above call to clear the Scheduler may fail. This situation can be ignored.

Review comment:
       Could be useful to have the info in the errorMessage I feel.

##########
File path: deploy/kubernetes/general/apiserver.yaml
##########
@@ -95,6 +95,7 @@ spec:
               -D heron.uploader.dlog.topologies.namespace.uri=distributedlog://zookeeper:2181/heron
               -D heron.statefulstorage.classname=org.apache.heron.statefulstorage.dlog.DlogStorage
               -D heron.statefulstorage.dlog.namespace.uri=distributedlog://zookeeper:2181/heron
+            # -D heron.kubernetes.pod.template.configmap.disabled=true

Review comment:
       do we want to remove this line or keep it?







[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-943297568


   We should also make `getPodTemplateConfigMapDisabled(Config config)` return `false` by default when it isn't set so that it is consistent with the yamls.
   
   For the Helm chart, we just need to add the following line to `tools.yaml`:
   ```
   -D heron.kubernetes.pod.template.configmap.disabled={{ .Values.disablePodTemplates }}
   ```
   And in the `values.yaml.template`:
   ```
   # Support for ConfigMap mounted PodTemplates
   disablePodTemplates: false
   ```





[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r728445745



##########
File path: deploy/kubernetes/general/apiserver.yaml
##########
@@ -95,6 +95,7 @@ spec:
               -D heron.uploader.dlog.topologies.namespace.uri=distributedlog://zookeeper:2181/heron
               -D heron.statefulstorage.classname=org.apache.heron.statefulstorage.dlog.DlogStorage
               -D heron.statefulstorage.dlog.namespace.uri=distributedlog://zookeeper:2181/heron
+            # -D heron.kubernetes.pod.template.configmap.disabled=true

Review comment:
       > Another thought I just had is that we might want to update any other Kubernetes deployment yamls for API Server
   
   I shall look into this. I am currently working my way through testing some methods in the `V1Controller`.







[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-938223306


   I have made some enhancements to allow merging the default executor container configuration with one provided in the Pod Template. The Heron defaults will overwrite anything a user provides in the Pod Template. This enhancement allows some tweaking of the executor container specs within constraints.
   
   Only a single container is permitted per executor. This is important to avoid launching additional containers within a Pod.
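   A minimal sketch of the merge rule described above, assuming container settings can be flattened to key-value pairs (the class and method names are illustrative, not the PR's actual code):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Illustrative merge of container settings: the user-supplied Pod Template
   // values are the base, and Heron's defaults overwrite any key both define.
   public class ContainerMerge {
     public static Map<String, String> merge(
         Map<String, String> fromPodTemplate, Map<String, String> heronDefaults) {
       Map<String, String> merged = new HashMap<>(fromPodTemplate);
       merged.putAll(heronDefaults);  // Heron defaults win on conflict
       return merged;
     }
   }
   ```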
   
   What are your thoughts?





[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r728231089



##########
File path: deploy/kubernetes/general/apiserver.yaml
##########
@@ -95,6 +95,7 @@ spec:
               -D heron.uploader.dlog.topologies.namespace.uri=distributedlog://zookeeper:2181/heron
               -D heron.statefulstorage.classname=org.apache.heron.statefulstorage.dlog.DlogStorage
               -D heron.statefulstorage.dlog.namespace.uri=distributedlog://zookeeper:2181/heron
+            # -D heron.kubernetes.pod.template.configmap.disabled=true

Review comment:
       Hi @nwangtw, thank you for the review 😄.
   
   This is a pertinent question and an equally good point. I left it there so that people are aware that the feature can be disabled and to make it easy to switch it off without having to look anything up. I am indifferent about discarding it but feel it should be there.







[GitHub] [incubator-heron] joshfischer1108 commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r739688875



##########
File path: .travis.yml
##########
@@ -47,4 +46,4 @@ script:
   - python -V
   - which python3
   - python3 -V
-  - travis-wait-improved --timeout=180m scripts/travis/ci.sh

Review comment:
       What was the reason for removing this?

##########
File path: heron/schedulers/tests/java/org/apache/heron/scheduler/kubernetes/KubernetesContextTest.java
##########
@@ -0,0 +1,62 @@
+/**

Review comment:
       Good catch on adding headers







[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-956449189









[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-957592887









[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922278498


   I have completed the [logic](https://github.com/surahman/incubator-heron/commit/e3d72c9a81b7c359794c3c1a7c121ad24646e234). I have linked the API references below. The API reference for `listNamespacedConfigMap` states that only the namespace is a required parameter, but I am running into issues when providing just the namespace. If you have any guidance on what to set the remaining parameters to, it would be greatly appreciated; I am working on figuring them out.
   
   We need to iterate over the `ConfigMap`s in the `KubernetesConstants.DEFAULT_NAMESPACE` and probe each for a key with the specified `podTemplateConfigMapName`. This data is stored as a String which will require the V1 YAML parser to convert it to the `V1PodTemplateSpec`. The routine will return the default constructed (empty) `V1PodTemplateSpec` if no Pod Template name is set via `--config-property`.
   
   [`listNamespacedConfigMap`](https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/apis/CoreV1Api.html#listNamespacedConfigMap-java.lang.String-java.lang.String-java.lang.Boolean-java.lang.String-java.lang.String-java.lang.String-java.lang.Integer-java.lang.String-java.lang.String-java.lang.Integer-java.lang.Boolean-) API.
   
   [`V1ConfigMap`](https://javadoc.io/static/io.kubernetes/client-java-api/7.0.0/io/kubernetes/client/openapi/models/V1ConfigMap.html) API.
   
   **Edit:**  I have merged the dev branch to bring the code changes into view for review. Testing private methods is rather complicated and contrived. We might need to reconsider raising the access level of methods that require testing from private to protected.
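   If widening access from private to protected is undesirable, reflection is a common alternative for exercising private methods from tests. The sketch below uses a hypothetical target class, not Heron's actual code:

   ```java
   import java.lang.reflect.Method;

   // Hypothetical target with a private method, plus a test-style helper that
   // invokes it via reflection instead of widening its access modifier.
   public class ReflectionAccess {
     static class Target {
       private String greet(String name) {
         return "hello " + name;
       }
     }

     public static String callPrivateGreet(String name) {
       try {
         Method m = Target.class.getDeclaredMethod("greet", String.class);
         m.setAccessible(true);  // bypass the private modifier for the test
         return (String) m.invoke(new Target(), name);
       } catch (ReflectiveOperationException e) {
         return null;  // surface lookup/invocation failures as null in this sketch
       }
     }
   }
   ```

   The trade-off is that reflective tests break silently when the method is renamed, which is part of why changing the access level is also on the table.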
   
   Running a style check prior to commits is not flagging issues locally. We need style files that are compatible with newer versions of the [CheckStyle](https://github.com/checkstyle/checkstyle) linter and its IDE plugins:
   `tools/java/src/org/apache/bazel/checkstyle/heron_coding_style.xml`
   The current workaround is to install the plugin and use an older version of the engine.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-934905204


   Sysadmins will probably want a kill switch for this feature, so a boot flag to turn it off must be provided.
   
   Does anyone know whether a `-D` boot/command-line flag for the `heron-apiserver` adds the key-value pair to the `Config` object? I have the base logic completed and am writing the test suite.
   
   ```bash
   heron-apiserver
   --base-template kubernetes
   --cluster kubernetes
   <ALL OTHER -D FLAG PARAMETERS>
   -D heron.kubernetes.pod.template.configmap.disabled=true
   ```





[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r728231089



##########
File path: deploy/kubernetes/general/apiserver.yaml
##########
@@ -95,6 +95,7 @@ spec:
               -D heron.uploader.dlog.topologies.namespace.uri=distributedlog://zookeeper:2181/heron
               -D heron.statefulstorage.classname=org.apache.heron.statefulstorage.dlog.DlogStorage
               -D heron.statefulstorage.dlog.namespace.uri=distributedlog://zookeeper:2181/heron
+            # -D heron.kubernetes.pod.template.configmap.disabled=true

Review comment:
       Hi @nwangtw, thank you for the review 😄.
   
   This is a pertinent question and an equally good point. I left it there so that people are aware that the feature can be disabled and to make it easy to switch it off without having to look anything up. I am indifferent about discarding it but feel it should be there.

##########
File path: heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/KubernetesContext.java
##########
@@ -172,6 +179,15 @@ static String getContainerVolumeMountPath(Config config) {
     return config.getStringValue(KUBERNETES_CONTAINER_VOLUME_MOUNT_PATH);
   }
 
+  public static String getPodTemplateConfigMapName(Config config) {
+    return config.getStringValue(KUBERNETES_POD_TEMPLATE_CONFIGMAP_NAME);
+  }
+
+  public static boolean getPodTemplateConfigMapDisabled(Config config) {
+    final String disabled = config.getStringValue(KUBERNETES_POD_TEMPLATE_CONFIGMAP_DISABLED);
+    return disabled != null && disabled.toLowerCase(Locale.ROOT).equals("true");

Review comment:
       Good catch, I shall effect this change.
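   If the catch was the manual comparison above, `Boolean.parseBoolean` already handles both the null check and the case-insensitive match, so the getter can collapse to a one-liner. A self-contained sketch (the `Config` plumbing is elided; `isDisabled` stands in for the getter body):
   
   ```java
   public class FlagParsing {
     // Boolean.parseBoolean is null-safe and case-insensitive, replacing the
     // manual null check plus toLowerCase(Locale.ROOT).equals("true").
     static boolean isDisabled(String rawFlagValue) {
       return Boolean.parseBoolean(rawFlagValue);
     }
   
     public static void main(String[] args) {
       System.out.println(isDisabled("TRUE"));
       System.out.println(isDisabled(null));
     }
   }
   ```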

##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));

Review comment:
       You are right, I will make the update.

##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));
+      }
+
+      try {
+        // Clear state from the Scheduler via RPC.
+        Scheduler.KillTopologyRequest killTopologyRequest = Scheduler.KillTopologyRequest
+            .newBuilder()
+            .setTopologyName(topologyName).build();
+
+        ISchedulerClient schedulerClient = new SchedulerClientFactory(config, runtime)
+            .getSchedulerClient();
+        if (!schedulerClient.killTopology(killTopologyRequest)) {
+          final String logMessage =
+              String.format("Failed to remove topology '%s' from scheduler after failed submit. "
+                  + "Please re-try the kill command.", topologyName);
+          errorMessage.append(String.format("%n%s", logMessage));
+          LOG.log(Level.SEVERE, logMessage);
+        }
+      // SUPPRESS CHECKSTYLE IllegalCatch
+      } catch (Exception ignored){
+        // The above call to clear the Scheduler may fail. This situation can be ignored.

Review comment:
       I was reluctant to add a `log` message because of the rather dense logging on the API server. If an error occurs from this shutdown attempt it will be because the Scheduler was unreachable. What I can do is log a message at the `FINER` or `INFO` levels.







[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-938223306


   I have made some enhancements to allow the merging of the default executor container configuration with one which is provided in the Pod Template. The Heron defaults will overwrite anything provided in the Pod Template by a user. This enhancement allows for some tweaking of the executor container specs within constraints.
   
   Only a single container is permitted per executor. This is important to avoid the launching of additional containers within a Pod.
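   The overwriting merge can be sketched with environment variables as plain maps (illustrative only; the actual code operates on the env var lists inside the container spec):
   
   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;
   
   public class EnvMerge {
     // Overwriting merge: start from the user's Pod Template values, then apply
     // the Heron defaults on top so reserved entries always win.
     static Map<String, String> merge(Map<String, String> fromPodTemplate,
                                      Map<String, String> heronDefaults) {
       Map<String, String> merged = new LinkedHashMap<>(fromPodTemplate);
       merged.putAll(heronDefaults);
       return merged;
     }
   
     public static void main(String[] args) {
       Map<String, String> user = new LinkedHashMap<>();
       user.put("Var_One", "First Variable");
       user.put("POD_NAME", "MUST BE OVERWRITTEN");
       Map<String, String> heron = new LinkedHashMap<>();
       heron.put("POD_NAME", "acking-0");
       System.out.println(merge(user, heron));
     }
   }
   ```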





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-941290266


   Dangling references to topologies that fail to launch are now removed from the `Topology Manager` and `Scheduler`. I am updating the `docs` as well. I think we have an RC here.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-938223306


   I have made some enhancements to allow the merging of the default executor container configuration with one which is provided in the Pod Template. The Heron defaults will overwrite anything provided in the Pod Template by a user. This enhancement allows for some tweaking of the executor container specs within constraints.
   
   Only a single container is permitted per executor. This is important to avoid the launching of additional containers within a Pod.
   
   What are your thoughts? Should we be allowing the executor container to be modified?





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-939395365


   I have gone ahead and added the ability for additional `Container`s to be included. If a System Admin takes issue with this functionality, they can disable it with the provided flag.
   
   I shall go ahead and add the overwriting-merge for the `Volume Mounts` and `Ports`. These are essential for sidecars and various other support container patterns.
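   The planned merge for `Volume Mounts` and `Ports` amounts to a dedupe-by-name in which the Heron-supplied entry wins on a collision. A stdlib sketch with a stand-in type (illustrative only; the real code would operate on the typed port and volume-mount lists):
   
   ```java
   import java.util.ArrayList;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;
   
   public class NamedSpecMerge {
     // Stand-in for a port or volume mount: anything addressable by name.
     static class Spec {
       final String name;
       final int value;
       Spec(String name, int value) { this.name = name; this.value = value; }
     }
   
     // Deduplicate by name; on a collision the Heron-supplied spec wins.
     static List<Spec> mergeByName(List<Spec> fromPodTemplate, List<Spec> heronDefaults) {
       Map<String, Spec> byName = new LinkedHashMap<>();
       for (Spec s : fromPodTemplate) { byName.put(s.name, s); }
       for (Spec s : heronDefaults) { byName.put(s.name, s); }  // overwrite collisions
       return new ArrayList<>(byName.values());
     }
   
     public static void main(String[] args) {
       List<Spec> user = new ArrayList<>();
       user.add(new Spec("server", 9999));  // collides with a Heron default
       List<Spec> heron = new ArrayList<>();
       heron.add(new Spec("server", 6001));
       for (Spec s : mergeByName(user, heron)) {
         System.out.println(s.name + "=" + s.value);
       }
     }
   }
   ```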
   
   The Pod Template is simple with the sidecar loading an `alpine` image. I performed some tests on merging spec lists and encountered a bug, which I have squashed. All changes remain on `dev` pending a merge.
   
   I am working on a desktop that is CPU-constrained, so the Pods will remain in a `pending` state.
   
   <details><summary>Pod Template</summary>
   
   ```yaml
   apiVersion: v1
   kind: PodTemplate
   metadata:
     name: pod-template-example
     namespace: default
   template:
     metadata:
       name: acking-pod-template-example
     spec:
       containers:
         # Executor container
         - name: executor
           securityContext:
             allowPrivilegeEscalation: false
           env:
           - name: Var_One
             value: "First Variable"
           - name: Var_Two
             value: "Second Variable"
           - name: Var_Three
             value: "Third Variable"
           - name: POD_NAME
             value: "MUST BE OVERWRITTEN"
           - name: HOST
             value: "REPLACED WITH ACTUAL HOST"
   
         # Sidecar container
         - name: sidecar-container
           image: alpine
   ```
   
   </details>
   
   <details><summary>describe pods acking-0</summary>
   
   ```bash
   Name:           acking-0
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-74f89d8bd9
                   statefulset.kubernetes.io/pod-name=acking-0
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       6008/TCP, 6001/TCP, 6002/TCP, 6009/TCP, 6004/TCP, 6006/TCP, 6007/TCP, 6005/TCP, 6003/TCP
       Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
      ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0-3560430130176919824.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking196b29f8-aad9-42c2-a6c8-7987ef4602e9 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-binary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:        (v1:status.podIP)
         POD_NAME:   acking-0 (v1:metadata.name)
         Var_One:    First Variable
         Var_Three:  Third Variable
         Var_Two:    Second Variable
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h62wk (ro)
     sidecar-container:
       Image:        alpine
       Port:         <none>
       Host Port:    <none>
       Environment:  <none>
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h62wk (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     kube-api-access-h62wk:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Burstable
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason            Age                 From               Message
     ----     ------            ----                ----               -------
     Warning  FailedScheduling  38s (x2 over 115s)  default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   ```
   
   </details>
   





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-920537024


   Looking over the code, I think I failed to point out an important aspect of what Spark is doing with this feature; the code linked below is important for understanding the goal. When the scheduler defines the Pod, it checks for a PodTemplate and, if one is not defined, creates a default. Currently the Heron scheduler only starts with a default Pod that it creates from scratch.
   
   This [Spark code](https://github.com/apache/spark/blob/0494dc90af48ce7da0625485a4dc6917a244d580/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesDriverBuilder.scala#L30-L38) illustrates the branching logic that checks for an existing PodTemplate config item.
   
   This is the [method](https://github.com/apache/spark/blob/bbb33af2e4c90d679542298f56073a71de001fb7/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala#L96) used by Spark to actually load the template and create the pod spec.
   
   Mounting the ConfigMap as a volume is actually something Spark-specific that we might not need in Heron. In Spark there is a concept of a driver pod, which creates the executor pods; this is why they mount the Executor's PodTemplate ConfigMap into the Driver pod. If I'm not mistaken, our version of `loadPodFromTemplate` could directly look up the ConfigMap PodTemplate without needing to mount the ConfigMap into the pod.
   
   





[GitHub] [incubator-heron] joshfischer1108 merged pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 merged pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710


   





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922162384


   Yes, I think we might be saying the same thing. The pod configs would be loaded before job submission. The K8s scheduler exists in the Heron API Server. So someone wanting to use this feature would install Heron API Server and the specifically named ConfigMap PodTemplate. Then when submitting the topology with the new `--config-property`, the Heron scheduler would lookup and use the PodTemplate, instead of making the default StatefulSet with `createStatefulSet`.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926128468


   This is a perplexing issue. The tests appear to pass in a fresh Ubuntu 18.04 Docker container on a better-spec'd machine but fail on a lower-spec'd machine. The commands to set up and execute the build and tests are as follows:
   
   ```bash
   cd heron_root_dir
   docker/scripts/dev-env-create.sh heron-dev
   
   ./bazel_configure.py
   bazel build --config=ubuntu heron/...
   
   bazel test --config=ubuntu --test_output=errors --cache_test_results=no --test_verbose_timeout_warnings heron/...
   bazel test --config=ubuntu --test_output=all --cache_test_results=no --test_verbose_timeout_warnings heron/stmgr/tests/cpp/server:stmgr_unittest
   ```
   
   When I `cat` the `V1Controller.java` file in the container's terminal, `loadPodFromTemplate` is as it should be and wired up correctly in `createStatefulSet` on line 387. Could someone else please clone the repo and test whether it passes the build? @joshfischer1108, @nicknezis, and anyone else who has some time: I could use an extra set of objective eyes on this. I am sure it is just something silly that got overlooked.
   
   I have also cleaned out the `loadPodFromTemplate` function and set it up to simply return a new, empty `V1PodTemplateSpec`, to no avail on the more constrained system:
   
   ```java
     private V1PodTemplateSpec loadPodFromTemplate() {
   
       return new V1PodTemplateSpec();
     }
   ```
   
   <details><summary>Complete test output - higher spec</summary>
   <p>
   
   ```bash
   INFO: Elapsed time: 459.454s, Critical Path: 42.93s
   INFO: 1374 processes: 331 internal, 1043 local.
   INFO: Build completed successfully, 1374 total actions
   //heron/api/tests/cpp:serialization_unittest                             PASSED in 0.0s
   //heron/api/tests/java:BaseWindowedBoltTest                              PASSED in 0.3s
   //heron/api/tests/java:ConfigTest                                        PASSED in 0.2s
   //heron/api/tests/java:CountStatAndMetricTest                            PASSED in 0.3s
   //heron/api/tests/java:GeneralReduceByKeyAndWindowOperatorTest           PASSED in 0.4s
   //heron/api/tests/java:HeronSubmitterTest                                PASSED in 1.9s
   //heron/api/tests/java:JoinOperatorTest                                  PASSED in 0.4s
   //heron/api/tests/java:KVStreamletShadowTest                             PASSED in 0.5s
   //heron/api/tests/java:KeyByOperatorTest                                 PASSED in 0.5s
   //heron/api/tests/java:LatencyStatAndMetricTest                          PASSED in 0.3s
   //heron/api/tests/java:ReduceByKeyAndWindowOperatorTest                  PASSED in 0.4s
   //heron/api/tests/java:StreamletImplTest                                 PASSED in 0.4s
   //heron/api/tests/java:StreamletShadowTest                               PASSED in 0.4s
   //heron/api/tests/java:StreamletUtilsTest                                PASSED in 0.3s
   //heron/api/tests/java:UtilsTest                                         PASSED in 0.3s
   //heron/api/tests/java:WaterMarkEventGeneratorTest                       PASSED in 0.4s
   //heron/api/tests/java:WindowManagerTest                                 PASSED in 0.3s
   //heron/api/tests/java:WindowedBoltExecutorTest                          PASSED in 0.5s
   //heron/api/tests/scala:api-scala-test                                   PASSED in 0.9s
     WARNING: //heron/api/tests/scala:api-scala-test: Test execution time (0.9s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/ckptmgr/tests/java:CheckpointManagerServerTest                   PASSED in 0.6s
   //heron/common/tests/cpp/basics:fileutils_unittest                       PASSED in 0.0s
   //heron/common/tests/cpp/basics:rid_unittest                             PASSED in 0.0s
   //heron/common/tests/cpp/basics:strutils_unittest                        PASSED in 0.0s
   //heron/common/tests/cpp/basics:utils_unittest                           PASSED in 0.0s
   //heron/common/tests/cpp/config:topology-config-helper_unittest          PASSED in 0.0s
   //heron/common/tests/cpp/errors:errors_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:module_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:syserrs_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/metrics:count-metric_unittest                   PASSED in 0.0s
   //heron/common/tests/cpp/metrics:mean-metric_unittest                    PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-count-metric_unittest             PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-mean-metric_unittest              PASSED in 0.0s
   //heron/common/tests/cpp/metrics:time-spent-metric_unittest              PASSED in 1.3s
   //heron/common/tests/cpp/network:http_unittest                           PASSED in 0.1s
   //heron/common/tests/cpp/network:order_unittest                          PASSED in 0.1s
   //heron/common/tests/cpp/network:packet_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/network:piper_unittest                          PASSED in 2.0s
   //heron/common/tests/cpp/network:rate_limit_unittest                     PASSED in 4.1s
   //heron/common/tests/cpp/network:switch_unittest                         PASSED in 0.2s
   //heron/common/tests/cpp/threads:spcountdownlatch_unittest               PASSED in 2.0s
   //heron/common/tests/java:ByteAmountTest                                 PASSED in 0.3s
   //heron/common/tests/java:CommunicatorTest                               PASSED in 0.3s
   //heron/common/tests/java:ConfigReaderTest                               PASSED in 0.3s
   //heron/common/tests/java:EchoTest                                       PASSED in 0.6s
   //heron/common/tests/java:FileUtilsTest                                  PASSED in 1.0s
   //heron/common/tests/java:HeronServerTest                                PASSED in 1.5s
   //heron/common/tests/java:PackageTypeTest                                PASSED in 0.3s
   //heron/common/tests/java:SysUtilsTest                                   PASSED in 4.9s
   //heron/common/tests/java:SystemConfigTest                               PASSED in 0.4s
   //heron/common/tests/java:TopologyUtilsTest                              PASSED in 0.4s
   //heron/common/tests/java:WakeableLooperTest                             PASSED in 1.3s
   //heron/common/tests/python/pex_loader:pex_loader_unittest               PASSED in 0.7s
   //heron/downloaders/tests/java:DLDownloaderTest                          PASSED in 0.8s
   //heron/downloaders/tests/java:ExtractorTests                            PASSED in 0.4s
   //heron/downloaders/tests/java:RegistryTest                              PASSED in 0.4s
   //heron/executor/tests/python:executor_unittest                          PASSED in 1.0s
   //heron/healthmgr/tests/java:BackPressureDetectorTest                    PASSED in 0.6s
   //heron/healthmgr/tests/java:BackPressureSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:BufferSizeSensorTest                        PASSED in 0.5s
   //heron/healthmgr/tests/java:DataSkewDiagnoserTest                       PASSED in 0.5s
   //heron/healthmgr/tests/java:ExecuteCountSensorTest                      PASSED in 0.5s
   //heron/healthmgr/tests/java:GrowingWaitQueueDetectorTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:HealthManagerTest                           PASSED in 0.9s
   //heron/healthmgr/tests/java:HealthPolicyConfigReaderTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:LargeWaitQueueDetectorTest                  PASSED in 0.7s
   //heron/healthmgr/tests/java:MetricsCacheMetricsProviderTest             PASSED in 0.6s
   //heron/healthmgr/tests/java:PackingPlanProviderTest                     PASSED in 0.6s
   //heron/healthmgr/tests/java:ProcessingRateSkewDetectorTest              PASSED in 0.6s
   //heron/healthmgr/tests/java:ScaleUpResolverTest                         PASSED in 0.7s
   //heron/healthmgr/tests/java:SlowInstanceDiagnoserTest                   PASSED in 0.6s
   //heron/healthmgr/tests/java:UnderProvisioningDiagnoserTest              PASSED in 0.5s
   //heron/healthmgr/tests/java:WaitQueueSkewDetectorTest                   PASSED in 0.5s
   //heron/instance/tests/java:ActivateDeactivateTest                       PASSED in 0.5s
   //heron/instance/tests/java:BoltInstanceTest                             PASSED in 0.5s
   //heron/instance/tests/java:BoltStatefulInstanceTest                     PASSED in 2.5s
   //heron/instance/tests/java:ConnectTest                                  PASSED in 0.6s
   //heron/instance/tests/java:CustomGroupingTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectBoltTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectSpoutTest                          PASSED in 0.8s
   //heron/instance/tests/java:GlobalMetricsTest                            PASSED in 0.3s
   //heron/instance/tests/java:HandleReadTest                               PASSED in 0.7s
   //heron/instance/tests/java:HandleWriteTest                              PASSED in 5.8s
   //heron/instance/tests/java:MultiAssignableMetricTest                    PASSED in 0.3s
   //heron/instance/tests/java:SpoutInstanceTest                            PASSED in 2.6s
   //heron/instance/tests/java:SpoutStatefulInstanceTest                    PASSED in 2.5s
   //heron/instance/tests/python/network:event_looper_unittest              PASSED in 2.9s
   //heron/instance/tests/python/network:gateway_looper_unittest            PASSED in 10.8s
   //heron/instance/tests/python/network:heron_client_unittest              PASSED in 0.9s
   //heron/instance/tests/python/network:metricsmgr_client_unittest         PASSED in 1.0s
   //heron/instance/tests/python/network:protocol_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/network:st_stmgrcli_unittest               PASSED in 0.9s
   //heron/instance/tests/python/utils:communicator_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:custom_grouping_unittest             PASSED in 0.9s
   //heron/instance/tests/python/utils:global_metrics_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:log_unittest                         PASSED in 0.8s
   //heron/instance/tests/python/utils:metrics_helper_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:outgoing_tuple_helper_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:pplan_helper_unittest                PASSED in 1.0s
   //heron/instance/tests/python/utils:py_metrics_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/utils:topology_context_impl_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:tuple_helper_unittest                PASSED in 0.9s
   //heron/io/dlog/tests/java:DLInputStreamTest                             PASSED in 0.5s
   //heron/io/dlog/tests/java:DLOutputStreamTest                            PASSED in 0.5s
   //heron/metricscachemgr/tests/java:CacheCoreTest                         PASSED in 0.5s
   //heron/metricscachemgr/tests/java:MetricsCacheQueryUtilsTest            PASSED in 0.4s
   //heron/metricscachemgr/tests/java:MetricsCacheTest                      PASSED in 0.4s
   //heron/metricsmgr/tests/java:FileSinkTest                               PASSED in 0.5s
   //heron/metricsmgr/tests/java:HandleTManagerLocationTest                 PASSED in 0.5s
   //heron/metricsmgr/tests/java:MetricsCacheSinkTest                       PASSED in 9.4s
   //heron/metricsmgr/tests/java:MetricsManagerServerTest                   PASSED in 0.6s
   //heron/metricsmgr/tests/java:MetricsUtilTests                           PASSED in 0.4s
   //heron/metricsmgr/tests/java:PrometheusSinkTests                        PASSED in 0.4s
   //heron/metricsmgr/tests/java:SinkExecutorTest                           PASSED in 0.5s
   //heron/metricsmgr/tests/java:TManagerSinkTest                           PASSED in 9.4s
   //heron/metricsmgr/tests/java:WebSinkTest                                PASSED in 0.5s
   //heron/packing/tests/java:FirstFitDecreasingPackingTest                 PASSED in 0.6s
   //heron/packing/tests/java:PackingPlanBuilderTest                        PASSED in 0.4s
   //heron/packing/tests/java:PackingUtilsTest                              PASSED in 0.4s
   //heron/packing/tests/java:ResourceCompliantRRPackingTest                PASSED in 0.8s
   //heron/packing/tests/java:RoundRobinPackingTest                         PASSED in 0.6s
   //heron/packing/tests/java:ScorerTest                                    PASSED in 0.2s
   //heron/scheduler-core/tests/java:HttpServiceSchedulerClientTest         PASSED in 1.0s
   //heron/scheduler-core/tests/java:JsonFormatterUtilsTest                 PASSED in 0.4s
   //heron/scheduler-core/tests/java:LaunchRunnerTest                       PASSED in 1.1s
   //heron/scheduler-core/tests/java:LauncherUtilsTest                      PASSED in 1.7s
   //heron/scheduler-core/tests/java:LibrarySchedulerClientTest             PASSED in 0.5s
   //heron/scheduler-core/tests/java:RuntimeManagerMainTest                 PASSED in 2.3s
   //heron/scheduler-core/tests/java:RuntimeManagerRunnerTest               PASSED in 2.1s
   //heron/scheduler-core/tests/java:SchedulerClientFactoryTest             PASSED in 1.1s
   //heron/scheduler-core/tests/java:SchedulerMainTest                      PASSED in 3.0s
   //heron/scheduler-core/tests/java:SchedulerServerTest                    PASSED in 0.5s
   //heron/scheduler-core/tests/java:SchedulerUtilsTest                     PASSED in 1.2s
   //heron/scheduler-core/tests/java:SubmitDryRunRenderTest                 PASSED in 1.6s
   //heron/scheduler-core/tests/java:SubmitterMainTest                      PASSED in 1.1s
   //heron/scheduler-core/tests/java:UpdateDryRunRenderTest                 PASSED in 1.5s
   //heron/scheduler-core/tests/java:UpdateTopologyManagerTest              PASSED in 11.9s
   //heron/schedulers/tests/java:AuroraCLIControllerTest                    PASSED in 0.5s
   //heron/schedulers/tests/java:AuroraContextTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:AuroraLauncherTest                         PASSED in 0.8s
   //heron/schedulers/tests/java:AuroraSchedulerTest                        PASSED in 2.9s
   //heron/schedulers/tests/java:HeronExecutorTaskTest                      PASSED in 1.5s
   //heron/schedulers/tests/java:HeronMasterDriverTest                      PASSED in 1.7s
   //heron/schedulers/tests/java:KubernetesContextTest                      PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesControllerTest                   PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesLauncherTest                     PASSED in 0.7s
   //heron/schedulers/tests/java:KubernetesSchedulerTest                    PASSED in 0.7s
   //heron/schedulers/tests/java:LaunchableTaskTest                         PASSED in 0.5s
   //heron/schedulers/tests/java:LocalLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:LocalSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MarathonControllerTest                     PASSED in 1.0s
   //heron/schedulers/tests/java:MarathonLauncherTest                       PASSED in 0.7s
   //heron/schedulers/tests/java:MarathonSchedulerTest                      PASSED in 0.4s
   //heron/schedulers/tests/java:MesosFrameworkTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MesosLauncherTest                          PASSED in 0.5s
   //heron/schedulers/tests/java:MesosSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:NomadSchedulerTest                         PASSED in 1.9s
   //heron/schedulers/tests/java:SlurmControllerTest                        PASSED in 1.1s
   //heron/schedulers/tests/java:SlurmLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:SlurmSchedulerTest                         PASSED in 1.1s
   //heron/schedulers/tests/java:TaskResourcesTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:TaskUtilsTest                              PASSED in 0.3s
   //heron/schedulers/tests/java:V1ControllerTest                           PASSED in 1.6s
   //heron/schedulers/tests/java:VolumesTests                               PASSED in 0.3s
   //heron/schedulers/tests/java:YarnLauncherTest                           PASSED in 0.6s
   //heron/schedulers/tests/java:YarnSchedulerTest                          PASSED in 0.4s
   //heron/simulator/tests/java:AllGroupingTest                             PASSED in 0.3s
   //heron/simulator/tests/java:CustomGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:FieldsGroupingTest                          PASSED in 0.7s
   //heron/simulator/tests/java:InstanceExecutorTest                        PASSED in 0.5s
   //heron/simulator/tests/java:LowestGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:RotatingMapTest                             PASSED in 0.3s
   //heron/simulator/tests/java:ShuffleGroupingTest                         PASSED in 0.3s
   //heron/simulator/tests/java:SimulatorTest                               PASSED in 0.4s
   //heron/simulator/tests/java:TopologyManagerTest                         PASSED in 0.4s
   //heron/simulator/tests/java:TupleCacheTest                              PASSED in 0.3s
   //heron/simulator/tests/java:XORManagerTest                              PASSED in 0.4s
   //heron/spi/tests/java:ConfigLoaderTest                                  PASSED in 1.3s
   //heron/spi/tests/java:ConfigTest                                        PASSED in 0.9s
   //heron/spi/tests/java:ContextTest                                       PASSED in 0.3s
   //heron/spi/tests/java:ExceptionInfoTest                                 PASSED in 0.2s
   //heron/spi/tests/java:KeysTest                                          PASSED in 0.2s
   //heron/spi/tests/java:MetricsInfoTest                                   PASSED in 0.2s
   //heron/spi/tests/java:MetricsRecordTest                                 PASSED in 0.2s
   //heron/spi/tests/java:NetworkUtilsTest                                  PASSED in 1.5s
   //heron/spi/tests/java:PackingPlanTest                                   PASSED in 0.3s
   //heron/spi/tests/java:ResourceTest                                      PASSED in 0.3s
   //heron/spi/tests/java:ShellUtilsTest                                    PASSED in 2.2s
   //heron/spi/tests/java:TokenSubTest                                      PASSED in 0.4s
   //heron/spi/tests/java:UploaderUtilsTest                                 PASSED in 0.5s
   //heron/statefulstorages/tests/java:DlogStorageTest                      PASSED in 1.9s
   //heron/statefulstorages/tests/java:HDFSStorageTest                      PASSED in 2.0s
   //heron/statefulstorages/tests/java:LocalFileSystemStorageTest           PASSED in 0.9s
   //heron/statemgrs/tests/cpp:zk-statemgr_unittest                         PASSED in 0.0s
   //heron/statemgrs/tests/java:CuratorStateManagerTest                     PASSED in 0.6s
   //heron/statemgrs/tests/java:LocalFileSystemStateManagerTest             PASSED in 1.2s
   //heron/statemgrs/tests/java:ZkUtilsTest                                 PASSED in 1.0s
   //heron/statemgrs/tests/python:configloader_unittest                     PASSED in 1.0s
   //heron/statemgrs/tests/python:statemanagerfactory_unittest              PASSED in 0.8s
   //heron/statemgrs/tests/python:zkstatemanager_unittest                   PASSED in 0.9s
   //heron/stmgr/tests/cpp/grouping:all-grouping_unittest                   PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:custom-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:fields-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:lowest-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:shuffle-grouping_unittest               PASSED in 0.0s
   //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest               PASSED in 1.2s
     WARNING: //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest: Test execution time (1.2s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stateful-restorer_unittest                PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/server:stateful-restorer_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest               PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/util:rotating-map_unittest                       PASSED in 0.0s
   //heron/stmgr/tests/cpp/util:tuple-cache_unittest                        PASSED in 3.6s
   //heron/stmgr/tests/cpp/util:xor-manager_unittest                        PASSED in 4.0s
   //heron/tmanager/tests/cpp/server:stateful_checkpointer_unittest         PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest             PASSED in 5.0s
   //heron/tmanager/tests/cpp/server:tcontroller_unittest                   PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:tmanager_unittest                      PASSED in 26.1s
   //heron/tools/apiserver/tests/java:ConfigUtilsTests                      PASSED in 0.5s
   //heron/tools/apiserver/tests/java:TopologyResourceTests                 PASSED in 1.1s
   //heron/tools/cli/tests/python:client_command_unittest                   PASSED in 1.1s
   //heron/tools/cli/tests/python:opts_unittest                             PASSED in 0.8s
   //heron/tools/explorer/tests/python:explorer_unittest                    PASSED in 1.1s
   //heron/tools/tracker/tests/python:query_operator_unittest               PASSED in 1.4s
   //heron/tools/tracker/tests/python:query_unittest                        PASSED in 1.3s
   //heron/tools/tracker/tests/python:topology_unittest                     PASSED in 1.2s
   //heron/tools/tracker/tests/python:tracker_unittest                      PASSED in 1.4s
   //heron/uploaders/tests/java:DlogUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:GcsUploaderTests                            PASSED in 0.7s
   //heron/uploaders/tests/java:HdfsUploaderTest                            PASSED in 0.4s
   //heron/uploaders/tests/java:HttpUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:LocalFileSystemConfigTest                   PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemContextTest                  PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemUploaderTest                 PASSED in 0.4s
   //heron/uploaders/tests/java:S3UploaderTest                              PASSED in 0.9s
   //heron/uploaders/tests/java:ScpUploaderTest                             PASSED in 0.5s
   
   INFO: Build completed successfully, 1374 total actions
   ```
   
   </p>
   </details>
   
   <details><summary>//heron/stmgr/tests/cpp/server:stmgr_unittest - higher spec</summary>
   <p>
   ==================== Test output for //heron/stmgr/tests/cpp/server:stmgr_unittest:
   Current working directory (to find stmgr logs) /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/bin/heron/stmgr/tests/cpp/server/stmgr_unittest.runfiles/org_apache_heron
   Using config file heron/config/src/yaml/conf/test/test_heron_internals.yaml
   [==========] Running 11 tests from 1 test case.
   [----------] Global test environment set-up.
   [----------] 11 tests from StMgr
   [ RUN      ] StMgr.test_pplan_decode
   [warn] Added a signal to event base 0x55c0c05e0840 with signals already added to event_base 0x55c0c05e0580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_pplan_decode (2013 ms)
   [ RUN      ] StMgr.test_tuple_route
   [warn] Added a signal to event base 0x55c0c072e100 with signals already added to event_base (nil).  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_tuple_route (2006 ms)
   [ RUN      ] StMgr.test_custom_grouping_route
   [warn] Added a signal to event base 0x55c0c072c2c0 with signals already added to event_base 0x55c0c05e39c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_custom_grouping_route (2009 ms)
   [ RUN      ] StMgr.test_back_pressure_instance
   [warn] Added a signal to event base 0x55c0c05e2c00 with signals already added to event_base 0x55c0c05e2680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_back_pressure_instance (4002 ms)
   [ RUN      ] StMgr.test_spout_death_under_backpressure
   [warn] Added a signal to event base 0x55c0c05e2100 with signals already added to event_base 0x55c0c05e1b80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_spout_death_under_backpressure (6144 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr
   [warn] Added a signal to event base 0x55c0c05e2940 with signals already added to event_base 0x55c0c072cdc0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e23c0 with signals already added to event_base 0x55c0c05e2940.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1e40 with signals already added to event_base 0x55c0c05e23c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1600 with signals already added to event_base 0x55c0c05e1e40.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e18c0 with signals already added to event_base 0x55c0c05e1600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072e3c0 with signals already added to event_base 0x55c0c05e18c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072f180 with signals already added to event_base 0x55c0c072e3c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072f700 with signals already added to event_base 0x55c0c072f180.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2ec0 with signals already added to event_base 0x55c0c072f700.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3440 with signals already added to event_base 0x55c0c05e2ec0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_back_pressure_stmgr (5005 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr_reconnect
   [warn] Added a signal to event base 0x55c0c05e2ec0 with signals already added to event_base 0x55c0c05e3440.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072f700 with signals already added to event_base 0x55c0c05e2ec0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072f180 with signals already added to event_base 0x55c0c072f700.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072e3c0 with signals already added to event_base 0x55c0c072f180.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072f440 with signals already added to event_base 0x55c0c072e3c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072e680 with signals already added to event_base 0x55c0c072f440.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072e940 with signals already added to event_base 0x55c0c072e680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072ec00 with signals already added to event_base 0x55c0c072e940.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_back_pressure_stmgr_reconnect (4035 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_new_address
   [warn] Added a signal to event base 0x55c0c072e940 with signals already added to event_base 0x55c0c072ec00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072e680 with signals already added to event_base 0x55c0c072e940.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072f440 with signals already added to event_base 0x55c0c072e680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072e3c0 with signals already added to event_base 0x55c0c072f440.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072eec0 with signals already added to event_base 0x55c0c072e3c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c580 with signals already added to event_base 0x55c0c072eec0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d080 with signals already added to event_base 0x55c0c072c580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d340 with signals already added to event_base 0x55c0c072d080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d600 with signals already added to event_base 0x55c0c072d340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072f9c0 with signals already added to event_base 0x55c0c072d600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_tmanager_restart_on_new_address (5015 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_same_address
   [warn] Added a signal to event base 0x55c0c072d340 with signals already added to event_base 0x55c0c072f9c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d080 with signals already added to event_base 0x55c0c072d340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c580 with signals already added to event_base 0x55c0c072d080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072eec0 with signals already added to event_base 0x55c0c072c580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c000 with signals already added to event_base 0x55c0c072eec0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072fc80 with signals already added to event_base 0x55c0c072c000.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca000 with signals already added to event_base 0x55c0c072fc80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca2c0 with signals already added to event_base 0x55c0c1cca000.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca580 with signals already added to event_base 0x55c0c1cca2c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca840 with signals already added to event_base 0x55c0c1cca580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_tmanager_restart_on_same_address (4008 ms)
   [ RUN      ] StMgr.test_metricsmgr_reconnect
   [warn] Added a signal to event base 0x55c0c1cccc00 with signals already added to event_base 0x55c0c1cca840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc940 with signals already added to event_base 0x55c0c1cccc00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc680 with signals already added to event_base 0x55c0c1ccc940.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc3c0 with signals already added to event_base 0x55c0c1ccc680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc100 with signals already added to event_base 0x55c0c1ccc3c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccbb80 with signals already added to event_base 0x55c0c1ccc100.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccb600 with signals already added to event_base 0x55c0c1ccbb80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccb340 with signals already added to event_base 0x55c0c1ccb600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccb080 with signals already added to event_base 0x55c0c1ccb340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_metricsmgr_reconnect (4004 ms)
   [ RUN      ] StMgr.test_PatchPhysicalPlanWithHydratedTopology
   [       OK ] StMgr.test_PatchPhysicalPlanWithHydratedTopology (0 ms)
   [----------] 11 tests from StMgr (38241 ms total)
   
   [----------] Global test environment tear-down
   [==========] 11 tests from 1 test case ran. (38241 ms total)
   [  PASSED  ] 11 tests.
   ================================================================================
   Target //heron/stmgr/tests/cpp/server:stmgr_unittest up-to-date:
     bazel-bin/heron/stmgr/tests/cpp/server/stmgr_unittest
   INFO: Elapsed time: 39.218s, Critical Path: 38.29s
   INFO: 2 processes: 1 internal, 1 local.
   INFO: Build completed successfully, 2 total actions
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   
   INFO: Build completed successfully, 2 total actions
   </p>
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-930428115


   A very quick attempt at setting up `RBAC` for K8s. We need to get the K8s API key, and I am unsure whether it is already in the `configuration` object. There is a `setSecretKeyRefs`, but that sets up environment variables for the containers.
   
   I have not wired the `configureRBAC` into the `V1Controller` constructor yet until I can get the K8s API key set up in the routine.
   
   **_Edit:_**
   It appears as though the `ClusterRoles` and `ServiceAccount` are in the K8s configs for the [Heron API Server](https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/apiserver.yaml). This makes life a lot easier with only the following being additionally required:
   
   <details>
     <summary>Role</summary>
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: heron-apiserver-configmap-role
     namespace: default
   rules:
   - apiGroups:
     - ""
     resources:
     - configmaps
     verbs:
     - get
     - watch
     - list
   ```
   </details>
   
   <details>
     <summary>RoleBinding</summary>
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: heron-apiserver-configmap-rolebinding
     namespace: default
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: Role
     name: heron-apiserver-configmap-role
   subjects:
   - kind: ServiceAccount
     name: heron-apiserver
     namespace: default
   ```
   </details>
   
   I think it would be safe to add these to the Heron API Server K8s configs because they are adequately restrictive. I am not sure whether both a `ClusterRole` and a `Role` can be assigned at the same time; if not, we would need to aggregate the rules into the `ClusterRole`. The `ClusterRole` has a reference to `cluster-admin`, and I believe this is why it can submit topologies.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-952077827


   Thank you @joshfischer1108, I really appreciate someone with your OSS experience giving this a review. It is fortunate you did not review anything yet because the changes I am making today will improve performance and tidy up the code significantly.
   
   I run the complete battery of tests in a clean Heron Ubuntu 18 LTS Docker container before merging. TravisCI is exceptionally flaky, so here's hoping it is having a good day 🤞🏼.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-921992801


   Okay, so I am going to endeavour to break this down - please bear with me as I am still relatively new to the codebase and K8s API...
   
   ---
   
   Starting with the reference code in [Spark](https://github.com/apache/spark/blob/bbb33af2e4c90d679542298f56073a71de001fb7/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala#L96-L113), and keeping in mind that the Spark architecture is different from Heron's:
   
   This simply gets the Hadoop config from the driver/coordinator:
   
   ```scala
   val hadoopConf = SparkHadoopUtil.get.newConfiguration(conf)
   ```
   
   These two lines download the remote file from the driver/coordinator and make it local to the machine. The first line [downloads](https://github.com/joan38/kubernetes-client/blob/6fbf23b7a997e572456256c4714222ea734bd845/kubernetes-client/src/com/goyeau/kubernetes/client/api/PodsApi.scala#L119-L148) the file, assuming I am looking at the correct Scala K8s API, and the second obtains a file descriptor/handle on the downloaded file:
   
   ```scala
   val localFile = downloadFile(templateFileName, Utils.createTempDir(), conf, hadoopConf)
   val templateFile = new File(new java.net.URI(localFile).getPath)
   ```
   
   This third line then does the heavy lifting of reading the Pod Template into a Pod Config from the newly copied local file:
   
   ```scala
   val pod = kubernetesClient.pods().load(templateFile).get()
   ```
   
   The final line sets up the Spark container with the Pod Template and specified name:
   
   ```scala
   selectSparkContainer(pod, containerName)
   ```
   
   ---
   
   Moving on to what we need to do on the Heron side:
   
   1. Read the `ConfigMap` name from the `--config-property` option. I set the key for this to `heron.kubernetes.pod.template.configmap.name` with the value being the file name.
   2. Read the YAML `ConfigMap` and extract the YAML node tree which contains the Pod Template. For this, we will either need a YAML parser or a utility in the K8s Java API to do the job. I think the K8s Java API should include a utility for this; a lack thereof would be a significant oversight on their part.
   3. Create a `V1PodTemplateSpec` object using the results from step 2.
   4. Iron out permission issues during testing, should they arise.
   
   I am not familiar with the K8s API but will start digging around for a YAML-config-to-V1-object parser; if anyone is aware of where it is, please let me know.
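   As a rough sketch of steps 1–3 using the official Kubernetes Java client, assuming the `ConfigMap`'s data entry holds the Pod Template YAML under a known key. The helper name, the key name, and the exact `readNamespacedConfigMap` signature (which varies across client versions) are illustrative, not existing Heron identifiers; `Yaml.loadAs` is the client's own YAML-to-model parser (`io.kubernetes.client.util.Yaml`):

   ```java
   import io.kubernetes.client.openapi.ApiException;
   import io.kubernetes.client.openapi.apis.CoreV1Api;
   import io.kubernetes.client.openapi.models.V1ConfigMap;
   import io.kubernetes.client.openapi.models.V1PodTemplateSpec;
   import io.kubernetes.client.util.Yaml;

   final class PodTemplateLoaderSketch {
     // Hypothetical helper: fetch the named ConfigMap and parse its Pod Template entry.
     static V1PodTemplateSpec loadPodFromTemplate(CoreV1Api api, String configMapName,
         String namespace, String podTemplateKey) throws ApiException {
       // Exact parameter list of readNamespacedConfigMap differs between client versions.
       V1ConfigMap configMap = api.readNamespacedConfigMap(configMapName, namespace, null);
       String podTemplateYaml = configMap.getData().get(podTemplateKey);
       // Yaml.loadAs deserializes a YAML document into the requested model class.
       return Yaml.loadAs(podTemplateYaml, V1PodTemplateSpec.class);
     }
   }
   ```

   Error handling (missing ConfigMap, missing key, malformed YAML) would need to be ironed out alongside the permission issues in step 4.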





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-917798710


   @joshfischer1108 I have added some of the basic tests and will flesh out the `V1Controller` tests as time permits.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-923142393


   I am not sure why the following tests keep timing out:
   ```bash
   //heron/stmgr/tests/cpp/server:stmgr_unittest                           TIMEOUT in 3 out of 3 in 315.0s
     Stats over 3 runs: max = 315.0s, min = 315.0s, avg = 315.0s, dev = 0.0s
   Test cases: finished with 6 passing and 1 failing out of 7 test cases
   ```
   There are no details being provided, and my code does not do anything that deviates from generating the default Pod Template when no parameter is set via `--config-property`.
   





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-920551592


   I think I have an idea of what needs to happen... time to iterate :wink:.
   
   @nicknezis You are correct, Spark's architecture has a driver/coordinator with a fleet of executors. Is there already a `loadPodFromTemplate` in Heron, or do we need to put one together? I could not find anything in the `V1Controller` or the K8s scheduler codebase in Heron. Currently `createStatefulSet` relies on the `V1` K8s API to put together a default Pod Template in which most of the fields are simply set to `null`.
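   For reference, the `volumes` nodes sketched in the PR description map onto the official client's fluent model builders roughly as below. The constant names come from the placeholders in the PR description and the mount-path parameter is illustrative; this is a sketch, not the final Heron implementation:

   ```java
   import io.kubernetes.client.openapi.models.V1ConfigMapVolumeSource;
   import io.kubernetes.client.openapi.models.V1KeyToPath;
   import io.kubernetes.client.openapi.models.V1Volume;
   import io.kubernetes.client.openapi.models.V1VolumeMount;

   final class PodTemplateVolumeSketch {
     static final String POD_TEMPLATE_VOLUME = "pod-template-name";
     static final String POD_TEMPLATE_KEY = "pod-template-key";
     static final String EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME =
         "executor-pod-spec-template-file-name";

     // Builds the ConfigMap-backed volume from the YAML sketch in the PR description.
     static V1Volume createConfigMapVolume(String configMapName) {
       return new V1Volume()
           .name(POD_TEMPLATE_VOLUME)
           .configMap(new V1ConfigMapVolumeSource()
               .name(configMapName)
               .addItemsItem(new V1KeyToPath()
                   .key(POD_TEMPLATE_KEY)
                   .path(EXECUTOR_POD_SPEC_TEMPLATE_FILE_NAME)));
     }

     // Mounts that volume into the executor container at an illustrative path.
     static V1VolumeMount createConfigMapVolumeMount(String mountPath) {
       return new V1VolumeMount()
           .name(POD_TEMPLATE_VOLUME)
           .mountPath(mountPath);
     }
   }
   ```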


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-946955256


   I have added a check for multiple `executor` container specifications in the provided Pod Template. I am conflicted about whether we should be checking for this: it should not be the responsibility of the framework to verify a valid Pod Template. The complete battery of tests is passing locally.
   
   @nicknezis I think it is safe to say that the core of the functionality for this feature is now complete, and as such, I am leaving this PR as-is until review. I am also pinging @joshfischer1108 in case we need more reviewers.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-947926499


   Thank you, Josh!





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932636387


   I am not sure if you tried this, but I think we need to set up a [`Service Account`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) and assign it to the Heron API Server Pod. We then bind the `Role` to the `Service Account` [like so](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects).
   
   From [Stack Overflow](https://stackoverflow.com/questions/52995962/kubernetes-namespace-default-service-account):
   
   > 5. The default permissions for a service account don't allow it to list or modify any resources. The default service account isn't allowed to view cluster state let alone modify it in any way.
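   As a config sketch of that wiring (reusing the `heron-apiserver` account name from the manifests above; the Deployment fields shown are only the relevant fragment, not a complete spec), the ServiceAccount is created and then referenced from the API Server's pod spec so the RoleBinding subject resolves to it:

   ```yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: heron-apiserver
     namespace: default
   ---
   # Fragment of the Heron API Server Deployment pod spec:
   spec:
     template:
       spec:
         serviceAccountName: heron-apiserver
   ```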





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932798773


   So I think I was running an old version of the image. I'm able to pull the ConfigMap with the role edit I provided, but it's failing to create the Pod Template from the ConfigMap I loaded. So I'm researching how best to create a Pod Template from a file. Spark uses the Fabric8 API, but I need to figure out the official Java API equivalent.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932331781


   From the error, this is undoubtedly a permissions issue, and I am wondering whether the `verb` actions in the Role are adequate. I believe `configMapRef`s are resolved by the `kubelet`, which may have permission to access ConfigMaps. Pods presumably do not, since they are in the concluding stages of bootstrapping - assuming the principle of least privilege.
   
   The call to `listNamespacedConfigMap` does not deviate far from the defaults, so it is not doing anything that should require special permissions. Given that the Heron API Server is able to submit a topology to K8s using the V1 API, I think it is safe to assume that it has a valid/accepted bearer token and credentials.
   
   My testing capabilities are severely restricted by system resources.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926128468


   This is a perplexing issue. The tests appear to pass in a fresh Ubuntu 18.04 Docker container on a better-spec'd machine but fail on a lower-spec'd machine. The commands to set up and execute the build and tests are as follows:
   
   ```bash
   cd heron_root_dir
   docker/scripts/dev-env-create.sh heron-dev
   
   ./bazel_configure.py
   bazel build --config=ubuntu heron/...
   
   bazel test --config=ubuntu --test_output=errors --cache_test_results=no --test_verbose_timeout_warnings heron/...
   bazel test --config=ubuntu --test_output=all --cache_test_results=no --test_verbose_timeout_warnings heron/stmgr/tests/cpp/server:stmgr_unittest
   ```
   
   When I `cat` the `V1Controller.java` file in the container's terminal, `loadPodFromTemplate` is as it should be and is wired up correctly in `createStatefulSet` on line 387. Could someone else please clone the repo and test whether the build passes? @joshfischer1108, @nicknezis, and anyone else who has some time, I could use an extra set of objective eyes on this. I am sure it is just something silly that got overlooked.
   
   I have also cleaned out the `loadPodFromTemplate` function and set it up to simply return a new/empty `V1PodTemplateSpec`, to no avail on the more constrained system:
   
   ```java
     private V1PodTemplateSpec loadPodFromTemplate() {
   
       return new V1PodTemplateSpec();
     }
   ```
   
   <details><summary>Complete test output - higher spec</summary>
   <p>
   
   ```bash
   INFO: Elapsed time: 459.454s, Critical Path: 42.93s
   INFO: 1374 processes: 331 internal, 1043 local.
   INFO: Build completed successfully, 1374 total actions
   //heron/api/tests/cpp:serialization_unittest                             PASSED in 0.0s
   //heron/api/tests/java:BaseWindowedBoltTest                              PASSED in 0.3s
   //heron/api/tests/java:ConfigTest                                        PASSED in 0.2s
   //heron/api/tests/java:CountStatAndMetricTest                            PASSED in 0.3s
   //heron/api/tests/java:GeneralReduceByKeyAndWindowOperatorTest           PASSED in 0.4s
   //heron/api/tests/java:HeronSubmitterTest                                PASSED in 1.9s
   //heron/api/tests/java:JoinOperatorTest                                  PASSED in 0.4s
   //heron/api/tests/java:KVStreamletShadowTest                             PASSED in 0.5s
   //heron/api/tests/java:KeyByOperatorTest                                 PASSED in 0.5s
   //heron/api/tests/java:LatencyStatAndMetricTest                          PASSED in 0.3s
   //heron/api/tests/java:ReduceByKeyAndWindowOperatorTest                  PASSED in 0.4s
   //heron/api/tests/java:StreamletImplTest                                 PASSED in 0.4s
   //heron/api/tests/java:StreamletShadowTest                               PASSED in 0.4s
   //heron/api/tests/java:StreamletUtilsTest                                PASSED in 0.3s
   //heron/api/tests/java:UtilsTest                                         PASSED in 0.3s
   //heron/api/tests/java:WaterMarkEventGeneratorTest                       PASSED in 0.4s
   //heron/api/tests/java:WindowManagerTest                                 PASSED in 0.3s
   //heron/api/tests/java:WindowedBoltExecutorTest                          PASSED in 0.5s
   //heron/api/tests/scala:api-scala-test                                   PASSED in 0.9s
     WARNING: //heron/api/tests/scala:api-scala-test: Test execution time (0.9s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/ckptmgr/tests/java:CheckpointManagerServerTest                   PASSED in 0.6s
   //heron/common/tests/cpp/basics:fileutils_unittest                       PASSED in 0.0s
   //heron/common/tests/cpp/basics:rid_unittest                             PASSED in 0.0s
   //heron/common/tests/cpp/basics:strutils_unittest                        PASSED in 0.0s
   //heron/common/tests/cpp/basics:utils_unittest                           PASSED in 0.0s
   //heron/common/tests/cpp/config:topology-config-helper_unittest          PASSED in 0.0s
   //heron/common/tests/cpp/errors:errors_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:module_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:syserrs_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/metrics:count-metric_unittest                   PASSED in 0.0s
   //heron/common/tests/cpp/metrics:mean-metric_unittest                    PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-count-metric_unittest             PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-mean-metric_unittest              PASSED in 0.0s
   //heron/common/tests/cpp/metrics:time-spent-metric_unittest              PASSED in 1.3s
   //heron/common/tests/cpp/network:http_unittest                           PASSED in 0.1s
   //heron/common/tests/cpp/network:order_unittest                          PASSED in 0.1s
   //heron/common/tests/cpp/network:packet_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/network:piper_unittest                          PASSED in 2.0s
   //heron/common/tests/cpp/network:rate_limit_unittest                     PASSED in 4.1s
   //heron/common/tests/cpp/network:switch_unittest                         PASSED in 0.2s
   //heron/common/tests/cpp/threads:spcountdownlatch_unittest               PASSED in 2.0s
   //heron/common/tests/java:ByteAmountTest                                 PASSED in 0.3s
   //heron/common/tests/java:CommunicatorTest                               PASSED in 0.3s
   //heron/common/tests/java:ConfigReaderTest                               PASSED in 0.3s
   //heron/common/tests/java:EchoTest                                       PASSED in 0.6s
   //heron/common/tests/java:FileUtilsTest                                  PASSED in 1.0s
   //heron/common/tests/java:HeronServerTest                                PASSED in 1.5s
   //heron/common/tests/java:PackageTypeTest                                PASSED in 0.3s
   //heron/common/tests/java:SysUtilsTest                                   PASSED in 4.9s
   //heron/common/tests/java:SystemConfigTest                               PASSED in 0.4s
   //heron/common/tests/java:TopologyUtilsTest                              PASSED in 0.4s
   //heron/common/tests/java:WakeableLooperTest                             PASSED in 1.3s
   //heron/common/tests/python/pex_loader:pex_loader_unittest               PASSED in 0.7s
   //heron/downloaders/tests/java:DLDownloaderTest                          PASSED in 0.8s
   //heron/downloaders/tests/java:ExtractorTests                            PASSED in 0.4s
   //heron/downloaders/tests/java:RegistryTest                              PASSED in 0.4s
   //heron/executor/tests/python:executor_unittest                          PASSED in 1.0s
   //heron/healthmgr/tests/java:BackPressureDetectorTest                    PASSED in 0.6s
   //heron/healthmgr/tests/java:BackPressureSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:BufferSizeSensorTest                        PASSED in 0.5s
   //heron/healthmgr/tests/java:DataSkewDiagnoserTest                       PASSED in 0.5s
   //heron/healthmgr/tests/java:ExecuteCountSensorTest                      PASSED in 0.5s
   //heron/healthmgr/tests/java:GrowingWaitQueueDetectorTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:HealthManagerTest                           PASSED in 0.9s
   //heron/healthmgr/tests/java:HealthPolicyConfigReaderTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:LargeWaitQueueDetectorTest                  PASSED in 0.7s
   //heron/healthmgr/tests/java:MetricsCacheMetricsProviderTest             PASSED in 0.6s
   //heron/healthmgr/tests/java:PackingPlanProviderTest                     PASSED in 0.6s
   //heron/healthmgr/tests/java:ProcessingRateSkewDetectorTest              PASSED in 0.6s
   //heron/healthmgr/tests/java:ScaleUpResolverTest                         PASSED in 0.7s
   //heron/healthmgr/tests/java:SlowInstanceDiagnoserTest                   PASSED in 0.6s
   //heron/healthmgr/tests/java:UnderProvisioningDiagnoserTest              PASSED in 0.5s
   //heron/healthmgr/tests/java:WaitQueueSkewDetectorTest                   PASSED in 0.5s
   //heron/instance/tests/java:ActivateDeactivateTest                       PASSED in 0.5s
   //heron/instance/tests/java:BoltInstanceTest                             PASSED in 0.5s
   //heron/instance/tests/java:BoltStatefulInstanceTest                     PASSED in 2.5s
   //heron/instance/tests/java:ConnectTest                                  PASSED in 0.6s
   //heron/instance/tests/java:CustomGroupingTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectBoltTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectSpoutTest                          PASSED in 0.8s
   //heron/instance/tests/java:GlobalMetricsTest                            PASSED in 0.3s
   //heron/instance/tests/java:HandleReadTest                               PASSED in 0.7s
   //heron/instance/tests/java:HandleWriteTest                              PASSED in 5.8s
   //heron/instance/tests/java:MultiAssignableMetricTest                    PASSED in 0.3s
   //heron/instance/tests/java:SpoutInstanceTest                            PASSED in 2.6s
   //heron/instance/tests/java:SpoutStatefulInstanceTest                    PASSED in 2.5s
   //heron/instance/tests/python/network:event_looper_unittest              PASSED in 2.9s
   //heron/instance/tests/python/network:gateway_looper_unittest            PASSED in 10.8s
   //heron/instance/tests/python/network:heron_client_unittest              PASSED in 0.9s
   //heron/instance/tests/python/network:metricsmgr_client_unittest         PASSED in 1.0s
   //heron/instance/tests/python/network:protocol_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/network:st_stmgrcli_unittest               PASSED in 0.9s
   //heron/instance/tests/python/utils:communicator_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:custom_grouping_unittest             PASSED in 0.9s
   //heron/instance/tests/python/utils:global_metrics_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:log_unittest                         PASSED in 0.8s
   //heron/instance/tests/python/utils:metrics_helper_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:outgoing_tuple_helper_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:pplan_helper_unittest                PASSED in 1.0s
   //heron/instance/tests/python/utils:py_metrics_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/utils:topology_context_impl_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:tuple_helper_unittest                PASSED in 0.9s
   //heron/io/dlog/tests/java:DLInputStreamTest                             PASSED in 0.5s
   //heron/io/dlog/tests/java:DLOutputStreamTest                            PASSED in 0.5s
   //heron/metricscachemgr/tests/java:CacheCoreTest                         PASSED in 0.5s
   //heron/metricscachemgr/tests/java:MetricsCacheQueryUtilsTest            PASSED in 0.4s
   //heron/metricscachemgr/tests/java:MetricsCacheTest                      PASSED in 0.4s
   //heron/metricsmgr/tests/java:FileSinkTest                               PASSED in 0.5s
   //heron/metricsmgr/tests/java:HandleTManagerLocationTest                 PASSED in 0.5s
   //heron/metricsmgr/tests/java:MetricsCacheSinkTest                       PASSED in 9.4s
   //heron/metricsmgr/tests/java:MetricsManagerServerTest                   PASSED in 0.6s
   //heron/metricsmgr/tests/java:MetricsUtilTests                           PASSED in 0.4s
   //heron/metricsmgr/tests/java:PrometheusSinkTests                        PASSED in 0.4s
   //heron/metricsmgr/tests/java:SinkExecutorTest                           PASSED in 0.5s
   //heron/metricsmgr/tests/java:TManagerSinkTest                           PASSED in 9.4s
   //heron/metricsmgr/tests/java:WebSinkTest                                PASSED in 0.5s
   //heron/packing/tests/java:FirstFitDecreasingPackingTest                 PASSED in 0.6s
   //heron/packing/tests/java:PackingPlanBuilderTest                        PASSED in 0.4s
   //heron/packing/tests/java:PackingUtilsTest                              PASSED in 0.4s
   //heron/packing/tests/java:ResourceCompliantRRPackingTest                PASSED in 0.8s
   //heron/packing/tests/java:RoundRobinPackingTest                         PASSED in 0.6s
   //heron/packing/tests/java:ScorerTest                                    PASSED in 0.2s
   //heron/scheduler-core/tests/java:HttpServiceSchedulerClientTest         PASSED in 1.0s
   //heron/scheduler-core/tests/java:JsonFormatterUtilsTest                 PASSED in 0.4s
   //heron/scheduler-core/tests/java:LaunchRunnerTest                       PASSED in 1.1s
   //heron/scheduler-core/tests/java:LauncherUtilsTest                      PASSED in 1.7s
   //heron/scheduler-core/tests/java:LibrarySchedulerClientTest             PASSED in 0.5s
   //heron/scheduler-core/tests/java:RuntimeManagerMainTest                 PASSED in 2.3s
   //heron/scheduler-core/tests/java:RuntimeManagerRunnerTest               PASSED in 2.1s
   //heron/scheduler-core/tests/java:SchedulerClientFactoryTest             PASSED in 1.1s
   //heron/scheduler-core/tests/java:SchedulerMainTest                      PASSED in 3.0s
   //heron/scheduler-core/tests/java:SchedulerServerTest                    PASSED in 0.5s
   //heron/scheduler-core/tests/java:SchedulerUtilsTest                     PASSED in 1.2s
   //heron/scheduler-core/tests/java:SubmitDryRunRenderTest                 PASSED in 1.6s
   //heron/scheduler-core/tests/java:SubmitterMainTest                      PASSED in 1.1s
   //heron/scheduler-core/tests/java:UpdateDryRunRenderTest                 PASSED in 1.5s
   //heron/scheduler-core/tests/java:UpdateTopologyManagerTest              PASSED in 11.9s
   //heron/schedulers/tests/java:AuroraCLIControllerTest                    PASSED in 0.5s
   //heron/schedulers/tests/java:AuroraContextTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:AuroraLauncherTest                         PASSED in 0.8s
   //heron/schedulers/tests/java:AuroraSchedulerTest                        PASSED in 2.9s
   //heron/schedulers/tests/java:HeronExecutorTaskTest                      PASSED in 1.5s
   //heron/schedulers/tests/java:HeronMasterDriverTest                      PASSED in 1.7s
   //heron/schedulers/tests/java:KubernetesContextTest                      PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesControllerTest                   PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesLauncherTest                     PASSED in 0.7s
   //heron/schedulers/tests/java:KubernetesSchedulerTest                    PASSED in 0.7s
   //heron/schedulers/tests/java:LaunchableTaskTest                         PASSED in 0.5s
   //heron/schedulers/tests/java:LocalLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:LocalSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MarathonControllerTest                     PASSED in 1.0s
   //heron/schedulers/tests/java:MarathonLauncherTest                       PASSED in 0.7s
   //heron/schedulers/tests/java:MarathonSchedulerTest                      PASSED in 0.4s
   //heron/schedulers/tests/java:MesosFrameworkTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MesosLauncherTest                          PASSED in 0.5s
   //heron/schedulers/tests/java:MesosSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:NomadSchedulerTest                         PASSED in 1.9s
   //heron/schedulers/tests/java:SlurmControllerTest                        PASSED in 1.1s
   //heron/schedulers/tests/java:SlurmLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:SlurmSchedulerTest                         PASSED in 1.1s
   //heron/schedulers/tests/java:TaskResourcesTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:TaskUtilsTest                              PASSED in 0.3s
   //heron/schedulers/tests/java:V1ControllerTest                           PASSED in 1.6s
   //heron/schedulers/tests/java:VolumesTests                               PASSED in 0.3s
   //heron/schedulers/tests/java:YarnLauncherTest                           PASSED in 0.6s
   //heron/schedulers/tests/java:YarnSchedulerTest                          PASSED in 0.4s
   //heron/simulator/tests/java:AllGroupingTest                             PASSED in 0.3s
   //heron/simulator/tests/java:CustomGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:FieldsGroupingTest                          PASSED in 0.7s
   //heron/simulator/tests/java:InstanceExecutorTest                        PASSED in 0.5s
   //heron/simulator/tests/java:LowestGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:RotatingMapTest                             PASSED in 0.3s
   //heron/simulator/tests/java:ShuffleGroupingTest                         PASSED in 0.3s
   //heron/simulator/tests/java:SimulatorTest                               PASSED in 0.4s
   //heron/simulator/tests/java:TopologyManagerTest                         PASSED in 0.4s
   //heron/simulator/tests/java:TupleCacheTest                              PASSED in 0.3s
   //heron/simulator/tests/java:XORManagerTest                              PASSED in 0.4s
   //heron/spi/tests/java:ConfigLoaderTest                                  PASSED in 1.3s
   //heron/spi/tests/java:ConfigTest                                        PASSED in 0.9s
   //heron/spi/tests/java:ContextTest                                       PASSED in 0.3s
   //heron/spi/tests/java:ExceptionInfoTest                                 PASSED in 0.2s
   //heron/spi/tests/java:KeysTest                                          PASSED in 0.2s
   //heron/spi/tests/java:MetricsInfoTest                                   PASSED in 0.2s
   //heron/spi/tests/java:MetricsRecordTest                                 PASSED in 0.2s
   //heron/spi/tests/java:NetworkUtilsTest                                  PASSED in 1.5s
   //heron/spi/tests/java:PackingPlanTest                                   PASSED in 0.3s
   //heron/spi/tests/java:ResourceTest                                      PASSED in 0.3s
   //heron/spi/tests/java:ShellUtilsTest                                    PASSED in 2.2s
   //heron/spi/tests/java:TokenSubTest                                      PASSED in 0.4s
   //heron/spi/tests/java:UploaderUtilsTest                                 PASSED in 0.5s
   //heron/statefulstorages/tests/java:DlogStorageTest                      PASSED in 1.9s
   //heron/statefulstorages/tests/java:HDFSStorageTest                      PASSED in 2.0s
   //heron/statefulstorages/tests/java:LocalFileSystemStorageTest           PASSED in 0.9s
   //heron/statemgrs/tests/cpp:zk-statemgr_unittest                         PASSED in 0.0s
   //heron/statemgrs/tests/java:CuratorStateManagerTest                     PASSED in 0.6s
   //heron/statemgrs/tests/java:LocalFileSystemStateManagerTest             PASSED in 1.2s
   //heron/statemgrs/tests/java:ZkUtilsTest                                 PASSED in 1.0s
   //heron/statemgrs/tests/python:configloader_unittest                     PASSED in 1.0s
   //heron/statemgrs/tests/python:statemanagerfactory_unittest              PASSED in 0.8s
   //heron/statemgrs/tests/python:zkstatemanager_unittest                   PASSED in 0.9s
   //heron/stmgr/tests/cpp/grouping:all-grouping_unittest                   PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:custom-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:fields-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:lowest-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:shuffle-grouping_unittest               PASSED in 0.0s
   //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest               PASSED in 1.2s
     WARNING: //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest: Test execution time (1.2s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stateful-restorer_unittest                PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/server:stateful-restorer_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest               PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/util:rotating-map_unittest                       PASSED in 0.0s
   //heron/stmgr/tests/cpp/util:tuple-cache_unittest                        PASSED in 3.6s
   //heron/stmgr/tests/cpp/util:xor-manager_unittest                        PASSED in 4.0s
   //heron/tmanager/tests/cpp/server:stateful_checkpointer_unittest         PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest             PASSED in 5.0s
   //heron/tmanager/tests/cpp/server:tcontroller_unittest                   PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:tmanager_unittest                      PASSED in 26.1s
   //heron/tools/apiserver/tests/java:ConfigUtilsTests                      PASSED in 0.5s
   //heron/tools/apiserver/tests/java:TopologyResourceTests                 PASSED in 1.1s
   //heron/tools/cli/tests/python:client_command_unittest                   PASSED in 1.1s
   //heron/tools/cli/tests/python:opts_unittest                             PASSED in 0.8s
   //heron/tools/explorer/tests/python:explorer_unittest                    PASSED in 1.1s
   //heron/tools/tracker/tests/python:query_operator_unittest               PASSED in 1.4s
   //heron/tools/tracker/tests/python:query_unittest                        PASSED in 1.3s
   //heron/tools/tracker/tests/python:topology_unittest                     PASSED in 1.2s
   //heron/tools/tracker/tests/python:tracker_unittest                      PASSED in 1.4s
   //heron/uploaders/tests/java:DlogUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:GcsUploaderTests                            PASSED in 0.7s
   //heron/uploaders/tests/java:HdfsUploaderTest                            PASSED in 0.4s
   //heron/uploaders/tests/java:HttpUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:LocalFileSystemConfigTest                   PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemContextTest                  PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemUploaderTest                 PASSED in 0.4s
   //heron/uploaders/tests/java:S3UploaderTest                              PASSED in 0.9s
   //heron/uploaders/tests/java:ScpUploaderTest                             PASSED in 0.5s
   
   INFO: Build completed successfully, 1374 total actions
   ```
   
   </p>
   </details>
   
   <details><summary>//heron/stmgr/tests/cpp/server:stmgr_unittest - higher spec</summary>
   <p>
   
   ```bash
   ==================== Test output for //heron/stmgr/tests/cpp/server:stmgr_unittest:
   Current working directory (to find stmgr logs) /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/bin/heron/stmgr/tests/cpp/server/stmgr_unittest.runfiles/org_apache_heron
   Using config file heron/config/src/yaml/conf/test/test_heron_internals.yaml
   [==========] Running 11 tests from 1 test case.
   [----------] Global test environment set-up.
   [----------] 11 tests from StMgr
   [ RUN      ] StMgr.test_pplan_decode
   [warn] Added a signal to event base 0x55c0c05e0840 with signals already added to event_base 0x55c0c05e0580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e0b00 with signals already added to event_base 0x55c0c05e0840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e0dc0 with signals already added to event_base 0x55c0c05e0b00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1080 with signals already added to event_base 0x55c0c05e0dc0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1600 with signals already added to event_base 0x55c0c05e1080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1b80 with signals already added to event_base 0x55c0c05e1600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2100 with signals already added to event_base 0x55c0c05e1b80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2680 with signals already added to event_base 0x55c0c05e2100.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2c00 with signals already added to event_base 0x55c0c05e2680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3180 with signals already added to event_base 0x55c0c05e2c00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3700 with signals already added to event_base 0x55c0c05e3180.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3c80 with signals already added to event_base 0x55c0c05e3700.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c2c0 with signals already added to event_base 0x55c0c05e3c80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d080 with signals already added to event_base 0x55c0c072c2c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c840 with signals already added to event_base 0x55c0c072d080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [... identical libevent "[warn] Added a signal to event base ..." messages, differing only in addresses, repeated between each test below; elided for brevity ...]
   [       OK ] StMgr.test_pplan_decode (2013 ms)
   [ RUN      ] StMgr.test_tuple_route
   [       OK ] StMgr.test_tuple_route (2006 ms)
   [ RUN      ] StMgr.test_custom_grouping_route
   [       OK ] StMgr.test_custom_grouping_route (2009 ms)
   [ RUN      ] StMgr.test_back_pressure_instance
   [       OK ] StMgr.test_back_pressure_instance (4002 ms)
   [ RUN      ] StMgr.test_spout_death_under_backpressure
   [       OK ] StMgr.test_spout_death_under_backpressure (6144 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr
   [       OK ] StMgr.test_back_pressure_stmgr (5005 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr_reconnect
   [       OK ] StMgr.test_back_pressure_stmgr_reconnect (4035 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_new_address
   [       OK ] StMgr.test_tmanager_restart_on_new_address (5015 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_same_address
   [       OK ] StMgr.test_tmanager_restart_on_same_address (4008 ms)
   [ RUN      ] StMgr.test_metricsmgr_reconnect
   [       OK ] StMgr.test_metricsmgr_reconnect (4004 ms)
   [       OK ] StMgr.test_metricsmgr_reconnect (4004 ms)
   [ RUN      ] StMgr.test_PatchPhysicalPlanWithHydratedTopology
   [       OK ] StMgr.test_PatchPhysicalPlanWithHydratedTopology (0 ms)
   [----------] 11 tests from StMgr (38241 ms total)
   
   [----------] Global test environment tear-down
   [==========] 11 tests from 1 test case ran. (38241 ms total)
   [  PASSED  ] 11 tests.
   ================================================================================
   Target //heron/stmgr/tests/cpp/server:stmgr_unittest up-to-date:
     bazel-bin/heron/stmgr/tests/cpp/server/stmgr_unittest
   INFO: Elapsed time: 39.218s, Critical Path: 38.29s
   INFO: 2 processes: 1 internal, 1 local.
   INFO: Build completed successfully, 2 total actions
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   
   INFO: Build completed successfully, 2 total actions
   ```
   </p>
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-931809880


   I might be wrong, but I think the RBAC is handled automatically: when you get the default API client, it knows to use the service-account token that is mounted in every pod. I thought updating the `Role` would fix it, but I still get `Forbidden` in the API server's log output. I'll play a bit more with it tonight.
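   For reference, a minimal sketch of the kind of RBAC rule that would grant the scheduler read access to ConfigMaps (the `Role` name, namespace, and verbs here are illustrative assumptions; the actual names in a Heron deployment may differ):
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: heron-scheduler-role   # hypothetical name; match your deployment's Role
     namespace: default
   rules:
     - apiGroups: [""]            # "" is the core API group, which holds ConfigMaps
       resources: ["configmaps"]
       verbs: ["get", "watch", "list"]
   ```
   The `Role` would also need a `RoleBinding` tying it to the ServiceAccount the pod runs under; otherwise the mounted token still yields `Forbidden`.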





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932808466


   Is there an issue parsing the Pod Template, or does it occur at the K8s deployment level? Could it be a permissions issue?





[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926527047


   > > Thank you for steps to reproduce. Will look more into this tonight.
   > 
   > Thank you @joshfischer1108, Travis CI seems to be passing all tests now. I think we are well-positioned to review and test to see if any adjustments need to be made.
   
   I had some issues building last night. I was hoping that all would work in the Docker image, but no luck. This shouldn't affect the work that you have done, though. I'm working on a Mac M1, and we still have some extra work to do for this platform.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926707909


   Thank you @joshfischer1108. I am hoping that updating the build scripts and bringing the dependencies up to date will resolve some, if not all, of the Apple silicon issues. I am aware there are a few virtualization issues with Apple silicon.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-927132082


   I have made the changes to support the `--config-map heron.kubernetes.pod.template.configmap.name=CONFIGMAP-NAME.POD-TEMPLATE-NAME` parameter.
   
   The local build and the full battery of tests are passing. TravisCI appears to be really flaky and is failing even the documentation build, which is based on the current `master/main` branch. I think it is time to seriously look at GitHub Actions, which run natively on Azure. My experience with GH Actions is that it is lightning fast compared to TravisCI, but we may need to ask GH about unlimited open-source runner time.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-927132082


   I have made the changes to support the `--config-map heron.kubernetes.pod.template.configmap.name=CONFIGMAP-NAME.POD-TEMPLATE-NAME` parameter.
   
   The local build and the full battery of tests are passing. TravisCI appears to be really flaky and is failing even the documentation build, which is based on the current `master/main` branch. I think it is time to seriously look at GitHub Actions, which run natively on Azure. My experience with GH Actions is that it is lightning fast compared to TravisCI, but we may need to ask GH about unlimited open-source runner time.
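   
   For illustration, the dotted `CONFIGMAP-NAME.POD-TEMPLATE-NAME` value could be split into its two components as in this Python sketch (the helper name and the first-dot split rule are assumptions for illustration, not Heron's actual Java implementation):
   
   ```python
   def split_pod_template_config(value):
       # Hypothetical helper: assumes the first '.' separates the
       # ConfigMap name from the Pod Template key, mirroring the CLI
       # parameter format quoted above.
       configmap_name, sep, template_name = value.partition(".")
       if not sep or not configmap_name or not template_name:
           raise ValueError(
               "expected '<configmap-name>.<pod-template-name>', got: " + value)
       return configmap_name, template_name
   ```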





[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-917741666


   @surahman Please add unit tests to cover the current code changes.  I realize there are no unit tests atm on this file.  But I think this is a good time to start.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-939336529


   @nicknezis I am confirming that all items are being carried over from the provided Pod Template. `Environment` and `Limits` variables are being augmented, with all Heron defaults taking precedence. We still need to decide whether to also augment the `Ports`, `Volume Mounts`, and `Resource Requests`.
   
   Any container provided in the Pod Template that is not named `executor` is discarded, and the image provided for the `executor` is overwritten by the Heron default value. This is done for security reasons.
   
   In the example below I have added some random `Environment` variables to demonstrate:
   
   <details><summary>pod-template.yaml</summary>
   
   ```yaml
   apiVersion: v1
   kind: PodTemplate
   metadata:
     name: pod-template-example
     namespace: default
   template:
     metadata:
       name: acking-pod-template-example
     spec:
       containers:
         - name: executor
           securityContext:
             allowPrivilegeEscalation: false
           env:
           - name: Porsche
             value: "992 4S GTS"
           - name: Porsche
             value: "992 GT3 Touring"
           - name: Everything-Else
             value: "turds"
         - name: BusyBox
           image: busybox:latest
           env:
           - name: BusyBox_ENV
             value: "should not exist"
   ```
   
   </details>
   
   <details><summary>kubectl describe pods acking-0</summary>
   
   ```bash
   Name:         acking-0
   Namespace:    default
   Priority:     0
   Node:         minikube/192.168.49.2
   Start Time:   Sat, 09 Oct 2021 13:39:11 -0400
   Labels:       app=heron
                 controller-revision-hash=acking-7f746f959c
                 statefulset.kubernetes.io/pod-name=acking-0
                 topology=acking
   Annotations:  prometheus.io/port: 8080
                 prometheus.io/scrape: true
   Status:       Running
   IP:           172.17.0.9
   IPs:
     IP:           172.17.0.9
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Container ID:  docker://2dfcf887ef3eb6893716ebf8a97953a94c27723b2df7649dadb1763e8d5408f5
       Image:         apache/heron:testbuild
       Image ID:      docker://sha256:dfea9b424c7cf8061d495969b54cf862a2cabb582b1576d0f9d0f7cd060a1f7e
       Ports:         6005/TCP, 6006/TCP, 6008/TCP, 6003/TCP, 6004/TCP, 6009/TCP, 6001/TCP, 6002/TCP, 6007/TCP
       Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
      ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--1080570153153064408.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=ackingfef5147f-5e10-4515-98bd-82342e957919 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-binary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --shell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       State:          Running
         Started:      Sat, 09 Oct 2021 13:39:12 -0400
       Ready:          True
       Restart Count:  0
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         Everything-Else:  turds
         Porsche:          992 GT3 Touring
         POD_NAME:         acking-0 (v1:metadata.name)
         Porsche:          992 4S GTS
         HOST:              (v1:status.podIP)
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9hwk (ro)
   Conditions:
     Type              Status
     Initialized       True 
     Ready             True 
     ContainersReady   True 
     PodScheduled      True 
   Volumes:
     kube-api-access-p9hwk:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Guaranteed
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type    Reason     Age   From               Message
     ----    ------     ----  ----               -------
     Normal  Scheduled  21s   default-scheduler  Successfully assigned default/acking-0 to minikube
     Normal  Pulled     20s   kubelet            Container image "apache/heron:testbuild" already present on machine
     Normal  Created    20s   kubelet            Created container executor
     Normal  Started    20s   kubelet            Started container executor
   ```
   
   </details>
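   
   The defaults-take-precedence merge described above can be sketched in Python for illustration (Heron's actual merge is implemented in Java in the `V1Controller`; the function and variable names here are assumptions):
   
   ```python
   def merge_env(template_env, heron_env):
       # Merge environment variables from a Pod Template with Heron's
       # defaults; on a name collision the Heron default wins.
       # Each list mirrors V1EnvVar as {"name": ..., "value": ...} dicts.
       merged = {e["name"]: e["value"] for e in template_env}
       merged.update((e["name"], e["value"]) for e in heron_env)  # defaults win
       return [{"name": n, "value": v} for n, v in merged.items()]
   
   template_env = [{"name": "Porsche", "value": "992 GT3 Touring"},
                   {"name": "POD_NAME", "value": "from-template"}]
   heron_env = [{"name": "POD_NAME", "value": "acking-0"},
                {"name": "HOST", "value": "172.17.0.9"}]
   result = {e["name"]: e["value"] for e in merge_env(template_env, heron_env)}
   ```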





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-927182139


   I have added `INFO` level logging to `loadPodFromTemplate` to indicate whether a default or custom Pod Template was configured. The builds and all tests are passing locally but I have not merged in case there are Travis CI issues. The changes are at the head of the `dev` branch.
   
   Edit: Merged, TravisCI 🤞🏼.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-955314447


   Thank you @nicknezis for all the time and effort you have put into this PR, I really appreciate it.





[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-957606350


   I will wait for that PR to come in, and then I think we are ready to create another Heron release and work towards graduating out of the incubator.
   
   💯 





[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-947905924


   > I have added a check for multiple `executor` container specifications in the provided Pod Template. I am conflicted about whether we should be checking for this. It should not be the responsibility of the framework to verify a valid Pod Template. The complete battery of tests is passing locally.
   > 
   > @nicknezis I think it is safe to say that the core of the functionality for this feature is now complete, and as such, I am leaving this PR for review. I am also pinging @joshfischer1108 in case we need more reviewers.
   
   I'll be able to look over this PR and the accompanying docs this weekend. I've recently restarted the build as well.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-920551592


   I think I have an idea of what needs to happen... time to iterate :wink:.
   
   @nicknezis You are correct, Spark's architecture has a driver/coordinator with a fleet of executors. Is there already a `loadPodFromTemplate` in Heron, or do we need to put one together? I could not find anything in the `V1Controller` or the K8s scheduler codebase in Heron. [This](https://github.com/apache/spark/blob/bbb33af2e4c90d679542298f56073a71de001fb7/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala#L105) indicates there is a built-in function in the K8s client library (Scala, and potentially Java too) that should handle parsing and assembling the Pod Template.
   
   Currently `createStatefulSet` relies on the `V1` K8S API to put together a default Pod Template in which most of the fields are simply set to `null`.
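   
   In the spirit of Spark's helper, the ConfigMap lookup step could be sketched as follows (Python for illustration only; the function name and error handling are assumptions, not the K8s client API):
   
   ```python
   def extract_pod_template(configmap_data, template_key):
       # configmap_data mirrors V1ConfigMap.data: a dict of key -> string.
       # Fail fast with a clear error when the key is absent, rather than
       # silently falling back to a default pod spec.
       if not configmap_data or template_key not in configmap_data:
           raise KeyError(
               "Pod Template key '%s' not found in ConfigMap" % template_key)
       return configmap_data[template_key]
   
   data = {"pod-template-example": "apiVersion: v1\nkind: PodTemplate\n"}
   ```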





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-927182139


   I have added `INFO` level logging to `loadPodFromTemplate` to indicate whether a default or custom Pod Template was configured. The builds and all tests are passing locally but I have not merged in case there are Travis CI issues. The changes are at the head of the `dev` branch.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926956359


   Hi @nicknezis, no worries.
   
   ~What I will do is set the Pod Template name to [`podspec-configmap-key`](https://github.com/apache/spark/blob/ff3f3c45668364da9bd10992791b5ee9a46fea21/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Constants.scala#L86) to keep things familiar with Spark's methodology and reserve the CLI input via `--config-property` to the name of the actual ConfigMap to lookup from the list returned from the client.~
   
   Sorry, just re-read your comment - I shall go ahead with my previous plan.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-917716337


   I'll try testing this tonight. I hadn't noticed that the Spark code used a volume mount. Will have to review that this does what we need. As you said, if not, we can iterate as needed. Thank you for working this feature request.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-934015788


   > Yes, and I added some logic to set labels and annotations with config properties. So no need to use PodTemplate for setting those. 
   
   👍🏼 
   
   > We should list the parts of the PodTemplate that will be replaced, and the config items that can be used to set them (i.e. Env variables, labels, annotations).
   
   We shall add that to the other PR for documentation - I shall add a note there.
   
   > For `getPodSpec()`, maybe we just always modify the `PodSpec` that exists on the PodTemplate (instead of setting a brand new `PodSpec`).
   
   Sounds like a plan.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-933895467


   Oh cool. I'll test this out tonight. I was having trouble creating a pod template that would parse properly. I can test with the one you posted.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932461676


   If it so happens that a Role is required, it might [not be feasible](https://unofficial-kubernetes.readthedocs.io/en/latest/admin/authorization/rbac/) to set it up programmatically:
   
   > A user can only create/update a role if they already have all the permissions contained in the role, at the same scope as the role (cluster-wide for a `ClusterRole`, within the same namespace or cluster-wide for a `Role`).
   
   Here is an example of a Role that can read a specific ConfigMap:
   ```yaml
   rules:
   - apiGroups: [""]
     resources: ["configmaps"]
     resourceNames: ["my-config"]
     verbs: ["get"]
   ```
   
   It might also be better to require the user to manually configure the Role in K8s than to silently force it upon them.
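   
   To make the scoping concrete, the rule above can be modelled with a small Python sketch (a simplification for illustration, not the real Kubernetes authorizer):
   
   ```python
   ROLE_RULE = {"resources": {"configmaps"},
                "resourceNames": {"my-config"},
                "verbs": {"get"}}
   
   def allowed(resource, name, verb, rule=ROLE_RULE):
       # A request passes only if the rule matches its resource type,
       # resource name, and verb; everything else is denied.
       return (resource in rule["resources"]
               and name in rule["resourceNames"]
               and verb in rule["verbs"])
   ```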





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932636387


   I am not sure if you tried this, but I think we need to set up a [`Service Account`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) and assign it to the Heron API Server Pod. We then bind the Role to the `Service Account` [like so](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects).
   
   From [Stack Overflow](https://stackoverflow.com/questions/52995962/kubernetes-namespace-default-service-account):
   
   > 5. The default permissions for a service account don't allow it to list or modify any resources. The default service account isn't allowed to view cluster state let alone modify it in any way.
   
   **_Edit:_**
   It appears as though the `ClusterRoles` and `ServiceAccount` are in the K8s configs for the Heron API Server:
   
   * [General](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/general/apiserver.yaml)
   * [Minikube](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/minikube/apiserver.yaml)
   
   This makes life a lot easier, with only the following additionally required:
   
   <details>
     <summary>Role</summary>
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: heron-apiserver-configmap-role
     namespace: default
   rules:
   - apiGroups:
     - ""
     resources:
     - configmaps
     verbs:
     - get
     - watch
     - list
   ```
   </details>
   
   <details>
     <summary>RoleBinding</summary>
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: heron-apiserver-configmap-rolebinding
     namespace: default
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: Role
     name: heron-apiserver-configmap-role
   subjects:
   - kind: ServiceAccount
     name: heron-apiserver
     namespace: default
   ```
   </details>
   
   I think it would be safe to add these to the Heron API Server K8s configs because the Role is adequately restrictive. It would be very unwise of anyone to place sensitive information in a general resource in any namespace; they should be using a `Secret`, and I believe we should not be opening a security loophole here.
   
   I also believe it is possible to assign multiple `Role`s and `ClusterRole`s to the same `ServiceAccount`; RBAC is additive and only whitelists permissions. @nicknezis, when you have some time, could you please look over the `Role`s and test them? If all is well I can update the K8s deployment scripts.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-938223306


   I have made some enhancements to allow merging the default executor container configuration with one provided in the Pod Template. The Heron defaults will overwrite anything provided in the Pod Template by a user. This enhancement allows for some tweaking of the executor container specs within constraints.
   
   Only a single container is permitted per executor. This is important to prevent additional containers from being launched within a Pod.
   
   What are your thoughts? Should we be allowing the executor container to be modified?
   
   **_Edit:_** If we do want a Pod Template to be able to provide additional configs, I am going to 🗑️  and 🔥 the `mergeExecutorContainer` and just modify `getContainer`. It is a simpler and cleaner solution to pass in the `V1Container` object once it has been parsed from input and let `getContainer` overwrite it with the Heron defaults 🤦🏼‍♂️.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-952473046


   What is wrong with TravisCI!? 🤦🏼‍♂️ This is my last try at force-pushing to get the job onto a TravisCI cluster capable of passing the build.





[GitHub] [incubator-heron] nicknezis commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r739752330



##########
File path: .travis.yml
##########
@@ -47,4 +46,4 @@ script:
   - python -V
   - which python3
   - python3 -V
-  - travis-wait-improved --timeout=180m scripts/travis/ci.sh

Review comment:
       Yes, that script was causing the issue due to Python dependencies. I tested without the script and it worked. I think the default behavior is to terminate a build if there is no output for more than 10 minutes. I'm not sure if we have this issue in our build. If we do need to put it back, then we will have to resolve the Python issues.







[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-957592887


   > Thank you @joshfischer1108 I really appreciate you looking these changes over!
   
   Ok the build is green 🙌 .  Let's go!





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-945343876


   Simple enough set of changes given the tweaks I made earlier. The full battery of tests is passing locally - here is to hoping TravisCI is in a good mood 🤞🏼.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-945310763


   This is a good question. I think the merging would be best because there currently is no other way to add tolerations if they are desired.
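   For reference, a pod template carrying tolerations could be supplied through a `ConfigMap` along these lines (a hypothetical sketch - the `heron-pod-template` name, the `pod-template.yaml` key, and the toleration values are illustrative, not taken from this PR):
   
   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: heron-pod-template   # hypothetical ConfigMap name
   data:
     pod-template.yaml: |       # hypothetical key holding the template
       apiVersion: v1
       kind: PodTemplate
       template:
         spec:
           tolerations:
             - key: "dedicated"     # illustrative taint key
               operator: "Equal"
               value: "heron"
               effect: "NoSchedule"
   ```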





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-952204247


   I got rid of the `listNamespaceConfigMaps` usage. Utilizing this really bothered me because in an actual production K8s cluster there are potentially thousands of `ConfigMap`s loaded in any given namespace. This could lead to memory issues on the Heron API server, not to mention it is exceedingly inefficient. If I can retrieve a specific `ConfigMap` on the CLI then it stands to reason that there should be a matching Java API call: `readNamespacedConfigMap`.
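   A minimal sketch of what that targeted lookup looks like with the official Kubernetes Java client (hypothetical - the `pod-template-configmap` name is illustrative, and the `readNamespacedConfigMap` signature varies across client-java releases; this follows the 10.x-era three-argument form and requires a reachable cluster plus a kubeconfig):
   
   ```java
   import io.kubernetes.client.openapi.ApiClient;
   import io.kubernetes.client.openapi.ApiException;
   import io.kubernetes.client.openapi.apis.CoreV1Api;
   import io.kubernetes.client.openapi.models.V1ConfigMap;
   import io.kubernetes.client.util.Config;
   
   public final class ConfigMapFetchSketch {
     public static void main(String[] args) throws Exception {
       // Loads cluster credentials from ~/.kube/config (or in-cluster config).
       ApiClient client = Config.defaultClient();
       CoreV1Api api = new CoreV1Api(client);
       try {
         // Fetch only the named ConfigMap instead of listing the whole namespace.
         V1ConfigMap configMap =
             api.readNamespacedConfigMap("pod-template-configmap", "default", null);
         System.out.println(configMap.getData());
       } catch (ApiException e) {
         // A 404 here means the named ConfigMap does not exist in the namespace.
         System.err.println("Failed to read ConfigMap: HTTP " + e.getCode());
       }
     }
   }
   ```
   
   The single `GET` keeps the API server's response bounded by one object rather than the full namespace listing.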
   
   Changes are on the `dev` branch and everything is passing with the complete testing battery. K8s cluster deployment tests are working as well. I have included the test results below. I hope TravisCI is in a good mood today 🎲 🤞🏼.
   
   <details>
       <summary>Test suite in clean Heron Docker Ubuntu 18.04 LTS</summary>
   
   ```bash
   INFO: Elapsed time: 1787.995s, Critical Path: 207.90s
   INFO: 5581 processes: 2396 internal, 3185 local.
   INFO: Build completed successfully, 5581 total actions
   //heron/api/tests/cpp:serialization_unittest                             PASSED in 0.0s
   //heron/api/tests/java:BaseWindowedBoltTest                              PASSED in 0.3s
   //heron/api/tests/java:ConfigTest                                        PASSED in 0.4s
   //heron/api/tests/java:CountStatAndMetricTest                            PASSED in 0.3s
   //heron/api/tests/java:GeneralReduceByKeyAndWindowOperatorTest           PASSED in 0.4s
   //heron/api/tests/java:HeronSubmitterTest                                PASSED in 1.9s
   //heron/api/tests/java:JoinOperatorTest                                  PASSED in 0.4s
   //heron/api/tests/java:KVStreamletShadowTest                             PASSED in 0.4s
   //heron/api/tests/java:KeyByOperatorTest                                 PASSED in 0.4s
   //heron/api/tests/java:LatencyStatAndMetricTest                          PASSED in 0.3s
   //heron/api/tests/java:ReduceByKeyAndWindowOperatorTest                  PASSED in 0.4s
   //heron/api/tests/java:StreamletImplTest                                 PASSED in 0.4s
   //heron/api/tests/java:StreamletShadowTest                               PASSED in 0.6s
   //heron/api/tests/java:StreamletUtilsTest                                PASSED in 0.3s
   //heron/api/tests/java:UtilsTest                                         PASSED in 0.2s
   //heron/api/tests/java:WaterMarkEventGeneratorTest                       PASSED in 0.4s
   //heron/api/tests/java:WindowManagerTest                                 PASSED in 0.3s
   //heron/api/tests/java:WindowedBoltExecutorTest                          PASSED in 0.5s
   //heron/api/tests/scala:api-scala-test                                   PASSED in 1.0s
     WARNING: //heron/api/tests/scala:api-scala-test: Test execution time (1.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/ckptmgr/tests/java:CheckpointManagerServerTest                   PASSED in 0.6s
   //heron/common/tests/cpp/basics:fileutils_unittest                       PASSED in 0.0s
   //heron/common/tests/cpp/basics:rid_unittest                             PASSED in 0.0s
   //heron/common/tests/cpp/basics:strutils_unittest                        PASSED in 0.0s
   //heron/common/tests/cpp/basics:utils_unittest                           PASSED in 0.0s
   //heron/common/tests/cpp/config:topology-config-helper_unittest          PASSED in 0.0s
   //heron/common/tests/cpp/errors:errors_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:module_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:syserrs_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/metrics:count-metric_unittest                   PASSED in 0.0s
   //heron/common/tests/cpp/metrics:mean-metric_unittest                    PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-count-metric_unittest             PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-mean-metric_unittest              PASSED in 0.0s
   //heron/common/tests/cpp/metrics:time-spent-metric_unittest              PASSED in 1.3s
   //heron/common/tests/cpp/network:http_unittest                           PASSED in 0.1s
   //heron/common/tests/cpp/network:order_unittest                          PASSED in 0.1s
   //heron/common/tests/cpp/network:packet_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/network:piper_unittest                          PASSED in 2.0s
   //heron/common/tests/cpp/network:rate_limit_unittest                     PASSED in 4.1s
   //heron/common/tests/cpp/network:switch_unittest                         PASSED in 0.2s
   //heron/common/tests/cpp/threads:spcountdownlatch_unittest               PASSED in 2.0s
   //heron/common/tests/java:ByteAmountTest                                 PASSED in 0.3s
   //heron/common/tests/java:CommunicatorTest                               PASSED in 0.3s
   //heron/common/tests/java:ConfigReaderTest                               PASSED in 0.3s
   //heron/common/tests/java:EchoTest                                       PASSED in 0.5s
   //heron/common/tests/java:FileUtilsTest                                  PASSED in 1.3s
   //heron/common/tests/java:HeronServerTest                                PASSED in 1.6s
   //heron/common/tests/java:PackageTypeTest                                PASSED in 0.3s
   //heron/common/tests/java:SysUtilsTest                                   PASSED in 5.3s
   //heron/common/tests/java:SystemConfigTest                               PASSED in 0.4s
   //heron/common/tests/java:TopologyUtilsTest                              PASSED in 0.3s
   //heron/common/tests/java:WakeableLooperTest                             PASSED in 1.3s
   //heron/common/tests/python/pex_loader:pex_loader_unittest               PASSED in 0.7s
   //heron/downloaders/tests/java:DLDownloaderTest                          PASSED in 0.9s
   //heron/downloaders/tests/java:ExtractorTests                            PASSED in 0.3s
   //heron/downloaders/tests/java:RegistryTest                              PASSED in 0.4s
   //heron/executor/tests/python:executor_unittest                          PASSED in 1.1s
   //heron/healthmgr/tests/java:BackPressureDetectorTest                    PASSED in 0.6s
   //heron/healthmgr/tests/java:BackPressureSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:BufferSizeSensorTest                        PASSED in 0.6s
   //heron/healthmgr/tests/java:DataSkewDiagnoserTest                       PASSED in 0.6s
   //heron/healthmgr/tests/java:ExecuteCountSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:GrowingWaitQueueDetectorTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:HealthManagerTest                           PASSED in 0.8s
   //heron/healthmgr/tests/java:HealthPolicyConfigReaderTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:LargeWaitQueueDetectorTest                  PASSED in 0.6s
   //heron/healthmgr/tests/java:MetricsCacheMetricsProviderTest             PASSED in 0.6s
   //heron/healthmgr/tests/java:PackingPlanProviderTest                     PASSED in 0.6s
   //heron/healthmgr/tests/java:ProcessingRateSkewDetectorTest              PASSED in 0.6s
   //heron/healthmgr/tests/java:ScaleUpResolverTest                         PASSED in 0.6s
   //heron/healthmgr/tests/java:SlowInstanceDiagnoserTest                   PASSED in 0.6s
   //heron/healthmgr/tests/java:UnderProvisioningDiagnoserTest              PASSED in 0.5s
   //heron/healthmgr/tests/java:WaitQueueSkewDetectorTest                   PASSED in 0.5s
   //heron/instance/tests/java:ActivateDeactivateTest                       PASSED in 0.5s
   //heron/instance/tests/java:BoltInstanceTest                             PASSED in 0.5s
   //heron/instance/tests/java:BoltStatefulInstanceTest                     PASSED in 2.5s
   //heron/instance/tests/java:ConnectTest                                  PASSED in 0.6s
   //heron/instance/tests/java:CustomGroupingTest                           PASSED in 0.5s
   //heron/instance/tests/java:EmitDirectBoltTest                           PASSED in 0.5s
   //heron/instance/tests/java:EmitDirectSpoutTest                          PASSED in 0.6s
   //heron/instance/tests/java:GlobalMetricsTest                            PASSED in 0.3s
   //heron/instance/tests/java:HandleReadTest                               PASSED in 0.6s
   //heron/instance/tests/java:HandleWriteTest                              PASSED in 5.5s
   //heron/instance/tests/java:MultiAssignableMetricTest                    PASSED in 0.2s
   //heron/instance/tests/java:SpoutInstanceTest                            PASSED in 2.6s
   //heron/instance/tests/java:SpoutStatefulInstanceTest                    PASSED in 2.5s
   //heron/instance/tests/python/network:event_looper_unittest              PASSED in 3.0s
   //heron/instance/tests/python/network:gateway_looper_unittest            PASSED in 11.0s
   //heron/instance/tests/python/network:heron_client_unittest              PASSED in 1.0s
   //heron/instance/tests/python/network:metricsmgr_client_unittest         PASSED in 0.9s
   //heron/instance/tests/python/network:protocol_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/network:st_stmgrcli_unittest               PASSED in 0.9s
   //heron/instance/tests/python/utils:communicator_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:custom_grouping_unittest             PASSED in 0.9s
   //heron/instance/tests/python/utils:global_metrics_unittest              PASSED in 0.9s
   //heron/instance/tests/python/utils:log_unittest                         PASSED in 0.8s
   //heron/instance/tests/python/utils:metrics_helper_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:outgoing_tuple_helper_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:pplan_helper_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:py_metrics_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/utils:topology_context_impl_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:tuple_helper_unittest                PASSED in 0.8s
   //heron/io/dlog/tests/java:DLInputStreamTest                             PASSED in 0.5s
   //heron/io/dlog/tests/java:DLOutputStreamTest                            PASSED in 0.5s
   //heron/metricscachemgr/tests/java:CacheCoreTest                         PASSED in 0.4s
   //heron/metricscachemgr/tests/java:MetricsCacheQueryUtilsTest            PASSED in 0.3s
   //heron/metricscachemgr/tests/java:MetricsCacheTest                      PASSED in 0.5s
   //heron/metricsmgr/tests/java:FileSinkTest                               PASSED in 0.4s
   //heron/metricsmgr/tests/java:HandleTManagerLocationTest                 PASSED in 0.5s
   //heron/metricsmgr/tests/java:MetricsCacheSinkTest                       PASSED in 9.5s
   //heron/metricsmgr/tests/java:MetricsManagerServerTest                   PASSED in 0.5s
   //heron/metricsmgr/tests/java:MetricsUtilTests                           PASSED in 0.3s
   //heron/metricsmgr/tests/java:PrometheusSinkTests                        PASSED in 0.4s
   //heron/metricsmgr/tests/java:SinkExecutorTest                           PASSED in 0.4s
   //heron/metricsmgr/tests/java:TManagerSinkTest                           PASSED in 9.4s
   //heron/metricsmgr/tests/java:WebSinkTest                                PASSED in 0.5s
   //heron/packing/tests/java:FirstFitDecreasingPackingTest                 PASSED in 0.7s
   //heron/packing/tests/java:PackingPlanBuilderTest                        PASSED in 0.4s
   //heron/packing/tests/java:PackingUtilsTest                              PASSED in 0.3s
   //heron/packing/tests/java:ResourceCompliantRRPackingTest                PASSED in 0.7s
   //heron/packing/tests/java:RoundRobinPackingTest                         PASSED in 0.6s
   //heron/packing/tests/java:ScorerTest                                    PASSED in 0.3s
   //heron/scheduler-core/tests/java:HttpServiceSchedulerClientTest         PASSED in 1.8s
   //heron/scheduler-core/tests/java:JsonFormatterUtilsTest                 PASSED in 0.4s
   //heron/scheduler-core/tests/java:LaunchRunnerTest                       PASSED in 1.1s
   //heron/scheduler-core/tests/java:LauncherUtilsTest                      PASSED in 2.0s
   //heron/scheduler-core/tests/java:LibrarySchedulerClientTest             PASSED in 0.4s
   //heron/scheduler-core/tests/java:RuntimeManagerMainTest                 PASSED in 3.7s
   //heron/scheduler-core/tests/java:RuntimeManagerRunnerTest               PASSED in 2.3s
   //heron/scheduler-core/tests/java:SchedulerClientFactoryTest             PASSED in 1.5s
   //heron/scheduler-core/tests/java:SchedulerMainTest                      PASSED in 3.0s
   //heron/scheduler-core/tests/java:SchedulerServerTest                    PASSED in 0.4s
   //heron/scheduler-core/tests/java:SchedulerUtilsTest                     PASSED in 1.2s
   //heron/scheduler-core/tests/java:SubmitDryRunRenderTest                 PASSED in 1.5s
   //heron/scheduler-core/tests/java:SubmitterMainTest                      PASSED in 1.1s
   //heron/scheduler-core/tests/java:UpdateDryRunRenderTest                 PASSED in 1.6s
   //heron/scheduler-core/tests/java:UpdateTopologyManagerTest              PASSED in 11.8s
   //heron/schedulers/tests/java:AuroraCLIControllerTest                    PASSED in 0.4s
   //heron/schedulers/tests/java:AuroraContextTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:AuroraLauncherTest                         PASSED in 1.1s
   //heron/schedulers/tests/java:AuroraSchedulerTest                        PASSED in 2.6s
   //heron/schedulers/tests/java:HeronExecutorTaskTest                      PASSED in 1.3s
   //heron/schedulers/tests/java:HeronMasterDriverTest                      PASSED in 1.7s
   //heron/schedulers/tests/java:KubernetesContextTest                      PASSED in 0.4s
   //heron/schedulers/tests/java:KubernetesControllerTest                   PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesLauncherTest                     PASSED in 0.8s
   //heron/schedulers/tests/java:KubernetesSchedulerTest                    PASSED in 1.0s
   //heron/schedulers/tests/java:KubernetesUtilsTest                        PASSED in 0.4s
   //heron/schedulers/tests/java:LaunchableTaskTest                         PASSED in 0.5s
   //heron/schedulers/tests/java:LocalLauncherTest                          PASSED in 1.1s
   //heron/schedulers/tests/java:LocalSchedulerTest                         PASSED in 0.5s
   //heron/schedulers/tests/java:MarathonControllerTest                     PASSED in 1.1s
   //heron/schedulers/tests/java:MarathonLauncherTest                       PASSED in 0.8s
   //heron/schedulers/tests/java:MarathonSchedulerTest                      PASSED in 0.4s
   //heron/schedulers/tests/java:MesosFrameworkTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MesosLauncherTest                          PASSED in 0.7s
   //heron/schedulers/tests/java:MesosSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:NomadSchedulerTest                         PASSED in 2.6s
   //heron/schedulers/tests/java:SlurmControllerTest                        PASSED in 1.1s
   //heron/schedulers/tests/java:SlurmLauncherTest                          PASSED in 1.0s
   //heron/schedulers/tests/java:SlurmSchedulerTest                         PASSED in 1.0s
   //heron/schedulers/tests/java:TaskResourcesTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:TaskUtilsTest                              PASSED in 0.3s
   //heron/schedulers/tests/java:V1ControllerTest                           PASSED in 1.6s
   //heron/schedulers/tests/java:VolumesTests                               PASSED in 0.6s
   //heron/schedulers/tests/java:YarnLauncherTest                           PASSED in 0.6s
   //heron/schedulers/tests/java:YarnSchedulerTest                          PASSED in 0.4s
   //heron/simulator/tests/java:AllGroupingTest                             PASSED in 0.3s
   //heron/simulator/tests/java:CustomGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:FieldsGroupingTest                          PASSED in 0.8s
   //heron/simulator/tests/java:InstanceExecutorTest                        PASSED in 0.5s
   //heron/simulator/tests/java:LowestGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:RotatingMapTest                             PASSED in 0.3s
   //heron/simulator/tests/java:ShuffleGroupingTest                         PASSED in 0.3s
   //heron/simulator/tests/java:SimulatorTest                               PASSED in 0.4s
   //heron/simulator/tests/java:TopologyManagerTest                         PASSED in 0.4s
   //heron/simulator/tests/java:TupleCacheTest                              PASSED in 0.3s
   //heron/simulator/tests/java:XORManagerTest                              PASSED in 0.5s
   //heron/spi/tests/java:ConfigLoaderTest                                  PASSED in 1.3s
   //heron/spi/tests/java:ConfigTest                                        PASSED in 1.0s
   //heron/spi/tests/java:ContextTest                                       PASSED in 0.3s
   //heron/spi/tests/java:ExceptionInfoTest                                 PASSED in 0.3s
   //heron/spi/tests/java:KeysTest                                          PASSED in 0.4s
   //heron/spi/tests/java:MetricsInfoTest                                   PASSED in 0.2s
   //heron/spi/tests/java:MetricsRecordTest                                 PASSED in 0.2s
   //heron/spi/tests/java:NetworkUtilsTest                                  PASSED in 1.5s
   //heron/spi/tests/java:PackingPlanTest                                   PASSED in 0.3s
   //heron/spi/tests/java:ResourceTest                                      PASSED in 0.3s
   //heron/spi/tests/java:ShellUtilsTest                                    PASSED in 2.7s
   //heron/spi/tests/java:TokenSubTest                                      PASSED in 0.3s
   //heron/spi/tests/java:UploaderUtilsTest                                 PASSED in 0.4s
   //heron/statefulstorages/tests/java:DlogStorageTest                      PASSED in 1.7s
   //heron/statefulstorages/tests/java:HDFSStorageTest                      PASSED in 1.9s
   //heron/statefulstorages/tests/java:LocalFileSystemStorageTest           PASSED in 1.0s
   //heron/statemgrs/tests/cpp:zk-statemgr_unittest                         PASSED in 0.0s
   //heron/statemgrs/tests/java:CuratorStateManagerTest                     PASSED in 0.5s
   //heron/statemgrs/tests/java:LocalFileSystemStateManagerTest             PASSED in 1.2s
   //heron/statemgrs/tests/java:ZkUtilsTest                                 PASSED in 1.6s
   //heron/statemgrs/tests/python:configloader_unittest                     PASSED in 0.9s
   //heron/statemgrs/tests/python:statemanagerfactory_unittest              PASSED in 0.9s
   //heron/statemgrs/tests/python:zkstatemanager_unittest                   PASSED in 0.9s
   //heron/stmgr/tests/cpp/grouping:all-grouping_unittest                   PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:custom-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:fields-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:lowest-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:shuffle-grouping_unittest               PASSED in 0.0s
   //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest               PASSED in 1.1s
     WARNING: //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest: Test execution time (1.1s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stateful-restorer_unittest                PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/server:stateful-restorer_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 39.2s
   //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest               PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/util:rotating-map_unittest                       PASSED in 0.0s
   //heron/stmgr/tests/cpp/util:tuple-cache_unittest                        PASSED in 3.7s
   //heron/stmgr/tests/cpp/util:xor-manager_unittest                        PASSED in 4.0s
   //heron/tmanager/tests/cpp/server:stateful_checkpointer_unittest         PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest             PASSED in 5.0s
   //heron/tmanager/tests/cpp/server:tcontroller_unittest                   PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:tmanager_unittest                      PASSED in 26.1s
   //heron/tools/apiserver/tests/java:ConfigUtilsTests                      PASSED in 0.4s
   //heron/tools/apiserver/tests/java:TopologyResourceTests                 PASSED in 0.7s
   //heron/tools/cli/tests/python:client_command_unittest                   PASSED in 1.0s
   //heron/tools/cli/tests/python:opts_unittest                             PASSED in 0.8s
   //heron/tools/explorer/tests/python:explorer_unittest                    PASSED in 1.1s
   //heron/tools/tracker/tests/python:query_operator_unittest               PASSED in 1.4s
   //heron/tools/tracker/tests/python:query_unittest                        PASSED in 1.3s
   //heron/tools/tracker/tests/python:topology_unittest                     PASSED in 1.2s
   //heron/tools/tracker/tests/python:tracker_unittest                      PASSED in 1.7s
   //heron/uploaders/tests/java:DlogUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:GcsUploaderTests                            PASSED in 0.4s
   //heron/uploaders/tests/java:HdfsUploaderTest                            PASSED in 0.4s
   //heron/uploaders/tests/java:HttpUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:LocalFileSystemConfigTest                   PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemContextTest                  PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemUploaderTest                 PASSED in 0.4s
   //heron/uploaders/tests/java:S3UploaderTest                              PASSED in 1.2s
   //heron/uploaders/tests/java:ScpUploaderTest                             PASSED in 0.4s
   
   INFO: Build completed successfully, 5581 total actions
   ```
   
   </details>
   
   
   <details>
       <summary>Describe Pod acking-0</summary>
   
   ```bash
   Name:           acking-0
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-7c4b6d7bb8
                   statefulset.kubernetes.io/pod-name=acking-0
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       5555/TCP, 5556/UDP, 6001/TCP, 6002/TCP, 6003/TCP, 6004/TCP, 6005/TCP, 6006/TCP, 6007/TCP, 6008/TCP, 6009/TCP
       Host Ports:  0/TCP, 0/UDP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
         ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--5007119198915049925.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking6cbc52b1-58bb-485a-866b-b4c4f5ce2a42 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-b
 inary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --sh
 ell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:        (v1:status.podIP)
         POD_NAME:   acking-0 (v1:metadata.name)
         var_one:    variable one
         var_three:  variable three
         var_two:    variable two
       Mounts:
         /shared_volume from shared-volume (rw)
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4cxrh (ro)
     sidecar-container:
       Image:        alpine
       Port:         <none>
       Host Port:    <none>
       Environment:  <none>
       Mounts:
         /shared_volume from shared-volume (rw)
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4cxrh (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     shared-volume:
       Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
       Medium:     
       SizeLimit:  <unset>
     kube-api-access-4cxrh:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Burstable
   Node-Selectors:              <none>
   Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 10s
   Events:
     Type     Reason            Age   From               Message
     ----     ------            ----  ----               -------
     Warning  FailedScheduling  44s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   ```
   
   </details>





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926128468


   This is a perplexing issue. The tests appear to pass in a fresh Ubuntu 18.04 Docker container on a better-spec'd machine but fail on a lower-spec'd machine. Commands to set up and execute the build and tests are as follows:
   
   ```bash
   cd heron_root_dir
   docker/scripts/dev-env-create.sh heron-dev
   
   ./bazel_configure.py
   bazel build --config=ubuntu heron/...
   
   bazel test --config=ubuntu --test_output=errors --cache_test_results=no --test_verbose_timeout_warnings heron/...
   bazel test --config=ubuntu --test_output=all --cache_test_results=no --test_verbose_timeout_warnings heron/stmgr/tests/cpp/server:stmgr_unittest
   ```
   
   When I `cat` the `V1Controller.java` file in the container's terminal, `loadPodFromTemplate` is as it should be and wired up correctly in `createStatefulSet` on line 387. Could someone else please clone the repo and test if it passes the build? @joshfischer1108, @nicknezis, and anyone else who has some time, I could use an extra set of objective eyes on this. I am sure it is just something silly that got overlooked.
   
   I have also stripped out the `loadPodFromTemplate` function and set it up to simply return a new/empty `V1PodTemplateSpec`, to no avail on the more constrained system:
   
   ```java
     private V1PodTemplateSpec loadPodFromTemplate() {
   
       return new V1PodTemplateSpec();
     }
   ```
   
   <details><summary>Complete test output - higher spec</summary>
   <p>
   
   ```bash
   INFO: Elapsed time: 459.454s, Critical Path: 42.93s
   INFO: 1374 processes: 331 internal, 1043 local.
   INFO: Build completed successfully, 1374 total actions
   //heron/api/tests/cpp:serialization_unittest                             PASSED in 0.0s
   //heron/api/tests/java:BaseWindowedBoltTest                              PASSED in 0.3s
   //heron/api/tests/java:ConfigTest                                        PASSED in 0.2s
   //heron/api/tests/java:CountStatAndMetricTest                            PASSED in 0.3s
   //heron/api/tests/java:GeneralReduceByKeyAndWindowOperatorTest           PASSED in 0.4s
   //heron/api/tests/java:HeronSubmitterTest                                PASSED in 1.9s
   //heron/api/tests/java:JoinOperatorTest                                  PASSED in 0.4s
   //heron/api/tests/java:KVStreamletShadowTest                             PASSED in 0.5s
   //heron/api/tests/java:KeyByOperatorTest                                 PASSED in 0.5s
   //heron/api/tests/java:LatencyStatAndMetricTest                          PASSED in 0.3s
   //heron/api/tests/java:ReduceByKeyAndWindowOperatorTest                  PASSED in 0.4s
   //heron/api/tests/java:StreamletImplTest                                 PASSED in 0.4s
   //heron/api/tests/java:StreamletShadowTest                               PASSED in 0.4s
   //heron/api/tests/java:StreamletUtilsTest                                PASSED in 0.3s
   //heron/api/tests/java:UtilsTest                                         PASSED in 0.3s
   //heron/api/tests/java:WaterMarkEventGeneratorTest                       PASSED in 0.4s
   //heron/api/tests/java:WindowManagerTest                                 PASSED in 0.3s
   //heron/api/tests/java:WindowedBoltExecutorTest                          PASSED in 0.5s
   //heron/api/tests/scala:api-scala-test                                   PASSED in 0.9s
     WARNING: //heron/api/tests/scala:api-scala-test: Test execution time (0.9s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/ckptmgr/tests/java:CheckpointManagerServerTest                   PASSED in 0.6s
   //heron/common/tests/cpp/basics:fileutils_unittest                       PASSED in 0.0s
   //heron/common/tests/cpp/basics:rid_unittest                             PASSED in 0.0s
   //heron/common/tests/cpp/basics:strutils_unittest                        PASSED in 0.0s
   //heron/common/tests/cpp/basics:utils_unittest                           PASSED in 0.0s
   //heron/common/tests/cpp/config:topology-config-helper_unittest          PASSED in 0.0s
   //heron/common/tests/cpp/errors:errors_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:module_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:syserrs_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/metrics:count-metric_unittest                   PASSED in 0.0s
   //heron/common/tests/cpp/metrics:mean-metric_unittest                    PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-count-metric_unittest             PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-mean-metric_unittest              PASSED in 0.0s
   //heron/common/tests/cpp/metrics:time-spent-metric_unittest              PASSED in 1.3s
   //heron/common/tests/cpp/network:http_unittest                           PASSED in 0.1s
   //heron/common/tests/cpp/network:order_unittest                          PASSED in 0.1s
   //heron/common/tests/cpp/network:packet_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/network:piper_unittest                          PASSED in 2.0s
   //heron/common/tests/cpp/network:rate_limit_unittest                     PASSED in 4.1s
   //heron/common/tests/cpp/network:switch_unittest                         PASSED in 0.2s
   //heron/common/tests/cpp/threads:spcountdownlatch_unittest               PASSED in 2.0s
   //heron/common/tests/java:ByteAmountTest                                 PASSED in 0.3s
   //heron/common/tests/java:CommunicatorTest                               PASSED in 0.3s
   //heron/common/tests/java:ConfigReaderTest                               PASSED in 0.3s
   //heron/common/tests/java:EchoTest                                       PASSED in 0.6s
   //heron/common/tests/java:FileUtilsTest                                  PASSED in 1.0s
   //heron/common/tests/java:HeronServerTest                                PASSED in 1.5s
   //heron/common/tests/java:PackageTypeTest                                PASSED in 0.3s
   //heron/common/tests/java:SysUtilsTest                                   PASSED in 4.9s
   //heron/common/tests/java:SystemConfigTest                               PASSED in 0.4s
   //heron/common/tests/java:TopologyUtilsTest                              PASSED in 0.4s
   //heron/common/tests/java:WakeableLooperTest                             PASSED in 1.3s
   //heron/common/tests/python/pex_loader:pex_loader_unittest               PASSED in 0.7s
   //heron/downloaders/tests/java:DLDownloaderTest                          PASSED in 0.8s
   //heron/downloaders/tests/java:ExtractorTests                            PASSED in 0.4s
   //heron/downloaders/tests/java:RegistryTest                              PASSED in 0.4s
   //heron/executor/tests/python:executor_unittest                          PASSED in 1.0s
   //heron/healthmgr/tests/java:BackPressureDetectorTest                    PASSED in 0.6s
   //heron/healthmgr/tests/java:BackPressureSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:BufferSizeSensorTest                        PASSED in 0.5s
   //heron/healthmgr/tests/java:DataSkewDiagnoserTest                       PASSED in 0.5s
   //heron/healthmgr/tests/java:ExecuteCountSensorTest                      PASSED in 0.5s
   //heron/healthmgr/tests/java:GrowingWaitQueueDetectorTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:HealthManagerTest                           PASSED in 0.9s
   //heron/healthmgr/tests/java:HealthPolicyConfigReaderTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:LargeWaitQueueDetectorTest                  PASSED in 0.7s
   //heron/healthmgr/tests/java:MetricsCacheMetricsProviderTest             PASSED in 0.6s
   //heron/healthmgr/tests/java:PackingPlanProviderTest                     PASSED in 0.6s
   //heron/healthmgr/tests/java:ProcessingRateSkewDetectorTest              PASSED in 0.6s
   //heron/healthmgr/tests/java:ScaleUpResolverTest                         PASSED in 0.7s
   //heron/healthmgr/tests/java:SlowInstanceDiagnoserTest                   PASSED in 0.6s
   //heron/healthmgr/tests/java:UnderProvisioningDiagnoserTest              PASSED in 0.5s
   //heron/healthmgr/tests/java:WaitQueueSkewDetectorTest                   PASSED in 0.5s
   //heron/instance/tests/java:ActivateDeactivateTest                       PASSED in 0.5s
   //heron/instance/tests/java:BoltInstanceTest                             PASSED in 0.5s
   //heron/instance/tests/java:BoltStatefulInstanceTest                     PASSED in 2.5s
   //heron/instance/tests/java:ConnectTest                                  PASSED in 0.6s
   //heron/instance/tests/java:CustomGroupingTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectBoltTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectSpoutTest                          PASSED in 0.8s
   //heron/instance/tests/java:GlobalMetricsTest                            PASSED in 0.3s
   //heron/instance/tests/java:HandleReadTest                               PASSED in 0.7s
   //heron/instance/tests/java:HandleWriteTest                              PASSED in 5.8s
   //heron/instance/tests/java:MultiAssignableMetricTest                    PASSED in 0.3s
   //heron/instance/tests/java:SpoutInstanceTest                            PASSED in 2.6s
   //heron/instance/tests/java:SpoutStatefulInstanceTest                    PASSED in 2.5s
   //heron/instance/tests/python/network:event_looper_unittest              PASSED in 2.9s
   //heron/instance/tests/python/network:gateway_looper_unittest            PASSED in 10.8s
   //heron/instance/tests/python/network:heron_client_unittest              PASSED in 0.9s
   //heron/instance/tests/python/network:metricsmgr_client_unittest         PASSED in 1.0s
   //heron/instance/tests/python/network:protocol_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/network:st_stmgrcli_unittest               PASSED in 0.9s
   //heron/instance/tests/python/utils:communicator_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:custom_grouping_unittest             PASSED in 0.9s
   //heron/instance/tests/python/utils:global_metrics_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:log_unittest                         PASSED in 0.8s
   //heron/instance/tests/python/utils:metrics_helper_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:outgoing_tuple_helper_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:pplan_helper_unittest                PASSED in 1.0s
   //heron/instance/tests/python/utils:py_metrics_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/utils:topology_context_impl_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:tuple_helper_unittest                PASSED in 0.9s
   //heron/io/dlog/tests/java:DLInputStreamTest                             PASSED in 0.5s
   //heron/io/dlog/tests/java:DLOutputStreamTest                            PASSED in 0.5s
   //heron/metricscachemgr/tests/java:CacheCoreTest                         PASSED in 0.5s
   //heron/metricscachemgr/tests/java:MetricsCacheQueryUtilsTest            PASSED in 0.4s
   //heron/metricscachemgr/tests/java:MetricsCacheTest                      PASSED in 0.4s
   //heron/metricsmgr/tests/java:FileSinkTest                               PASSED in 0.5s
   //heron/metricsmgr/tests/java:HandleTManagerLocationTest                 PASSED in 0.5s
   //heron/metricsmgr/tests/java:MetricsCacheSinkTest                       PASSED in 9.4s
   //heron/metricsmgr/tests/java:MetricsManagerServerTest                   PASSED in 0.6s
   //heron/metricsmgr/tests/java:MetricsUtilTests                           PASSED in 0.4s
   //heron/metricsmgr/tests/java:PrometheusSinkTests                        PASSED in 0.4s
   //heron/metricsmgr/tests/java:SinkExecutorTest                           PASSED in 0.5s
   //heron/metricsmgr/tests/java:TManagerSinkTest                           PASSED in 9.4s
   //heron/metricsmgr/tests/java:WebSinkTest                                PASSED in 0.5s
   //heron/packing/tests/java:FirstFitDecreasingPackingTest                 PASSED in 0.6s
   //heron/packing/tests/java:PackingPlanBuilderTest                        PASSED in 0.4s
   //heron/packing/tests/java:PackingUtilsTest                              PASSED in 0.4s
   //heron/packing/tests/java:ResourceCompliantRRPackingTest                PASSED in 0.8s
   //heron/packing/tests/java:RoundRobinPackingTest                         PASSED in 0.6s
   //heron/packing/tests/java:ScorerTest                                    PASSED in 0.2s
   //heron/scheduler-core/tests/java:HttpServiceSchedulerClientTest         PASSED in 1.0s
   //heron/scheduler-core/tests/java:JsonFormatterUtilsTest                 PASSED in 0.4s
   //heron/scheduler-core/tests/java:LaunchRunnerTest                       PASSED in 1.1s
   //heron/scheduler-core/tests/java:LauncherUtilsTest                      PASSED in 1.7s
   //heron/scheduler-core/tests/java:LibrarySchedulerClientTest             PASSED in 0.5s
   //heron/scheduler-core/tests/java:RuntimeManagerMainTest                 PASSED in 2.3s
   //heron/scheduler-core/tests/java:RuntimeManagerRunnerTest               PASSED in 2.1s
   //heron/scheduler-core/tests/java:SchedulerClientFactoryTest             PASSED in 1.1s
   //heron/scheduler-core/tests/java:SchedulerMainTest                      PASSED in 3.0s
   //heron/scheduler-core/tests/java:SchedulerServerTest                    PASSED in 0.5s
   //heron/scheduler-core/tests/java:SchedulerUtilsTest                     PASSED in 1.2s
   //heron/scheduler-core/tests/java:SubmitDryRunRenderTest                 PASSED in 1.6s
   //heron/scheduler-core/tests/java:SubmitterMainTest                      PASSED in 1.1s
   //heron/scheduler-core/tests/java:UpdateDryRunRenderTest                 PASSED in 1.5s
   //heron/scheduler-core/tests/java:UpdateTopologyManagerTest              PASSED in 11.9s
   //heron/schedulers/tests/java:AuroraCLIControllerTest                    PASSED in 0.5s
   //heron/schedulers/tests/java:AuroraContextTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:AuroraLauncherTest                         PASSED in 0.8s
   //heron/schedulers/tests/java:AuroraSchedulerTest                        PASSED in 2.9s
   //heron/schedulers/tests/java:HeronExecutorTaskTest                      PASSED in 1.5s
   //heron/schedulers/tests/java:HeronMasterDriverTest                      PASSED in 1.7s
   //heron/schedulers/tests/java:KubernetesContextTest                      PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesControllerTest                   PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesLauncherTest                     PASSED in 0.7s
   //heron/schedulers/tests/java:KubernetesSchedulerTest                    PASSED in 0.7s
   //heron/schedulers/tests/java:LaunchableTaskTest                         PASSED in 0.5s
   //heron/schedulers/tests/java:LocalLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:LocalSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MarathonControllerTest                     PASSED in 1.0s
   //heron/schedulers/tests/java:MarathonLauncherTest                       PASSED in 0.7s
   //heron/schedulers/tests/java:MarathonSchedulerTest                      PASSED in 0.4s
   //heron/schedulers/tests/java:MesosFrameworkTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MesosLauncherTest                          PASSED in 0.5s
   //heron/schedulers/tests/java:MesosSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:NomadSchedulerTest                         PASSED in 1.9s
   //heron/schedulers/tests/java:SlurmControllerTest                        PASSED in 1.1s
   //heron/schedulers/tests/java:SlurmLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:SlurmSchedulerTest                         PASSED in 1.1s
   //heron/schedulers/tests/java:TaskResourcesTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:TaskUtilsTest                              PASSED in 0.3s
   //heron/schedulers/tests/java:V1ControllerTest                           PASSED in 1.6s
   //heron/schedulers/tests/java:VolumesTests                               PASSED in 0.3s
   //heron/schedulers/tests/java:YarnLauncherTest                           PASSED in 0.6s
   //heron/schedulers/tests/java:YarnSchedulerTest                          PASSED in 0.4s
   //heron/simulator/tests/java:AllGroupingTest                             PASSED in 0.3s
   //heron/simulator/tests/java:CustomGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:FieldsGroupingTest                          PASSED in 0.7s
   //heron/simulator/tests/java:InstanceExecutorTest                        PASSED in 0.5s
   //heron/simulator/tests/java:LowestGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:RotatingMapTest                             PASSED in 0.3s
   //heron/simulator/tests/java:ShuffleGroupingTest                         PASSED in 0.3s
   //heron/simulator/tests/java:SimulatorTest                               PASSED in 0.4s
   //heron/simulator/tests/java:TopologyManagerTest                         PASSED in 0.4s
   //heron/simulator/tests/java:TupleCacheTest                              PASSED in 0.3s
   //heron/simulator/tests/java:XORManagerTest                              PASSED in 0.4s
   //heron/spi/tests/java:ConfigLoaderTest                                  PASSED in 1.3s
   //heron/spi/tests/java:ConfigTest                                        PASSED in 0.9s
   //heron/spi/tests/java:ContextTest                                       PASSED in 0.3s
   //heron/spi/tests/java:ExceptionInfoTest                                 PASSED in 0.2s
   //heron/spi/tests/java:KeysTest                                          PASSED in 0.2s
   //heron/spi/tests/java:MetricsInfoTest                                   PASSED in 0.2s
   //heron/spi/tests/java:MetricsRecordTest                                 PASSED in 0.2s
   //heron/spi/tests/java:NetworkUtilsTest                                  PASSED in 1.5s
   //heron/spi/tests/java:PackingPlanTest                                   PASSED in 0.3s
   //heron/spi/tests/java:ResourceTest                                      PASSED in 0.3s
   //heron/spi/tests/java:ShellUtilsTest                                    PASSED in 2.2s
   //heron/spi/tests/java:TokenSubTest                                      PASSED in 0.4s
   //heron/spi/tests/java:UploaderUtilsTest                                 PASSED in 0.5s
   //heron/statefulstorages/tests/java:DlogStorageTest                      PASSED in 1.9s
   //heron/statefulstorages/tests/java:HDFSStorageTest                      PASSED in 2.0s
   //heron/statefulstorages/tests/java:LocalFileSystemStorageTest           PASSED in 0.9s
   //heron/statemgrs/tests/cpp:zk-statemgr_unittest                         PASSED in 0.0s
   //heron/statemgrs/tests/java:CuratorStateManagerTest                     PASSED in 0.6s
   //heron/statemgrs/tests/java:LocalFileSystemStateManagerTest             PASSED in 1.2s
   //heron/statemgrs/tests/java:ZkUtilsTest                                 PASSED in 1.0s
   //heron/statemgrs/tests/python:configloader_unittest                     PASSED in 1.0s
   //heron/statemgrs/tests/python:statemanagerfactory_unittest              PASSED in 0.8s
   //heron/statemgrs/tests/python:zkstatemanager_unittest                   PASSED in 0.9s
   //heron/stmgr/tests/cpp/grouping:all-grouping_unittest                   PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:custom-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:fields-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:lowest-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:shuffle-grouping_unittest               PASSED in 0.0s
   //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest               PASSED in 1.2s
     WARNING: //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest: Test execution time (1.2s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stateful-restorer_unittest                PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/server:stateful-restorer_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest               PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/util:rotating-map_unittest                       PASSED in 0.0s
   //heron/stmgr/tests/cpp/util:tuple-cache_unittest                        PASSED in 3.6s
   //heron/stmgr/tests/cpp/util:xor-manager_unittest                        PASSED in 4.0s
   //heron/tmanager/tests/cpp/server:stateful_checkpointer_unittest         PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest             PASSED in 5.0s
   //heron/tmanager/tests/cpp/server:tcontroller_unittest                   PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:tmanager_unittest                      PASSED in 26.1s
   //heron/tools/apiserver/tests/java:ConfigUtilsTests                      PASSED in 0.5s
   //heron/tools/apiserver/tests/java:TopologyResourceTests                 PASSED in 1.1s
   //heron/tools/cli/tests/python:client_command_unittest                   PASSED in 1.1s
   //heron/tools/cli/tests/python:opts_unittest                             PASSED in 0.8s
   //heron/tools/explorer/tests/python:explorer_unittest                    PASSED in 1.1s
   //heron/tools/tracker/tests/python:query_operator_unittest               PASSED in 1.4s
   //heron/tools/tracker/tests/python:query_unittest                        PASSED in 1.3s
   //heron/tools/tracker/tests/python:topology_unittest                     PASSED in 1.2s
   //heron/tools/tracker/tests/python:tracker_unittest                      PASSED in 1.4s
   //heron/uploaders/tests/java:DlogUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:GcsUploaderTests                            PASSED in 0.7s
   //heron/uploaders/tests/java:HdfsUploaderTest                            PASSED in 0.4s
   //heron/uploaders/tests/java:HttpUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:LocalFileSystemConfigTest                   PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemContextTest                  PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemUploaderTest                 PASSED in 0.4s
   //heron/uploaders/tests/java:S3UploaderTest                              PASSED in 0.9s
   //heron/uploaders/tests/java:ScpUploaderTest                             PASSED in 0.5s
   
   INFO: Build completed successfully, 1374 total actions
   ```
   
   </p>
   </details>
   
   <details><summary>//heron/stmgr/tests/cpp/server:stmgr_unittest - higher spec</summary>
   <p>
   
   ```bash
   ==================== Test output for //heron/stmgr/tests/cpp/server:stmgr_unittest:
   Current working directory (to find stmgr logs) /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/bin/heron/stmgr/tests/cpp/server/stmgr_unittest.runfiles/org_apache_heron
   Using config file heron/config/src/yaml/conf/test/test_heron_internals.yaml
   [==========] Running 11 tests from 1 test case.
   [----------] Global test environment set-up.
   [----------] 11 tests from StMgr
   [ RUN      ] StMgr.test_pplan_decode
   [warn] Added a signal to event base 0x55c0c05e0840 with signals already added to event_base 0x55c0c05e0580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e0b00 with signals already added to event_base 0x55c0c05e0840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e0dc0 with signals already added to event_base 0x55c0c05e0b00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1080 with signals already added to event_base 0x55c0c05e0dc0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1600 with signals already added to event_base 0x55c0c05e1080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1b80 with signals already added to event_base 0x55c0c05e1600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2100 with signals already added to event_base 0x55c0c05e1b80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2680 with signals already added to event_base 0x55c0c05e2100.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2c00 with signals already added to event_base 0x55c0c05e2680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3180 with signals already added to event_base 0x55c0c05e2c00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3700 with signals already added to event_base 0x55c0c05e3180.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3c80 with signals already added to event_base 0x55c0c05e3700.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c2c0 with signals already added to event_base 0x55c0c05e3c80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d080 with signals already added to event_base 0x55c0c072c2c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c840 with signals already added to event_base 0x55c0c072d080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d8c0 with signals already added to event_base 0x55c0c072c840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d600 with signals already added to event_base 0x55c0c072d8c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d340 with signals already added to event_base 0x55c0c072d600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072de40 with signals already added to event_base 0x55c0c072d340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072e100 with signals already added to event_base 0x55c0c072de40.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072cdc0 with signals already added to event_base 0x55c0c072e100.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072db80 with signals already added to event_base 0x55c0c072cdc0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072cb00 with signals already added to event_base 0x55c0c072db80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_pplan_decode (2013 ms)
   [ RUN      ] StMgr.test_tuple_route
   [warn] Added a signal to event base 0x55c0c072e100 with signals already added to event_base (nil).  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [... identical libevent "Added a signal to event base" warnings repeated for each event base omitted ...]
   [       OK ] StMgr.test_tuple_route (2006 ms)
   [ RUN      ] StMgr.test_custom_grouping_route
   [... libevent warnings omitted ...]
   [       OK ] StMgr.test_custom_grouping_route (2009 ms)
   [ RUN      ] StMgr.test_back_pressure_instance
   [... libevent warnings omitted ...]
   [       OK ] StMgr.test_back_pressure_instance (4002 ms)
   [ RUN      ] StMgr.test_spout_death_under_backpressure
   [... libevent warnings omitted ...]
   [       OK ] StMgr.test_spout_death_under_backpressure (6144 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr
   [... libevent warnings omitted ...]
   [       OK ] StMgr.test_back_pressure_stmgr (5005 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr_reconnect
   [... libevent warnings omitted ...]
   [       OK ] StMgr.test_back_pressure_stmgr_reconnect (4035 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_new_address
   [... libevent warnings omitted ...]
   [       OK ] StMgr.test_tmanager_restart_on_new_address (5015 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_same_address
   [... libevent warnings omitted ...]
   [       OK ] StMgr.test_tmanager_restart_on_same_address (4008 ms)
   [ RUN      ] StMgr.test_metricsmgr_reconnect
   [... libevent warnings omitted ...]
   [       OK ] StMgr.test_metricsmgr_reconnect (4004 ms)
   [ RUN      ] StMgr.test_PatchPhysicalPlanWithHydratedTopology
   [       OK ] StMgr.test_PatchPhysicalPlanWithHydratedTopology (0 ms)
   [----------] 11 tests from StMgr (38241 ms total)
   
   [----------] Global test environment tear-down
   [==========] 11 tests from 1 test case ran. (38241 ms total)
   [  PASSED  ] 11 tests.
   ================================================================================
   Target //heron/stmgr/tests/cpp/server:stmgr_unittest up-to-date:
     bazel-bin/heron/stmgr/tests/cpp/server/stmgr_unittest
   INFO: Elapsed time: 39.218s, Critical Path: 38.29s
   INFO: 2 processes: 1 internal, 1 local.
   INFO: Build completed successfully, 2 total actions
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   
   INFO: Build completed successfully, 2 total actions
   ```
   </p>
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926956359


   Hi @nicknezis, no worries.
   
   What I will do is set the Pod Template name to [`podspec-configmap-key`](https://github.com/apache/spark/blob/ff3f3c45668364da9bd10992791b5ee9a46fea21/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Constants.scala#L86), keeping things consistent with Spark's methodology, and reserve the CLI input via `--config-property` for the name of the actual ConfigMap to look up from the list returned by the client.
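   For reference, under this scheme the generated volume entry would look roughly like the following. This is a hedged sketch: the ConfigMap name `heron-pod-templates` and the mounted file name `pod-template.yaml` are hypothetical examples, while `podspec-configmap-key` mirrors Spark's constant:
   ```yaml
   volumes:
     - name: pod-template-volume         # internal volume name (hypothetical)
       configMap:
         name: heron-pod-templates       # ConfigMap named via --config-property (hypothetical)
         items:
           - key: podspec-configmap-key  # fixed key, mirroring Spark's constant
             path: pod-template.yaml     # file name mounted into the container (hypothetical)
   ```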





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922278498


   I have begun the cleanup and initial [stubbing](https://github.com/surahman/incubator-heron/commit/5d72ac92c8f37ab118af5642b9eda18c077344ab), and I have linked the API references below. The API reference for `listNamespacedConfigMap` states that only the namespace is a required parameter, but I am running into issues when providing just the namespace. If you have any guidance on what to set the remaining parameters to, it would be greatly appreciated; I am working on figuring them out.
   
   We will need to iterate over the `ConfigMap`s in `KubernetesConstants.DEFAULT_NAMESPACE` and probe each for a key matching the specified `podTemplateConfigMapName`. The data is stored as a String, which will require the V1 YAML parser to convert it into a `V1PodTemplateSpec`. The routine will return a default-constructed (empty) `V1PodTemplateSpec` if no Pod Template name is set via `--config-property`.
   
   [`listNamespacedConfigMap`](https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/apis/CoreV1Api.html#listNamespacedConfigMap-java.lang.String-java.lang.String-java.lang.Boolean-java.lang.String-java.lang.String-java.lang.String-java.lang.Integer-java.lang.String-java.lang.String-java.lang.Integer-java.lang.Boolean-) API.
   
   [`V1ConfigMap`](https://javadoc.io/static/io.kubernetes/client-java-api/7.0.0/io/kubernetes/client/openapi/models/V1ConfigMap.html) API.
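   The probe described above can be sketched in plain Java over in-memory maps. This is a hedged illustration only: the class and method names (`PodTemplateLookup`, `findPodTemplate`) are hypothetical, ConfigMaps are modelled as plain `Map`s of their `data` sections, and the real routine would walk the `V1ConfigMap` items returned by `CoreV1Api#listNamespacedConfigMap` and hand the matching value to the Kubernetes YAML parser to build a `V1PodTemplateSpec`:
   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;
   import java.util.Optional;

   // Hypothetical sketch of the ConfigMap probe; not the actual Heron code.
   public final class PodTemplateLookup {

     private PodTemplateLookup() { }

     // Returns the raw Pod Template YAML stored under templateKey, or
     // Optional.empty() when no ConfigMap carries that key, in which case the
     // caller falls back to a default-constructed (empty) V1PodTemplateSpec.
     public static Optional<String> findPodTemplate(
         Map<String, Map<String, String>> configMapsByName, String templateKey) {
       if (templateKey == null || templateKey.isEmpty()) {
         return Optional.empty();  // no Pod Template requested via --config-property
       }
       for (Map<String, String> data : configMapsByName.values()) {
         if (data != null && data.containsKey(templateKey)) {
           return Optional.of(data.get(templateKey));
         }
       }
       return Optional.empty();
     }

     public static void main(String[] args) {
       Map<String, Map<String, String>> configMaps = new LinkedHashMap<>();
       configMaps.put("unrelated", Map.of("other-key", "ignored"));
       configMaps.put("heron-pod-templates",
           Map.of("podspec-configmap-key", "apiVersion: v1\nkind: PodTemplate"));

       System.out.println(
           findPodTemplate(configMaps, "podspec-configmap-key").isPresent());
       System.out.println(findPodTemplate(configMaps, "missing").isPresent());
     }
   }
   ```
   Returning `Optional.empty()` keeps the fallback to the default-constructed `V1PodTemplateSpec` in one place at the call site.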





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-920551592


   I think I have an idea of what needs to happen... time to iterate :wink:.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-922278498


   I have begun the cleanup and initial [stubbing](https://github.com/surahman/incubator-heron/commit/23b30d99320a51ac7f54c377f2a6f55576d5f839), and I have linked the API references below. The API reference for `listNamespacedConfigMap` states that only the namespace is a required parameter, but I am running into issues when providing just the namespace. If you have any guidance on what to set the remaining parameters to, it would be greatly appreciated; I am working on figuring them out.
   
   We will need to iterate over the `ConfigMap`s in the `KubernetesConstants.DEFAULT_NAMESPACE` and probe each for a key with the specified `podTemplateConfigMapName`. This data is stored as a String, which will require the V1 YAML parser to convert it to a `V1PodTemplateSpec`. The routine will return a default-constructed (empty) `V1PodTemplateSpec` if no Pod Template name is set via `--config-property`.
   
   [`listNameSpacedConfigMapName`](https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/apis/CoreV1Api.html#listNamespacedConfigMap-java.lang.String-java.lang.String-java.lang.Boolean-java.lang.String-java.lang.String-java.lang.String-java.lang.Integer-java.lang.String-java.lang.String-java.lang.Integer-java.lang.Boolean-) API.
   
   [`V1ConfigMap`](https://javadoc.io/static/io.kubernetes/client-java-api/7.0.0/io/kubernetes/client/openapi/models/V1ConfigMap.html) API.
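   The probing routine described above can be sketched as follows. This is an illustrative stand-in only: the ConfigMap list is modelled as plain maps, and the names `PodTemplateLookup` and `findPodTemplate` are hypothetical, not Heron or Kubernetes client API:
   
   ```java
   import java.util.Map;
   
   // Illustrative sketch only: models the lookup without the Kubernetes client.
   // Each ConfigMap is represented as a name -> data map; we probe the named
   // map for the pod template key. A null result would make the caller fall
   // back to a default-constructed (empty) V1PodTemplateSpec.
   public class PodTemplateLookup {
     static String findPodTemplate(Map<String, Map<String, String>> configMaps,
                                   String configMapName, String templateKey) {
       Map<String, String> data = configMaps.get(configMapName);
       return data == null ? null : data.get(templateKey);
     }
   }
   ```
   
   The real routine would build the outer map from the `V1ConfigMap` objects returned by `listNamespacedConfigMap` and then hand the matched String to the YAML parser.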





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-923142393


   I am not sure why the following tests keep timing out:
   ```bash
   //heron/stmgr/tests/cpp/server:stmgr_unittest                           TIMEOUT in 3 out of 3 in 315.0s
     Stats over 3 runs: max = 315.0s, min = 315.0s, avg = 315.0s, dev = 0.0s
   Test cases: finished with 6 passing and 1 failing out of 7 test cases
   ```
   No details are being provided, and my code does nothing that would deviate from generating the default Pod Template when no parameter is set via `--config-property`.
   
   **_Edit:_**
   I have extracted the `getConfigMaps` routine for stubbing purposes to simplify testing. I have rejigged and refactored the tests to support checks for the exception messages being thrown. More complex and complete tests are to follow. Updates are on the parallel `dev` branch.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-921926947


   Correct, Heron does not yet have a `loadPodFromTemplate` function. This PR's goal is to add it. We need to look up the ConfigMap, get the embedded template and then pass that into the `kubernetesClient.pods().load(templateFile).get()` call. 
   
   I don't think we can mount the template ConfigMap into the scheduler, so I think instead we will need to do a K8s call to get the ConfigMap. When we get to actually testing, perhaps we will need to add new K8s permissions, but we can figure that out once we can test running the new logic.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-921992801


   Okay, so I am going to endeavour to break this down - please bear with me as I am still relatively new to the codebase and K8s API...
   
   ---
   
   Starting with the reference code in [Spark](https://github.com/apache/spark/blob/bbb33af2e4c90d679542298f56073a71de001fb7/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala#L96-L113), and keeping in mind that the Spark architecture is different from Heron's:
   
   This simply gets the Hadoop config from the driver/coordinator:
   
   ```scala
   val hadoopConf = SparkHadoopUtil.get.newConfiguration(conf)
   ```
   
   These two lines download the remote file from the driver/coordinator and make it local to the machine. The first line [downloads](https://github.com/joan38/kubernetes-client/blob/6fbf23b7a997e572456256c4714222ea734bd845/kubernetes-client/src/com/goyeau/kubernetes/client/api/PodsApi.scala#L119-L148) the file, assuming I am looking at the correct Scala K8s API, and the second retrieves a file descriptor/handle for the downloaded file:
   
   ```scala
   val localFile = downloadFile(templateFileName, Utils.createTempDir(), conf, hadoopConf)
   val templateFile = new File(new java.net.URI(localFile).getPath)
   ```
   
   This third line then does the heavy lifting of reading the Pod Template into a Pod Config from the newly copied local file:
   
   ```scala
   val pod = kubernetesClient.pods().load(templateFile).get()
   ```
   
   The final line sets up the Spark container with the Pod Template and specified name:
   
   ```scala
   selectSparkContainer(pod, containerName)
   ```
   
   ---
   
   Moving on to what we need to do on the Heron side:
   
   1. Read the `ConfigMap` name from the `--config-property` option. I set the key for this to `heron.kubernetes.pod.template.configmap.name` with the value being the file name.
   2. Read the YAML `ConfigMap` and extract the YAML node tree which contains the Pod Template. For this, we will either need a YAML parser or need to find a utility in the K8s Java API to do the job. I think the K8s Java API should include a utility for this; a lack thereof would be a significant oversight on their part.
   3. Create a `V1PodTemplateSpec` object using the results from step 2.
   4. Iron out permission issues during testing, should they arise.
   
   I am not familiar with the K8s API but will start digging around for a YAML config to V1 object parser, if anyone is aware of where it is please let me know. There are some suggestions [here](https://github.com/kubernetes-client/java/issues/170).
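   A minimal sketch of step 1, assuming (hypothetically) that the `--config-property` value encodes both names as `<configmap-name>.<pod-template-key>`; the class and method names here are illustrative, not Heron code:
   
   ```java
   // Illustrative helper: splits a configured value such as
   // "pod-templates.executor-template" into the ConfigMap name and the
   // key under which the template text is stored.
   public class PodTemplateConfig {
     static String[] splitConfigMapReference(String value) {
       int dot = value == null ? -1 : value.indexOf('.');
       if (dot <= 0 || dot == value.length() - 1) {
         throw new IllegalArgumentException(
             "expected <configmap-name>.<pod-template-key>, got: " + value);
       }
       return new String[] {value.substring(0, dot), value.substring(dot + 1)};
     }
   }
   ```
   
   Splitting on the first `.` keeps keys containing further dots intact; whatever format the PR settles on, the validation error should name both expected components.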





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-918745197


   There are some issues with testing the `createStatefulSet` method in the `V1Controller`, namely its private scope and an inability to mock it. It also performs disk reads for files when setting up the Topology config. These reads require mocking to simulate, and there is no such functionality in place within the `TopologyUtilsTest`s.
   
   I have a WIP `testCreateStatefulSet` method set up, but it seems like it might have to be removed along with a refactoring of everything else.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932636387


   We need to set up an account for the Heron API Server and bind the Role to it like [this](https://github.com/helm/helm/issues/5100#issuecomment-533787541).





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-932636387


   I am not sure if you tried this, but I think we need to set up a [`Service Account`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) and assign it to the Heron API Server Pod. We then bind the Role to the `Service Account` [like so](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects).
   
   From [Stack Overflow](https://stackoverflow.com/questions/52995962/kubernetes-namespace-default-service-account):
   
   > 5. The default permissions for a service account don't allow it to list or modify any resources. The default service account isn't allowed to view cluster state let alone modify it in any way.
   
   **_Edit:_**
   It appears as though the `ClusterRoles` and `ServiceAccount` are in the K8s configs for the [Heron API Server](https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/apiserver.yaml). This makes life a lot easier with only the following being additionally required:
   
   <details>
     <summary>Role</summary>
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: heron-apiserver-configmap-role
     namespace: default
   rules:
   - apiGroups:
     - ""
     resources:
     - configmaps
     verbs:
     - get
     - watch
     - list
   ```
   </details>
   
   <details>
     <summary>RoleBinding</summary>
   
   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: heron-apiserver-configmap-rolebinding
     namespace: default
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: Role
     name: heron-apiserver-configmap-role
   subjects:
   - kind: ServiceAccount
     name: heron-apiserver
     namespace: default
   ```
   </details>
   
   I think it would be safe to add these to the Heron API Server K8s configs because it is adequately restrictive. I am not sure if both a `ClusterRole` and `Role` can be assigned at the same time, if not we would need to aggregate into the `ClusterRole`. The `ClusterRole` has a reference to the `cluster-admin` and I believe this is why it can submit topologies.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-929833982


   I'm slowly finding time to test. So far here are a couple of minor tweaks we will need to make:
   1. Add configmaps k8s role permissions
   ```
   rules:
   - apiGroups: 
     - ""
     resources: 
     - configmaps
     verbs: 
     - get
     - watch
     - list
   ```
   2. The `listConfigMaps` call is looking in the default namespace, but it should be using `getNamespace()` instead for that value.
   
   I'm making the edits and testing locally. Will report back with any other findings.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-939094577


   Confirming from the API Server logs (thank you @nicknezis for helping me get to them) that the pod template loading is working from both the `ConfigMap` and the default. I can also confirm that the kill switch is working as intended.
   
   The current issue is whether the custom Pod Templates are actually being loaded. I am using the following Pod Template:
   ```yaml
   apiVersion: v1
   kind: PodTemplate
   metadata:
     name: pod-template-example
     namespace: default
   template:
     metadata:
       name: acking-pod-template-example
     spec:
       containers:
         - name: executor
           securityContext:
             allowPrivilegeEscalation: false
   ```
   
   and seeing the following on my laptop:
   
   <details><summary>describe pod acking-0</summary>
   
   ```bash
   Name:         acking-0
   Namespace:    default
   Priority:     0
   Node:         minikube/192.168.49.2
   Start Time:   Fri, 08 Oct 2021 16:11:20 -0400
   Labels:       app=heron
                 controller-revision-hash=acking-6bb45848bd
                 statefulset.kubernetes.io/pod-name=acking-0
                 topology=acking
   Annotations:  prometheus.io/port: 8080
                 prometheus.io/scrape: true
   Status:       Running
   IP:           172.17.0.7
   IPs:
     IP:           172.17.0.7
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Container ID:  docker://63224408899b2581937fc81e36cd59bb5b9f2e9f091fc80c07c1d5abc33bf552
       Image:         apache/heron:testbuild
       Image ID:      docker://sha256:cccb4b3998bb9266c58c47115165e4bf63dead8e26d315436bec099f1bc475aa
       Ports:         6005/TCP, 6001/TCP, 6008/TCP, 6004/TCP, 6009/TCP, 6003/TCP, 6006/TCP, 6002/TCP, 6007/TCP
       Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
         ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0-3360812953995423633.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=ackinge310cae0-1eaf-4a6b-9134-9e9b310e74d0 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-bi
 nary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --she
 ll-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       State:          Running
         Started:      Fri, 08 Oct 2021 16:11:21 -0400
       Ready:          True
       Restart Count:  0
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:       (v1:status.podIP)
         POD_NAME:  acking-0 (v1:metadata.name)
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6vjzm (ro)
   Conditions:
     Type              Status
     Initialized       True 
     Ready             True 
     ContainersReady   True 
     PodScheduled      True 
   Volumes:
     kube-api-access-6vjzm:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Guaranteed
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type    Reason     Age    From               Message
     ----    ------     ----   ----               -------
     Normal  Scheduled  4m58s  default-scheduler  Successfully assigned default/acking-0 to minikube
     Normal  Pulled     4m57s  kubelet            Container image "apache/heron:testbuild" already present on machine
     Normal  Created    4m57s  kubelet            Created container executor
     Normal  Started    4m57s  kubelet            Started container executor
   ```
   
   </details>





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-934905204


   Sysadmins will probably want a kill switch for this feature, so a boot flag to turn it off must be provided.
   
   Any idea whether the `-D` boot/command-line flag for the `heron-apiserver` adds the key-value pair to the `Config` object? I have the base logic completed and am writing the test suite.
   
   ```bash
   heron-apiserver
   --base-template kubernetes
   --cluster kubernetes
   <ALL OTHER -D FLAG PARAMETERS>
   -D heron.kubernetes.pod.template.configmap.disabled=true
   ```
   
   **_Edit:_** I am confirming this feature as working.
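   A sketch of the kind of guard such a flag implies, with illustrative names only; note that a missing property leaves the feature enabled, so existing deployments keep their behaviour unless an operator explicitly opts out:
   
   ```java
   import java.util.Map;
   
   // Hypothetical guard: the scheduler would consult this before attempting
   // to load a Pod Template from a ConfigMap and fall back to the default
   // Pod Template whenever the kill switch is set.
   public class PodTemplateSwitch {
     static boolean podTemplateDisabled(Map<String, String> config) {
       return Boolean.parseBoolean(
           config.getOrDefault("heron.kubernetes.pod.template.configmap.disabled", "false"));
     }
   }
   ```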





[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-951985869


   > Thank you, Josh!
   
   @surahman  I must apologize.  I did not get to test the last weekend.  I will get it done by this weekend. 





[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r739691460



##########
File path: .travis.yml
##########
@@ -47,4 +46,4 @@ script:
   - python -V
   - which python3
   - python3 -V
-  - travis-wait-improved --timeout=180m scripts/travis/ci.sh

Review comment:
       Not sure, but @nicknezis fixed an issue with the TravisCI 🥊 Python setup, and that line may have been removed during the debug process.







[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-957592887









[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-957601846


   Thank you @joshfischer1108 🎆 ! There is another PR incoming sometime today or tomorrow for the CLI PVC support 😄 





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926128468


   This is a perplexing issue. The tests appear to pass in a fresh Ubuntu 18.04 Docker container on a better-spec'd machine but fail on a lower-spec'd machine. The commands to set up and execute the build and tests are as follows:
   
   ```bash
   cd heron_root_dir
   docker/scripts/dev-env-create.sh heron-dev
   
   ./bazel_configure.py
   bazel build --config=ubuntu heron/...
   
   bazel test --config=ubuntu --test_output=errors --cache_test_results=no --test_verbose_timeout_warnings heron/...
   bazel test --config=ubuntu --test_output=all --cache_test_results=no --test_verbose_timeout_warnings heron/stmgr/tests/cpp/server:stmgr_unittest
   ```
   
   When I `cat` the `V1Controller.java` file in the container's terminal, `loadPodFromTemplate` is as it should be and wired up correctly in `createStatefulSet` on line 387. Could someone else please clone the repo and test whether it passes the build? @joshfischer1108, @nicknezis, and anyone else who has some time: I could use an extra set of objective eyes on this. I am sure it is just something silly that got overlooked.
   
   I have also cleaned out the `loadPodFromTemplate` function and set it up to simply return a new/empty `V1PodTemplateSpec`, to no avail on the more constrained system:
   
   ```java
  private V1PodTemplateSpec loadPodFromTemplate() {
    return new V1PodTemplateSpec();
  }
   ```
   
   <details><summary>Complete test output - higher spec</summary>
   <p>
   
   ```bash
   INFO: Elapsed time: 459.454s, Critical Path: 42.93s
   INFO: 1374 processes: 331 internal, 1043 local.
   INFO: Build completed successfully, 1374 total actions
   //heron/api/tests/cpp:serialization_unittest                             PASSED in 0.0s
   //heron/api/tests/java:BaseWindowedBoltTest                              PASSED in 0.3s
   //heron/api/tests/java:ConfigTest                                        PASSED in 0.2s
   //heron/api/tests/java:CountStatAndMetricTest                            PASSED in 0.3s
   //heron/api/tests/java:GeneralReduceByKeyAndWindowOperatorTest           PASSED in 0.4s
   //heron/api/tests/java:HeronSubmitterTest                                PASSED in 1.9s
   //heron/api/tests/java:JoinOperatorTest                                  PASSED in 0.4s
   //heron/api/tests/java:KVStreamletShadowTest                             PASSED in 0.5s
   //heron/api/tests/java:KeyByOperatorTest                                 PASSED in 0.5s
   //heron/api/tests/java:LatencyStatAndMetricTest                          PASSED in 0.3s
   //heron/api/tests/java:ReduceByKeyAndWindowOperatorTest                  PASSED in 0.4s
   //heron/api/tests/java:StreamletImplTest                                 PASSED in 0.4s
   //heron/api/tests/java:StreamletShadowTest                               PASSED in 0.4s
   //heron/api/tests/java:StreamletUtilsTest                                PASSED in 0.3s
   //heron/api/tests/java:UtilsTest                                         PASSED in 0.3s
   //heron/api/tests/java:WaterMarkEventGeneratorTest                       PASSED in 0.4s
   //heron/api/tests/java:WindowManagerTest                                 PASSED in 0.3s
   //heron/api/tests/java:WindowedBoltExecutorTest                          PASSED in 0.5s
   //heron/api/tests/scala:api-scala-test                                   PASSED in 0.9s
     WARNING: //heron/api/tests/scala:api-scala-test: Test execution time (0.9s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/ckptmgr/tests/java:CheckpointManagerServerTest                   PASSED in 0.6s
   //heron/common/tests/cpp/basics:fileutils_unittest                       PASSED in 0.0s
   //heron/common/tests/cpp/basics:rid_unittest                             PASSED in 0.0s
   //heron/common/tests/cpp/basics:strutils_unittest                        PASSED in 0.0s
   //heron/common/tests/cpp/basics:utils_unittest                           PASSED in 0.0s
   //heron/common/tests/cpp/config:topology-config-helper_unittest          PASSED in 0.0s
   //heron/common/tests/cpp/errors:errors_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:module_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:syserrs_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/metrics:count-metric_unittest                   PASSED in 0.0s
   //heron/common/tests/cpp/metrics:mean-metric_unittest                    PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-count-metric_unittest             PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-mean-metric_unittest              PASSED in 0.0s
   //heron/common/tests/cpp/metrics:time-spent-metric_unittest              PASSED in 1.3s
   //heron/common/tests/cpp/network:http_unittest                           PASSED in 0.1s
   //heron/common/tests/cpp/network:order_unittest                          PASSED in 0.1s
   //heron/common/tests/cpp/network:packet_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/network:piper_unittest                          PASSED in 2.0s
   //heron/common/tests/cpp/network:rate_limit_unittest                     PASSED in 4.1s
   //heron/common/tests/cpp/network:switch_unittest                         PASSED in 0.2s
   //heron/common/tests/cpp/threads:spcountdownlatch_unittest               PASSED in 2.0s
   //heron/common/tests/java:ByteAmountTest                                 PASSED in 0.3s
   //heron/common/tests/java:CommunicatorTest                               PASSED in 0.3s
   //heron/common/tests/java:ConfigReaderTest                               PASSED in 0.3s
   //heron/common/tests/java:EchoTest                                       PASSED in 0.6s
   //heron/common/tests/java:FileUtilsTest                                  PASSED in 1.0s
   //heron/common/tests/java:HeronServerTest                                PASSED in 1.5s
   //heron/common/tests/java:PackageTypeTest                                PASSED in 0.3s
   //heron/common/tests/java:SysUtilsTest                                   PASSED in 4.9s
   //heron/common/tests/java:SystemConfigTest                               PASSED in 0.4s
   //heron/common/tests/java:TopologyUtilsTest                              PASSED in 0.4s
   //heron/common/tests/java:WakeableLooperTest                             PASSED in 1.3s
   //heron/common/tests/python/pex_loader:pex_loader_unittest               PASSED in 0.7s
   //heron/downloaders/tests/java:DLDownloaderTest                          PASSED in 0.8s
   //heron/downloaders/tests/java:ExtractorTests                            PASSED in 0.4s
   //heron/downloaders/tests/java:RegistryTest                              PASSED in 0.4s
   //heron/executor/tests/python:executor_unittest                          PASSED in 1.0s
   //heron/healthmgr/tests/java:BackPressureDetectorTest                    PASSED in 0.6s
   //heron/healthmgr/tests/java:BackPressureSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:BufferSizeSensorTest                        PASSED in 0.5s
   //heron/healthmgr/tests/java:DataSkewDiagnoserTest                       PASSED in 0.5s
   //heron/healthmgr/tests/java:ExecuteCountSensorTest                      PASSED in 0.5s
   //heron/healthmgr/tests/java:GrowingWaitQueueDetectorTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:HealthManagerTest                           PASSED in 0.9s
   //heron/healthmgr/tests/java:HealthPolicyConfigReaderTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:LargeWaitQueueDetectorTest                  PASSED in 0.7s
   //heron/healthmgr/tests/java:MetricsCacheMetricsProviderTest             PASSED in 0.6s
   //heron/healthmgr/tests/java:PackingPlanProviderTest                     PASSED in 0.6s
   //heron/healthmgr/tests/java:ProcessingRateSkewDetectorTest              PASSED in 0.6s
   //heron/healthmgr/tests/java:ScaleUpResolverTest                         PASSED in 0.7s
   //heron/healthmgr/tests/java:SlowInstanceDiagnoserTest                   PASSED in 0.6s
   //heron/healthmgr/tests/java:UnderProvisioningDiagnoserTest              PASSED in 0.5s
   //heron/healthmgr/tests/java:WaitQueueSkewDetectorTest                   PASSED in 0.5s
   //heron/instance/tests/java:ActivateDeactivateTest                       PASSED in 0.5s
   //heron/instance/tests/java:BoltInstanceTest                             PASSED in 0.5s
   //heron/instance/tests/java:BoltStatefulInstanceTest                     PASSED in 2.5s
   //heron/instance/tests/java:ConnectTest                                  PASSED in 0.6s
   //heron/instance/tests/java:CustomGroupingTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectBoltTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectSpoutTest                          PASSED in 0.8s
   //heron/instance/tests/java:GlobalMetricsTest                            PASSED in 0.3s
   //heron/instance/tests/java:HandleReadTest                               PASSED in 0.7s
   //heron/instance/tests/java:HandleWriteTest                              PASSED in 5.8s
   //heron/instance/tests/java:MultiAssignableMetricTest                    PASSED in 0.3s
   //heron/instance/tests/java:SpoutInstanceTest                            PASSED in 2.6s
   //heron/instance/tests/java:SpoutStatefulInstanceTest                    PASSED in 2.5s
   //heron/instance/tests/python/network:event_looper_unittest              PASSED in 2.9s
   //heron/instance/tests/python/network:gateway_looper_unittest            PASSED in 10.8s
   //heron/instance/tests/python/network:heron_client_unittest              PASSED in 0.9s
   //heron/instance/tests/python/network:metricsmgr_client_unittest         PASSED in 1.0s
   //heron/instance/tests/python/network:protocol_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/network:st_stmgrcli_unittest               PASSED in 0.9s
   //heron/instance/tests/python/utils:communicator_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:custom_grouping_unittest             PASSED in 0.9s
   //heron/instance/tests/python/utils:global_metrics_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:log_unittest                         PASSED in 0.8s
   //heron/instance/tests/python/utils:metrics_helper_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:outgoing_tuple_helper_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:pplan_helper_unittest                PASSED in 1.0s
   //heron/instance/tests/python/utils:py_metrics_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/utils:topology_context_impl_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:tuple_helper_unittest                PASSED in 0.9s
   //heron/io/dlog/tests/java:DLInputStreamTest                             PASSED in 0.5s
   //heron/io/dlog/tests/java:DLOutputStreamTest                            PASSED in 0.5s
   //heron/metricscachemgr/tests/java:CacheCoreTest                         PASSED in 0.5s
   //heron/metricscachemgr/tests/java:MetricsCacheQueryUtilsTest            PASSED in 0.4s
   //heron/metricscachemgr/tests/java:MetricsCacheTest                      PASSED in 0.4s
   //heron/metricsmgr/tests/java:FileSinkTest                               PASSED in 0.5s
   //heron/metricsmgr/tests/java:HandleTManagerLocationTest                 PASSED in 0.5s
   //heron/metricsmgr/tests/java:MetricsCacheSinkTest                       PASSED in 9.4s
   //heron/metricsmgr/tests/java:MetricsManagerServerTest                   PASSED in 0.6s
   //heron/metricsmgr/tests/java:MetricsUtilTests                           PASSED in 0.4s
   //heron/metricsmgr/tests/java:PrometheusSinkTests                        PASSED in 0.4s
   //heron/metricsmgr/tests/java:SinkExecutorTest                           PASSED in 0.5s
   //heron/metricsmgr/tests/java:TManagerSinkTest                           PASSED in 9.4s
   //heron/metricsmgr/tests/java:WebSinkTest                                PASSED in 0.5s
   //heron/packing/tests/java:FirstFitDecreasingPackingTest                 PASSED in 0.6s
   //heron/packing/tests/java:PackingPlanBuilderTest                        PASSED in 0.4s
   //heron/packing/tests/java:PackingUtilsTest                              PASSED in 0.4s
   //heron/packing/tests/java:ResourceCompliantRRPackingTest                PASSED in 0.8s
   //heron/packing/tests/java:RoundRobinPackingTest                         PASSED in 0.6s
   //heron/packing/tests/java:ScorerTest                                    PASSED in 0.2s
   //heron/scheduler-core/tests/java:HttpServiceSchedulerClientTest         PASSED in 1.0s
   //heron/scheduler-core/tests/java:JsonFormatterUtilsTest                 PASSED in 0.4s
   //heron/scheduler-core/tests/java:LaunchRunnerTest                       PASSED in 1.1s
   //heron/scheduler-core/tests/java:LauncherUtilsTest                      PASSED in 1.7s
   //heron/scheduler-core/tests/java:LibrarySchedulerClientTest             PASSED in 0.5s
   //heron/scheduler-core/tests/java:RuntimeManagerMainTest                 PASSED in 2.3s
   //heron/scheduler-core/tests/java:RuntimeManagerRunnerTest               PASSED in 2.1s
   //heron/scheduler-core/tests/java:SchedulerClientFactoryTest             PASSED in 1.1s
   //heron/scheduler-core/tests/java:SchedulerMainTest                      PASSED in 3.0s
   //heron/scheduler-core/tests/java:SchedulerServerTest                    PASSED in 0.5s
   //heron/scheduler-core/tests/java:SchedulerUtilsTest                     PASSED in 1.2s
   //heron/scheduler-core/tests/java:SubmitDryRunRenderTest                 PASSED in 1.6s
   //heron/scheduler-core/tests/java:SubmitterMainTest                      PASSED in 1.1s
   //heron/scheduler-core/tests/java:UpdateDryRunRenderTest                 PASSED in 1.5s
   //heron/scheduler-core/tests/java:UpdateTopologyManagerTest              PASSED in 11.9s
   //heron/schedulers/tests/java:AuroraCLIControllerTest                    PASSED in 0.5s
   //heron/schedulers/tests/java:AuroraContextTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:AuroraLauncherTest                         PASSED in 0.8s
   //heron/schedulers/tests/java:AuroraSchedulerTest                        PASSED in 2.9s
   //heron/schedulers/tests/java:HeronExecutorTaskTest                      PASSED in 1.5s
   //heron/schedulers/tests/java:HeronMasterDriverTest                      PASSED in 1.7s
   //heron/schedulers/tests/java:KubernetesContextTest                      PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesControllerTest                   PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesLauncherTest                     PASSED in 0.7s
   //heron/schedulers/tests/java:KubernetesSchedulerTest                    PASSED in 0.7s
   //heron/schedulers/tests/java:LaunchableTaskTest                         PASSED in 0.5s
   //heron/schedulers/tests/java:LocalLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:LocalSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MarathonControllerTest                     PASSED in 1.0s
   //heron/schedulers/tests/java:MarathonLauncherTest                       PASSED in 0.7s
   //heron/schedulers/tests/java:MarathonSchedulerTest                      PASSED in 0.4s
   //heron/schedulers/tests/java:MesosFrameworkTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MesosLauncherTest                          PASSED in 0.5s
   //heron/schedulers/tests/java:MesosSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:NomadSchedulerTest                         PASSED in 1.9s
   //heron/schedulers/tests/java:SlurmControllerTest                        PASSED in 1.1s
   //heron/schedulers/tests/java:SlurmLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:SlurmSchedulerTest                         PASSED in 1.1s
   //heron/schedulers/tests/java:TaskResourcesTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:TaskUtilsTest                              PASSED in 0.3s
   //heron/schedulers/tests/java:V1ControllerTest                           PASSED in 1.6s
   //heron/schedulers/tests/java:VolumesTests                               PASSED in 0.3s
   //heron/schedulers/tests/java:YarnLauncherTest                           PASSED in 0.6s
   //heron/schedulers/tests/java:YarnSchedulerTest                          PASSED in 0.4s
   //heron/simulator/tests/java:AllGroupingTest                             PASSED in 0.3s
   //heron/simulator/tests/java:CustomGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:FieldsGroupingTest                          PASSED in 0.7s
   //heron/simulator/tests/java:InstanceExecutorTest                        PASSED in 0.5s
   //heron/simulator/tests/java:LowestGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:RotatingMapTest                             PASSED in 0.3s
   //heron/simulator/tests/java:ShuffleGroupingTest                         PASSED in 0.3s
   //heron/simulator/tests/java:SimulatorTest                               PASSED in 0.4s
   //heron/simulator/tests/java:TopologyManagerTest                         PASSED in 0.4s
   //heron/simulator/tests/java:TupleCacheTest                              PASSED in 0.3s
   //heron/simulator/tests/java:XORManagerTest                              PASSED in 0.4s
   //heron/spi/tests/java:ConfigLoaderTest                                  PASSED in 1.3s
   //heron/spi/tests/java:ConfigTest                                        PASSED in 0.9s
   //heron/spi/tests/java:ContextTest                                       PASSED in 0.3s
   //heron/spi/tests/java:ExceptionInfoTest                                 PASSED in 0.2s
   //heron/spi/tests/java:KeysTest                                          PASSED in 0.2s
   //heron/spi/tests/java:MetricsInfoTest                                   PASSED in 0.2s
   //heron/spi/tests/java:MetricsRecordTest                                 PASSED in 0.2s
   //heron/spi/tests/java:NetworkUtilsTest                                  PASSED in 1.5s
   //heron/spi/tests/java:PackingPlanTest                                   PASSED in 0.3s
   //heron/spi/tests/java:ResourceTest                                      PASSED in 0.3s
   //heron/spi/tests/java:ShellUtilsTest                                    PASSED in 2.2s
   //heron/spi/tests/java:TokenSubTest                                      PASSED in 0.4s
   //heron/spi/tests/java:UploaderUtilsTest                                 PASSED in 0.5s
   //heron/statefulstorages/tests/java:DlogStorageTest                      PASSED in 1.9s
   //heron/statefulstorages/tests/java:HDFSStorageTest                      PASSED in 2.0s
   //heron/statefulstorages/tests/java:LocalFileSystemStorageTest           PASSED in 0.9s
   //heron/statemgrs/tests/cpp:zk-statemgr_unittest                         PASSED in 0.0s
   //heron/statemgrs/tests/java:CuratorStateManagerTest                     PASSED in 0.6s
   //heron/statemgrs/tests/java:LocalFileSystemStateManagerTest             PASSED in 1.2s
   //heron/statemgrs/tests/java:ZkUtilsTest                                 PASSED in 1.0s
   //heron/statemgrs/tests/python:configloader_unittest                     PASSED in 1.0s
   //heron/statemgrs/tests/python:statemanagerfactory_unittest              PASSED in 0.8s
   //heron/statemgrs/tests/python:zkstatemanager_unittest                   PASSED in 0.9s
   //heron/stmgr/tests/cpp/grouping:all-grouping_unittest                   PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:custom-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:fields-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:lowest-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:shuffle-grouping_unittest               PASSED in 0.0s
   //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest               PASSED in 1.2s
     WARNING: //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest: Test execution time (1.2s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stateful-restorer_unittest                PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/server:stateful-restorer_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest               PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/util:rotating-map_unittest                       PASSED in 0.0s
   //heron/stmgr/tests/cpp/util:tuple-cache_unittest                        PASSED in 3.6s
   //heron/stmgr/tests/cpp/util:xor-manager_unittest                        PASSED in 4.0s
   //heron/tmanager/tests/cpp/server:stateful_checkpointer_unittest         PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest             PASSED in 5.0s
   //heron/tmanager/tests/cpp/server:tcontroller_unittest                   PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:tmanager_unittest                      PASSED in 26.1s
   //heron/tools/apiserver/tests/java:ConfigUtilsTests                      PASSED in 0.5s
   //heron/tools/apiserver/tests/java:TopologyResourceTests                 PASSED in 1.1s
   //heron/tools/cli/tests/python:client_command_unittest                   PASSED in 1.1s
   //heron/tools/cli/tests/python:opts_unittest                             PASSED in 0.8s
   //heron/tools/explorer/tests/python:explorer_unittest                    PASSED in 1.1s
   //heron/tools/tracker/tests/python:query_operator_unittest               PASSED in 1.4s
   //heron/tools/tracker/tests/python:query_unittest                        PASSED in 1.3s
   //heron/tools/tracker/tests/python:topology_unittest                     PASSED in 1.2s
   //heron/tools/tracker/tests/python:tracker_unittest                      PASSED in 1.4s
   //heron/uploaders/tests/java:DlogUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:GcsUploaderTests                            PASSED in 0.7s
   //heron/uploaders/tests/java:HdfsUploaderTest                            PASSED in 0.4s
   //heron/uploaders/tests/java:HttpUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:LocalFileSystemConfigTest                   PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemContextTest                  PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemUploaderTest                 PASSED in 0.4s
   //heron/uploaders/tests/java:S3UploaderTest                              PASSED in 0.9s
   //heron/uploaders/tests/java:ScpUploaderTest                             PASSED in 0.5s
   
   INFO: Build completed successfully, 1374 total actions
   ```
   
   </p>
   </details>
   
   <details><summary>//heron/stmgr/tests/cpp/server:stmgr_unittest - higher spec</summary>
   <p>
   
   ```bash
   ==================== Test output for //heron/stmgr/tests/cpp/server:stmgr_unittest:
   Current working directory (to find stmgr logs) /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/bin/heron/stmgr/tests/cpp/server/stmgr_unittest.runfiles/org_apache_heron
   Using config file heron/config/src/yaml/conf/test/test_heron_internals.yaml
   [==========] Running 11 tests from 1 test case.
   [----------] Global test environment set-up.
   [----------] 11 tests from StMgr
   [ RUN      ] StMgr.test_pplan_decode
   [warn] Added a signal to event base 0x55c0c05e0840 with signals already added to event_base 0x55c0c05e0580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e0b00 with signals already added to event_base 0x55c0c05e0840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e0dc0 with signals already added to event_base 0x55c0c05e0b00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1080 with signals already added to event_base 0x55c0c05e0dc0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1600 with signals already added to event_base 0x55c0c05e1080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1b80 with signals already added to event_base 0x55c0c05e1600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2100 with signals already added to event_base 0x55c0c05e1b80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2680 with signals already added to event_base 0x55c0c05e2100.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2c00 with signals already added to event_base 0x55c0c05e2680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3180 with signals already added to event_base 0x55c0c05e2c00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3700 with signals already added to event_base 0x55c0c05e3180.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3c80 with signals already added to event_base 0x55c0c05e3700.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c2c0 with signals already added to event_base 0x55c0c05e3c80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d080 with signals already added to event_base 0x55c0c072c2c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c840 with signals already added to event_base 0x55c0c072d080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d8c0 with signals already added to event_base 0x55c0c072c840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d600 with signals already added to event_base 0x55c0c072d8c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d340 with signals already added to event_base 0x55c0c072d600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072de40 with signals already added to event_base 0x55c0c072d340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072e100 with signals already added to event_base 0x55c0c072de40.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072cdc0 with signals already added to event_base 0x55c0c072e100.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072db80 with signals already added to event_base 0x55c0c072cdc0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072cb00 with signals already added to event_base 0x55c0c072db80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_pplan_decode (2013 ms)
   [ RUN      ] StMgr.test_tuple_route
   [warn] Added a signal to event base 0x55c0c072e100 with signals already added to event_base (nil).  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072de40 with signals already added to event_base 0x55c0c072e100.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072db80 with signals already added to event_base 0x55c0c072de40.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d8c0 with signals already added to event_base 0x55c0c072db80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d600 with signals already added to event_base 0x55c0c072d8c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d080 with signals already added to event_base 0x55c0c072d600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072cb00 with signals already added to event_base 0x55c0c072d080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c840 with signals already added to event_base 0x55c0c072cb00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c2c0 with signals already added to event_base 0x55c0c072c840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e39c0 with signals already added to event_base 0x55c0c072c2c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_tuple_route (2006 ms)
   [ RUN      ] StMgr.test_custom_grouping_route
   [warn] Added a signal to event base 0x55c0c072c2c0 with signals already added to event_base 0x55c0c05e39c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c840 with signals already added to event_base 0x55c0c072c2c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072cb00 with signals already added to event_base 0x55c0c072c840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d080 with signals already added to event_base 0x55c0c072cb00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e1340 with signals already added to event_base 0x55c0c072d080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3700 with signals already added to event_base 0x55c0c05e1340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e3180 with signals already added to event_base 0x55c0c05e3700.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2c00 with signals already added to event_base 0x55c0c05e3180.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c05e2680 with signals already added to event_base 0x55c0c05e2c00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_custom_grouping_route (2009 ms)
   [ RUN      ] StMgr.test_back_pressure_instance
   [warn] (repeated libevent "Added a signal to event base" warnings omitted)
   [       OK ] StMgr.test_back_pressure_instance (4002 ms)
   [ RUN      ] StMgr.test_spout_death_under_backpressure
   [warn] (repeated libevent "Added a signal to event base" warnings omitted)
   [       OK ] StMgr.test_spout_death_under_backpressure (6144 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr
   [warn] (repeated libevent "Added a signal to event base" warnings omitted)
   [       OK ] StMgr.test_back_pressure_stmgr (5005 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr_reconnect
   [warn] (repeated libevent "Added a signal to event base" warnings omitted)
   [       OK ] StMgr.test_back_pressure_stmgr_reconnect (4035 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_new_address
   [warn] (repeated libevent "Added a signal to event base" warnings omitted)
   [       OK ] StMgr.test_tmanager_restart_on_new_address (5015 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_same_address
   [warn] (repeated libevent "Added a signal to event base" warnings omitted)
   [       OK ] StMgr.test_tmanager_restart_on_same_address (4008 ms)
   [ RUN      ] StMgr.test_metricsmgr_reconnect
   [warn] (repeated libevent "Added a signal to event base" warnings omitted)
   [       OK ] StMgr.test_metricsmgr_reconnect (4004 ms)
   [ RUN      ] StMgr.test_PatchPhysicalPlanWithHydratedTopology
   [       OK ] StMgr.test_PatchPhysicalPlanWithHydratedTopology (0 ms)
   [----------] 11 tests from StMgr (38241 ms total)
   
   [----------] Global test environment tear-down
   [==========] 11 tests from 1 test case ran. (38241 ms total)
   [  PASSED  ] 11 tests.
   ================================================================================
   Target //heron/stmgr/tests/cpp/server:stmgr_unittest up-to-date:
     bazel-bin/heron/stmgr/tests/cpp/server/stmgr_unittest
   INFO: Elapsed time: 39.218s, Critical Path: 38.29s
   INFO: 2 processes: 1 internal, 1 local.
   INFO: Build completed successfully, 2 total actions
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   
   INFO: Build completed successfully, 2 total actions
   </p>
   </details>
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r728444987



##########
File path: deploy/kubernetes/general/apiserver.yaml
##########
@@ -95,6 +95,7 @@ spec:
               -D heron.uploader.dlog.topologies.namespace.uri=distributedlog://zookeeper:2181/heron
               -D heron.statefulstorage.classname=org.apache.heron.statefulstorage.dlog.DlogStorage
               -D heron.statefulstorage.dlog.namespace.uri=distributedlog://zookeeper:2181/heron
+            # -D heron.kubernetes.pod.template.configmap.disabled=true

Review comment:
       > configmap.disabled=false
   
       I think you are right that we should set it to `false` by default and uncomment it. I shall go ahead and effect this change.







[GitHub] [incubator-heron] nwangtw commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nwangtw commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r727736026



##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));

Review comment:
       "\n%s"? Maybe `append("\n"); append(e.getMessage());` is more efficient.
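   The reviewer's point can be illustrated with a small sketch (the class and method names below are hypothetical, not Heron code): `String.format("%n%s", msg)` parses a format string on every call and `%n` expands to the platform line separator, whereas chaining plain `append` calls skips the formatting step entirely.

   ```java
   // Hypothetical sketch contrasting the two appending styles discussed above.
   public class AppendStyles {
       // Style under review: format-based; '%n' is the platform line separator.
       static String withFormat(String base, String msg) {
           return new StringBuilder(base).append(String.format("%n%s", msg)).toString();
       }

       // Suggested alternative: plain appends, no format-string parsing.
       static String withAppend(String base, String msg) {
           return new StringBuilder(base).append('\n').append(msg).toString();
       }

       public static void main(String[] args) {
           String a = withFormat("Failed to launch topology 'demo'", "cause: timeout");
           String b = withAppend("Failed to launch topology 'demo'", "cause: timeout");
           // On platforms whose line separator is "\n" the two results are identical.
           System.out.println(a.equals(b));
       }
   }
   ```

   Note the behavioral difference as well as the performance one: `%n` varies by platform, while `'\n'` is fixed, which matters if the error text is ever compared or parsed.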

##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));
+      }
+
+      try {
+        // Clear state from the Scheduler via RPC.
+        Scheduler.KillTopologyRequest killTopologyRequest = Scheduler.KillTopologyRequest
+            .newBuilder()
+            .setTopologyName(topologyName).build();
+
+        ISchedulerClient schedulerClient = new SchedulerClientFactory(config, runtime)
+            .getSchedulerClient();
+        if (!schedulerClient.killTopology(killTopologyRequest)) {
+          final String logMessage =
+              String.format("Failed to remove topology '%s' from scheduler after failed submit. "
+                  + "Please re-try the kill command.", topologyName);
+          errorMessage.append(String.format("%n%s", logMessage));

Review comment:
       "\n%s"? Maybe `append("\n"); append(logMessage);` is more efficient.

##########
File path: heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/KubernetesContext.java
##########
@@ -172,6 +179,15 @@ static String getContainerVolumeMountPath(Config config) {
     return config.getStringValue(KUBERNETES_CONTAINER_VOLUME_MOUNT_PATH);
   }
 
+  public static String getPodTemplateConfigMapName(Config config) {
+    return config.getStringValue(KUBERNETES_POD_TEMPLATE_CONFIGMAP_NAME);
+  }
+
+  public static boolean getPodTemplateConfigMapDisabled(Config config) {
+    final String disabled = config.getStringValue(KUBERNETES_POD_TEMPLATE_CONFIGMAP_DISABLED);
+    return disabled != null && disabled.toLowerCase(Locale.ROOT).equals("true");

Review comment:
       I am wondering if `equalsIgnoreCase()` is easier to maintain.
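   The `equalsIgnoreCase()` suggestion can be sketched as follows (hypothetical helper, not the actual Heron method): it drops the explicit `Locale` handling and is null-safe when the constant literal is the receiver.

   ```java
   // Hypothetical helper illustrating the reviewer's suggestion: parse a
   // "disabled" flag with equalsIgnoreCase instead of toLowerCase().equals().
   public class FlagParsing {
       static boolean isDisabled(String value) {
           // Null-safe: "true".equalsIgnoreCase(null) returns false, so the
           // separate null check in the original code becomes unnecessary.
           return "true".equalsIgnoreCase(value);
       }

       public static void main(String[] args) {
           System.out.println(isDisabled("TRUE"));
           System.out.println(isDisabled(null));
       }
   }
   ```

   Beyond brevity, `equalsIgnoreCase` sidesteps locale-sensitive case mapping (e.g. the Turkish dotless-i problem that `toLowerCase(Locale.ROOT)` was guarding against).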

##########
File path: heron/scheduler-core/src/java/org/apache/heron/scheduler/LaunchRunner.java
##########
@@ -169,13 +173,45 @@ public void call() throws LauncherException, PackingException, SubmitDryRunRespo
           "Failed to set execution state for topology '%s'", topologyName));
     }
 
-    // launch the topology, clear the state if it fails
-    if (!launcher.launch(packedPlan)) {
+    // Launch the topology, clear the state if it fails. Some schedulers throw exceptions instead of
+    // returning false. In some cases the scheduler needs to have the topology deleted.
+    try {
+      if (!launcher.launch(packedPlan)) {
+        throw new TopologySubmissionException(null);
+      }
+    } catch (TopologySubmissionException e) {
+      // Compile error message to throw.
+      final StringBuilder errorMessage = new StringBuilder(
+          String.format("Failed to launch topology '%s'", topologyName));
+      if (e.getMessage() != null) {
+        errorMessage.append(String.format("%n%s", e.getMessage()));
+      }
+
+      try {
+        // Clear state from the Scheduler via RPC.
+        Scheduler.KillTopologyRequest killTopologyRequest = Scheduler.KillTopologyRequest
+            .newBuilder()
+            .setTopologyName(topologyName).build();
+
+        ISchedulerClient schedulerClient = new SchedulerClientFactory(config, runtime)
+            .getSchedulerClient();
+        if (!schedulerClient.killTopology(killTopologyRequest)) {
+          final String logMessage =
+              String.format("Failed to remove topology '%s' from scheduler after failed submit. "
+                  + "Please re-try the kill command.", topologyName);
+          errorMessage.append(String.format("%n%s", logMessage));
+          LOG.log(Level.SEVERE, logMessage);
+        }
+      // SUPPRESS CHECKSTYLE IllegalCatch
+      } catch (Exception ignored){
+        // The above call to clear the Scheduler may fail. This situation can be ignored.

Review comment:
       Could be useful to have the info in the errorMessage I feel.

##########
File path: deploy/kubernetes/general/apiserver.yaml
##########
@@ -95,6 +95,7 @@ spec:
               -D heron.uploader.dlog.topologies.namespace.uri=distributedlog://zookeeper:2181/heron
               -D heron.statefulstorage.classname=org.apache.heron.statefulstorage.dlog.DlogStorage
               -D heron.statefulstorage.dlog.namespace.uri=distributedlog://zookeeper:2181/heron
+            # -D heron.kubernetes.pod.template.configmap.disabled=true

Review comment:
       do we want to remove this line or keep it?







[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-944974133


   @nicknezis thank you for fixing the build script! I think the next thing to work on is exercising the Python daemons...
   
   Do you think it would be handy to allow users to add to the Pod `tolerations` and the `metadata`? The Heron supplied values would take precedence, of course.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-945050628


   I think we're ok with `metadata` because there are properties which can set the labels and annotations. Tolerations I'm not as sure about. If my Pod Template has tolerations, will the logic currently wipe that out? I think it would make sense for the tolerations to persist.
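   One possible merge policy matching the comment above can be sketched like this (a hypothetical illustration only: plain strings stand in for `V1Toleration` objects, and the policy itself is an assumption, not Heron's actual logic): tolerations from the user's Pod Template persist, and Heron-supplied ones are appended only when not already present.

   ```java
   import java.util.ArrayList;
   import java.util.LinkedHashSet;
   import java.util.List;

   // Hypothetical sketch of a toleration merge that preserves the user's
   // Pod Template entries and appends system-supplied entries afterwards.
   public class TolerationMerge {
       static List<String> merge(List<String> fromTemplate, List<String> fromHeron) {
           // LinkedHashSet keeps template order and drops duplicates from Heron.
           LinkedHashSet<String> merged = new LinkedHashSet<>(fromTemplate);
           merged.addAll(fromHeron);
           return new ArrayList<>(merged);
       }
   }
   ```

   With a policy like this, a template's tolerations would not be wiped out; whether Heron should instead take precedence on conflicts is exactly the design question raised in the thread.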





[GitHub] [incubator-heron] nwangtw commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nwangtw commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r714544245



##########
File path: heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java
##########
@@ -386,7 +386,7 @@ private V1StatefulSet createStatefulSet(Resource containerResource, int numberOf
     statefulSetSpec.selector(selector);
 
     // create a pod template
-    final V1PodTemplateSpec podTemplateSpec = loadPodFromTemplate();
+    final V1PodTemplateSpec podTemplateSpec = new V1PodTemplateSpec();

Review comment:
       Hmm. Maybe it is because `loadPodFromTemplate()` throws an exception?







[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926707909


   Thank you @joshfischer1108. I am hoping that updating the build scripts and bringing the dependencies up to date will resolve some, if not all, of the Apple silicon issues. I am aware there are a few virtualization issues with Apple silicon. Updating the build scripts and dependencies will likely be an involved, "all available hands" situation.





[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926736471


   > @joshfischer1108 sure, but we still need to run some deployment testing and need @nicknezis's feedback when he has time. I shall write up the basics but this is really Nick's feature and he is more knowledgeable about K8s, so I would ask him to add to it and sign off on the documentation.
   > 
   > I have tried to be very careful with code from a security standpoint: hard failures and nothing silent or byzantine. We do not want a situation where a Pod Template is set but there is a path in the code that allows a bad configuration through. We need very critical and careful reviewing.
   
   Agreed on Nick's sign-off. I think it would be good for you to author the docs with Nick's guidance. I can help in some aspects as well. The documentation can come in a different pull request.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-933998100


   Yes, that did help. `Kubeval` is definitely helpful. Making a valid PodTemplate took me longer than expected.
   
   I was able to get the submit to succeed, but I'm not seeing things persist. I tried adding an InitContainer, but I didn't see it in the resulting StatefulSet. I'll look over the logic to find the issue, but this might be related to some of the changes you mentioned you will soon be making?
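
   As a concrete check (the container names and images below are hypothetical), a Pod Template of roughly this shape should yield an `initContainers` entry in the generated StatefulSet if the merge logic preserves it:

   ```yaml
   apiVersion: v1
   kind: PodTemplate
   metadata:
     name: heron-pod-template   # hypothetical name
   template:
     spec:
       initContainers:
         - name: warm-up        # hypothetical init container
           image: busybox
           command: ["sh", "-c", "echo init"]
       containers:
         - name: placeholder
           image: busybox
   ```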





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-934013942


   > 
   > I think we will need to follow the same protocol and standards and document this information. Not all fields are lists or maps...
   
   Yes, and I added some logic to set labels and annotations with config properties, so there is no need to use the PodTemplate for setting those. We should list the parts of the PodTemplate that will be replaced, and the config items that can be used to set them (e.g. environment variables, labels, annotations).
   
   For `getPodSpec()`, maybe we just always modify the `PodSpec` that exists on the PodTemplate (instead of setting a brand new `PodSpec`).
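
   A minimal, self-contained sketch of the distinction being suggested (plain Java with stand-in classes, not the kubernetes-client `V1PodSpec`/`V1PodTemplateSpec` API): overwriting the spec discards whatever the user template carried, while mutating the existing spec preserves it.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class PodSpecMergeSketch {
     // Stand-in for a PodSpec: just a list of container names.
     static class PodSpec {
       final List<String> containers = new ArrayList<>();
     }

     // Stand-in for a PodTemplateSpec.
     static class PodTemplate {
       PodSpec spec = new PodSpec();
     }

     // Setting a brand-new spec discards anything the user template carried.
     static void replaceSpec(PodTemplate template) {
       PodSpec fresh = new PodSpec();
       fresh.containers.add("executor");
       template.spec = fresh;
     }

     // Modifying the existing spec preserves user-supplied entries.
     static void modifySpec(PodTemplate template) {
       template.spec.containers.add("executor");
     }

     public static void main(String[] args) {
       PodTemplate replaced = new PodTemplate();
       replaced.spec.containers.add("user-init");
       replaceSpec(replaced);

       PodTemplate modified = new PodTemplate();
       modified.spec.containers.add("user-init");
       modifySpec(modified);

       System.out.println(replaced.spec.containers); // [executor]
       System.out.println(modified.spec.containers); // [user-init, executor]
     }
   }
   ```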





[GitHub] [incubator-heron] joshfischer1108 commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-919135566


   > There are some issues with testing the `createStatefulSet` method in the `V1Controller`, namely its private scope and an inability to mock it. It also performs some disk reads for files when setting up the Topology config. This requires mocking to simulate the disk reads, and there is no such functionality in place within the `TopologyUtilsTest`s.
   > 
   > I have a WIP `testCreateStatefulSet` method set up, but it seems like it might have to be removed along with a refactoring of everything else.
   
   Ok, let's create an issue around this and clean this up later.
   





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-934905204


   Sysadmins will probably want a kill switch for this feature, so a boot flag to turn it off must be provided.
   
   Any idea whether the `-D` boot/command-line flag for the `heron-apiserver` adds the key-value pair to the `Config` object? I have the base logic completed and am writing the test suite.





[GitHub] [incubator-heron] nicknezis commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nicknezis commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-943306744


   I didn't realize I had the ability to push to your branch. I made the Helm chart update, so that should be good.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-917798710


   @joshfischer1108 I have added some tests for the exposed code, but the routines in the `V1Controller` are scoped private. Testing these routines would mean making them protected and then exposing them through an accessor test class that extends `V1Controller`. I am not sure changing the access level of the methods in `V1Controller` makes sense.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-924531062


   Hi @nicknezis, could you please clarify whether this is what a `ConfigMap` with a `Pod Template` would look like and, if not, provide an example?
   
   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: some-config-map-name
   data:
     heron.kubernetes.pod.template.configmap.name: |
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: heron-tracker
         namespace: default
       spec:
         selector:
           matchLabels:
             app: heron-tracker
         template:
           metadata:
             labels:
               app: heron-tracker
           spec:
             containers:
               - name: heron-tracker
                 image: apache/heron:latest
                 ports:
                   - containerPort: 8888
                     name: api-port
                 command: ["sh", "-c"]
                 args:
                   - >-
                     heron-tracker
                     --type=zookeeper
                     --name=kubernetes
                     --hostport=zookeeper:2181
                     --rootpath="/heron"
                 resources:
                   requests:
                     cpu: "100m"
                     memory: "200M"
                   limits:
                     cpu: "400m"
                     memory: "512M"
   ```
   
   I am in the final stages of testing the code and need to make sure I have the correct format for the `ConfigMap`s in case I need to make tweaks.





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-926128468


   This is a perplexing issue. The tests appear to pass in a fresh Ubuntu 18.04 Docker container on a better-spec'd machine but fail on a lower-spec'd machine. Commands to set up and execute the build and tests are as follows:
   
   ```bash
   cd heron_root_dir
   docker/scripts/dev-env-create.sh heron-dev
   
   ./bazel_configure.py
   bazel build --config=ubuntu heron/...
   
   bazel test --config=ubuntu --test_output=errors --cache_test_results=no --test_verbose_timeout_warnings heron/...
   bazel test --config=ubuntu --test_output=all --cache_test_results=no --test_verbose_timeout_warnings heron/stmgr/tests/cpp/server:stmgr_unittest
   ```
   
   When I `cat` the `V1Controller.java` file in the container's terminal, `loadPodFromTemplate` is as it should be and is wired up correctly in `createStatefulSet` on line 387. Could someone else please clone the repo and check whether the build passes? @joshfischer1108, @nicknezis, and anyone else who has some time, I could use an extra set of objective eyes on this. I am sure it is just something silly that got overlooked.
   
   I have also cleaned out the `loadPodFromTemplate` function and set it up to simply return a new/empty `V1PodTemplateSpec`, to no avail on the more constrained system:
   
   ```java
     private V1PodTemplateSpec loadPodFromTemplate() {
   
       return new V1PodTemplateSpec();
     }
   ```
   
   <details><summary>Complete test output - higher spec</summary>
   <p>
   
   ```bash
   INFO: Elapsed time: 459.454s, Critical Path: 42.93s
   INFO: 1374 processes: 331 internal, 1043 local.
   INFO: Build completed successfully, 1374 total actions
   //heron/api/tests/cpp:serialization_unittest                             PASSED in 0.0s
   //heron/api/tests/java:BaseWindowedBoltTest                              PASSED in 0.3s
   //heron/api/tests/java:ConfigTest                                        PASSED in 0.2s
   //heron/api/tests/java:CountStatAndMetricTest                            PASSED in 0.3s
   //heron/api/tests/java:GeneralReduceByKeyAndWindowOperatorTest           PASSED in 0.4s
   //heron/api/tests/java:HeronSubmitterTest                                PASSED in 1.9s
   //heron/api/tests/java:JoinOperatorTest                                  PASSED in 0.4s
   //heron/api/tests/java:KVStreamletShadowTest                             PASSED in 0.5s
   //heron/api/tests/java:KeyByOperatorTest                                 PASSED in 0.5s
   //heron/api/tests/java:LatencyStatAndMetricTest                          PASSED in 0.3s
   //heron/api/tests/java:ReduceByKeyAndWindowOperatorTest                  PASSED in 0.4s
   //heron/api/tests/java:StreamletImplTest                                 PASSED in 0.4s
   //heron/api/tests/java:StreamletShadowTest                               PASSED in 0.4s
   //heron/api/tests/java:StreamletUtilsTest                                PASSED in 0.3s
   //heron/api/tests/java:UtilsTest                                         PASSED in 0.3s
   //heron/api/tests/java:WaterMarkEventGeneratorTest                       PASSED in 0.4s
   //heron/api/tests/java:WindowManagerTest                                 PASSED in 0.3s
   //heron/api/tests/java:WindowedBoltExecutorTest                          PASSED in 0.5s
   //heron/api/tests/scala:api-scala-test                                   PASSED in 0.9s
     WARNING: //heron/api/tests/scala:api-scala-test: Test execution time (0.9s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/ckptmgr/tests/java:CheckpointManagerServerTest                   PASSED in 0.6s
   //heron/common/tests/cpp/basics:fileutils_unittest                       PASSED in 0.0s
   //heron/common/tests/cpp/basics:rid_unittest                             PASSED in 0.0s
   //heron/common/tests/cpp/basics:strutils_unittest                        PASSED in 0.0s
   //heron/common/tests/cpp/basics:utils_unittest                           PASSED in 0.0s
   //heron/common/tests/cpp/config:topology-config-helper_unittest          PASSED in 0.0s
   //heron/common/tests/cpp/errors:errors_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:module_unittest                          PASSED in 0.0s
   //heron/common/tests/cpp/errors:syserrs_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/metrics:count-metric_unittest                   PASSED in 0.0s
   //heron/common/tests/cpp/metrics:mean-metric_unittest                    PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-count-metric_unittest             PASSED in 0.0s
   //heron/common/tests/cpp/metrics:multi-mean-metric_unittest              PASSED in 0.0s
   //heron/common/tests/cpp/metrics:time-spent-metric_unittest              PASSED in 1.3s
   //heron/common/tests/cpp/network:http_unittest                           PASSED in 0.1s
   //heron/common/tests/cpp/network:order_unittest                          PASSED in 0.1s
   //heron/common/tests/cpp/network:packet_unittest                         PASSED in 0.0s
   //heron/common/tests/cpp/network:piper_unittest                          PASSED in 2.0s
   //heron/common/tests/cpp/network:rate_limit_unittest                     PASSED in 4.1s
   //heron/common/tests/cpp/network:switch_unittest                         PASSED in 0.2s
   //heron/common/tests/cpp/threads:spcountdownlatch_unittest               PASSED in 2.0s
   //heron/common/tests/java:ByteAmountTest                                 PASSED in 0.3s
   //heron/common/tests/java:CommunicatorTest                               PASSED in 0.3s
   //heron/common/tests/java:ConfigReaderTest                               PASSED in 0.3s
   //heron/common/tests/java:EchoTest                                       PASSED in 0.6s
   //heron/common/tests/java:FileUtilsTest                                  PASSED in 1.0s
   //heron/common/tests/java:HeronServerTest                                PASSED in 1.5s
   //heron/common/tests/java:PackageTypeTest                                PASSED in 0.3s
   //heron/common/tests/java:SysUtilsTest                                   PASSED in 4.9s
   //heron/common/tests/java:SystemConfigTest                               PASSED in 0.4s
   //heron/common/tests/java:TopologyUtilsTest                              PASSED in 0.4s
   //heron/common/tests/java:WakeableLooperTest                             PASSED in 1.3s
   //heron/common/tests/python/pex_loader:pex_loader_unittest               PASSED in 0.7s
   //heron/downloaders/tests/java:DLDownloaderTest                          PASSED in 0.8s
   //heron/downloaders/tests/java:ExtractorTests                            PASSED in 0.4s
   //heron/downloaders/tests/java:RegistryTest                              PASSED in 0.4s
   //heron/executor/tests/python:executor_unittest                          PASSED in 1.0s
   //heron/healthmgr/tests/java:BackPressureDetectorTest                    PASSED in 0.6s
   //heron/healthmgr/tests/java:BackPressureSensorTest                      PASSED in 0.6s
   //heron/healthmgr/tests/java:BufferSizeSensorTest                        PASSED in 0.5s
   //heron/healthmgr/tests/java:DataSkewDiagnoserTest                       PASSED in 0.5s
   //heron/healthmgr/tests/java:ExecuteCountSensorTest                      PASSED in 0.5s
   //heron/healthmgr/tests/java:GrowingWaitQueueDetectorTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:HealthManagerTest                           PASSED in 0.9s
   //heron/healthmgr/tests/java:HealthPolicyConfigReaderTest                PASSED in 0.5s
   //heron/healthmgr/tests/java:LargeWaitQueueDetectorTest                  PASSED in 0.7s
   //heron/healthmgr/tests/java:MetricsCacheMetricsProviderTest             PASSED in 0.6s
   //heron/healthmgr/tests/java:PackingPlanProviderTest                     PASSED in 0.6s
   //heron/healthmgr/tests/java:ProcessingRateSkewDetectorTest              PASSED in 0.6s
   //heron/healthmgr/tests/java:ScaleUpResolverTest                         PASSED in 0.7s
   //heron/healthmgr/tests/java:SlowInstanceDiagnoserTest                   PASSED in 0.6s
   //heron/healthmgr/tests/java:UnderProvisioningDiagnoserTest              PASSED in 0.5s
   //heron/healthmgr/tests/java:WaitQueueSkewDetectorTest                   PASSED in 0.5s
   //heron/instance/tests/java:ActivateDeactivateTest                       PASSED in 0.5s
   //heron/instance/tests/java:BoltInstanceTest                             PASSED in 0.5s
   //heron/instance/tests/java:BoltStatefulInstanceTest                     PASSED in 2.5s
   //heron/instance/tests/java:ConnectTest                                  PASSED in 0.6s
   //heron/instance/tests/java:CustomGroupingTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectBoltTest                           PASSED in 0.6s
   //heron/instance/tests/java:EmitDirectSpoutTest                          PASSED in 0.8s
   //heron/instance/tests/java:GlobalMetricsTest                            PASSED in 0.3s
   //heron/instance/tests/java:HandleReadTest                               PASSED in 0.7s
   //heron/instance/tests/java:HandleWriteTest                              PASSED in 5.8s
   //heron/instance/tests/java:MultiAssignableMetricTest                    PASSED in 0.3s
   //heron/instance/tests/java:SpoutInstanceTest                            PASSED in 2.6s
   //heron/instance/tests/java:SpoutStatefulInstanceTest                    PASSED in 2.5s
   //heron/instance/tests/python/network:event_looper_unittest              PASSED in 2.9s
   //heron/instance/tests/python/network:gateway_looper_unittest            PASSED in 10.8s
   //heron/instance/tests/python/network:heron_client_unittest              PASSED in 0.9s
   //heron/instance/tests/python/network:metricsmgr_client_unittest         PASSED in 1.0s
   //heron/instance/tests/python/network:protocol_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/network:st_stmgrcli_unittest               PASSED in 0.9s
   //heron/instance/tests/python/utils:communicator_unittest                PASSED in 0.9s
   //heron/instance/tests/python/utils:custom_grouping_unittest             PASSED in 0.9s
   //heron/instance/tests/python/utils:global_metrics_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:log_unittest                         PASSED in 0.8s
   //heron/instance/tests/python/utils:metrics_helper_unittest              PASSED in 1.0s
   //heron/instance/tests/python/utils:outgoing_tuple_helper_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:pplan_helper_unittest                PASSED in 1.0s
   //heron/instance/tests/python/utils:py_metrics_unittest                  PASSED in 0.9s
   //heron/instance/tests/python/utils:topology_context_impl_unittest       PASSED in 0.9s
   //heron/instance/tests/python/utils:tuple_helper_unittest                PASSED in 0.9s
   //heron/io/dlog/tests/java:DLInputStreamTest                             PASSED in 0.5s
   //heron/io/dlog/tests/java:DLOutputStreamTest                            PASSED in 0.5s
   //heron/metricscachemgr/tests/java:CacheCoreTest                         PASSED in 0.5s
   //heron/metricscachemgr/tests/java:MetricsCacheQueryUtilsTest            PASSED in 0.4s
   //heron/metricscachemgr/tests/java:MetricsCacheTest                      PASSED in 0.4s
   //heron/metricsmgr/tests/java:FileSinkTest                               PASSED in 0.5s
   //heron/metricsmgr/tests/java:HandleTManagerLocationTest                 PASSED in 0.5s
   //heron/metricsmgr/tests/java:MetricsCacheSinkTest                       PASSED in 9.4s
   //heron/metricsmgr/tests/java:MetricsManagerServerTest                   PASSED in 0.6s
   //heron/metricsmgr/tests/java:MetricsUtilTests                           PASSED in 0.4s
   //heron/metricsmgr/tests/java:PrometheusSinkTests                        PASSED in 0.4s
   //heron/metricsmgr/tests/java:SinkExecutorTest                           PASSED in 0.5s
   //heron/metricsmgr/tests/java:TManagerSinkTest                           PASSED in 9.4s
   //heron/metricsmgr/tests/java:WebSinkTest                                PASSED in 0.5s
   //heron/packing/tests/java:FirstFitDecreasingPackingTest                 PASSED in 0.6s
   //heron/packing/tests/java:PackingPlanBuilderTest                        PASSED in 0.4s
   //heron/packing/tests/java:PackingUtilsTest                              PASSED in 0.4s
   //heron/packing/tests/java:ResourceCompliantRRPackingTest                PASSED in 0.8s
   //heron/packing/tests/java:RoundRobinPackingTest                         PASSED in 0.6s
   //heron/packing/tests/java:ScorerTest                                    PASSED in 0.2s
   //heron/scheduler-core/tests/java:HttpServiceSchedulerClientTest         PASSED in 1.0s
   //heron/scheduler-core/tests/java:JsonFormatterUtilsTest                 PASSED in 0.4s
   //heron/scheduler-core/tests/java:LaunchRunnerTest                       PASSED in 1.1s
   //heron/scheduler-core/tests/java:LauncherUtilsTest                      PASSED in 1.7s
   //heron/scheduler-core/tests/java:LibrarySchedulerClientTest             PASSED in 0.5s
   //heron/scheduler-core/tests/java:RuntimeManagerMainTest                 PASSED in 2.3s
   //heron/scheduler-core/tests/java:RuntimeManagerRunnerTest               PASSED in 2.1s
   //heron/scheduler-core/tests/java:SchedulerClientFactoryTest             PASSED in 1.1s
   //heron/scheduler-core/tests/java:SchedulerMainTest                      PASSED in 3.0s
   //heron/scheduler-core/tests/java:SchedulerServerTest                    PASSED in 0.5s
   //heron/scheduler-core/tests/java:SchedulerUtilsTest                     PASSED in 1.2s
   //heron/scheduler-core/tests/java:SubmitDryRunRenderTest                 PASSED in 1.6s
   //heron/scheduler-core/tests/java:SubmitterMainTest                      PASSED in 1.1s
   //heron/scheduler-core/tests/java:UpdateDryRunRenderTest                 PASSED in 1.5s
   //heron/scheduler-core/tests/java:UpdateTopologyManagerTest              PASSED in 11.9s
   //heron/schedulers/tests/java:AuroraCLIControllerTest                    PASSED in 0.5s
   //heron/schedulers/tests/java:AuroraContextTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:AuroraLauncherTest                         PASSED in 0.8s
   //heron/schedulers/tests/java:AuroraSchedulerTest                        PASSED in 2.9s
   //heron/schedulers/tests/java:HeronExecutorTaskTest                      PASSED in 1.5s
   //heron/schedulers/tests/java:HeronMasterDriverTest                      PASSED in 1.7s
   //heron/schedulers/tests/java:KubernetesContextTest                      PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesControllerTest                   PASSED in 0.3s
   //heron/schedulers/tests/java:KubernetesLauncherTest                     PASSED in 0.7s
   //heron/schedulers/tests/java:KubernetesSchedulerTest                    PASSED in 0.7s
   //heron/schedulers/tests/java:LaunchableTaskTest                         PASSED in 0.5s
   //heron/schedulers/tests/java:LocalLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:LocalSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MarathonControllerTest                     PASSED in 1.0s
   //heron/schedulers/tests/java:MarathonLauncherTest                       PASSED in 0.7s
   //heron/schedulers/tests/java:MarathonSchedulerTest                      PASSED in 0.4s
   //heron/schedulers/tests/java:MesosFrameworkTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:MesosLauncherTest                          PASSED in 0.5s
   //heron/schedulers/tests/java:MesosSchedulerTest                         PASSED in 0.6s
   //heron/schedulers/tests/java:NomadSchedulerTest                         PASSED in 1.9s
   //heron/schedulers/tests/java:SlurmControllerTest                        PASSED in 1.1s
   //heron/schedulers/tests/java:SlurmLauncherTest                          PASSED in 0.9s
   //heron/schedulers/tests/java:SlurmSchedulerTest                         PASSED in 1.1s
   //heron/schedulers/tests/java:TaskResourcesTest                          PASSED in 0.3s
   //heron/schedulers/tests/java:TaskUtilsTest                              PASSED in 0.3s
   //heron/schedulers/tests/java:V1ControllerTest                           PASSED in 1.6s
   //heron/schedulers/tests/java:VolumesTests                               PASSED in 0.3s
   //heron/schedulers/tests/java:YarnLauncherTest                           PASSED in 0.6s
   //heron/schedulers/tests/java:YarnSchedulerTest                          PASSED in 0.4s
   //heron/simulator/tests/java:AllGroupingTest                             PASSED in 0.3s
   //heron/simulator/tests/java:CustomGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:FieldsGroupingTest                          PASSED in 0.7s
   //heron/simulator/tests/java:InstanceExecutorTest                        PASSED in 0.5s
   //heron/simulator/tests/java:LowestGroupingTest                          PASSED in 0.3s
   //heron/simulator/tests/java:RotatingMapTest                             PASSED in 0.3s
   //heron/simulator/tests/java:ShuffleGroupingTest                         PASSED in 0.3s
   //heron/simulator/tests/java:SimulatorTest                               PASSED in 0.4s
   //heron/simulator/tests/java:TopologyManagerTest                         PASSED in 0.4s
   //heron/simulator/tests/java:TupleCacheTest                              PASSED in 0.3s
   //heron/simulator/tests/java:XORManagerTest                              PASSED in 0.4s
   //heron/spi/tests/java:ConfigLoaderTest                                  PASSED in 1.3s
   //heron/spi/tests/java:ConfigTest                                        PASSED in 0.9s
   //heron/spi/tests/java:ContextTest                                       PASSED in 0.3s
   //heron/spi/tests/java:ExceptionInfoTest                                 PASSED in 0.2s
   //heron/spi/tests/java:KeysTest                                          PASSED in 0.2s
   //heron/spi/tests/java:MetricsInfoTest                                   PASSED in 0.2s
   //heron/spi/tests/java:MetricsRecordTest                                 PASSED in 0.2s
   //heron/spi/tests/java:NetworkUtilsTest                                  PASSED in 1.5s
   //heron/spi/tests/java:PackingPlanTest                                   PASSED in 0.3s
   //heron/spi/tests/java:ResourceTest                                      PASSED in 0.3s
   //heron/spi/tests/java:ShellUtilsTest                                    PASSED in 2.2s
   //heron/spi/tests/java:TokenSubTest                                      PASSED in 0.4s
   //heron/spi/tests/java:UploaderUtilsTest                                 PASSED in 0.5s
   //heron/statefulstorages/tests/java:DlogStorageTest                      PASSED in 1.9s
   //heron/statefulstorages/tests/java:HDFSStorageTest                      PASSED in 2.0s
   //heron/statefulstorages/tests/java:LocalFileSystemStorageTest           PASSED in 0.9s
   //heron/statemgrs/tests/cpp:zk-statemgr_unittest                         PASSED in 0.0s
   //heron/statemgrs/tests/java:CuratorStateManagerTest                     PASSED in 0.6s
   //heron/statemgrs/tests/java:LocalFileSystemStateManagerTest             PASSED in 1.2s
   //heron/statemgrs/tests/java:ZkUtilsTest                                 PASSED in 1.0s
   //heron/statemgrs/tests/python:configloader_unittest                     PASSED in 1.0s
   //heron/statemgrs/tests/python:statemanagerfactory_unittest              PASSED in 0.8s
   //heron/statemgrs/tests/python:zkstatemanager_unittest                   PASSED in 0.9s
   //heron/stmgr/tests/cpp/grouping:all-grouping_unittest                   PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:custom-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:fields-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:lowest-grouping_unittest                PASSED in 0.0s
   //heron/stmgr/tests/cpp/grouping:shuffle-grouping_unittest               PASSED in 0.0s
   //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest               PASSED in 1.2s
     WARNING: //heron/stmgr/tests/cpp/server:checkpoint-gateway_unittest: Test execution time (1.2s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stateful-restorer_unittest                PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/server:stateful-restorer_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest               PASSED in 0.0s
     WARNING: //heron/stmgr/tests/cpp/util:neighbour_calculator_unittest: Test execution time (0.0s excluding execution overhead) outside of range for MODERATE tests. Consider setting timeout="short" or size="small".
   //heron/stmgr/tests/cpp/util:rotating-map_unittest                       PASSED in 0.0s
   //heron/stmgr/tests/cpp/util:tuple-cache_unittest                        PASSED in 3.6s
   //heron/stmgr/tests/cpp/util:xor-manager_unittest                        PASSED in 4.0s
   //heron/tmanager/tests/cpp/server:stateful_checkpointer_unittest         PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:stateful_restorer_unittest             PASSED in 5.0s
   //heron/tmanager/tests/cpp/server:tcontroller_unittest                   PASSED in 0.0s
   //heron/tmanager/tests/cpp/server:tmanager_unittest                      PASSED in 26.1s
   //heron/tools/apiserver/tests/java:ConfigUtilsTests                      PASSED in 0.5s
   //heron/tools/apiserver/tests/java:TopologyResourceTests                 PASSED in 1.1s
   //heron/tools/cli/tests/python:client_command_unittest                   PASSED in 1.1s
   //heron/tools/cli/tests/python:opts_unittest                             PASSED in 0.8s
   //heron/tools/explorer/tests/python:explorer_unittest                    PASSED in 1.1s
   //heron/tools/tracker/tests/python:query_operator_unittest               PASSED in 1.4s
   //heron/tools/tracker/tests/python:query_unittest                        PASSED in 1.3s
   //heron/tools/tracker/tests/python:topology_unittest                     PASSED in 1.2s
   //heron/tools/tracker/tests/python:tracker_unittest                      PASSED in 1.4s
   //heron/uploaders/tests/java:DlogUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:GcsUploaderTests                            PASSED in 0.7s
   //heron/uploaders/tests/java:HdfsUploaderTest                            PASSED in 0.4s
   //heron/uploaders/tests/java:HttpUploaderTest                            PASSED in 0.6s
   //heron/uploaders/tests/java:LocalFileSystemConfigTest                   PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemContextTest                  PASSED in 0.3s
   //heron/uploaders/tests/java:LocalFileSystemUploaderTest                 PASSED in 0.4s
   //heron/uploaders/tests/java:S3UploaderTest                              PASSED in 0.9s
   //heron/uploaders/tests/java:ScpUploaderTest                             PASSED in 0.5s
   
   INFO: Build completed successfully, 1374 total actions
   ```
   
   </p>
   </details>
   
   <details><summary>//heron/stmgr/tests/cpp/server:stmgr_unittest - higher spec</summary>
   <p>
   
   ```bash
   ==================== Test output for //heron/stmgr/tests/cpp/server:stmgr_unittest:
   Current working directory (to find stmgr logs) /root/.cache/bazel/_bazel_root/f4ab758bd53020512013f7dfa13b6902/execroot/org_apache_heron/bazel-out/k8-fastbuild/bin/heron/stmgr/tests/cpp/server/stmgr_unittest.runfiles/org_apache_heron
   Using config file heron/config/src/yaml/conf/test/test_heron_internals.yaml
   [==========] Running 11 tests from 1 test case.
   [----------] Global test environment set-up.
   [----------] 11 tests from StMgr
   [ RUN      ] StMgr.test_pplan_decode
   [warn] Added a signal to event base 0x55c0c05e0840 with signals already added to event_base 0x55c0c05e0580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    [... 22 similar libevent "Added a signal to event base" warnings omitted ...]
   [       OK ] StMgr.test_pplan_decode (2013 ms)
   [ RUN      ] StMgr.test_tuple_route
   [warn] Added a signal to event base 0x55c0c072e100 with signals already added to event_base (nil).  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    [... 9 similar libevent "Added a signal to event base" warnings omitted ...]
   [       OK ] StMgr.test_tuple_route (2006 ms)
   [ RUN      ] StMgr.test_custom_grouping_route
   [warn] Added a signal to event base 0x55c0c072c2c0 with signals already added to event_base 0x55c0c05e39c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    [... 8 similar libevent "Added a signal to event base" warnings omitted ...]
   [       OK ] StMgr.test_custom_grouping_route (2009 ms)
   [ RUN      ] StMgr.test_back_pressure_instance
   [warn] Added a signal to event base 0x55c0c05e2c00 with signals already added to event_base 0x55c0c05e2680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    [... 6 similar libevent "Added a signal to event base" warnings omitted ...]
   [       OK ] StMgr.test_back_pressure_instance (4002 ms)
   [ RUN      ] StMgr.test_spout_death_under_backpressure
   [warn] Added a signal to event base 0x55c0c05e2100 with signals already added to event_base 0x55c0c05e1b80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    [... 8 similar libevent "Added a signal to event base" warnings omitted ...]
   [       OK ] StMgr.test_spout_death_under_backpressure (6144 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr
   [warn] Added a signal to event base 0x55c0c05e2940 with signals already added to event_base 0x55c0c072cdc0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    [... 9 similar libevent "Added a signal to event base" warnings omitted ...]
   [       OK ] StMgr.test_back_pressure_stmgr (5005 ms)
   [ RUN      ] StMgr.test_back_pressure_stmgr_reconnect
   [warn] Added a signal to event base 0x55c0c05e2ec0 with signals already added to event_base 0x55c0c05e3440.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    [... 7 similar libevent "Added a signal to event base" warnings omitted ...]
   [       OK ] StMgr.test_back_pressure_stmgr_reconnect (4035 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_new_address
   [warn] Added a signal to event base 0x55c0c072e940 with signals already added to event_base 0x55c0c072ec00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
    [... 9 similar libevent "Added a signal to event base" warnings omitted ...]
   [       OK ] StMgr.test_tmanager_restart_on_new_address (5015 ms)
   [ RUN      ] StMgr.test_tmanager_restart_on_same_address
   [warn] Added a signal to event base 0x55c0c072d340 with signals already added to event_base 0x55c0c072f9c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072d080 with signals already added to event_base 0x55c0c072d340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c580 with signals already added to event_base 0x55c0c072d080.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072eec0 with signals already added to event_base 0x55c0c072c580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072c000 with signals already added to event_base 0x55c0c072eec0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c072fc80 with signals already added to event_base 0x55c0c072c000.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca000 with signals already added to event_base 0x55c0c072fc80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca2c0 with signals already added to event_base 0x55c0c1cca000.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca580 with signals already added to event_base 0x55c0c1cca2c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1cca840 with signals already added to event_base 0x55c0c1cca580.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_tmanager_restart_on_same_address (4008 ms)
   [ RUN      ] StMgr.test_metricsmgr_reconnect
   [warn] Added a signal to event base 0x55c0c1cccc00 with signals already added to event_base 0x55c0c1cca840.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc940 with signals already added to event_base 0x55c0c1cccc00.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc680 with signals already added to event_base 0x55c0c1ccc940.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc3c0 with signals already added to event_base 0x55c0c1ccc680.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccc100 with signals already added to event_base 0x55c0c1ccc3c0.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccbb80 with signals already added to event_base 0x55c0c1ccc100.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccb600 with signals already added to event_base 0x55c0c1ccbb80.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccb340 with signals already added to event_base 0x55c0c1ccb600.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [warn] Added a signal to event base 0x55c0c1ccb080 with signals already added to event_base 0x55c0c1ccb340.  Only one can have signals at a time with the epoll backend.  The base with the most recently added signal or the most recent event_base_loop() call gets preference; do not rely on this behavior in future Libevent versions.
   [       OK ] StMgr.test_metricsmgr_reconnect (4004 ms)
   [ RUN      ] StMgr.test_PatchPhysicalPlanWithHydratedTopology
   [       OK ] StMgr.test_PatchPhysicalPlanWithHydratedTopology (0 ms)
   [----------] 11 tests from StMgr (38241 ms total)
   
   [----------] Global test environment tear-down
   [==========] 11 tests from 1 test case ran. (38241 ms total)
   [  PASSED  ] 11 tests.
   ================================================================================
   Target //heron/stmgr/tests/cpp/server:stmgr_unittest up-to-date:
     bazel-bin/heron/stmgr/tests/cpp/server/stmgr_unittest
   INFO: Elapsed time: 39.218s, Critical Path: 38.29s
   INFO: 2 processes: 1 internal, 1 local.
   INFO: Build completed successfully, 2 total actions
   //heron/stmgr/tests/cpp/server:stmgr_unittest                            PASSED in 38.3s
   
   INFO: Build completed successfully, 2 total actions
   ```
   </p>
   </details>
   
   **_Edit:_** The build finally passed with all changes on Travis CI. Bringing in some tweaks that should not break the build... Travis CI, please pass the build 🤞🏼.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-919251511


   I will write up what I have found on the issue with testing `createStatefulSet`. It is achievable, but it will be rather involved and require some serious digging into the codebase. I think your judgement not to conflate this issue with the other is sound.
   
   On a side note, I think it is time to switch over from Travis CI to GitHub Actions. In my experience, GitHub Actions runs faster than Travis CI, and not relying on a third-party service might speed things up.
   
   EDIT: I have created issue #3713 with my findings on the `V1Controller` test suite. If anyone has insights or comments please post there.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-933893882


   Hi @nicknezis, I have spun up Heron on K8s locally, but I can only allocate 4 cores and 4 GB of RAM, so the `acking` topology remains in a `Pending` state. I will test further later. The topology appears to launch just fine, but I am unsure whether the template is being used. No additional `Roles` or `RoleBindings` were required:
   
   <details>
     <summary>Config Map of Pod Template</summary>
   
   ```bash
   minikube kubectl -- get configmaps configmap-pod-template -o yaml
   ```
   
   ```yaml
   apiVersion: v1
   data:
     pod_template.yaml: |
       apiVersion: v1
       kind: PodTemplate
       metadata:
         name: pod-template-example
         namespace: default
       template:
         metadata:
           name: acking-pod-template-example
   kind: ConfigMap
   metadata:
     creationTimestamp: "2021-10-04T21:49:11Z"
     name: configmap-pod-template
     namespace: default
     resourceVersion: "1021"
     uid: da578cac-27cc-4378-8cf3-664a208fcd96
   ```
   </details>
   
   <details>
     <summary>Heron Submit</summary>
   
   ```bash
    heron submit kubernetes ~/.heron/examples/heron-api-examples.jar \
   --verbose \
   --config-property heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml \
   org.apache.heron.examples.api.AckingTopology acking
   ```
   
   ```bash
   [2021-10-04 17:54:40 -0400] [DEBUG]: Input Command Line Args: {'cluster/[role]/[env]': 'kubernetes', 'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '', 'service_url': '', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Input Command Line Args: {'cluster/[role]/[env]': 'kubernetes', 'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '', 'service_url': '', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Using cluster definition from file /home/saad/.config/heron/kubernetes/cli.yaml
   [2021-10-04 17:54:40 -0400] [DEBUG]: Processed Command Line Args: {'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '', 'service_url': 'http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit', 'cluster': 'kubernetes', 'role': 'saad', 'environ': 'default', 'submit_user': 'saad', 'deploy_mode': 'server'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Submit Args {'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '', 'service_url': 'http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit', 'cluster': 'kubernetes', 'role': 'saad', 'environ': 'default', 'submit_user': 'saad', 'deploy_mode': 'server'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Invoking class using command: `/usr/bin/java -client -Xmx1g -cp '/home/saad/.heron/examples/heron-api-examples.jar:/home/saad/.heron/lib/third_party/*' org.apache.heron.examples.api.AckingTopology acking`
   [2021-10-04 17:54:40 -0400] [DEBUG]: Heron options: {cmdline.topologydefn.tmpdirectory=/tmp/tmpvj0pjzzc,cmdline.topology.initial.state=RUNNING,cmdline.topology.role=saad,cmdline.topology.environment=default,cmdline.topology.cluster=kubernetes,cmdline.topology.file_name=/home/saad/.heron/examples/heron-api-examples.jar,cmdline.topology.class_name=org.apache.heron.examples.api.AckingTopology,cmdline.topology.submit_user=saad}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Topology config: kvs {
     key: "topology.component.rammap"
     value: "word:1073741824,exclaim1:1073741824"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.team.environment"
     serialized_value: "\254\355\000\005t\000\007default"
     type: JAVA_SERIALIZED_VALUE
   }
   kvs {
     key: "topology.container.disk"
     value: "2147483648"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.container.ram"
     value: "4294967296"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.enable.message.timeouts"
     value: "true"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.serializer.classname"
     value: "org.apache.heron.api.serializer.KryoSerializer"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.debug"
     value: "true"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.max.spout.pending"
     value: "1000000000"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.container.cpu"
     value: "3.0"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.stateful.spill.state"
     value: "false"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.name"
     value: "acking"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.team.name"
     value: "saad"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.stateful.spill.state.location"
     value: "./spilled-state/"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.component.parallelism"
     value: "1"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.stmgrs"
     value: "2"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.worker.childopts"
     value: "-XX:+HeapDumpOnOutOfMemoryError"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.reliability.mode"
     value: "ATLEAST_ONCE"
     type: STRING_VALUE
   }
   kvs {
     key: "topology.message.timeout.secs"
     value: "10"
     type: STRING_VALUE
   }
   
   [2021-10-04 17:54:40 -0400] [DEBUG]: Component config:
   [2021-10-04 17:54:40 -0400] [DEBUG]: word => kvs {
     key: "topology.component.parallelism"
     value: "2"
     type: STRING_VALUE
   }
   
   [2021-10-04 17:54:40 -0400] [DEBUG]: exclaim1 => kvs {
     key: "topology.component.parallelism"
     value: "2"
     type: STRING_VALUE
   }
   
   [2021-10-04 17:54:40 -0400] [INFO]: Launching topology: 'acking'
   [2021-10-04 17:54:40 -0400] [INFO]: {'topology-file-name': '/home/saad/.heron/examples/heron-api-examples.jar', 'topology-class-name': 'org.apache.heron.examples.api.AckingTopology', 'config_path': '/home/saad/.heron/conf', 'config_property': ['heron.kubernetes.pod.template.configmap.name=configmap-pod-template.pod_template.yaml'], 'deploy_deactivated': False, 'dry_run': False, 'dry_run_format': 'colored_table', 'extra_launch_classpath': '', 'release_yaml_file': '/home/saad/.heron/release.yaml', 'service_url': 'http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy', 'topology_main_jvm_property': [], 'verbose': 'True', 'verbose_gc': False, 'subcommand': 'submit', 'cluster': 'kubernetes', 'role': 'saad', 'environ': 'default', 'submit_user': 'saad', 'deploy_mode': 'server'}
   [2021-10-04 17:54:40 -0400] [DEBUG]: Starting new HTTP connection (1): localhost
   [2021-10-04 17:54:43 -0400] [DEBUG]: http://localhost:8001 "POST /api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/topologies HTTP/1.1" 201 78
   [2021-10-04 17:54:43 -0400] [INFO]: Successfully launched topology 'acking' 
   [2021-10-04 17:54:43 -0400] [DEBUG]: Elapsed time: 2.732s.
   ```
   </details>
   
   <details>
     <summary>Describe Pods</summary>
   
   ```bash
   Name:           acking-0
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-566bc767b6
                   statefulset.kubernetes.io/pod-name=acking-0
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       6002/TCP, 6003/TCP, 6001/TCP, 6009/TCP, 6008/TCP, 6004/TCP, 6006/TCP, 6007/TCP, 6005/TCP
       Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
         ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--3038063642361095900.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking56b40818-10c0-4cda-ac3a-2c27303fef23 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-b
 inary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --sh
 ell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:       (v1:status.podIP)
         POD_NAME:  acking-0 (v1:metadata.name)
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xfcf2 (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     kube-api-access-xfcf2:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Guaranteed
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason            Age   From               Message
     ----     ------            ----  ----               -------
     Warning  FailedScheduling  58s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   
   
   Name:           acking-1
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-566bc767b6
                   statefulset.kubernetes.io/pod-name=acking-1
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       6002/TCP, 6003/TCP, 6001/TCP, 6009/TCP, 6008/TCP, 6004/TCP, 6006/TCP, 6007/TCP, 6005/TCP
       Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
         ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--3038063642361095900.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking56b40818-10c0-4cda-ac3a-2c27303fef23 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-b
 inary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --sh
 ell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:       (v1:status.podIP)
         POD_NAME:  acking-1 (v1:metadata.name)
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c7vwm (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     kube-api-access-c7vwm:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Guaranteed
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason            Age   From               Message
     ----     ------            ----  ----               -------
     Warning  FailedScheduling  58s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   
   
   Name:           acking-2
   Namespace:      default
   Priority:       0
   Node:           <none>
   Labels:         app=heron
                   controller-revision-hash=acking-566bc767b6
                   statefulset.kubernetes.io/pod-name=acking-2
                   topology=acking
   Annotations:    prometheus.io/port: 8080
                   prometheus.io/scrape: true
   Status:         Pending
   IP:             
   IPs:            <none>
   Controlled By:  StatefulSet/acking
   Containers:
     executor:
       Image:       apache/heron:testbuild
       Ports:       6002/TCP, 6003/TCP, 6001/TCP, 6009/TCP, 6008/TCP, 6004/TCP, 6006/TCP, 6007/TCP, 6005/TCP
       Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
       Command:
         sh
         -c
         ./heron-core/bin/heron-downloader-config kubernetes && ./heron-core/bin/heron-downloader distributedlog://zookeeper:2181/heronbkdl/acking-saad-tag-0--3038063642361095900.tar.gz . && SHARD_ID=${POD_NAME##*-} && echo shardId=${SHARD_ID} && ./heron-core/bin/heron-executor --topology-name=acking --topology-id=acking56b40818-10c0-4cda-ac3a-2c27303fef23 --topology-defn-file=acking.defn --state-manager-connection=zookeeper:2181 --state-manager-root=/heron --state-manager-config-file=./heron-conf/statemgr.yaml --tmanager-binary=./heron-core/bin/heron-tmanager --stmgr-binary=./heron-core/bin/heron-stmgr --metrics-manager-classpath=./heron-core/lib/metricsmgr/* --instance-jvm-opts="LVhYOitIZWFwRHVtcE9uT3V0T2ZNZW1vcnlFcnJvcg(61)(61)" --classpath=heron-api-examples.jar --heron-internals-config-file=./heron-conf/heron_internals.yaml --override-config-file=./heron-conf/override.yaml --component-ram-map=exclaim1:1073741824,word:1073741824 --component-jvm-opts="" --pkg-type=jar --topology-b
 inary-file=heron-api-examples.jar --heron-java-home=$JAVA_HOME --heron-shell-binary=./heron-core/bin/heron-shell --cluster=kubernetes --role=saad --environment=default --instance-classpath=./heron-core/lib/instance/* --metrics-sinks-config-file=./heron-conf/metrics_sinks.yaml --scheduler-classpath=./heron-core/lib/scheduler/*:./heron-core/lib/packing/*:./heron-core/lib/statemgr/* --python-instance-binary=./heron-core/bin/heron-python-instance --cpp-instance-binary=./heron-core/bin/heron-cpp-instance --metricscache-manager-classpath=./heron-core/lib/metricscachemgr/* --metricscache-manager-mode=disabled --is-stateful=false --checkpoint-manager-classpath=./heron-core/lib/ckptmgr/*:./heron-core/lib/statefulstorage/*: --stateful-config-file=./heron-conf/stateful.yaml --checkpoint-manager-ram=1073741824 --health-manager-mode=disabled --health-manager-classpath=./heron-core/lib/healthmgr/* --shard=$SHARD_ID --server-port=6001 --tmanager-controller-port=6002 --tmanager-stats-port=6003 --sh
 ell-port=6004 --metrics-manager-port=6005 --scheduler-port=6006 --metricscache-manager-server-port=6007 --metricscache-manager-stats-port=6008 --checkpoint-manager-port=6009
       Limits:
         cpu:     3
         memory:  4Gi
       Requests:
         cpu:     3
         memory:  4Gi
       Environment:
         HOST:       (v1:status.podIP)
         POD_NAME:  acking-2 (v1:metadata.name)
       Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-snvvp (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     kube-api-access-snvvp:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   Guaranteed
   Node-Selectors:              <none>
   Tolerations:                 node.alpha.kubernetes.io/notReady:NoExecute op=Exists for 10s
                                node.alpha.kubernetes.io/unreachable:NoExecute op=Exists for 10s
                                node.kubernetes.io/not-ready:NoExecute op=Exists for 10s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason            Age   From               Message
     ----     ------            ----  ----               -------
     Warning  FailedScheduling  58s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
   ```
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@heron.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-934010151


   > I suspect what I had in the ConfigMap was being replaced with `getPodSpec()` in the code. We may need to merge the code configured aspects with anything loaded from the PodTemplate.
   
   That makes perfect sense, thank you! I shall have a look at how the two `V1PodSpec` objects can be merged.
   
   > It is important to note that Spark is opinionated about certain pod configurations so there are values in the pod template that will always be overwritten by Spark. Therefore, users of this feature should note that specifying the pod template file only lets Spark start with a template pod instead of an empty pod during the pod-building process. For details, see the full list of pod template values that will be overwritten by spark.
   
   I think we will need to follow the same protocol and standards, and document this information. Not all fields are lists or maps...
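   Since not every field merges the same way, a first cut could treat list-valued fields by concatenation, keeping the user's entries and appending Heron's required ones. A rough, self-contained sketch of that idea for `volumes` (the class and names here are hypothetical illustrations, not Heron code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: when merging the user's Pod Template with the spec
// Heron builds in createStatefulSet, list-valued fields such as volumes can
// be concatenated so the user's entries survive alongside Heron's.
public class PodSpecMergeSketch {
  static List<String> mergeVolumes(List<String> templateVolumes,
                                   List<String> heronVolumes) {
    List<String> merged = new ArrayList<>(templateVolumes); // keep user's volumes
    merged.addAll(heronVolumes);                            // append Heron's required ones
    return merged;
  }

  public static void main(String[] args) {
    List<String> fromTemplate = Arrays.asList("user-scratch");
    List<String> fromHeron = Arrays.asList("pod-template-volume");
    System.out.println(mergeVolumes(fromTemplate, fromHeron));
  }
}
```

   Scalar fields would instead need an explicit precedence rule, which is where documenting the overwritten values, as Spark does, comes in.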





[GitHub] [incubator-heron] surahman edited a comment on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman edited a comment on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-920551592


   I think I have an idea of what needs to happen... time to iterate :wink:.
   
   @nicknezis You are correct, Spark's architecture has a driver/coordinator with a fleet of executors. Is there already a `loadPodFromTemplate` in Heron or do we need to put one together? I could not find anything in the `V1Controller` or K8S scheduler codebase in Heron. The [`getContainer`](https://github.com/apache/incubator-heron/blob/7322335a5f4e824a8af2797a3b0dc82e315e3dfb/heron/schedulers/src/java/org/apache/heron/scheduler/kubernetes/V1Controller.java#L510-L574) routine is configuring the Docker container for executor deployment, is this the routine you are referring to?
   
   There is also [this](https://github.com/apache/spark/blob/bbb33af2e4c90d679542298f56073a71de001fb7/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala#L105), which indicates there is a built-in function in the K8S client library (Scala, and potentially Java too) that should handle the parsing and assembly of the Pod Template.
   
   Currently `createStatefulSet` relies on the `V1` K8S API to put together a default Pod Template in which most of the fields are simply set to `null`.





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-917719201


   @nicknezis thank you; I do not have a workload I can actually test this with.





[GitHub] [incubator-heron] nwangtw commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
nwangtw commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r728492711



##########
File path: deploy/kubernetes/general/apiserver.yaml
##########
@@ -95,6 +95,7 @@ spec:
               -D heron.uploader.dlog.topologies.namespace.uri=distributedlog://zookeeper:2181/heron
               -D heron.statefulstorage.classname=org.apache.heron.statefulstorage.dlog.DlogStorage
               -D heron.statefulstorage.dlog.namespace.uri=distributedlog://zookeeper:2181/heron
+            # -D heron.kubernetes.pod.template.configmap.disabled=true

Review comment:
       sgtm. thx!







[GitHub] [incubator-heron] surahman commented on a change in pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on a change in pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#discussion_r728445745



##########
File path: deploy/kubernetes/general/apiserver.yaml
##########
@@ -95,6 +95,7 @@ spec:
               -D heron.uploader.dlog.topologies.namespace.uri=distributedlog://zookeeper:2181/heron
               -D heron.statefulstorage.classname=org.apache.heron.statefulstorage.dlog.DlogStorage
               -D heron.statefulstorage.dlog.namespace.uri=distributedlog://zookeeper:2181/heron
+            # -D heron.kubernetes.pod.template.configmap.disabled=true

Review comment:
       > Another thought I just had is that we might want to update any other Kubernetes deployment yamls for API Server
   
   I shall look into this. I am currently working my way through testing some methods in the `V1Conttoller`.
   
   **_Edit:_** I am going to leave the Helm Charts to someone that has worked on them before. It is best not risk it.







[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-951513314


   I did some research and found [`readNamespacedConfigMap`](https://javadoc.io/doc/io.kubernetes/client-java-api/11.0.0/io/kubernetes/client/openapi/apis/CoreV1Api.html#readNamespacedConfigMap-java.lang.String-java.lang.String-java.lang.String-java.lang.Boolean-java.lang.Boolean-). This means I can simplify the code a lot and get rid of `listNamespacedConfigMap`.
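   For illustration: a ConfigMap is effectively a name plus a `data` map, so once `readNamespacedConfigMap` returns the single named ConfigMap, pulling the Pod Template out is just a keyed lookup, with no need to list and scan every ConfigMap in the namespace. A self-contained sketch of that extraction step (the class name and key are hypothetical, not Heron code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Hypothetical stand-in for the data returned by
// CoreV1Api.readNamespacedConfigMap(name, namespace, ...).getData():
// the Pod Template is expected under a known key, and a missing key
// should fail loudly rather than silently deploy an empty template.
public class ConfigMapLookupSketch {
  static String getPodTemplate(Map<String, String> configMapData, String key) {
    String template = configMapData.get(key);
    if (template == null) {
      throw new NoSuchElementException("ConfigMap does not contain key: " + key);
    }
    return template;
  }

  public static void main(String[] args) {
    Map<String, String> data = new HashMap<>();
    data.put("pod-template.yaml", "apiVersion: v1\nkind: PodTemplate");
    System.out.println(getPodTemplate(data, "pod-template.yaml"));
  }
}
```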





[GitHub] [incubator-heron] joshfischer1108 merged pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
joshfischer1108 merged pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710


   





[GitHub] [incubator-heron] surahman commented on pull request #3710: [HERON-3707] ConfigMap Pod Template Support

Posted by GitBox <gi...@apache.org>.
surahman commented on pull request #3710:
URL: https://github.com/apache/incubator-heron/pull/3710#issuecomment-954864671


   I realized whilst creating my slides that it makes more sense to let Heron's values for `limits` take precedence over those in the Pod Templates. The full battery of tests is passing locally; over to you, TravisCI 🎲 🤞🏼 .
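   A minimal sketch of that precedence rule, where Heron's computed `limits` overwrite any colliding keys from the Pod Template while template-only keys survive (class and names are illustrative only, not actual Heron code):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: Heron's resource limits take precedence over any
// limits the user placed in the Pod Template; keys the template sets
// that Heron does not manage are left untouched.
public class LimitsPrecedenceSketch {
  static Map<String, String> mergeLimits(Map<String, String> templateLimits,
                                         Map<String, String> heronLimits) {
    Map<String, String> merged = new HashMap<>(templateLimits); // template is the base
    merged.putAll(heronLimits);                                 // Heron wins on collisions
    return merged;
  }

  public static void main(String[] args) {
    Map<String, String> template = new HashMap<>();
    template.put("cpu", "1");                  // user-supplied, will be overridden
    template.put("ephemeral-storage", "2Gi");  // user-supplied, preserved
    Map<String, String> heron = new HashMap<>();
    heron.put("cpu", "3");                     // Heron-computed
    heron.put("memory", "4Gi");                // Heron-computed

    Map<String, String> merged = mergeLimits(template, heron);
    System.out.println(merged.get("cpu"));               // 3
    System.out.println(merged.get("ephemeral-storage")); // 2Gi
  }
}
```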

