Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2019/08/22 14:34:02 UTC

[GitHub] [hadoop] elek commented on a change in pull request #1331: HDDS-2002. Update documentation for 0.4.1 release.

elek commented on a change in pull request #1331: HDDS-2002. Update documentation for 0.4.1 release.
URL: https://github.com/apache/hadoop/pull/1331#discussion_r316712661
 
 

 ##########
 File path: hadoop-hdds/docs/content/beyond/Containers.md
 ##########
 @@ -111,23 +117,32 @@ OZONE-SITE.XML_ozone.enabled=True
 #...
 ```
 
-As you can see we use naming convention. Based on the name of the environment variable, the appropariate hadoop config XML (`ozone-site.xml` in our case) will be generated by a [script](https://github.com/apache/hadoop/tree/docker-hadoop-runner-latest/scripts) which is included in the `hadoop-runner` base image.
+As you can see, we use a naming convention: based on the name of the environment variable, the
+appropriate Hadoop config XML (`ozone-site.xml` in our case) is generated by a
+[script](https://github.com/apache/hadoop/tree/docker-hadoop-runner-latest/scripts) that is
+included in the `hadoop-runner` base image.
 
-The [entrypoint](https://github.com/apache/hadoop/blob/docker-hadoop-runner-latest/scripts/starter.sh) of the `hadoop-runner` image contains a helper shell script which triggers this transformation and cab do additional actions (eg. initialize scm/om storage, download required keytabs, etc.) based on environment variables.
+The [entrypoint](https://github.com/apache/hadoop/blob/docker-hadoop-runner-latest/scripts/starter.sh)
+of the `hadoop-runner` image contains a helper shell script which triggers this transformation and
+can perform additional actions (e.g., initialize SCM/OM storage, download required keytabs)
+based on environment variables.
 
 ## Test/Staging
 
-The `docker-compose` based approach is recommended only for local test not for multi node cluster. To use containers on a multi-node cluster we need a Container Orchestrator like Kubernetes.
+The `docker-compose` based approach is recommended only for local testing, not for multi-node clusters.
+To use containers on a multi-node cluster, we need a container orchestrator such as Kubernetes.
 
 Kubernetes example files are included in the `kubernetes` folder.
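The env-var-to-XML transformation described in the diff above can be sketched roughly as follows. This is a simplified Python illustration of the naming convention only; the real logic lives in the `hadoop-runner` scripts and is more elaborate:

```python
from xml.etree import ElementTree as ET

def envtoconf(environ):
    """Group variables like OZONE-SITE.XML_ozone.enabled=True by target file.

    The part before the first underscore names the config file
    (OZONE-SITE.XML -> ozone-site.xml); the rest is the property name.
    """
    configs = {}
    for key, value in environ.items():
        if "_" not in key:
            continue  # not a config variable
        prefix, prop = key.split("_", 1)
        if not prefix.endswith(".XML"):
            continue  # e.g. HOME, PATH: ignore
        configs.setdefault(prefix.lower(), {})[prop] = value
    return configs

def render(properties):
    """Render one property dict as a Hadoop configuration XML string."""
    root = ET.Element("configuration")
    for name, value in properties.items():
        entry = ET.SubElement(root, "property")
        ET.SubElement(entry, "name").text = name
        ET.SubElement(entry, "value").text = value
    return ET.tostring(root, encoding="unicode")
```

For example, `render(envtoconf({"OZONE-SITE.XML_ozone.enabled": "True"})["ozone-site.xml"])` yields a `<configuration>` block containing a single `ozone.enabled` property.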
 
 Review comment:
  The documentation is included in the tar file, where there is no target/dist. I think it's fine to use a path relative to the distribution tar. We can be more specific with the `kubernetes/examples` directory name.

