Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2020/12/07 13:43:46 UTC

[GitHub] [ozone] adoroszlai opened a new pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

adoroszlai opened a new pull request #1667:
URL: https://github.com/apache/ozone/pull/1667


   ## What changes were proposed in this pull request?
   
   1. Replace shell functions and variables that were originally copied from Hadoop.
    * Deprecate `HADOOP_*` variables, but use their values, unless the corresponding `OZONE_` variable is also defined (which indicates "new code").
     * `HADOOP_CONF_DIR` is replaced by `OZONE_CONFIG_DIR` (instead of `OZONE_CONF_DIR`) to work around a limitation of `envtoconf`, which cannot handle variables named `*_CONF_*`.
   2. Drop unmaintained Windows scripts
   3. Drop empty `mapreduce`, `yarn`, etc. directories from the final artifact; they are no longer necessary.
   4. Fix wrong accumulation of return codes in `start-ozone.sh`: the accumulator was incremented for the first and third commands but unconditionally set for the second one, so the result of the first command was lost.  It should be set for the first command instead.
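
   The deprecation behaviour in item 1 can be sketched as follows (the function and variable names are illustrative, not the actual PR code): the `OZONE_` value wins if set; otherwise the deprecated `HADOOP_` value is used with a warning.

   ```shell
   #!/usr/bin/env bash
   # Hypothetical sketch of the HADOOP_* -> OZONE_* fallback described above.
   ozone_deprecate_envvar() {
     local old="$1" new="$2"
     # only fall back when the new variable is unset and the old one is set
     if [[ -z "${!new:-}" && -n "${!old:-}" ]]; then
       echo "WARNING: ${old} has been replaced by ${new}. Using value of ${old}." >&2
       printf -v "$new" '%s' "${!old}"
     fi
   }

   HADOOP_LOG_DIR=/var/log/hadoop
   ozone_deprecate_envvar HADOOP_LOG_DIR OZONE_LOG_DIR
   echo "${OZONE_LOG_DIR}"   # prints /var/log/hadoop
   ```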
   
   https://issues.apache.org/jira/browse/HDDS-4525
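
   The return-code bug in item 4 can be illustrated with a minimal sketch (the commands are stand-ins, not the actual `start-ozone.sh` code):

   ```shell
   #!/usr/bin/env bash
   # Hypothetical illustration: overwriting the accumulator for the second
   # command discards the first command's failure.
   fail() { return 1; }
   ok()   { return 0; }

   # buggy pattern: unconditional assignment in the middle loses the failure
   fail; (( rc_buggy += $? ))
   ok;   rc_buggy=$?            # overwrites -> first failure lost
   ok;   (( rc_buggy += $? ))

   # fixed pattern: set for the first command, accumulate for the rest
   fail; rc_fixed=$?
   ok;   (( rc_fixed += $? ))
   ok;   (( rc_fixed += $? ))

   echo "buggy=${rc_buggy} fixed=${rc_fixed}"   # prints buggy=0 fixed=1
   ```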
   
   ## How was this patch tested?
   
   Added Bats test (for deprecation) and acceptance tests (for `ozone classpath` and `ozone envvars` commands).
   
   Changed MR acceptance tests to globally define both `HADOOP_CLASSPATH` and `OZONE_CLASSPATH`.  This confirms that the presence of the shaded Ozone FS jar in `HADOOP_CLASSPATH` does not break Ozone startup.  Previously `HADOOP_CLASSPATH` had to be limited to the Hadoop containers.
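
   A minimal sketch of the precedence the test exercises (the paths are made up; the real logic lives in the Ozone shell functions):

   ```shell
   #!/usr/bin/env bash
   # Hypothetical illustration: when OZONE_CLASSPATH is set, the deprecated
   # HADOOP_CLASSPATH value is ignored; otherwise it is still honoured.
   HADOOP_CLASSPATH=/opt/hadoop/ozone-filesystem-shaded.jar
   OZONE_CLASSPATH=/opt/ozone/extra-libs.jar

   effective="${OZONE_CLASSPATH:-${HADOOP_CLASSPATH}}"
   echo "${effective}"   # prints /opt/ozone/extra-libs.jar
   ```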
   
   Regular CI:
   https://github.com/adoroszlai/hadoop-ozone/actions/runs/405676505
   
   Manually tested `start-ozone.sh` and `stop-ozone.sh` in `ozonescripts` environment (see HDDS-4556 for automating the smoketest).


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org


[GitHub] [ozone] adoroszlai commented on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-745167615


   > Is it possible, instead of changing the config and breaking the convention, to fix the envtoconf behaviour? I see that OZONE_CONFIG_DIR is added to docker-config files and that's how envtoconf came into the picture; previously we did not need HADOOP_CONF_DIR there, so why do we need OZONE_CONFIG_DIR in the docker-config files now?
   
   `HADOOP_CONF_DIR` is already defined in the docker images we use, hence it's not needed in `docker-config`, e.g.:
   
   ```
   $ docker run -it --rm apache/ozone-runner:20200625-1 env | grep HADOOP
   HADOOP_LOG_DIR=/var/log/hadoop
   HADOOP_CONF_DIR=/etc/hadoop
   ```
   
   Also, `HADOOP_CONF_DIR` is an explicitly defined exception in `envtoconf`:
   
   https://github.com/apache/ozone/blob/09579756b0756fcaf1b22b860c1ee1eb927e82c2/hadoop-ozone/dist/src/main/dockerlibexec/envtoconf.py#L39
   
   We could fix this script, but the fix would only apply to `ozone-runner` containers, since that image uses mounted Ozone binaries.  Both `ozone` and `hadoop` containers (used in `upgrade`, `ozone-mr`, etc. tests) come with `envtoconf.py` baked in.
   
   ```
   $ docker run -it --rm apache/hadoop:3 grep CONF_DIR /opt/envtoconf.py
       self.excluded_envs = ['HADOOP_CONF_DIR']
   $ docker run -it --rm apache/ozone:1.0.0 grep CONF_DIR /opt/hadoop/libexec/envtoconf.py
       self.excluded_envs = ['HADOOP_CONF_DIR']
   ```
   
   So we have 3 options:
   
   1. proper fix: update `envtoconf` in both Ozone and Hadoop and build new Docker images (`apache/ozone-runner`, `apache/hadoop`, `apache/ozone`, the latter two for multiple versions)
   2. hack: modify all `docker-compose.yaml` files to mount the fixed version of `envtoconf`
   3. workaround: use different variable name




[GitHub] [ozone] smengcl commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r551485425



##########
File path: hadoop-ozone/dist/src/main/smoketest/cli/classpath.robot
##########
@@ -0,0 +1,46 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test ozone classpath command
+Library             BuiltIn
+Resource            ../lib/os.robot
+Resource            ../ozone-lib/shell.robot
+Test Timeout        5 minutes
+Suite Setup         Find Jars Dir
+
+*** Test Cases ***
+Ignores HADOOP_CLASSPATH if OZONE_CLASSPATH is set

Review comment:
       awesome. thanks!






[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543081227



##########
File path: hadoop-ozone/dist/src/main/compose/ozone-csi/docker-compose.yaml
##########
@@ -24,7 +24,7 @@ services:
     env_file:
       - docker-config
     environment:
-      HADOOP_OPTS: ${HADOOP_OPTS}
+      OZONE_OPTS:

Review comment:
       > Environment variables with only a key are resolved to their values on the machine Compose is running on
   
   https://docs.docker.com/compose/compose-file/#environment
   
   So repeating the variable name is unnecessary.
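
   A minimal, hypothetical compose fragment showing the key-only form (the service name is made up):

   ```yaml
   services:
     scm:
       environment:
         # no value given: resolved from the environment of the host
         # running docker-compose
         OZONE_OPTS:
   ```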






[GitHub] [ozone] fapifta commented on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
fapifta commented on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-745217645


   Hi Attila, thank you for addressing my review comments, and for going further and finding way more cases than I did!
   
   Regarding the CONF_DIR environment variables, I would say we should fix this properly, though I understand that updating the Docker images would be a pain and a bit of a tedious project, and this task is already tedious.
   
   On the other hand, we have learned that envtoconf.py may change over time as requirements change, so I don't think that option two is hacky. It is hacky in the sense that we overwrite a file baked into the Docker images, but I would argue that the file should not be in the Docker image itself: even though it does not happen often, the file changes from time to time, so we should preserve the ability to change it easily.
   What do you think?
   
   Of course, a properly done option two would require modifying all the images to leave out envtoconf.py, and modifying all the docker-compose files to mount the file into the image, which is even more tedious.
   What if, as part of this JIRA, you add the mount to the docker-compose files, and we create a follow-up JIRA to remove the file from the containers?
   In return I can offer to volunteer for the Docker image update, though I might need some help with it, but I really don't want to break this convention ;)




[GitHub] [ozone] fapifta commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
fapifta commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543028516



##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Set Ozone-specific environment variables here.
+
+# Enable core dump when crash in C++
+ulimit -c unlimited
+
+# Many of the options here are built from the perspective that users
+# may want to provide OVERWRITING values on the command line.
+# For example:
+#
+#  JAVA_HOME=/usr/java/testing hdfs dfs -ls
+#
+# Therefore, the vast majority (BUT NOT ALL!) of these defaults
+# are configured for substitution and not append.  If append
+# is preferable, modify this file accordingly.
+
+###
+# Generic settings
+###
+
+# Technically, the only required environment variable is JAVA_HOME.
+# All others are optional.  However, the defaults are probably not
+# preferred.  Many sites configure these options outside of Ozone,
+# such as in /etc/profile.d
+
+# The java implementation to use. By default, this environment
+# variable is REQUIRED on ALL platforms except OS X!
+# export JAVA_HOME=
+
+# Location of Ozone.  By default, Ozone will attempt to determine
+# this location based upon its execution path.
+# export OZONE_HOME=
+
+# Location of Ozone's configuration information.  i.e., where this
+# file is living. If this is not defined, Ozone will attempt to
+# locate it based upon its execution path.
+#
+# NOTE: It is recommend that this variable not be set here but in
+# /etc/profile.d or equivalent.  Some options (such as
+# --config) may react strangely otherwise.
+#
+# export OZONE_CONFIG_DIR=${OZONE_HOME}/etc/hadoop
+
+# The maximum amount of heap to use (Java -Xmx).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xmx setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MAX=
+
+# The minimum amount of heap to use (Java -Xms).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xms setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MIN=
+
+# Extra Java runtime options for all Ozone commands. We don't support
+# IPv6 yet/still, so by default the preference is set to IPv4.
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true"
+# For Kerberos debugging, an extended option set logs more information
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
+
+# Some parts of the shell code may do special things dependent upon
+# the operating system.  We have to set this here. See the next
+# section as to why....
+export OZONE_OS_TYPE=${OZONE_OS_TYPE:-$(uname -s)}
+
+# Extra Java runtime options for some Ozone commands
+# and clients (i.e., hdfs dfs -blah).  These get appended to OZONE_OPTS for
+# such commands.  In most cases, # this should be left empty and
+# let users supply it on the command line.
+# export OZONE_CLIENT_OPTS=""
+
+#
+# A note about classpaths.
+#
+# By default, Apache Ozone overrides Java's CLASSPATH
+# environment variable.  It is configured such
+# that it starts out blank with new entries added after passing
+# a series of checks (file/dir exists, not already listed aka
+# de-deduplication).  During de-deduplication, wildcards and/or
+# directories are *NOT* expanded to keep it simple. Therefore,
+# if the computed classpath has two specific mentions of
+# awesome-methods-1.0.jar, only the first one added will be seen.
+# If two directories are in the classpath that both contain
+# awesome-methods-1.0.jar, then Java will pick up both versions.
+
+# An additional, custom CLASSPATH. Site-wide configs should be
+# handled via the shellprofile functionality, utilizing the
+# ozone_add_classpath function for greater control and much
+# harder for apps/end-users to accidentally override.
+# Similarly, end users should utilize ${HOME}/.ozonerc .
+# This variable should ideally only be used as a short-cut,
+# interactive way for temporary additions on the command line.
+# export OZONE_CLASSPATH="/some/cool/path/on/your/machine"
+
+# Should OZONE_CLASSPATH be first in the official CLASSPATH?
+# export OZONE_USER_CLASSPATH_FIRST="yes"
+
+# If OZONE_USE_CLIENT_CLASSLOADER is set, OZONE_CLASSPATH and
+# OZONE_USER_CLASSPATH_FIRST are ignored.
+# export OZONE_USE_CLIENT_CLASSLOADER=true
+
+###
+# Options for remote shell connectivity
+###
+
+# There are some optional components of hadoop that allow for
+# command and control of remote hosts.  For example,
+# start-dfs.sh will attempt to bring up all NNs, DNS, etc.
+
+# Options to pass to SSH when one of the "log into a host and
+# start/stop daemons" scripts is executed
+# export OZONE_SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=10s"
+
+# The built-in ssh handler will limit itself to 10 simultaneous connections.
+# For pdsh users, this sets the fanout size ( -f )
+# Change this to increase/decrease as necessary.
+# export OZONE_SSH_PARALLEL=10
+
+# Filename which contains all of the hosts for any remote execution
+# helper scripts # such as workers.sh, start-dfs.sh, etc.
+# export OZONE_WORKERS="${OZONE_CONFIG_DIR}/workers"
+
+###
+# Options for all daemons
+###
+#
+
+#
+# Many options may also be specified as Java properties.  It is
+# very common, and in many cases, desirable, to hard-set these
+# in daemon _OPTS variables.  Where applicable, the appropriate
+# Java property is also identified.  Note that many are re-used
+# or set differently in certain contexts (e.g., secure vs
+# non-secure)
+#
+
+# Where (primarily) daemon log files are stored.
+# ${OZONE_HOME}/logs by default.
+# Java property: hadoop.log.dir
+# export OZONE_LOG_DIR=${OZONE_HOME}/logs
+
+# A string representing this instance of hadoop. $USER by default.
+# This is used in writing log and pid files, so keep that in mind!
+# Java property: hadoop.id.str
+# export OZONE_IDENT_STRING=$USER
+
+# How many seconds to pause after stopping a daemon
+# export OZONE_STOP_TIMEOUT=5
+
+# Where pid files are stored.  /tmp by default.
+# export OZONE_PID_DIR=/tmp
+
+# Default log4j setting for interactive commands
+# Java property: hadoop.root.logger
+# export OZONE_ROOT_LOGGER=INFO,console
+
+# Default log4j setting for daemons spawned explicitly by
+# --daemon option of hadoop, hdfs, mapred and yarn command.
+# Java property: hadoop.root.logger
+# export OZONE_DAEMON_ROOT_LOGGER=INFO,RFA
+
+# Default log level and output location for security-related messages.
+# You will almost certainly want to change this on a per-daemon basis via
+# the Java property (i.e., -Dhadoop.security.logger=foo). (Note that the
+# defaults for the NN and 2NN override this by default.)

Review comment:
       We should refer to Ozone roles here.

##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+# Default log level and output location for security-related messages.
+# You will almost certainly want to change this on a per-daemon basis via
+# the Java property (i.e., -Dhadoop.security.logger=foo). (Note that the
+# defaults for the NN and 2NN override this by default.)
+# Java property: hadoop.security.logger
+# export OZONE_SECURITY_LOGGER=INFO,NullAppender
+
+# Default process priority level
+# Note that sub-processes will also run at this level!
+# export OZONE_NICENESS=0
+
+# Default name for the service level authorization file
+# Java property: hadoop.policy.file
+# export OZONE_POLICYFILE="hadoop-policy.xml"
+
+#
+# NOTE: this is not used by default!  <-----
+# You can define variables right here and then re-use them later on.
+# For example, it is common to use the same garbage collection settings
+# for all the daemons.  So one could define:
+#
+# export OZONE_GC_SETTINGS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
+#
+# .. and then use it as per the b option under the namenode.
+
+###
+# Secure/privileged execution
+###
+
+#
+# Out of the box, Ozone uses jsvc from Apache Commons to launch daemons
+# on privileged ports.  This functionality can be replaced by providing
+# custom functions.  See hadoop-functions.sh for more information.

Review comment:
       We should refer ozone-functions.sh here.






[GitHub] [ozone] fapifta commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
fapifta commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543029000



##########
File path: hadoop-ozone/dist/src/shell/hdds/workers.sh
##########
@@ -20,40 +20,43 @@
 #
 # Environment Variables
 #
-#   HADOOP_WORKERS    File naming remote hosts.
-#     Default is ${HADOOP_CONF_DIR}/workers.
-#   HADOOP_CONF_DIR  Alternate conf dir. Default is ${HADOOP_HOME}/conf.
-#   HADOOP_WORKER_SLEEP Seconds to sleep between spawning remote commands.
-#   HADOOP_SSH_OPTS Options passed to ssh when running remote commands.
+#   OZONE_WORKERS    File naming remote hosts.
+#     Default is ${OZONE_CONFIG_DIR}/workers.
+#   OZONE_CONFIG_DIR  Alternate conf dir. Default is ${OZONE_HOME}/conf.
+#   OZONE_WORKER_SLEEP Seconds to sleep between spawning remote commands.
+#   OZONE_SSH_OPTS Options passed to ssh when running remote commands.
 ##
 
-function hadoop_usage
+function ozone_usage
 {
   echo "Usage: workers.sh [--config confdir] command..."
 }
 
-# let's locate libexec...
-if [[ -n "${HADOOP_HOME}" ]]; then
-  HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
-else
-  this="${BASH_SOURCE-$0}"
-  bin=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P)
-  HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
+# load functions
+for dir in "${OZONE_LIBEXEC_DIR}" "${OZONE_HOME}/libexec" "${HADOOP_LIBEXEC_DIR}" "${HADOOP_HOME}/libexec" "${bin}/../libexec"; do
+  if [[ -e "${dir}/ozone-functions.sh" ]]; then
+    . "${dir}/ozone-functions.sh"
+    if declare -F ozone_bootstrap >& /dev/null; then
+      break
+    fi
+  fi
+done
+
+if ! declare -F ozone_bootstrap >& /dev/null; then
+  echo "ERROR: Cannot find ozone-functions.sh." 2>&1
+  exit 1
 fi
 
-HADOOP_LIBEXEC_DIR="${HADOOP_LIBEXEC_DIR:-$HADOOP_DEFAULT_LIBEXEC_DIR}"
-# shellcheck disable=SC2034
-HADOOP_NEW_CONFIG=true
-if [[ -f "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh" ]]; then
-  . "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh"
-else
-  echo "ERROR: Cannot execute ${HADOOP_LIBEXEC_DIR}/hadoop-config.sh." 2>&1
+if ! ozone_bootstrap; then

Review comment:
       This seems to be a strange one here...
   So in line 45 if ozone_bootstrap is not declared, we error out because ozone-functions.sh could not be loaded.
   
   As I understand here we check for the exit status of ozone_bootstrap function, and if it is false we exit because we can not find ozone-config.sh. Why we need this second check? As I see the ozone_bootstrap function is not doing anything that should fail, but maybe my eye slipped through something.
   
   This same we do in hadoop-ozone/dist/src/shell/ozone/ozone, in hadoop-ozone/dist/src/shell/ozone/start-ozone.sh and in hadoop-ozone/dist/src/shell/ozone/stop-ozone.sh files as well
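
   The two checks under discussion can be reduced to a small sketch (ozone_bootstrap here is a stand-in that can fail, unlike what the real one appears to do):

   ```shell
   #!/usr/bin/env bash
   # Hypothetical reduction of the pattern:
   # check 1: was ozone-functions.sh sourced at all (is the function declared)?
   # check 2: did ozone_bootstrap itself succeed when invoked?
   ozone_bootstrap() { [[ -n "${OZONE_HOME:-}" ]]; }   # stand-in that can fail

   if ! declare -F ozone_bootstrap > /dev/null; then
     echo "ERROR: Cannot find ozone-functions.sh." >&2
     exit 1
   fi

   OZONE_HOME=/opt/ozone
   if ! ozone_bootstrap; then
     echo "ERROR: ozone_bootstrap failed." >&2
     exit 1
   fi
   result=bootstrapped
   echo "${result}"
   ```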






[GitHub] [ozone] adoroszlai commented on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-747003660


   Thanks @smengcl for the review.
   
   > Do we have a `HADOOP_ROOT_LOGGER` alternative here?
   
   Yes, `OZONE_ROOT_LOGGER`.  Note that the Java property is still `hadoop.root.logger` (since I didn't find a way to deprecate these).




[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543120902



##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Set Ozone-specific environment variables here.
+
+# Enable core dump when crash in C++
+ulimit -c unlimited
+
+# Many of the options here are built from the perspective that users
+# may want to provide OVERWRITING values on the command line.
+# For example:
+#
+#  JAVA_HOME=/usr/java/testing hdfs dfs -ls
+#
+# Therefore, the vast majority (BUT NOT ALL!) of these defaults
+# are configured for substitution and not append.  If append
+# is preferable, modify this file accordingly.
+
+###
+# Generic settings
+###
+
+# Technically, the only required environment variable is JAVA_HOME.
+# All others are optional.  However, the defaults are probably not
+# preferred.  Many sites configure these options outside of Ozone,
+# such as in /etc/profile.d
+
+# The java implementation to use. By default, this environment
+# variable is REQUIRED on ALL platforms except OS X!
+# export JAVA_HOME=
+
+# Location of Ozone.  By default, Ozone will attempt to determine
+# this location based upon its execution path.
+# export OZONE_HOME=
+
+# Location of Ozone's configuration information.  i.e., where this
+# file is living. If this is not defined, Ozone will attempt to
+# locate it based upon its execution path.
+#
+# NOTE: It is recommended that this variable not be set here but in
+# /etc/profile.d or equivalent.  Some options (such as
+# --config) may react strangely otherwise.
+#
+# export OZONE_CONFIG_DIR=${OZONE_HOME}/etc/hadoop
+
+# The maximum amount of heap to use (Java -Xmx).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xmx setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MAX=
+
+# The minimum amount of heap to use (Java -Xms).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xms setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MIN=
+
+# Extra Java runtime options for all Ozone commands. We don't support
+# IPv6 yet/still, so by default the preference is set to IPv4.
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true"
+# For Kerberos debugging, an extended option set logs more information
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
+
+# Some parts of the shell code may do special things dependent upon
+# the operating system.  We have to set this here. See the next
+# section as to why....
+export OZONE_OS_TYPE=${OZONE_OS_TYPE:-$(uname -s)}
+
+# Extra Java runtime options for some Ozone commands
+# and clients (i.e., hdfs dfs -blah).  These get appended to OZONE_OPTS for
+# such commands.  In most cases, this should be left empty and
+# let users supply it on the command line.
+# export OZONE_CLIENT_OPTS=""
+
+#
+# A note about classpaths.
+#
+# By default, Apache Ozone overrides Java's CLASSPATH
+# environment variable.  It is configured such
+# that it starts out blank with new entries added after passing
+# a series of checks (file/dir exists, not already listed aka
+# de-deduplication).  During de-deduplication, wildcards and/or
+# directories are *NOT* expanded to keep it simple. Therefore,
+# if the computed classpath has two specific mentions of
+# awesome-methods-1.0.jar, only the first one added will be seen.
+# If two directories are in the classpath that both contain
+# awesome-methods-1.0.jar, then Java will pick up both versions.
+
+# An additional, custom CLASSPATH. Site-wide configs should be
+# handled via the shellprofile functionality, utilizing the
+# ozone_add_classpath function for greater control and much
+# harder for apps/end-users to accidentally override.
+# Similarly, end users should utilize ${HOME}/.ozonerc .
+# This variable should ideally only be used as a short-cut,
+# interactive way for temporary additions on the command line.
+# export OZONE_CLASSPATH="/some/cool/path/on/your/machine"
+
+# Should OZONE_CLASSPATH be first in the official CLASSPATH?
+# export OZONE_USER_CLASSPATH_FIRST="yes"
+
+# If OZONE_USE_CLIENT_CLASSLOADER is set, OZONE_CLASSPATH and
+# OZONE_USER_CLASSPATH_FIRST are ignored.
+# export OZONE_USE_CLIENT_CLASSLOADER=true
+
+###
+# Options for remote shell connectivity
+###
+
+# There are some optional components of hadoop that allow for
+# command and control of remote hosts.  For example,
+# start-dfs.sh will attempt to bring up all NNs, DNs, etc.
+
+# Options to pass to SSH when one of the "log into a host and
+# start/stop daemons" scripts is executed
+# export OZONE_SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=10s"
+
+# The built-in ssh handler will limit itself to 10 simultaneous connections.
+# For pdsh users, this sets the fanout size ( -f )
+# Change this to increase/decrease as necessary.
+# export OZONE_SSH_PARALLEL=10
+
+# Filename which contains all of the hosts for any remote execution
+# helper scripts such as workers.sh, start-dfs.sh, etc.
+# export OZONE_WORKERS="${OZONE_CONFIG_DIR}/workers"
+
+###
+# Options for all daemons
+###
+#
+
+#
+# Many options may also be specified as Java properties.  It is
+# very common, and in many cases, desirable, to hard-set these
+# in daemon _OPTS variables.  Where applicable, the appropriate
+# Java property is also identified.  Note that many are re-used
+# or set differently in certain contexts (e.g., secure vs
+# non-secure)
+#
+
+# Where (primarily) daemon log files are stored.
+# ${OZONE_HOME}/logs by default.
+# Java property: hadoop.log.dir
+# export OZONE_LOG_DIR=${OZONE_HOME}/logs
+
+# A string representing this instance of Ozone. $USER by default.
+# This is used in writing log and pid files, so keep that in mind!
+# Java property: hadoop.id.str
+# export OZONE_IDENT_STRING=$USER
+
+# How many seconds to pause after stopping a daemon
+# export OZONE_STOP_TIMEOUT=5
+
+# Where pid files are stored.  /tmp by default.
+# export OZONE_PID_DIR=/tmp
+
+# Default log4j setting for interactive commands
+# Java property: hadoop.root.logger
+# export OZONE_ROOT_LOGGER=INFO,console
+
+# Default log4j setting for daemons spawned explicitly by
+# --daemon option of hadoop, hdfs, mapred and yarn command.
+# Java property: hadoop.root.logger
+# export OZONE_DAEMON_ROOT_LOGGER=INFO,RFA
+
+# Default log level and output location for security-related messages.
+# You will almost certainly want to change this on a per-daemon basis via
+# the Java property (i.e., -Dhadoop.security.logger=foo). (Note that the
+# defaults for the NN and 2NN override this by default.)
+# Java property: hadoop.security.logger
+# export OZONE_SECURITY_LOGGER=INFO,NullAppender
+
+# Default process priority level
+# Note that sub-processes will also run at this level!
+# export OZONE_NICENESS=0
+
+# Default name for the service level authorization file
+# Java property: hadoop.policy.file
+# export OZONE_POLICYFILE="hadoop-policy.xml"
+
+#
+# NOTE: this is not used by default!  <-----
+# You can define variables right here and then re-use them later on.
+# For example, it is common to use the same garbage collection settings
+# for all the daemons.  So one could define:
+#
+# export OZONE_GC_SETTINGS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
+#
+# .. and then use it in the daemon-specific _OPTS variables below.
+
+###
+# Secure/privileged execution
+###
+
+#
+# Out of the box, Ozone uses jsvc from Apache Commons to launch daemons
+# on privileged ports.  This functionality can be replaced by providing
+# custom functions.  See hadoop-functions.sh for more information.
+#
+
+# The jsvc implementation to use. Jsvc is required to run secure datanodes
+# that bind to privileged ports to provide authentication of data transfer
+# protocol.  Jsvc is not required if SASL is configured for authentication of
+# data transfer protocol using non-privileged ports.
+# export JSVC_HOME=/usr/bin
+
+#
+# This directory contains pids for secure and privileged processes.
+# export OZONE_SECURE_PID_DIR=${OZONE_PID_DIR}
+
+#
+# This directory contains the logs for secure and privileged processes.
+# Java property: hadoop.log.dir
+# export OZONE_SECURE_LOG=${OZONE_LOG_DIR}
+
+#
+# When running a secure daemon, the default value of OZONE_IDENT_STRING
+# ends up being a bit bogus.  Therefore, by default, the code will
+# replace OZONE_IDENT_STRING with OZONE_xx_SECURE_USER.  If one wants
+# to keep OZONE_IDENT_STRING untouched, then uncomment this line.
+# export OZONE_SECURE_IDENT_PRESERVE="true"
+
+###
+# Ozone Manager specific parameters
+###
+# Specify the JVM options to be used when starting the Ozone Manager.
+# These options will be appended to the options specified as OZONE_OPTS
+# and therefore may override any similar flags set in OZONE_OPTS
+#
+# export OZONE_OM_OPTS=""
+
+###
+# Ozone DataNode specific parameters
+###
+# Specify the JVM options to be used when starting Ozone DataNodes.
+# These options will be appended to the options specified as OZONE_OPTS
+# and therefore may override any similar flags set in OZONE_OPTS
+#
+# export OZONE_DATANODE_OPTS=""
+
+###
+# HDFS StorageContainerManager specific parameters
+###
+# Specify the JVM options to be used when starting the HDFS Storage Container Manager.
+# These options will be appended to the options specified as OZONE_OPTS
+# and therefore may override any similar flags set in OZONE_OPTS
+#
+# export OZONE_SCM_OPTS=""
+

Review comment:
       Agree, out of scope. ;)




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org


[GitHub] [ozone] fapifta commented on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
fapifta commented on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-746451810


   Hi @adoroszlai, thank you for addressing my concern and reverting back to OZONE_CONF_DIR.
   
   Let me ask, as I did not have time to fully understand how transformation.py is used, how it affects the environment processing that is problematic for env vars with names like *CONF*, and what exactly happens as a consequence of your change.
   I am guessing here, but based on your change, envtoconf.py uses transformation.py, and by adding it to the docker-compose.yaml file we are effectively overriding the original transformation.py that is baked into the image(?).
   The change itself in transformation.py, iterating over the items instead of the collection, means we are transforming something different in those cases, and similarly to the one case you mentioned as working well before this change. I am not sure, though, how this solves the problem.
   Can you give me some pointers on where to look, or point out the difference, so I can more easily understand what is happening?
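   For context on the naming scheme under discussion: in the Ozone docker environments, configuration entries are passed as environment variables whose names encode both the target file and the key (e.g. `OZONE-SITE.XML_ozone.om.address`), and envtoconf.py turns them into config files. A minimal shell illustration of the name split (illustrative only, not the actual envtoconf.py code):

```shell
# Illustrates the envtoconf-style naming convention: FILE_key=value.
# Only a demonstration of the split; the real envtoconf.py /
# transformation.py also handle per-format output, multiple files, etc.
var="OZONE-SITE.XML_ozone.om.address=om:9862"

name="${var%%=*}"    # everything before the first '='
value="${var#*=}"    # everything after the first '='
file="${name%%_*}"   # target config file: up to the first '_'
key="${name#*_}"     # config key: after the first '_'

echo "file=${file} key=${key} value=${value}"
# file=OZONE-SITE.XML key=ozone.om.address value=om:9862
```

   The PR description mentions that this processing cannot handle variables named `*_CONF_*`, which is why `OZONE_CONFIG_DIR` was chosen over `OZONE_CONF_DIR` as the `HADOOP_CONF_DIR` replacement.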
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org


[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543121015



##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+###
+# Advanced Users Only!
+###
+
+#
+# When building Ozone, one can add the class paths to the commands
+# via this special env var:
+# export OZONE_ENABLE_BUILD_PATHS="true"
+
+#
+# To prevent accidents, shell commands be (superficially) locked
+# to only allow certain users to execute certain subcommands.
+# It uses the format of (command)_(subcommand)_USER.
+#
+# For example, to limit who can execute the namenode command,
+# export HDFS_NAMENODE_USER=hdfs

Review comment:
       No, thanks.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org


[GitHub] [ozone] smengcl commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r541713272



##########
File path: hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/DBConfigFromFile.java
##########
@@ -65,7 +65,7 @@ public static File getConfigLocation() throws IOException {
 
     if (StringUtil.isBlank(path)) {
       LOG.debug("Unable to find the configuration directory. "
-          + "Please make sure that HADOOP_CONF_DIR is setup correctly.");
+          + "Please make sure that " + CONFIG_DIR + " is setup correctly.");

Review comment:
       Should be `OZONE_CONFIG_DIR`?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org


[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543120623



##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Set Ozone-specific environment variables here.
+
+# Enable core dump when crash in C++
+ulimit -c unlimited
+
+# Many of the options here are built from the perspective that users
+# may want to provide OVERWRITING values on the command line.
+# For example:
+#
+#  JAVA_HOME=/usr/java/testing hdfs dfs -ls
+#
+# Therefore, the vast majority (BUT NOT ALL!) of these defaults
+# are configured for substitution and not append.  If append
+# is preferable, modify this file accordingly.
+
+###
+# Generic settings
+###
+
+# Technically, the only required environment variable is JAVA_HOME.
+# All others are optional.  However, the defaults are probably not
+# preferred.  Many sites configure these options outside of Ozone,
+# such as in /etc/profile.d
+
+# The java implementation to use. By default, this environment
+# variable is REQUIRED on ALL platforms except OS X!
+# export JAVA_HOME=
+
+# Location of Ozone.  By default, Ozone will attempt to determine
+# this location based upon its execution path.
+# export OZONE_HOME=
+
+# Location of Ozone's configuration information.  i.e., where this
+# file is living. If this is not defined, Ozone will attempt to
+# locate it based upon its execution path.
+#
+# NOTE: It is recommend that this variable not be set here but in
+# /etc/profile.d or equivalent.  Some options (such as
+# --config) may react strangely otherwise.
+#
+# export OZONE_CONFIG_DIR=${OZONE_HOME}/etc/hadoop
+
+# The maximum amount of heap to use (Java -Xmx).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xmx setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MAX=
+
+# The minimum amount of heap to use (Java -Xms).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xms setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MIN=
+
+# Extra Java runtime options for all Ozone commands. We don't support
+# IPv6 yet/still, so by default the preference is set to IPv4.
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true"
+# For Kerberos debugging, an extended option set logs more information
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
+
+# Some parts of the shell code may do special things dependent upon
+# the operating system.  We have to set this here. See the next
+# section as to why....
+export OZONE_OS_TYPE=${OZONE_OS_TYPE:-$(uname -s)}
+
+# Extra Java runtime options for some Ozone commands
+# and clients (i.e., hdfs dfs -blah).  These get appended to OZONE_OPTS for
+# such commands.  In most cases, # this should be left empty and
+# let users supply it on the command line.
+# export OZONE_CLIENT_OPTS=""
+
+#
+# A note about classpaths.
+#
+# By default, Apache Ozone overrides Java's CLASSPATH
+# environment variable.  It is configured such
+# that it starts out blank with new entries added after passing
+# a series of checks (file/dir exists, not already listed aka
+# de-deduplication).  During de-deduplication, wildcards and/or
+# directories are *NOT* expanded to keep it simple. Therefore,
+# if the computed classpath has two specific mentions of
+# awesome-methods-1.0.jar, only the first one added will be seen.
+# If two directories are in the classpath that both contain
+# awesome-methods-1.0.jar, then Java will pick up both versions.
+
+# An additional, custom CLASSPATH. Site-wide configs should be
+# handled via the shellprofile functionality, utilizing the
+# ozone_add_classpath function for greater control and much
+# harder for apps/end-users to accidentally override.
+# Similarly, end users should utilize ${HOME}/.ozonerc .
+# This variable should ideally only be used as a short-cut,
+# interactive way for temporary additions on the command line.
+# export OZONE_CLASSPATH="/some/cool/path/on/your/machine"
+
+# Should OZONE_CLASSPATH be first in the official CLASSPATH?
+# export OZONE_USER_CLASSPATH_FIRST="yes"
+
+# If OZONE_USE_CLIENT_CLASSLOADER is set, OZONE_CLASSPATH and
+# OZONE_USER_CLASSPATH_FIRST are ignored.
+# export OZONE_USE_CLIENT_CLASSLOADER=true
+
+###
+# Options for remote shell connectivity
+###
+
+# There are some optional components of Ozone that allow for
+# command and control of remote hosts.  For example,
+# start-ozone.sh will attempt to bring up all SCM, OM, DataNode, etc. roles.
+
+# Options to pass to SSH when one of the "log into a host and
+# start/stop daemons" scripts is executed
+# export OZONE_SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=10s"
+
+# The built-in ssh handler will limit itself to 10 simultaneous connections.
+# For pdsh users, this sets the fanout size ( -f )
+# Change this to increase/decrease as necessary.
+# export OZONE_SSH_PARALLEL=10
+
+# Filename which contains all of the hosts for any remote execution
+# helper scripts such as workers.sh, start-ozone.sh, etc.
+# export OZONE_WORKERS="${OZONE_CONFIG_DIR}/workers"
+
+###
+# Options for all daemons
+###
+#
+
+#
+# Many options may also be specified as Java properties.  It is
+# very common, and in many cases, desirable, to hard-set these
+# in daemon _OPTS variables.  Where applicable, the appropriate
+# Java property is also identified.  Note that many are re-used
+# or set differently in certain contexts (e.g., secure vs
+# non-secure)
+#
+
+# Where (primarily) daemon log files are stored.
+# ${OZONE_HOME}/logs by default.
+# Java property: hadoop.log.dir
+# export OZONE_LOG_DIR=${OZONE_HOME}/logs
+
+# A string representing this instance of Ozone. $USER by default.
+# This is used in writing log and pid files, so keep that in mind!
+# Java property: hadoop.id.str
+# export OZONE_IDENT_STRING=$USER
+
+# How many seconds to pause after stopping a daemon
+# export OZONE_STOP_TIMEOUT=5
+
+# Where pid files are stored.  /tmp by default.
+# export OZONE_PID_DIR=/tmp
+
+# Default log4j setting for interactive commands
+# Java property: hadoop.root.logger
+# export OZONE_ROOT_LOGGER=INFO,console
+
+# Default log4j setting for daemons spawned explicitly by
+# the --daemon option of the ozone command.
+# Java property: hadoop.root.logger
+# export OZONE_DAEMON_ROOT_LOGGER=INFO,RFA
+
+# Default log level and output location for security-related messages.
+# You will almost certainly want to change this on a per-daemon basis via
+# the Java property (i.e., -Dhadoop.security.logger=foo). (Note that the
+# defaults for the NN and 2NN override this by default.)

Review comment:
       Thanks.
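The classpath de-duplication rule described in the quoted ozone-env.sh comment can be sketched as follows. This is a minimal illustration, not the actual `ozone_add_classpath` body from ozone-functions.sh: an entry is appended only if it exists on disk and is not already listed verbatim, with wildcards and directories compared as plain strings, never expanded.

```shell
#!/usr/bin/env bash
# Minimal sketch (assumed, simplified) of the de-duplication rule:
# reject missing entries, keep only the first verbatim mention.
ozone_add_classpath() {
  local entry="$1"
  # reject entries that do not exist (strip a trailing /* for the check)
  [[ -e "${entry%/\*}" ]] || return 1
  case ":${CLASSPATH}:" in
    # the quoted ${entry} is matched literally, so wildcards are NOT expanded
    *":${entry}:"*) ;;  # already present: first mention wins
    *) CLASSPATH="${CLASSPATH:+${CLASSPATH}:}${entry}" ;;
  esac
}
```

Under this rule, adding the same jar path twice keeps only the first mention, but two different directories that each contain awesome-methods-1.0.jar still put both copies on the JVM's classpath, exactly as the comment warns.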




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org


[GitHub] [ozone] fapifta commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
fapifta commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543028881



##########
File path: hadoop-ozone/dist/src/main/smoketest/compatibility/scm.robot
##########
@@ -0,0 +1,27 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test scm compatibility
+Library             BuiltIn
+Resource            ../lib/os.robot
+Test Timeout        5 minutes
+
+*** Test Cases ***
+Picks up command line options
+    Pass Execution If    '%{HDFS_STORAGECONTAINERMANAGER_OPTS}' == ''    Command-line option required for process check

Review comment:
       Should we rename this envvar also to OZONE_SCM_OPTS or OZONE_STORAGECONTAINERMANAGER_OPTS?






[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543079193



##########
File path: hadoop-ozone/dist/src/main/smoketest/compatibility/scm.robot
##########
@@ -0,0 +1,27 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test scm compatibility
+Library             BuiltIn
+Resource            ../lib/os.robot
+Test Timeout        5 minutes
+
+*** Test Cases ***
+Picks up command line options
+    Pass Execution If    '%{HDFS_STORAGECONTAINERMANAGER_OPTS}' == ''    Command-line option required for process check

Review comment:
       Same as [above](https://github.com/apache/ozone/pull/1667#discussion_r543078959).






[GitHub] [ozone] smengcl commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r544477763



##########
File path: hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/docker-config
##########
@@ -18,4 +18,7 @@ CORE-SITE.xml_fs.AbstractFileSystem.o3fs.impl=org.apache.hadoop.fs.ozone.OzFs
 CORE-SITE.xml_fs.AbstractFileSystem.ofs.impl=org.apache.hadoop.fs.ozone.RootedOzFs
 MAPRED-SITE.XML_mapreduce.application.classpath=/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-hadoop2-@project.version@.jar
 
+HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-hadoop2-@project.version@.jar
+OZONE_CLASSPATH=

Review comment:
       Is `OZONE_CLASSPATH` a placeholder here? Or we are setting it to empty on purpose.

##########
File path: hadoop-ozone/dist/src/main/compose/ozone-csi/docker-compose.yaml
##########
@@ -24,7 +24,7 @@ services:
     env_file:
       - docker-config
     environment:
-      HADOOP_OPTS: ${HADOOP_OPTS}
+      OZONE_OPTS:

Review comment:
       neat!

##########
File path: hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop31/docker-config
##########
@@ -18,4 +18,7 @@ CORE-SITE.xml_fs.AbstractFileSystem.o3fs.impl=org.apache.hadoop.fs.ozone.OzFs
 CORE-SITE.xml_fs.AbstractFileSystem.ofs.impl=org.apache.hadoop.fs.ozone.RootedOzFs
 MAPRED-SITE.XML_mapreduce.application.classpath=/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-hadoop3-@project.version@.jar
 
+HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-hadoop3-@project.version@.jar
+OZONE_CLASSPATH=

Review comment:
       Same here.

##########
File path: hadoop-ozone/dist/src/main/smoketest/cli/classpath.robot
##########
@@ -0,0 +1,46 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test ozone classpath command
+Library             BuiltIn
+Resource            ../lib/os.robot
+Resource            ../ozone-lib/shell.robot
+Test Timeout        5 minutes
+Suite Setup         Find Jars Dir
+
+*** Test Cases ***
+Ignores HADOOP_CLASSPATH if OZONE_CLASSPATH is set

Review comment:
       nice. btw do we pick up `HADOOP_OPTS` when `OZONE_OPTS` is not set as well?
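Per the PR description, the deprecated `HADOOP_*` value is used only when the corresponding `OZONE_*` variable is not defined at all; defining it, even as empty, disables the fallback. A hedged sketch of that rule follows; the helper name `ozone_deprecate_envvar` and the warning wording are assumptions, not the actual ozone-functions.sh code.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the deprecation rule from the PR description.
ozone_deprecate_envvar() {
  local old="$1" new="$2"
  [[ -n "${!old+x}" ]] || return 0        # deprecated variable not set
  if [[ -n "${!new+x}" ]]; then
    # OZONE_* is defined (even if empty): new-style config, old value ignored
    echo "WARNING: ${old} is ignored because ${new} is defined." >&2
  else
    echo "WARNING: ${old} is deprecated; please use ${new}." >&2
    export "${new}=${!old}"
  fi
}
```

This would also explain the `OZONE_CLASSPATH=` lines in the ozone-mr docker-config files: presumably not a placeholder, but a deliberate empty definition so the shaded-jar `HADOOP_CLASSPATH` meant for the Hadoop containers is not inherited by Ozone.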

##########
File path: hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
##########
@@ -103,10 +93,8 @@ run cp -r "${ROOT}/hadoop-ozone/dist/src/main/dockerlibexec/." "libexec/"
 run cp "${ROOT}/hadoop-ozone/dist/src/shell/ozone/ozone" "bin/"
 
 
-run cp "${ROOT}/hadoop-ozone/dist/src/shell/hdds/hadoop-config.sh" "libexec/"
-run cp "${ROOT}/hadoop-ozone/dist/src/shell/hdds/hadoop-config.cmd" "libexec/"
-run cp "${ROOT}/hadoop-ozone/dist/src/shell/hdds/hadoop-functions.sh" "libexec/"

Review comment:
       Does `hadoop-config.sh` and `hadoop-functions.sh` have any useful env variables? (e.g. `HADOOP_OPTS`)
   
   TODO: Check `ozone-config.sh` and `ozone-functions.sh` later.

##########
File path: hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop32/docker-config
##########
@@ -18,4 +18,7 @@ CORE-SITE.xml_fs.AbstractFileSystem.o3fs.impl=org.apache.hadoop.fs.ozone.OzFs
 CORE-SITE.xml_fs.AbstractFileSystem.ofs.impl=org.apache.hadoop.fs.ozone.RootedOzFs
 MAPRED-SITE.XML_mapreduce.application.classpath=/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-hadoop3-@project.version@.jar
 
+HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-hadoop3-@project.version@.jar
+OZONE_CLASSPATH=

Review comment:
       Same

##########
File path: hadoop-ozone/dist/src/test/shell/gc_opts.bats
##########
@@ -19,24 +19,32 @@
 # bats gc_opts.bats
 #
 
-load ../../shell/hdds/hadoop-functions.sh
-@test "Setting Hadoop GC parameters: add GC params for server" {
-  export HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true
-  export HADOOP_OPTS="Test"
-  hadoop_add_default_gc_opts
-  [[ "$HADOOP_OPTS" =~ "UseConcMarkSweepGC" ]]
+load ozone-functions_test_helper
+
+@test "Setting GC parameters: add GC params for server" {
+  export OZONE_SUBCMD_SUPPORTDAEMONIZATION=true
+  export OZONE_OPTS="Test"
+
+  ozone_add_default_gc_opts
+
+  echo $OZONE_OPTS
+  [[ "$OZONE_OPTS" =~ "UseConcMarkSweepGC" ]]
 }
 
-@test "Setting Hadoop GC parameters: disabled for client" {
-  export HADOOP_SUBCMD_SUPPORTDAEMONIZATION=false
-  export HADOOP_OPTS="Test"
-  hadoop_add_default_gc_opts
-  [[ ! "$HADOOP_OPTS" =~ "UseConcMarkSweepGC" ]]
+@test "Setting GC parameters: disabled for client" {
+  export OZONE_SUBCMD_SUPPORTDAEMONIZATION=false
+  export OZONE_OPTS="Test"
+
+  ozone_add_default_gc_opts
+
+  [[ ! "$OZONE_OPTS" =~ "UseConcMarkSweepGC" ]]
 }
 
-@test "Setting Hadoop GC parameters: disabled if GC params are customized" {
-  export HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true
-  export HADOOP_OPTS="-XX:++UseG1GC -Xmx512"
-  hadoop_add_default_gc_opts
-  [[ ! "$HADOOP_OPTS" =~ "UseConcMarkSweepGC" ]]
+@test "Setting GC parameters: disabled if GC params are customized" {
+  export OZONE_SUBCMD_SUPPORTDAEMONIZATION=true
+  export OZONE_OPTS="-XX:++UseG1GC -Xmx512"
+
+  ozone_add_default_gc_opts
+
+  [[ ! "$OZONE_OPTS" =~ "UseConcMarkSweepGC" ]]

Review comment:
       Unrelated to this patch but CMS is removed in JDK 14 and on and `UseConcMarkSweepGC` will be ignored by those higher version JVMs. We might want to come up with a new set of GC params soon.
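The behaviour those bats tests exercise can be sketched roughly as below. This is an assumed simplification of `ozone_add_default_gc_opts`, not the verbatim ozone-functions.sh code, and as noted above the CMS flag itself will need replacing for JDK 14+.

```shell
#!/usr/bin/env bash
# Rough sketch: default GC flags are appended only for daemonized
# subcommands, and only when the user supplied no -XX: tuning of
# their own in OZONE_OPTS.
ozone_add_default_gc_opts() {
  if [[ "${OZONE_SUBCMD_SUPPORTDAEMONIZATION:-false}" == "true" ]] &&
     [[ "${OZONE_OPTS:-}" != *"-XX:"* ]]; then
    OZONE_OPTS="${OZONE_OPTS:+${OZONE_OPTS} }-XX:+UseConcMarkSweepGC"
  fi
}
```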






[GitHub] [ozone] smengcl commented on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
smengcl commented on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-764900308


   Thanks @adoroszlai for rechecking. Looks like we got a good run. Will merge this shortly.






[GitHub] [ozone] fapifta commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
fapifta commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543028462



##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Set Ozone-specific environment variables here.
+
+# Enable core dump when crash in C++
+ulimit -c unlimited
+
+# Many of the options here are built from the perspective that users
+# may want to provide OVERWRITING values on the command line.
+# For example:
+#
+#  JAVA_HOME=/usr/java/testing hdfs dfs -ls
+#
+# Therefore, the vast majority (BUT NOT ALL!) of these defaults
+# are configured for substitution and not append.  If append
+# is preferable, modify this file accordingly.
+
+###
+# Generic settings
+###
+
+# Technically, the only required environment variable is JAVA_HOME.
+# All others are optional.  However, the defaults are probably not
+# preferred.  Many sites configure these options outside of Ozone,
+# such as in /etc/profile.d
+
+# The java implementation to use. By default, this environment
+# variable is REQUIRED on ALL platforms except OS X!
+# export JAVA_HOME=
+
+# Location of Ozone.  By default, Ozone will attempt to determine
+# this location based upon its execution path.
+# export OZONE_HOME=
+
+# Location of Ozone's configuration information.  i.e., where this
+# file is living. If this is not defined, Ozone will attempt to
+# locate it based upon its execution path.
+#
+# NOTE: It is recommend that this variable not be set here but in
+# /etc/profile.d or equivalent.  Some options (such as
+# --config) may react strangely otherwise.
+#
+# export OZONE_CONFIG_DIR=${OZONE_HOME}/etc/hadoop
+
+# The maximum amount of heap to use (Java -Xmx).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xmx setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MAX=
+
+# The minimum amount of heap to use (Java -Xms).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xms setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MIN=
+
+# Extra Java runtime options for all Ozone commands. We don't support
+# IPv6 yet/still, so by default the preference is set to IPv4.
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true"
+# For Kerberos debugging, an extended option set logs more information
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
+
+# Some parts of the shell code may do special things dependent upon
+# the operating system.  We have to set this here. See the next
+# section as to why....
+export OZONE_OS_TYPE=${OZONE_OS_TYPE:-$(uname -s)}
+
+# Extra Java runtime options for some Ozone commands
+# and clients (i.e., hdfs dfs -blah).  These get appended to OZONE_OPTS for
+# such commands.  In most cases, # this should be left empty and
+# let users supply it on the command line.
+# export OZONE_CLIENT_OPTS=""
+
+#
+# A note about classpaths.
+#
+# By default, Apache Ozone overrides Java's CLASSPATH
+# environment variable.  It is configured such
+# that it starts out blank with new entries added after passing
+# a series of checks (file/dir exists, not already listed aka
+# de-deduplication).  During de-deduplication, wildcards and/or
+# directories are *NOT* expanded to keep it simple. Therefore,
+# if the computed classpath has two specific mentions of
+# awesome-methods-1.0.jar, only the first one added will be seen.
+# If two directories are in the classpath that both contain
+# awesome-methods-1.0.jar, then Java will pick up both versions.
+
+# An additional, custom CLASSPATH. Site-wide configs should be
+# handled via the shellprofile functionality, utilizing the
+# ozone_add_classpath function for greater control and much
+# harder for apps/end-users to accidentally override.
+# Similarly, end users should utilize ${HOME}/.ozonerc .
+# This variable should ideally only be used as a short-cut,
+# interactive way for temporary additions on the command line.
+# export OZONE_CLASSPATH="/some/cool/path/on/your/machine"
+
+# Should OZONE_CLASSPATH be first in the official CLASSPATH?
+# export OZONE_USER_CLASSPATH_FIRST="yes"
+
+# If OZONE_USE_CLIENT_CLASSLOADER is set, OZONE_CLASSPATH and
+# OZONE_USER_CLASSPATH_FIRST are ignored.
+# export OZONE_USE_CLIENT_CLASSLOADER=true
+
+###
+# Options for remote shell connectivity
+###
+
+# There are some optional components of hadoop that allow for

Review comment:
       start-dfs.sh is mentioned two times here, can you please rephrase this comment, and the next which mentions it to point to start-ozone.sh, and to mention Ozone roles?






[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543120764



##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Set Ozone-specific environment variables here.
+
+# Enable core dump when crash in C++
+ulimit -c unlimited
+
+# Many of the options here are built from the perspective that users
+# may want to provide OVERWRITING values on the command line.
+# For example:
+#
+#  JAVA_HOME=/usr/java/testing hdfs dfs -ls
+#
+# Therefore, the vast majority (BUT NOT ALL!) of these defaults
+# are configured for substitution and not append.  If append
+# is preferable, modify this file accordingly.
+
+###
+# Generic settings
+###
+
+# Technically, the only required environment variable is JAVA_HOME.
+# All others are optional.  However, the defaults are probably not
+# preferred.  Many sites configure these options outside of Ozone,
+# such as in /etc/profile.d
+
+# The java implementation to use. By default, this environment
+# variable is REQUIRED on ALL platforms except OS X!
+# export JAVA_HOME=
+
+# Location of Ozone.  By default, Ozone will attempt to determine
+# this location based upon its execution path.
+# export OZONE_HOME=
+
+# Location of Ozone's configuration information.  i.e., where this
+# file is living. If this is not defined, Ozone will attempt to
+# locate it based upon its execution path.
+#
+# NOTE: It is recommend that this variable not be set here but in
+# /etc/profile.d or equivalent.  Some options (such as
+# --config) may react strangely otherwise.
+#
+# export OZONE_CONFIG_DIR=${OZONE_HOME}/etc/hadoop
+
+# The maximum amount of heap to use (Java -Xmx).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xmx setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MAX=
+
+# The minimum amount of heap to use (Java -Xms).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xms setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MIN=
+
+# Extra Java runtime options for all Ozone commands. We don't support
+# IPv6 yet/still, so by default the preference is set to IPv4.
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true"
+# For Kerberos debugging, an extended option set logs more information
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
+
+# Some parts of the shell code may do special things dependent upon
+# the operating system.  We have to set this here. See the next
+# section as to why....
+export OZONE_OS_TYPE=${OZONE_OS_TYPE:-$(uname -s)}
+
+# Extra Java runtime options for some Ozone commands
+# and clients (i.e., hdfs dfs -blah).  These get appended to OZONE_OPTS for
+# such commands.  In most cases, # this should be left empty and
+# let users supply it on the command line.
+# export OZONE_CLIENT_OPTS=""
+
+#
+# A note about classpaths.
+#
+# By default, Apache Ozone overrides Java's CLASSPATH
+# environment variable.  It is configured such
+# that it starts out blank with new entries added after passing
+# a series of checks (file/dir exists, not already listed aka
+# de-deduplication).  During de-deduplication, wildcards and/or
+# directories are *NOT* expanded to keep it simple. Therefore,
+# if the computed classpath has two specific mentions of
+# awesome-methods-1.0.jar, only the first one added will be seen.
+# If two directories are in the classpath that both contain
+# awesome-methods-1.0.jar, then Java will pick up both versions.
+
+# An additional, custom CLASSPATH. Site-wide configs should be
+# handled via the shellprofile functionality, utilizing the
+# ozone_add_classpath function for greater control and much
+# harder for apps/end-users to accidentally override.
+# Similarly, end users should utilize ${HOME}/.ozonerc .
+# This variable should ideally only be used as a short-cut,
+# interactive way for temporary additions on the command line.
+# export OZONE_CLASSPATH="/some/cool/path/on/your/machine"
+
+# Should OZONE_CLASSPATH be first in the official CLASSPATH?
+# export OZONE_USER_CLASSPATH_FIRST="yes"
+
+# If OZONE_USE_CLIENT_CLASSLOADER is set, OZONE_CLASSPATH and
+# OZONE_USER_CLASSPATH_FIRST are ignored.
+# export OZONE_USE_CLIENT_CLASSLOADER=true
+
+###
+# Options for remote shell connectivity
+###
+
+# There are some optional components of hadoop that allow for
+# command and control of remote hosts.  For example,
+# start-dfs.sh will attempt to bring up all NNs, DNS, etc.
+
+# Options to pass to SSH when one of the "log into a host and
+# start/stop daemons" scripts is executed
+# export OZONE_SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=10s"
+
+# The built-in ssh handler will limit itself to 10 simultaneous connections.
+# For pdsh users, this sets the fanout size ( -f )
+# Change this to increase/decrease as necessary.
+# export OZONE_SSH_PARALLEL=10
+
+# Filename which contains all of the hosts for any remote execution
+# helper scripts # such as workers.sh, start-dfs.sh, etc.
+# export OZONE_WORKERS="${OZONE_CONFIG_DIR}/workers"
+
+###
+# Options for all daemons
+###
+#
+
+#
+# Many options may also be specified as Java properties.  It is
+# very common, and in many cases, desirable, to hard-set these
+# in daemon _OPTS variables.  Where applicable, the appropriate
+# Java property is also identified.  Note that many are re-used
+# or set differently in certain contexts (e.g., secure vs
+# non-secure)
+#
+
+# Where (primarily) daemon log files are stored.
+# ${OZONE_HOME}/logs by default.
+# Java property: hadoop.log.dir
+# export OZONE_LOG_DIR=${OZONE_HOME}/logs
+
+# A string representing this instance of hadoop. $USER by default.
+# This is used in writing log and pid files, so keep that in mind!
+# Java property: hadoop.id.str
+# export OZONE_IDENT_STRING=$USER
+
+# How many seconds to pause after stopping a daemon
+# export OZONE_STOP_TIMEOUT=5
+
+# Where pid files are stored.  /tmp by default.
+# export OZONE_PID_DIR=/tmp
+
+# Default log4j setting for interactive commands
+# Java property: hadoop.root.logger
+# export OZONE_ROOT_LOGGER=INFO,console
+
+# Default log4j setting for daemons spawned explicitly by
+# --daemon option of hadoop, hdfs, mapred and yarn command.
+# Java property: hadoop.root.logger
+# export OZONE_DAEMON_ROOT_LOGGER=INFO,RFA
+
+# Default log level and output location for security-related messages.
+# You will almost certainly want to change this on a per-daemon basis via
+# the Java property (i.e., -Dhadoop.security.logger=foo). (Note that the
+# defaults for the NN and 2NN override this by default.)
+# Java property: hadoop.security.logger
+# export OZONE_SECURITY_LOGGER=INFO,NullAppender
+
+# Default process priority level
+# Note that sub-processes will also run at this level!
+# export OZONE_NICENESS=0
+
+# Default name for the service level authorization file
+# Java property: hadoop.policy.file
+# export OZONE_POLICYFILE="hadoop-policy.xml"
+
+#
+# NOTE: this is not used by default!  <-----
+# You can define variables right here and then re-use them later on.
+# For example, it is common to use the same garbage collection settings
+# for all the daemons.  So one could define:
+#
+# export OZONE_GC_SETTINGS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
+#
+# .. and then use it as per the b option under the namenode.
+
+###
+# Secure/privileged execution
+###
+
+#
+# Out of the box, Ozone uses jsvc from Apache Commons to launch daemons
+# on privileged ports.  This functionality can be replaced by providing
+# custom functions.  See hadoop-functions.sh for more information.

Review comment:
       Thanks.






[GitHub] [ozone] codecov-io commented on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
codecov-io commented on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-745962538


   # [Codecov](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=h1) Report
   > Merging [#1667](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=desc) (dfd2aaf) into [master](https://codecov.io/gh/apache/ozone/commit/6112603aca864bf18243fcfceb1a330ae7ac0587?el=desc) (6112603) will **increase** coverage by `0.44%`.
   > The diff coverage is `82.63%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/ozone/pull/1667/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1667      +/-   ##
   ============================================
   + Coverage     75.32%   75.76%   +0.44%     
   - Complexity    10842    11310     +468     
   ============================================
     Files          1030     1048      +18     
     Lines         52429    53836    +1407     
     Branches       5142     5307     +165     
   ============================================
   + Hits          39490    40787    +1297     
   - Misses        10472    10531      +59     
   - Partials       2467     2518      +51     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=tree) | Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | [...java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9zY20vU2NtQ29uZmlnS2V5cy5qYXZh) | `100.00% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | [.../java/org/apache/hadoop/ozone/OzoneConfigKeys.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25maWdLZXlzLmphdmE=) | `100.00% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | [.../java/org/apache/hadoop/ozone/common/Checksum.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvY29tbW9uL0NoZWNrc3VtLmphdmE=) | `90.47% <ø> (-0.57%)` | `20.00 <0.00> (-1.00)` | |
   | [...oop/ozone/container/common/impl/ContainerData.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL2ltcGwvQ29udGFpbmVyRGF0YS5qYXZh) | `94.07% <0.00%> (ø)` | `65.00 <1.00> (ø)` | |
   | [...ozone/container/common/impl/ContainerDataYaml.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL2ltcGwvQ29udGFpbmVyRGF0YVlhbWwuamF2YQ==) | `73.55% <ø> (ø)` | `6.00 <0.00> (ø)` | |
   | [...ozone/container/common/report/ReportPublisher.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL3JlcG9ydC9SZXBvcnRQdWJsaXNoZXIuamF2YQ==) | `86.36% <0.00%> (ø)` | `9.00 <0.00> (ø)` | |
   | [...doop/ozone/container/common/volume/HddsVolume.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL3ZvbHVtZS9IZGRzVm9sdW1lLmphdmE=) | `84.61% <ø> (ø)` | `42.00 <0.00> (ø)` | |
   | [.../statemachine/background/BlockDeletingService.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIva2V5dmFsdWUvc3RhdGVtYWNoaW5lL2JhY2tncm91bmQvQmxvY2tEZWxldGluZ1NlcnZpY2UuamF2YQ==) | `76.33% <ø> (-1.21%)` | `13.00 <0.00> (ø)` | |
   | [.../ozone/container/replication/GrpcOutputStream.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvcmVwbGljYXRpb24vR3JwY091dHB1dFN0cmVhbS5qYXZh) | `82.45% <ø> (ø)` | `10.00 <0.00> (ø)` | |
   | [...m/container/IncrementalContainerReportHandler.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9JbmNyZW1lbnRhbENvbnRhaW5lclJlcG9ydEhhbmRsZXIuamF2YQ==) | `47.50% <0.00%> (-5.28%)` | `6.00 <0.00> (ø)` | |
   | ... and [274 more](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=footer). Last update [19dd94d...ca031fd](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[GitHub] [ozone] smengcl commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r544543760



##########
File path: hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
##########
@@ -103,10 +93,8 @@ run cp -r "${ROOT}/hadoop-ozone/dist/src/main/dockerlibexec/." "libexec/"
 run cp "${ROOT}/hadoop-ozone/dist/src/shell/ozone/ozone" "bin/"
 
 
-run cp "${ROOT}/hadoop-ozone/dist/src/shell/hdds/hadoop-config.sh" "libexec/"
-run cp "${ROOT}/hadoop-ozone/dist/src/shell/hdds/hadoop-config.cmd" "libexec/"
-run cp "${ROOT}/hadoop-ozone/dist/src/shell/hdds/hadoop-functions.sh" "libexec/"

Review comment:
       Yes, we have `OZONE_OPTS`, but I'm not sure whether `HADOOP_OPTS` will be picked up when `OZONE_OPTS` is unset right now. Probably not?
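   The fallback the question is about can be sketched like this (a minimal illustration; `ozone_deprecate_var` is an assumed helper name, not necessarily the exact function in the Ozone scripts): the deprecated `HADOOP_*` value is used only when the corresponding `OZONE_*` variable is unset.

   ```shell
   # Hypothetical sketch of the deprecation fallback described in the PR:
   # use the HADOOP_* value only if the matching OZONE_* variable is unset.
   ozone_deprecate_var() {
     local oldvar="$1" newvar="$2"
     # ${!name} is bash indirect expansion: the value of the variable named by $name
     if [[ -n "${!oldvar:-}" && -z "${!newvar:-}" ]]; then
       echo "WARNING: ${oldvar} is deprecated; using its value for ${newvar}" >&2
       eval "${newvar}=\"\${${oldvar}}\""
     fi
   }

   HADOOP_OPTS="-Xmx1g"
   unset OZONE_OPTS
   ozone_deprecate_var HADOOP_OPTS OZONE_OPTS
   echo "${OZONE_OPTS}"   # the deprecated value was picked up
   ```

   With this shape, `HADOOP_OPTS` would be honored for backward compatibility, while an explicitly set `OZONE_OPTS` (signaling "new code") always wins.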






[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543121307



##########
File path: hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/DBConfigFromFile.java
##########
@@ -65,7 +65,7 @@ public static File getConfigLocation() throws IOException {
 
     if (StringUtil.isBlank(path)) {

Review comment:
       Not introduced by this change, but thanks for pointing out, let's fix it anyway.






[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543061256



##########
File path: hadoop-ozone/dist/src/shell/ozone/start-ozone.sh
##########
@@ -64,68 +63,53 @@ if [[ $# -ge 1 ]]; then
       dataStartOpt="$startOpt"
     ;;
     *)
-      hadoop_exit_with_usage 1
+      ozone_exit_with_usage 1
     ;;
   esac
 fi
 
 #Add other possible options
 nameStartOpt="$nameStartOpt $*"
 
-SECURITY_ENABLED=$("${HADOOP_HDFS_HOME}/bin/ozone" getconf -confKey hadoop.security.authentication | tr '[:upper:]' '[:lower:]' 2>&-)
-SECURITY_AUTHORIZATION_ENABLED=$("${HADOOP_HDFS_HOME}/bin/ozone" getconf -confKey hadoop.security.authorization | tr '[:upper:]' '[:lower:]' 2>&-)
-
-#if [[ ${SECURITY_ENABLED} == "kerberos" || ${SECURITY_AUTHORIZATION_ENABLED}
-# == "true" ]]; then
-#  echo "Ozone is not supported in a security enabled cluster."
-#  exit 1
-#fi
-
-#SECURITY_ENABLED=$("${HADOOP_HDFS_HOME}/bin/ozone" getozoneconf -confKey hadoop.security.authentication | tr '[:upper:]' '[:lower:]' 2>&-)
-#SECURITY_AUTHORIZATION_ENABLED=$("${HADOOP_HDFS_HOME}/bin/ozone" getozoneconf -confKey hadoop.security.authorization | tr '[:upper:]' '[:lower:]' 2>&-)
-#if [[ ${SECURITY_ENABLED} == "kerberos" || ${SECURITY_AUTHORIZATION_ENABLED} == "true" ]]; then
-#  echo "Ozone is not supported in a security enabled cluster."
-#  exit 1
-#fi
+SECURITY_ENABLED=$("${OZONE_HOME}/bin/ozone" getconf -confKey hadoop.security.authentication | tr '[:upper:]' '[:lower:]' 2>&-)
+SECURITY_AUTHORIZATION_ENABLED=$("${OZONE_HOME}/bin/ozone" getconf -confKey hadoop.security.authorization | tr '[:upper:]' '[:lower:]' 2>&-)
 
 # datanodes (using default workers file)
 
 echo "Starting datanodes"
-hadoop_uservar_su hdfs datanode "${HADOOP_HDFS_HOME}/bin/ozone" \
+ozone_uservar_su hdfs datanode "${OZONE_HOME}/bin/ozone" \
     --workers \
-    --config "${HADOOP_CONF_DIR}" \
+    --config "${OZONE_CONFIG_DIR}" \
     --daemon start \
     datanode ${dataStartOpt}
-(( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))
+OZONE_JUMBO_RETCOUNTER=$?

Review comment:
       Yes, this is intentional, please see item 4 in PR description:
   
   > it was incremented for the first and third commands and unconditionally set for the second one, so the result of the first command was lost.  It should be set for the first one instead.
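   The accumulation fix described above can be reproduced in isolation (the three functions below are placeholders standing in for the real datanode/om/scm start commands): the counter must be *set* from the first command's exit code and *incremented* for the later ones, otherwise the first result is lost.

   ```shell
   # Minimal reproduction of the retcode accumulation pattern from item 4
   # of the PR description (command names are placeholders, not the real
   # ozone invocations).
   first_cmd()  { return 1; }   # pretend starting the datanodes fails
   second_cmd() { return 0; }
   third_cmd()  { return 0; }

   first_cmd
   OZONE_JUMBO_RETCOUNTER=$?                                 # set, not increment
   second_cmd
   (( OZONE_JUMBO_RETCOUNTER=OZONE_JUMBO_RETCOUNTER + $? ))  # accumulate
   third_cmd
   (( OZONE_JUMBO_RETCOUNTER=OZONE_JUMBO_RETCOUNTER + $? ))  # accumulate

   echo "${OZONE_JUMBO_RETCOUNTER}"   # 1: the first failure is preserved
   ```

   With the old ordering (increment first, then unconditional set), the same run would end with the counter at 0 and the datanode failure silently dropped.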






[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r544610169



##########
File path: hadoop-ozone/dist/src/main/compose/ozone-mr/hadoop27/docker-config
##########
@@ -18,4 +18,7 @@ CORE-SITE.xml_fs.AbstractFileSystem.o3fs.impl=org.apache.hadoop.fs.ozone.OzFs
 CORE-SITE.xml_fs.AbstractFileSystem.ofs.impl=org.apache.hadoop.fs.ozone.RootedOzFs
 MAPRED-SITE.XML_mapreduce.application.classpath=/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-hadoop2-@project.version@.jar
 
+HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-hadoop2-@project.version@.jar
+OZONE_CLASSPATH=

Review comment:
       > setting it to empty on purpose
   
   Exactly: without it `HADOOP_CLASSPATH` containing OzoneFS jar would be picked up, which is bad for Ozone's health.  Actually, this is the primary motivation for this entire change.  From description of HDDS-4525:
   
   > severe problem happens if we would like to access Ozone filesystem both via `ozone` and `hadoop` commands.  The latter needs shaded Ozone FS JAR in `HADOOP_CLASSPATH`.  The same `HADOOP_CLASSPATH` results in `ClassNotFound` for `ozone`.
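   The selection logic this implies can be sketched as follows (an illustration under assumptions; `pick_extra_classpath` is a hypothetical helper and the jar path is a placeholder, not the exact code in `ozone`): an `OZONE_CLASSPATH` that is *set but empty* overrides `HADOOP_CLASSPATH`, keeping the shaded OzoneFS jar out of Ozone's own JVM.

   ```shell
   # Hedged sketch of the precedence described in the comment: an explicitly
   # set OZONE_CLASSPATH -- even empty -- wins over HADOOP_CLASSPATH.
   pick_extra_classpath() {
     if [[ -n "${OZONE_CLASSPATH+x}" ]]; then   # set, possibly to empty
       echo "${OZONE_CLASSPATH}"
     else
       echo "${HADOOP_CLASSPATH:-}"
     fi
   }

   # placeholder jar path for illustration only
   HADOOP_CLASSPATH=/opt/ozone/share/ozone/lib/ozone-fs-shaded.jar
   OZONE_CLASSPATH=
   pick_extra_classpath    # prints an empty line: the shaded jar is ignored
   ```

   This is why the docker-config sets `OZONE_CLASSPATH=` explicitly: `hadoop` commands still see the shaded jar via `HADOOP_CLASSPATH`, while `ozone` commands do not.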






[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543078959



##########
File path: hadoop-ozone/dist/src/main/smoketest/compatibility/om.robot
##########
@@ -0,0 +1,27 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test om compatibility
+Library             BuiltIn
+Resource            ../lib/os.robot
+Test Timeout        5 minutes
+
+*** Test Cases ***
+Picks up command line options
+    Pass Execution If    '%{HDFS_OM_OPTS}' == ''    Command-line option required for process check

Review comment:
       No, these tests intentionally use old names to verify compatibility with old scripts.  Some of the variables (e.g. `HDFS_OM_OPTS` and `HDFS_STORAGECONTAINERMANAGER_OPTS`) were already deprecated before this change, but I changed the function that deprecates them.
   
   https://github.com/apache/ozone/blob/dfd2aaf7fff4815e906e7201cee1739ad9776d97/hadoop-ozone/dist/src/shell/ozone/ozone#L160






[GitHub] [ozone] codecov-io edited a comment on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
codecov-io edited a comment on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-745962538


   # [Codecov](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=h1) Report
   > Merging [#1667](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=desc) (dfd2aaf) into [master](https://codecov.io/gh/apache/ozone/commit/74315ac31230f19396572ba6117d4a652781f8ef?el=desc) (74315ac) will **increase** coverage by `0.06%`.
   > The diff coverage is `n/a`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/ozone/pull/1667/graphs/tree.svg?width=650&height=150&src=pr&token=5YeeptJMby)](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1667      +/-   ##
   ============================================
   + Coverage     75.70%   75.76%   +0.06%     
   - Complexity    11301    11310       +9     
   ============================================
     Files          1048     1048              
     Lines         53898    53836      -62     
     Branches       5324     5307      -17     
   ============================================
   - Hits          40801    40787      -14     
   + Misses        10564    10531      -33     
   + Partials       2533     2518      -15     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=tree) | Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | [.../hadoop/ozone/s3/endpoint/MultiDeleteResponse.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL3MzZ2F0ZXdheS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL296b25lL3MzL2VuZHBvaW50L011bHRpRGVsZXRlUmVzcG9uc2UuamF2YQ==) | `25.00% <0.00%> (-20.00%)` | `4.00% <0.00%> (-1.00%)` | |
   | [...hdds/scm/container/common/helpers/ExcludeList.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9zY20vY29udGFpbmVyL2NvbW1vbi9oZWxwZXJzL0V4Y2x1ZGVMaXN0LmphdmE=) | `86.95% <0.00%> (-13.05%)` | `19.00% <0.00%> (-3.00%)` | |
   | [...e/hadoop/ozone/s3/endpoint/MultiDeleteRequest.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL3MzZ2F0ZXdheS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL296b25lL3MzL2VuZHBvaW50L011bHRpRGVsZXRlUmVxdWVzdC5qYXZh) | `65.00% <0.00%> (-10.00%)` | `4.00% <0.00%> (-1.00%)` | |
   | [.../apache/hadoop/ozone/s3/endpoint/EndpointBase.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL3MzZ2F0ZXdheS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL296b25lL3MzL2VuZHBvaW50L0VuZHBvaW50QmFzZS5qYXZh) | `64.10% <0.00%> (-7.90%)` | `10.00% <0.00%> (-4.00%)` | |
   | [...doop/hdds/scm/container/ContainerStateManager.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9Db250YWluZXJTdGF0ZU1hbmFnZXIuamF2YQ==) | `81.67% <0.00%> (-6.88%)` | `32.00% <0.00%> (-3.00%)` | |
   | [...e/commandhandler/CloseContainerCommandHandler.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL3N0YXRlbWFjaGluZS9jb21tYW5kaGFuZGxlci9DbG9zZUNvbnRhaW5lckNvbW1hbmRIYW5kbGVyLmphdmE=) | `82.45% <0.00%> (-3.51%)` | `11.00% <0.00%> (ø%)` | |
   | [...ent/algorithms/SCMContainerPlacementRackAware.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9wbGFjZW1lbnQvYWxnb3JpdGhtcy9TQ01Db250YWluZXJQbGFjZW1lbnRSYWNrQXdhcmUuamF2YQ==) | `76.69% <0.00%> (-3.01%)` | `31.00% <0.00%> (-2.00%)` | |
   | [...hadoop/hdds/scm/container/SCMContainerManager.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9TQ01Db250YWluZXJNYW5hZ2VyLmphdmE=) | `72.60% <0.00%> (-1.83%)` | `40.00% <0.00%> (-1.00%)` | |
   | [.../java/org/apache/hadoop/ozone/debug/DBScanner.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLW96b25lL3Rvb2xzL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvZGVidWcvREJTY2FubmVyLmphdmE=) | `74.35% <0.00%> (-0.65%)` | `18.00% <0.00%> (ø%)` | |
   | [...mon/transport/server/ratis/XceiverServerRatis.java](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL3RyYW5zcG9ydC9zZXJ2ZXIvcmF0aXMvWGNlaXZlclNlcnZlclJhdGlzLmphdmE=) | `86.97% <0.00%> (-0.27%)` | `64.00% <0.00%> (+1.00%)` | :arrow_down: |
   | ... and [34 more](https://codecov.io/gh/apache/ozone/pull/1667/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=footer). Last update [74315ac...a70def9](https://codecov.io/gh/apache/ozone/pull/1667?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r544614678



##########
File path: hadoop-ozone/dist/src/main/smoketest/cli/classpath.robot
##########
@@ -0,0 +1,46 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test ozone classpath command
+Library             BuiltIn
+Resource            ../lib/os.robot
+Resource            ../ozone-lib/shell.robot
+Test Timeout        5 minutes
+Suite Setup         Find Jars Dir
+
+*** Test Cases ***
+Ignores HADOOP_CLASSPATH if OZONE_CLASSPATH is set

Review comment:
       Yes.  There is a compatibility test for this.  We define some old `OPTS`:
   
   https://github.com/apache/ozone/blob/a70def928eca460dcf5abb3b0dfe87f01cec29e5/hadoop-ozone/dist/src/main/compose/compatibility/docker-config#L31-L34
   
   and then check if these are passed to the java process that `ozone om` command starts:
   
   https://github.com/apache/ozone/blob/e54c4390bcc8374970baa120b6c6d9b5a375ff1d/hadoop-ozone/dist/src/main/smoketest/compatibility/om.robot#L23-L27
   
   (Similar test exists for other components.)
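   The essence of that process check can be sketched in shell (an illustration only; `cmdline_has_opt` is an assumed helper, and the command line is a stand-in, not the real OM process): given a java process's command line, assert that an option supplied via a deprecated `*_OPTS` variable actually made it through.

   ```shell
   # Illustrative version of the compatibility check: does the process
   # command line contain the option passed via the deprecated variable?
   cmdline_has_opt() {
     # match the option as a whole space-delimited word
     case " $1 " in
       *" $2 "*) return 0 ;;
       *)        return 1 ;;
     esac
   }

   # stand-in for what `ps` would report for the OM java process
   cmdline="java -Dom.test.option=1 org.apache.hadoop.ozone.om.OzoneManager"

   cmdline_has_opt "$cmdline" "-Dom.test.option=1" && echo "option picked up"
   ```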






[GitHub] [ozone] adoroszlai commented on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-746560661


   Thanks @fapifta for taking another look, and also your initial review and nudging me to fix it properly.
   
   > Now I am guessing, but based on your change envtoconf.py is using transformation.py, and by adding it to the docker-compose.yaml file, we are effectively overriding the original transformation.py that is baked into the image(?).
   
   Exactly.  This change is necessary only for `apache/ozone` and `apache/hadoop` images, since `apache/ozone-runner` requires Ozone binaries to be mounted anyway.  We can remove these mounts if/when the images are rebuilt with fixed `envtoconf`.
   
   Answered the other part of your question about `transformation.py` in the separate PR where this is being fixed (should be merged before this one).
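   The override mechanism being discussed amounts to a bind mount in the compose file along these lines (a hedged sketch; the service name, image tag, and in-container path are assumptions, not taken from the actual compose files):

   ```yaml
   # Sketch: shadow the transformation.py baked into the image with a
   # fixed local copy, until the images are rebuilt with the envtoconf fix.
   services:
     om:
       image: apache/ozone:1.0.0                  # placeholder tag
       volumes:
         - ./transformation.py:/opt/hadoop/libexec/envtoconf/transformation.py
   ```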




[GitHub] [ozone] smengcl commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
smengcl commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r541713272



##########
File path: hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/DBConfigFromFile.java
##########
@@ -65,7 +65,7 @@ public static File getConfigLocation() throws IOException {
 
     if (StringUtil.isBlank(path)) {
       LOG.debug("Unable to find the configuration directory. "
-          + "Please make sure that HADOOP_CONF_DIR is setup correctly.");
+          + "Please make sure that " + CONFIG_DIR + " is setup correctly.");

Review comment:
       ~~Should be `OZONE_CONFIG_DIR`?~~ nvm






[GitHub] [ozone] adoroszlai commented on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-765234919


   Thanks @fapifta and @smengcl for the reviews, and @smengcl for merging it.




[GitHub] [ozone] adoroszlai commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
adoroszlai commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543118163



##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Set Ozone-specific environment variables here.
+
+# Enable core dump when crash in C++
+ulimit -c unlimited
+
+# Many of the options here are built from the perspective that users
+# may want to provide OVERWRITING values on the command line.
+# For example:
+#
+#  JAVA_HOME=/usr/java/testing hdfs dfs -ls
+#
+# Therefore, the vast majority (BUT NOT ALL!) of these defaults
+# are configured for substitution and not append.  If append
+# is preferable, modify this file accordingly.
+
+###
+# Generic settings
+###
+
+# Technically, the only required environment variable is JAVA_HOME.
+# All others are optional.  However, the defaults are probably not
+# preferred.  Many sites configure these options outside of Ozone,
+# such as in /etc/profile.d
+
+# The java implementation to use. By default, this environment
+# variable is REQUIRED on ALL platforms except OS X!
+# export JAVA_HOME=
+
+# Location of Ozone.  By default, Ozone will attempt to determine
+# this location based upon its execution path.
+# export OZONE_HOME=
+
+# Location of Ozone's configuration information.  i.e., where this
+# file is living. If this is not defined, Ozone will attempt to
+# locate it based upon its execution path.
+#
+# NOTE: It is recommended that this variable not be set here but in
+# /etc/profile.d or equivalent.  Some options (such as
+# --config) may react strangely otherwise.
+#
+# export OZONE_CONFIG_DIR=${OZONE_HOME}/etc/hadoop
+
+# The maximum amount of heap to use (Java -Xmx).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xmx setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MAX=
+
+# The minimum amount of heap to use (Java -Xms).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xms setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MIN=
+
+# Extra Java runtime options for all Ozone commands. We don't support
+# IPv6 yet/still, so by default the preference is set to IPv4.
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true"
+# For Kerberos debugging, an extended option set logs more information
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
+
+# Some parts of the shell code may do special things dependent upon
+# the operating system.  We have to set this here. See the next
+# section as to why....
+export OZONE_OS_TYPE=${OZONE_OS_TYPE:-$(uname -s)}
+
+# Extra Java runtime options for some Ozone commands
+# and clients (i.e., hdfs dfs -blah).  These get appended to OZONE_OPTS for
+# such commands.  In most cases, this should be left empty and
+# let users supply it on the command line.
+# export OZONE_CLIENT_OPTS=""
+
+#
+# A note about classpaths.
+#
+# By default, Apache Ozone overrides Java's CLASSPATH
+# environment variable.  It is configured such
+# that it starts out blank with new entries added after passing
+# a series of checks (file/dir exists, not already listed aka
+# de-duplication).  During de-duplication, wildcards and/or
+# directories are *NOT* expanded to keep it simple. Therefore,
+# if the computed classpath has two specific mentions of
+# awesome-methods-1.0.jar, only the first one added will be seen.
+# If two directories are in the classpath that both contain
+# awesome-methods-1.0.jar, then Java will pick up both versions.
+
+# An additional, custom CLASSPATH. Site-wide configs should be
+# handled via the shellprofile functionality, utilizing the
+# ozone_add_classpath function for greater control, making it
+# much harder for apps/end-users to accidentally override.
+# Similarly, end users should utilize ${HOME}/.ozonerc .
+# This variable should ideally only be used as a short-cut,
+# interactive way for temporary additions on the command line.
+# export OZONE_CLASSPATH="/some/cool/path/on/your/machine"
+
+# Should OZONE_CLASSPATH be first in the official CLASSPATH?
+# export OZONE_USER_CLASSPATH_FIRST="yes"
+
+# If OZONE_USE_CLIENT_CLASSLOADER is set, OZONE_CLASSPATH and
+# OZONE_USER_CLASSPATH_FIRST are ignored.
+# export OZONE_USE_CLIENT_CLASSLOADER=true
+
+###
+# Options for remote shell connectivity
+###
+
+# There are some optional components of hadoop that allow for

Review comment:
       Thanks.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org


[GitHub] [ozone] smengcl merged pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
smengcl merged pull request #1667:
URL: https://github.com/apache/ozone/pull/1667


   






[GitHub] [ozone] fapifta commented on pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
fapifta commented on pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#issuecomment-745036994


   Hi @adoroszlai, thank you for working on this change; please find a few comments from me inline.
   
   I have a concern about OZONE_CONFIG_DIR: we break a convention here, as not just Hadoop but also HBase and Hive, for example, use COMPONENT_CONF_DIR to specify the config dir.
   Instead of changing the variable name and breaking the convention, is it possible to fix the envtoconf behaviour? I see that OZONE_CONFIG_DIR is added to docker-config files, and that's how envtoconf came into the picture; previously we did not need HADOOP_CONF_DIR there, so why do we need OZONE_CONFIG_DIR in the docker-config files now?
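
   The naming question above touches the PR's deprecation scheme: old HADOOP_* values are honoured unless the corresponding OZONE_* variable is also defined. A minimal bash sketch of such a fallback (the function name and warning text are illustrative, not the actual ozone-functions.sh code):

```shell
#!/usr/bin/env bash
# Hedged sketch of a deprecation fallback: use the old HADOOP_* value
# unless the corresponding OZONE_* variable is also defined.
deprecate_envvar() {
  local old="$1" new="$2"
  # Fall back only when the new variable is unset and the old one is set.
  if [[ -n "${!old:-}" && -z "${!new:-}" ]]; then
    echo "WARNING: ${old} has been replaced by ${new}. Using value of ${old}." >&2
    eval "export ${new}=\"\${${old}}\""
  fi
}

export HADOOP_CONF_DIR=/etc/hadoop/conf
unset OZONE_CONFIG_DIR
deprecate_envvar HADOOP_CONF_DIR OZONE_CONFIG_DIR
echo "${OZONE_CONFIG_DIR}"   # → /etc/hadoop/conf
```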




[GitHub] [ozone] fapifta commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
fapifta commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543029654



##########
File path: hadoop-ozone/dist/src/shell/ozone/start-ozone.sh
##########
@@ -64,68 +63,53 @@ if [[ $# -ge 1 ]]; then
       dataStartOpt="$startOpt"
     ;;
     *)
-      hadoop_exit_with_usage 1
+      ozone_exit_with_usage 1
     ;;
   esac
 fi
 
 #Add other possible options
 nameStartOpt="$nameStartOpt $*"
 
-SECURITY_ENABLED=$("${HADOOP_HDFS_HOME}/bin/ozone" getconf -confKey hadoop.security.authentication | tr '[:upper:]' '[:lower:]' 2>&-)
-SECURITY_AUTHORIZATION_ENABLED=$("${HADOOP_HDFS_HOME}/bin/ozone" getconf -confKey hadoop.security.authorization | tr '[:upper:]' '[:lower:]' 2>&-)
-
-#if [[ ${SECURITY_ENABLED} == "kerberos" || ${SECURITY_AUTHORIZATION_ENABLED}
-# == "true" ]]; then
-#  echo "Ozone is not supported in a security enabled cluster."
-#  exit 1
-#fi
-
-#SECURITY_ENABLED=$("${HADOOP_HDFS_HOME}/bin/ozone" getozoneconf -confKey hadoop.security.authentication | tr '[:upper:]' '[:lower:]' 2>&-)
-#SECURITY_AUTHORIZATION_ENABLED=$("${HADOOP_HDFS_HOME}/bin/ozone" getozoneconf -confKey hadoop.security.authorization | tr '[:upper:]' '[:lower:]' 2>&-)
-#if [[ ${SECURITY_ENABLED} == "kerberos" || ${SECURITY_AUTHORIZATION_ENABLED} == "true" ]]; then
-#  echo "Ozone is not supported in a security enabled cluster."
-#  exit 1
-#fi
+SECURITY_ENABLED=$("${OZONE_HOME}/bin/ozone" getconf -confKey hadoop.security.authentication | tr '[:upper:]' '[:lower:]' 2>&-)
+SECURITY_AUTHORIZATION_ENABLED=$("${OZONE_HOME}/bin/ozone" getconf -confKey hadoop.security.authorization | tr '[:upper:]' '[:lower:]' 2>&-)
 
 # datanodes (using default workers file)
 
 echo "Starting datanodes"
-hadoop_uservar_su hdfs datanode "${HADOOP_HDFS_HOME}/bin/ozone" \
+ozone_uservar_su hdfs datanode "${OZONE_HOME}/bin/ozone" \
     --workers \
-    --config "${HADOOP_CONF_DIR}" \
+    --config "${OZONE_CONFIG_DIR}" \
     --daemon start \
     datanode ${dataStartOpt}
-(( HADOOP_JUMBO_RETCOUNTER=HADOOP_JUMBO_RETCOUNTER + $? ))
+OZONE_JUMBO_RETCOUNTER=$?

Review comment:
       Is this changed intentionally from accumulation to assignment? Probably, but I wanted to be sure. Even if this is the first place where we can safely assign (when the script runs in a clean context), the variable may have an initial value in some workflow, which would make it worth preserving the external value even at the first assignment.
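
       The difference can be illustrated with a small bash sketch (not the actual start-ozone.sh code):

```shell
#!/usr/bin/env bash
# Why the counter must be *assigned* after the first command and
# *accumulated* afterwards (run_step is a stand-in for starting daemons).
run_step() { return "$1"; }

# Buggy ordering: accumulate for step 1, plain assignment for step 2 --
# step 1's failure is silently discarded.
RET=0
run_step 1; (( RET += $? ))
run_step 0; RET=$?
buggy=$RET                    # 0: the failure was lost

# Fixed ordering: assign for the first step, accumulate afterwards.
run_step 1; RET=$?
run_step 0; (( RET += $? ))
fixed=$RET                    # 1: the failure is preserved
echo "buggy=$buggy fixed=$fixed"
```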






[GitHub] [ozone] fapifta commented on a change in pull request #1667: HDDS-4525. Replace Hadoop variables and functions in Ozone shell scripts with Ozone-specific ones

Posted by GitBox <gi...@apache.org>.
fapifta commented on a change in pull request #1667:
URL: https://github.com/apache/ozone/pull/1667#discussion_r543028585



##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Set Ozone-specific environment variables here.
+
+# Enable core dump when crash in C++
+ulimit -c unlimited
+
+# Many of the options here are built from the perspective that users
+# may want to provide OVERWRITING values on the command line.
+# For example:
+#
+#  JAVA_HOME=/usr/java/testing hdfs dfs -ls
+#
+# Therefore, the vast majority (BUT NOT ALL!) of these defaults
+# are configured for substitution and not append.  If append
+# is preferable, modify this file accordingly.
+
+###
+# Generic settings
+###
+
+# Technically, the only required environment variable is JAVA_HOME.
+# All others are optional.  However, the defaults are probably not
+# preferred.  Many sites configure these options outside of Ozone,
+# such as in /etc/profile.d
+
+# The java implementation to use. By default, this environment
+# variable is REQUIRED on ALL platforms except OS X!
+# export JAVA_HOME=
+
+# Location of Ozone.  By default, Ozone will attempt to determine
+# this location based upon its execution path.
+# export OZONE_HOME=
+
+# Location of Ozone's configuration information.  i.e., where this
+# file is living. If this is not defined, Ozone will attempt to
+# locate it based upon its execution path.
+#
+# NOTE: It is recommended that this variable not be set here but in
+# /etc/profile.d or equivalent.  Some options (such as
+# --config) may react strangely otherwise.
+#
+# export OZONE_CONFIG_DIR=${OZONE_HOME}/etc/hadoop
+
+# The maximum amount of heap to use (Java -Xmx).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xmx setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MAX=
+
+# The minimum amount of heap to use (Java -Xms).  If no unit
+# is provided, it will be converted to MB.  Daemons will
+# prefer any Xms setting in their respective _OPT variable.
+# There is no default; the JVM will autoscale based upon machine
+# memory size.
+# export OZONE_HEAPSIZE_MIN=
+
+# Extra Java runtime options for all Ozone commands. We don't support
+# IPv6 yet/still, so by default the preference is set to IPv4.
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true"
+# For Kerberos debugging, an extended option set logs more information
+# export OZONE_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
+
+# Some parts of the shell code may do special things dependent upon
+# the operating system.  We have to set this here. See the next
+# section as to why....
+export OZONE_OS_TYPE=${OZONE_OS_TYPE:-$(uname -s)}
+
+# Extra Java runtime options for some Ozone commands
+# and clients (i.e., hdfs dfs -blah).  These get appended to OZONE_OPTS for
+# such commands.  In most cases, this should be left empty and
+# let users supply it on the command line.
+# export OZONE_CLIENT_OPTS=""
+
+#
+# A note about classpaths.
+#
+# By default, Apache Ozone overrides Java's CLASSPATH
+# environment variable.  It is configured such
+# that it starts out blank with new entries added after passing
+# a series of checks (file/dir exists, not already listed aka
+# de-duplication).  During de-duplication, wildcards and/or
+# directories are *NOT* expanded to keep it simple. Therefore,
+# if the computed classpath has two specific mentions of
+# awesome-methods-1.0.jar, only the first one added will be seen.
+# If two directories are in the classpath that both contain
+# awesome-methods-1.0.jar, then Java will pick up both versions.
+
+# An additional, custom CLASSPATH. Site-wide configs should be
+# handled via the shellprofile functionality, utilizing the
+# ozone_add_classpath function for greater control, making it
+# much harder for apps/end-users to accidentally override.
+# Similarly, end users should utilize ${HOME}/.ozonerc .
+# This variable should ideally only be used as a short-cut,
+# interactive way for temporary additions on the command line.
+# export OZONE_CLASSPATH="/some/cool/path/on/your/machine"
+
+# Should OZONE_CLASSPATH be first in the official CLASSPATH?
+# export OZONE_USER_CLASSPATH_FIRST="yes"
+
+# If OZONE_USE_CLIENT_CLASSLOADER is set, OZONE_CLASSPATH and
+# OZONE_USER_CLASSPATH_FIRST are ignored.
+# export OZONE_USE_CLIENT_CLASSLOADER=true
+
+###
+# Options for remote shell connectivity
+###
+
+# There are some optional components of hadoop that allow for
+# command and control of remote hosts.  For example,
+# start-dfs.sh will attempt to bring up all NNs, DNS, etc.
+
+# Options to pass to SSH when one of the "log into a host and
+# start/stop daemons" scripts is executed
+# export OZONE_SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=10s"
+
+# The built-in ssh handler will limit itself to 10 simultaneous connections.
+# For pdsh users, this sets the fanout size ( -f )
+# Change this to increase/decrease as necessary.
+# export OZONE_SSH_PARALLEL=10
+
+# Filename which contains all of the hosts for any remote execution
+# helper scripts such as workers.sh, start-dfs.sh, etc.
+# export OZONE_WORKERS="${OZONE_CONFIG_DIR}/workers"
+
+###
+# Options for all daemons
+###
+#
+
+#
+# Many options may also be specified as Java properties.  It is
+# very common, and in many cases, desirable, to hard-set these
+# in daemon _OPTS variables.  Where applicable, the appropriate
+# Java property is also identified.  Note that many are re-used
+# or set differently in certain contexts (e.g., secure vs
+# non-secure)
+#
+
+# Where (primarily) daemon log files are stored.
+# ${OZONE_HOME}/logs by default.
+# Java property: hadoop.log.dir
+# export OZONE_LOG_DIR=${OZONE_HOME}/logs
+
+# A string representing this instance of hadoop. $USER by default.
+# This is used in writing log and pid files, so keep that in mind!
+# Java property: hadoop.id.str
+# export OZONE_IDENT_STRING=$USER
+
+# How many seconds to pause after stopping a daemon
+# export OZONE_STOP_TIMEOUT=5
+
+# Where pid files are stored.  /tmp by default.
+# export OZONE_PID_DIR=/tmp
+
+# Default log4j setting for interactive commands
+# Java property: hadoop.root.logger
+# export OZONE_ROOT_LOGGER=INFO,console
+
+# Default log4j setting for daemons spawned explicitly by
+# --daemon option of hadoop, hdfs, mapred and yarn command.
+# Java property: hadoop.root.logger
+# export OZONE_DAEMON_ROOT_LOGGER=INFO,RFA
+
+# Default log level and output location for security-related messages.
+# You will almost certainly want to change this on a per-daemon basis via
+# the Java property (i.e., -Dhadoop.security.logger=foo). (Note that the
+# defaults for the NN and 2NN override this by default.)
+# Java property: hadoop.security.logger
+# export OZONE_SECURITY_LOGGER=INFO,NullAppender
+
+# Default process priority level
+# Note that sub-processes will also run at this level!
+# export OZONE_NICENESS=0
+
+# Default name for the service level authorization file
+# Java property: hadoop.policy.file
+# export OZONE_POLICYFILE="hadoop-policy.xml"
+
+#
+# NOTE: this is not used by default!  <-----
+# You can define variables right here and then re-use them later on.
+# For example, it is common to use the same garbage collection settings
+# for all the daemons.  So one could define:
+#
+# export OZONE_GC_SETTINGS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
+#
+# .. and then use it in the daemon-specific _OPTS variables below.
+
+###
+# Secure/privileged execution
+###
+
+#
+# Out of the box, Ozone uses jsvc from Apache Commons to launch daemons
+# on privileged ports.  This functionality can be replaced by providing
+# custom functions.  See hadoop-functions.sh for more information.
+#
+
+# The jsvc implementation to use. Jsvc is required to run secure datanodes
+# that bind to privileged ports to provide authentication of data transfer
+# protocol.  Jsvc is not required if SASL is configured for authentication of
+# data transfer protocol using non-privileged ports.
+# export JSVC_HOME=/usr/bin
+
+#
+# This directory contains pids for secure and privileged processes.
+#export OZONE_SECURE_PID_DIR=${OZONE_PID_DIR}
+
+#
+# This directory contains the logs for secure and privileged processes.
+# Java property: hadoop.log.dir
+# export OZONE_SECURE_LOG=${OZONE_LOG_DIR}
+
+#
+# When running a secure daemon, the default value of OZONE_IDENT_STRING
+# ends up being a bit bogus.  Therefore, by default, the code will
+# replace OZONE_IDENT_STRING with OZONE_xx_SECURE_USER.  If one wants
+# to keep OZONE_IDENT_STRING untouched, then uncomment this line.
+# export OZONE_SECURE_IDENT_PRESERVE="true"
+
+###
+# Ozone Manager specific parameters
+###
+# Specify the JVM options to be used when starting the Ozone Manager.
+# These options will be appended to the options specified as OZONE_OPTS
+# and therefore may override any similar flags set in OZONE_OPTS
+#
+# export OZONE_OM_OPTS=""
+
+###
+# Ozone DataNode specific parameters
+###
+# Specify the JVM options to be used when starting Ozone DataNodes.
+# These options will be appended to the options specified as OZONE_OPTS
+# and therefore may override any similar flags set in OZONE_OPTS
+#
+# export OZONE_DATANODE_OPTS=""
+
+###
+# HDFS StorageContainerManager specific parameters
+###
+# Specify the JVM options to be used when starting the HDFS Storage Container Manager.
+# These options will be appended to the options specified as OZONE_OPTS
+# and therefore may override any similar flags set in OZONE_OPTS
+#
+# export OZONE_SCM_OPTS=""
+

Review comment:
       It may be out of scope for this PR, but shouldn't we similarly have OZONE_RECON_OPTS and OZONE_S3GW_OPTS?
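
       The append-and-override behaviour these _OPTS comments describe can be sketched as follows (the composition below is illustrative, not the actual launcher code):

```shell
#!/usr/bin/env bash
# Daemon-specific _OPTS are appended after the generic OZONE_OPTS, so for
# repeated JVM flags such as -Xmx the daemon-specific value takes effect
# (the JVM honours the last occurrence on its command line).
OZONE_OPTS="-Xmx1g -Djava.net.preferIPv4Stack=true"
OZONE_OM_OPTS="-Xmx4g"
final_opts="${OZONE_OPTS} ${OZONE_OM_OPTS}"
echo "${final_opts}"   # → -Xmx1g -Djava.net.preferIPv4Stack=true -Xmx4g
```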

##########
File path: hadoop-hdds/common/src/main/conf/ozone-env.sh
##########
@@ -0,0 +1,280 @@
+###
+# Advanced Users Only!
+###
+
+#
+# When building Ozone, one can add the class paths to the commands
+# via this special env var:
+# export OZONE_ENABLE_BUILD_PATHS="true"
+
+#
+# To prevent accidents, shell commands can be (superficially) locked
+# to only allow certain users to execute certain subcommands.
+# It uses the format of (command)_(subcommand)_USER.
+#
+# For example, to limit who can execute the namenode command,
+# export HDFS_NAMENODE_USER=hdfs

Review comment:
       Does this remain here on purpose?

##########
File path: hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/DBConfigFromFile.java
##########
@@ -65,7 +65,7 @@ public static File getConfigLocation() throws IOException {
 
     if (StringUtil.isBlank(path)) {

Review comment:
       We duplicate this if statement here: first to write a debug-level log message, and second to return null. Can we pull the two together, and also change the log message to state the reason why we fail, by saying something like:
   CONFIG_DIR + " variable is empty, please make sure it is set up correctly!"

##########
File path: hadoop-ozone/dist/src/main/compose/ozone-csi/docker-compose.yaml
##########
@@ -24,7 +24,7 @@ services:
     env_file:
       - docker-config
     environment:
-      HADOOP_OPTS: ${HADOOP_OPTS}
+      OZONE_OPTS:

Review comment:
       For all these replacements, shouldn't we still provide ${OZONE_OPTS} to the environment as we did with HADOOP_OPTS before? We specify the coverage-related options via this variable in test_all.sh, for example.
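       For what it's worth, Compose supports both spellings: `VAR: ${VAR}` interpolates the value when the file is parsed, while a key with no value passes the variable through from the shell that runs docker-compose. A minimal sketch (service name assumed):

```yaml
services:
  scm:
    environment:
      # Explicit interpolation: substituted when the compose file is parsed.
      OZONE_OPTS: ${OZONE_OPTS}
      # Pass-through form: value is taken from the invoking shell's
      # environment; if unset there, the variable is simply omitted.
      # OZONE_OPTS:
```

So the bare `OZONE_OPTS:` form in the diff should still propagate the host value, but the explicit form makes that intent more obvious.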

##########
File path: hadoop-ozone/dist/src/main/smoketest/compatibility/om.robot
##########
@@ -0,0 +1,27 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test om compatibility
+Library             BuiltIn
+Resource            ../lib/os.robot
+Test Timeout        5 minutes
+
+*** Test Cases ***
+Picks up command line options
+    Pass Execution If    '%{HDFS_OM_OPTS}' == ''    Command-line option required for process check

Review comment:
       Shouldn't we also rename this envvar to OZONE_OM_OPTS?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org