Posted to hdfs-dev@hadoop.apache.org by "Elek, Marton (JIRA)" <ji...@apache.org> on 2018/08/16 08:58:00 UTC

[jira] [Created] (HDDS-352) Separate install and testing phases in acceptance tests.

Elek, Marton created HDDS-352:
---------------------------------

             Summary: Separate install and testing phases in acceptance tests.
                 Key: HDDS-352
                 URL: https://issues.apache.org/jira/browse/HDDS-352
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Elek, Marton


In the current acceptance tests (hadoop-ozone/acceptance-test) the robot files contain two kinds of commands:

1) starting and stopping clusters
2) testing the basic behaviour with client calls

It would be great to separate the two functionalities and include only the testing part in the robot files.

1. Ideally the tests could be executed in any environment. After a Kubernetes install I would like to run a smoke test. Even though it is a different environment, I would like to execute most of the tests (check the ozone CLI, the REST API, etc.)

2. There could be multiple ozone environments (standalone ozone cluster, hdfs + ozone cluster, etc.). We need to run the full test suite against each of them.
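
One possible shape for this is a simple loop over the environments; the directory names and the run-tests.sh helper here are assumptions for illustration, not existing files:

{code}
# Sketch: replay the same test suite against every cluster flavour.
# Directory names and run-tests.sh are hypothetical.
for env in ozone ozone-hdfs; do
  ./run-tests.sh "hadoop-dist/target/compose/$env"
done
{code}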

3. With this approach we can collect the docker-compose files in just one place (the hadoop-dist project). After a docker-compose up there should be a way to execute the tests against an existing cluster. Something like this:

{code}
docker run -it \
  -v "$(pwd)/acceptance-test:/opt/acceptance-test" \
  -e SCM_URL=http://scm:9876 \
  --network=composenetwork \
  apache/hadoop-runner start-all-tests.sh
{code}

4. It also means that we need to execute the tests from a separate container instance. We need a configuration parameter to define the cluster topology. Ideally it could be just one environment variable with the URL of the SCM; the SCM could then be used to discover all of the required components and to download the configuration files.
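
A minimal sketch of such a bootstrap, assuming the standard Hadoop /conf servlet is exposed on the SCM web UI; the target file path and the final robot invocation are illustrative assumptions:

{code}
#!/usr/bin/env bash
# Sketch only: bootstrap the test container from a single SCM_URL variable.
set -e

: "${SCM_URL:?SCM_URL must be set, e.g. http://scm:9876}"

# Hadoop web UIs serve the effective configuration on /conf (XML by default);
# saving it lets the test clients talk to the discovered cluster.
# The destination path is an assumption.
curl -fsS "${SCM_URL}/conf" > /opt/hadoop/etc/hadoop/ozone-site.xml

# Run the suites against the existing cluster (path is an assumption).
exec robot /opt/acceptance-test
{code}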

5. Until now we have used the log output of the docker-compose clusters for readiness probes. They should be converted to poll the JMX endpoints and check whether the cluster is up and running. If we need the log files for additional testing, we can create multiple implementations for the different types of environments (docker-compose/kubernetes) and include the right set of functions based on an external parameter.
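
Such a probe could poll the standard Hadoop /jmx servlet; a sketch follows, where SCM_URL and the retry limits are assumptions:

{code}
#!/usr/bin/env bash
# Readiness probe sketch: poll the Hadoop /jmx servlet instead of
# grepping docker-compose logs.
SCM_URL=${SCM_URL:-http://scm:9876}

for i in $(seq 1 60); do
  if curl -fsS "${SCM_URL}/jmx" > /dev/null; then
    echo "SCM is up"
    exit 0
  fi
  echo "Waiting for SCM at ${SCM_URL} (${i}/60)..."
  sleep 2
done
echo "SCM did not become available in time" >&2
exit 1
{code}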

6. We still need a generic script under the ozone acceptance-test project to run all the tests (start the docker-compose clusters, execute the tests in a different container, stop the cluster).
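
A sketch of that generic runner, tying the previous points together; the compose file locations, the network name, and start-all-tests.sh are assumptions based on the description above:

{code}
#!/usr/bin/env bash
# Hypothetical wrapper: start a compose cluster, run the tests from a
# separate container, tear the cluster down afterwards.
set -e

COMPOSE_DIR=${1:-hadoop-dist/target/compose/ozone}

docker-compose -f "$COMPOSE_DIR/docker-compose.yaml" up -d
trap 'docker-compose -f "$COMPOSE_DIR/docker-compose.yaml" down' EXIT

docker run --rm \
  -v "$(pwd)/acceptance-test:/opt/acceptance-test" \
  -e SCM_URL=http://scm:9876 \
  --network=composenetwork \
  apache/hadoop-runner start-all-tests.sh
{code}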



