Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2020/06/01 23:39:26 UTC

[GitHub] [druid] maytasm commented on a change in pull request #9854: Integration Tests.

maytasm commented on a change in pull request #9854:
URL: https://github.com/apache/druid/pull/9854#discussion_r433544100



##########
File path: integration-tests/build_run_cluster.sh
##########
@@ -0,0 +1,43 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+echo $DRUID_INTEGRATION_TEST_OVERRIDE_CONFIG_PATH
+
+export DIR=$(cd $(dirname $0) && pwd)
+export HADOOP_DOCKER_DIR=$DIR/../examples/quickstart/tutorial/hadoop/docker
+export DOCKERDIR=$DIR/docker
+export SERVICE_SUPERVISORDS_DIR=$DOCKERDIR/service-supervisords
+export ENVIRONMENT_CONFIGS_DIR=$DOCKERDIR/environment-configs
+export SHARED_DIR=${HOME}/shared
+export SUPERVISORDIR=/usr/lib/druid/conf
+export RESOURCEDIR=$DIR/src/test/resources
+
+# so docker IP addr will be known during docker build
+echo ${DOCKER_IP:=127.0.0.1} > $DOCKERDIR/docker_ip
+
+if !($DRUID_INTEGRATION_TEST_SKIP_BUILD_DOCKER); then
+  sh ./script/copy_resources.sh
+  sh ./script/docker_build_containers.sh
+fi
+
+if !($DRUID_INTEGRATION_TEST_SKIP_RUN_DOCKER); then
+  sh ./stop_cluster.sh
+  sh ./script/docker_run_cluster.sh
+fi
+
+if ($DRUID_INTEGRATION_TEST_START_HADOOP_DOCKER); then
+  sh ./script/copy_hadoop_resources.sh

Review comment:
       In case anyone is looking at this,
    The hdfs-deep-storage test passes as-is in the current patch.
    I suspect that although we do start Druid before the hadoop_xml files get created, we do not run the ingestion task at that point.
    We do know where the files will be created, so the classpath is set correctly when Druid starts. Hence, by the time we run the Hadoop index ingestion task in the integration tests, those hadoop_xml files have already been created.
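    A minimal sketch of that ordering, assuming the shared hadoop_xml directory is what ends up on the classpath (HADOOP_CONF_DIR below is a hypothetical placeholder; the real wiring lives in the Docker environment configs):

        #!/usr/bin/env bash
        # Illustrative ordering only; HADOOP_CONF_DIR is an assumed placeholder,
        # not the actual property used by the integration-test setup.
        export SHARED_DIR=${HOME}/shared
        export HADOOP_CONF_DIR=$SHARED_DIR/hadoop_xml   # on Druid's classpath, still empty at startup

        # 1. Start the Druid cluster; its classpath already references $HADOOP_CONF_DIR.
        sh ./script/docker_run_cluster.sh

        # 2. Copy the Hadoop XML configs into the shared directory afterwards.
        sh ./script/copy_hadoop_resources.sh

        # 3. The test suite submits the Hadoop index ingestion task only after this,
        #    so the hadoop_xml files already exist when the task reads them.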




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


