Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/08/16 14:02:08 UTC

[GitHub] [hudi] nsivabalan commented on a change in pull request #3482: [HUDI-2267] update docs and infra test configs, add support for graphite

nsivabalan commented on a change in pull request #3482:
URL: https://github.com/apache/hudi/pull/3482#discussion_r689564878



##########
File path: docker/generate_test_suite.sh
##########
@@ -16,6 +16,28 @@
 #  See the License for the specific language governing permissions and
 # limitations under the License.
 
+usage="

Review comment:
       Have you tried using this script (generate_test_suite.sh) in EMR? 

##########
File path: hudi-integ-test/README.md
##########
@@ -177,20 +177,13 @@ cd /opt
 Copy the integration tests jar into the docker container
 
 ```
-docker cp packaging/hudi-integ-test-bundle/target/hudi-integ-test-bundle-0.8.0-SNAPSHOT.jar adhoc-2:/opt
+docker cp packaging/hudi-integ-test-bundle/target/hudi-integ-test-bundle-0.9.0-SNAPSHOT.jar adhoc-2:/opt
 ```
 
 ```
 docker exec -it adhoc-2 /bin/bash
 ```
 
-Clean the working directories before starting a new test:

Review comment:
       Why was this removed? 

##########
File path: docker/generate_test_suite.sh
##########
@@ -16,6 +16,28 @@
 #  See the License for the specific language governing permissions and
 # limitations under the License.
 
+usage="
+USAGE:
+$(basename "$0") [--help] [--all boolean] -- Script to generate the test suite according to arguments provided and run these test suites.

Review comment:
       Can we add some example commands? 
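   For example, something like this (flag values are illustrative; the flags themselves come from the usage text in this patch):
   ```
   ./generate_test_suite.sh --table_type COPY_ON_WRITE --medium_num_iterations 20 --include_medium_test_suite_yaml true
   ./generate_test_suite.sh --table_type MERGE_ON_READ --include_long_test_suite_yaml true --long_num_iterations 50 --intermittent_delay_mins 5
   ```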

##########
File path: hudi-integ-test/README.md
##########
@@ -253,23 +254,119 @@ spark-submit \
 --conf spark.network.timeout=600s \
 --conf spark.yarn.max.executor.failures=10 \
 --conf spark.sql.catalogImplementation=hive \
+--conf spark.driver.extraClassPath=/var/demo/jars/* \
+--conf spark.executor.extraClassPath=/var/demo/jars/* \
 --class org.apache.hudi.integ.testsuite.HoodieTestSuiteJob \
-/opt/hudi-integ-test-bundle-0.8.0-SNAPSHOT.jar \
+/opt/hudi-integ-test-bundle-0.9.0-SNAPSHOT.jar \
 --source-ordering-field test_suite_source_ordering_field \
 --use-deltastreamer \
 --target-base-path /user/hive/warehouse/hudi-integ-test-suite/output \
 --input-base-path /user/hive/warehouse/hudi-integ-test-suite/input \
 --target-table table1 \
 --props file:/var/hoodie/ws/docker/demo/config/test-suite/test.properties \
---schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider \
+--schemaprovider-class org.apache.hudi.integ.testsuite.schema.TestSuiteFileBasedSchemaProvider \
 --source-class org.apache.hudi.utilities.sources.AvroDFSSource \
 --input-file-size 125829120 \
 --workload-yaml-path file:/var/hoodie/ws/docker/demo/config/test-suite/complex-dag-mor.yaml \
 --workload-generator-classname org.apache.hudi.integ.testsuite.dag.WorkflowDagGenerator \
 --table-type MERGE_ON_READ \
---compact-scheduling-minshare 1
+--compact-scheduling-minshare 1 \
+--hoodie-conf hoodie.metrics.on=true \
+--hoodie-conf hoodie.metrics.reporter.type=GRAPHITE \
+--hoodie-conf hoodie.metrics.graphite.host=graphite \
+--hoodie-conf hoodie.metrics.graphite.port=2003 \
+--clean-input \
+--clean-output
 ``` 
 
+## Visualize and inspect the hoodie metrics and performance (local)
+A Graphite server is already set up (and running) as part of ```docker/setup_demo.sh```. 
+
+Open a browser and access the metrics at
+```
+http://localhost:80
+```
+Dashboard:
+```
+http://localhost/dashboard
+```
+
+## Running test suite on an EMR cluster
+- Copy the necessary files and jars to your cluster.

Review comment:
       Can we call out the files that are required? 
   If running manually, we need the following files:
   test.properties, 
   the source and target schema files,
   the yaml file,
   the integ-test-suite bundle jar. 
   
   But with generate_test_suite.sh, it may not be clear what needs to be copied over. 
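   Something along these lines could make it concrete (destination host/path is illustrative; the exact schema file names depend on what test.properties references):
   ```
   scp packaging/hudi-integ-test-bundle/target/hudi-integ-test-bundle-0.9.0-SNAPSHOT.jar \
       docker/demo/config/test-suite/test.properties \
       docker/demo/config/test-suite/complex-dag-mor.yaml \
       hadoop@<emr-master>:/home/hadoop/
   ```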
   

##########
File path: docker/generate_test_suite.sh
##########
@@ -16,6 +16,28 @@
 #  See the License for the specific language governing permissions and
 # limitations under the License.
 
+usage="
+USAGE:
+$(basename "$0") [--help] [--all boolean] -- Script to generate the test suite according to arguments provided and run these test suites.
+
+where:
+    --help  show this help text
+    --all  set the seed value
+    --execute_test_suite  flag if test need to execute (DEFAULT- true)
+    --medium_num_iterations  number of medium iterations (DEFAULT- 20)
+    --long_num_iterations  number of long iterations (DEFAULT- 30)
+    --intermittent_delay_mins  delay after every test run (DEFAULT- 1)
+    --table_type  hoodie table type to test (DEFAULT COPY_ON_WRITE)
+    --include_long_test_suite_yaml  include long infra test suite (DEFAULT false)
+    --include_medium_test_suite_yaml  include medium infra test suite (DEFAULT false)
+    --cluster_num_itr  number of cluster iterations (DEFAULT 30)
+    --include_cluster_yaml  include cluster infra test suite (DEFAULT false)
+    --include_cluster_yaml  include cluster infra test suite (DEFAULT false)

Review comment:
       Remove the repeated line:
   --include_cluster_yaml

##########
File path: docker/compose/docker-compose_hadoop284_hive233_spark244.yml
##########
@@ -201,25 +201,34 @@ services:
     command: coordinator
 
   presto-worker-1:
-      container_name: presto-worker-1
-      hostname: presto-worker-1
-      image: apachehudi/hudi-hadoop_2.8.4-prestobase_0.217:latest
-      depends_on: ["presto-coordinator-1"]
-      environment:
-        - PRESTO_JVM_MAX_HEAP=512M
-        - PRESTO_QUERY_MAX_MEMORY=1GB
-        - PRESTO_QUERY_MAX_MEMORY_PER_NODE=256MB
-        - PRESTO_QUERY_MAX_TOTAL_MEMORY_PER_NODE=384MB
-        - PRESTO_MEMORY_HEAP_HEADROOM_PER_NODE=100MB
-        - TERM=xterm
-      links:
-        - "hivemetastore"
-        - "hiveserver"
-        - "hive-metastore-postgresql"
-        - "namenode"
-      volumes:
-        - ${HUDI_WS}:/var/hoodie/ws
-      command: worker
+    container_name: presto-worker-1

Review comment:
       Can we revert the unintended changes?

##########
File path: docker/generate_test_suite.sh
##########
@@ -16,6 +16,28 @@
 #  See the License for the specific language governing permissions and
 # limitations under the License.
 
+usage="
+USAGE:
+$(basename "$0") [--help] [--all boolean] -- Script to generate the test suite according to arguments provided and run these test suites.

Review comment:
       You can assume some sample s3 path, e.g.:
   s3://hudi_test_bucket/
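   For instance, a sample copy of the configs to that bucket (source paths taken from the demo setup in this repo; the bucket name is just a placeholder):
   ```
   aws s3 cp docker/demo/config/test-suite/test.properties s3://hudi_test_bucket/config/
   aws s3 cp docker/demo/config/test-suite/complex-dag-mor.yaml s3://hudi_test_bucket/config/
   ```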




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org