Posted to commits@spark.apache.org by gu...@apache.org on 2021/07/05 00:18:37 UTC

[spark] branch branch-3.2 updated: [SPARK-36007][INFRA] Failed to run benchmark in GA

This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
     new 873f6b9  [SPARK-36007][INFRA] Failed to run benchmark in GA
873f6b9 is described below

commit 873f6b9d978a61a49c654e2009aa0e6031f75c2a
Author: Kevin Su <pi...@apache.org>
AuthorDate: Mon Jul 5 09:17:06 2021 +0900

    [SPARK-36007][INFRA] Failed to run benchmark in GA
    
    While running the benchmark in GA, I hit the error below.
    
    https://github.com/pingsutw/spark/runs/2867617238?check_suite_focus=true
    ```
    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
    21/06/20 07:40:02 ERROR SparkContext: Error initializing SparkContext.
    java.lang.AssertionError: assertion failed: spark.test.home is not set!
        at scala.Predef$.assert(Predef.scala:223)
        at org.apache.spark.deploy.worker.Worker.<init>(Worker.scala:148)
        at org.apache.spark.deploy.worker.Worker$.startRpcEnvAndEndpoint(Worker.scala:954)
        at org.apache.spark.deploy.LocalSparkCluster.$anonfun$start$2(LocalSparkCluster.scala:68)
        at org.apache.spark.deploy.LocalSparkCluster.$anonfun$start$2$adapted(LocalSparkCluster.scala:65)
        at scala.collection.immutable.Range.foreach(Range.scala:158)
        at org.apache.spark.deploy.LocalSparkCluster.start(LocalSparkCluster.scala:65)
        at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2954)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:559)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:137)
        at org.apache.spark.serializer.KryoSerializerBenchmark$.createSparkContext(KryoSerializerBenchmark.scala:86)
        at org.apache.spark.serializer.KryoSerializerBenchmark$.sc$lzycompute$1(KryoSerializerBenchmark.scala:58)
        at org.apache.spark.serializer.KryoSerializerBenchmark$.sc$1(KryoSerializerBenchmark.scala:58)
        at org.apache.spark.serializer.KryoSerializerBenchmark$.$anonfun$run$3(KryoSerializerBenchmark.scala:63)
    ```
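    
    For context, the assertion comes from Spark's standalone Worker: when the
    `spark.testing` system property is set (as it is for tests and benchmarks), the
    Worker refuses to guess its home directory and requires `spark.test.home`;
    outside of testing it falls back to the `SPARK_HOME` environment variable. A
    simplified sketch of that check, paraphrased from `Worker.scala` around the line
    in the trace (not the verbatim source):
    
    ```scala
    import java.io.File
    
    // Simplified sketch of the check behind "spark.test.home is not set!"
    // (paraphrased from org.apache.spark.deploy.worker.Worker; exact code may differ).
    class WorkerHomeSketch {
      // Test and benchmark runs set the "spark.testing" system property.
      private val testing: Boolean = sys.props.contains("spark.testing")
    
      val sparkHome: File =
        if (testing) {
          // Fails exactly like the log above when the property is missing.
          assert(sys.props.contains("spark.test.home"), "spark.test.home is not set!")
          new File(sys.props("spark.test.home"))
        } else {
          // Normal (non-test) fallback: the SPARK_HOME environment variable.
          new File(sys.env.getOrElse("SPARK_HOME", "."))
        }
    }
    ```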
    
    Set `spark.test.home` in the benchmark workflow.
    
    No user-facing change.
    
    Tested by rerunning the benchmark in my fork:
    https://github.com/pingsutw/spark/actions/runs/996067851
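    
    For a quicker check than a full GA run, the same code path can be exercised with
    a tiny standalone program (a hypothetical repro sketch, not part of this patch;
    the object name is made up): a `local-cluster[...]` master makes `SparkContext`
    start in-process standalone Master and Worker daemons, and each Worker runs the
    `spark.test.home` check shown above when `spark.testing` is set. Per the comment
    in the diff below, exporting `SPARK_HOME` in the benchmark job is what keeps
    `spark.test.home` from ending up unset in that environment.
    
    ```scala
    import org.apache.spark.{SparkConf, SparkContext}
    
    // Hypothetical repro (not part of this patch): run with -Dspark.testing=true and
    // with neither spark.test.home nor SPARK_HOME set to hit the same assertion.
    object LocalClusterRepro {
      def main(args: Array[String]): Unit = {
        // A local-cluster master launches real (in-process) standalone Master/Worker
        // daemons, unlike plain "local[*]" -- which is why the Worker check is reached.
        val conf = new SparkConf()
          .setMaster("local-cluster[1,1,1024]")
          .setAppName("spark.test.home repro")
        val sc = new SparkContext(conf) // fails here, during task scheduler creation
        try {
          sc.parallelize(1 to 100, 2).count() // if it gets this far, run a trivial job
        } finally {
          sc.stop()
        }
      }
    }
    ```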
    
    Closes #33203 from pingsutw/SPARK-36007.
    
    Lead-authored-by: Kevin Su <pi...@apache.org>
    Co-authored-by: Kevin Su <pi...@gmail.com>
    Signed-off-by: Hyukjin Kwon <gu...@apache.org>
    (cherry picked from commit 11fcbc73cbcbb1bdf5ba5d90eba0aba1edebb15d)
    Signed-off-by: Hyukjin Kwon <gu...@apache.org>
---
 .github/workflows/benchmark.yml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.github/workflows/benchmark.yml b/.github/workflows/benchmark.yml
index 76ae152..9a599b9 100644
--- a/.github/workflows/benchmark.yml
+++ b/.github/workflows/benchmark.yml
@@ -48,6 +48,8 @@ jobs:
       SPARK_BENCHMARK_CUR_SPLIT: ${{ matrix.split }}
       SPARK_GENERATE_BENCHMARK_FILES: 1
       SPARK_LOCAL_IP: localhost
+      # To prevent spark.test.home not being set. See more detail in SPARK-36007.
+      SPARK_HOME: ${{ github.workspace }}
     steps:
     - name: Checkout Spark repository
       uses: actions/checkout@v2

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org