Posted to commits@spark.apache.org by do...@apache.org on 2021/11/14 09:03:17 UTC

[spark] branch master updated: [SPARK-37318][CORE][TESTS] Make `FallbackStorageSuite` robust in terms of DNS

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 41f9df9  [SPARK-37318][CORE][TESTS] Make `FallbackStorageSuite` robust in terms of DNS
41f9df9 is described below

commit 41f9df92061ab96ce7729f0e2a107a3569046c58
Author: Dongjoon Hyun <dh...@apple.com>
AuthorDate: Sun Nov 14 01:02:12 2021 -0800

    [SPARK-37318][CORE][TESTS] Make `FallbackStorageSuite` robust in terms of DNS
    
    ### What changes were proposed in this pull request?
    
    This PR aims to make `FallbackStorageSuite` robust in terms of DNS.
    
    ### Why are the changes needed?
    
    The test case expects the hostname not to resolve, and in a typical DNS environment it does not:
    ```
    $ nslookup remote
    Server:		8.8.8.8
    Address:	8.8.8.8#53
    
    ** server can't find remote: NXDOMAIN
    
    $ ping remote
    ping: cannot resolve remote: Unknown host
    ```
    
    However, in some DNS environments every hostname, including non-existent ones, resolves as if it were an existing host, so the suite fails:
    
    ```
    $ nslookup remote
    Server:		172.16.0.1
    Address:	172.16.0.1#53
    
    Non-authoritative answer:
    Name:	remote
    Address: 23.217.138.110
    
    $ ping remote
    PING remote (23.217.138.110): 56 data bytes
    64 bytes from 23.217.138.110: icmp_seq=0 ttl=57 time=8.660 ms
    
    $ build/sbt "core/testOnly *.FallbackStorageSuite"
    ...
    [info] Run completed in 2 minutes, 31 seconds.
    [info] Total number of tests run: 9
    [info] Suites: completed 1, aborted 0
    [info] Tests: succeeded 3, failed 6, canceled 0, ignored 0, pending 0
    [info] *** 6 TESTS FAILED ***
    [error] Failed tests:
    [error] 	org.apache.spark.storage.FallbackStorageSuite
    [error] (core / Test / testOnly) sbt.TestsFailedException: Tests unsuccessful
    ```
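    
    Below is a minimal, self-contained sketch of the DNS probe idea behind the fix, not the suite code itself. The `DnsProbe` object is illustrative, and the hard-coded host name `remote` mirrors the `nslookup`/`ping` examples above; the suite actually probes `FallbackStorage.FALLBACK_BLOCK_MANAGER_ID.host`.
    
    ```scala
    import java.net.{InetAddress, UnknownHostException}
    
    object DnsProbe {
      // Returns true only when the host name does NOT resolve, i.e. the DNS
      // setup behaves as FallbackStorageSuite expects (NXDOMAIN for unknown hosts).
      def isUnresolvable(host: String): Boolean =
        try {
          InetAddress.getByName(host) // succeeds under wildcard DNS
          false
        } catch {
          case _: UnknownHostException => true
        }
    
      def main(args: Array[String]): Unit = {
        val host = "remote" // illustrative; the suite uses the fallback block manager's host
        if (isUnresolvable(host)) {
          println(s"'$host' does not resolve; the suite's tests can run")
        } else {
          println(s"'$host' resolves (wildcard DNS); the suite should skip via assume()")
        }
      }
    }
    ```
    
    Because the new guard calls ScalaTest's `assume(false)` when the name unexpectedly resolves, the affected tests are reported as canceled rather than failed, which is what the test result under "How was this patch tested?" shows.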
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    ```
    $ build/sbt "core/testOnly *.FallbackStorageSuite"
    ...
    [info] Run completed in 3 seconds, 322 milliseconds.
    [info] Total number of tests run: 3
    [info] Suites: completed 1, aborted 0
    [info] Tests: succeeded 3, failed 0, canceled 6, ignored 0, pending 0
    [info] All tests passed.
    [success] Total time: 22 s, completed Nov 13, 2021 7:11:31 PM
    ```
    
    Closes #34585 from dongjoon-hyun/SPARK-37318.
    
    Authored-by: Dongjoon Hyun <dh...@apple.com>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
---
 .../scala/org/apache/spark/storage/FallbackStorageSuite.scala     | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/core/src/test/scala/org/apache/spark/storage/FallbackStorageSuite.scala b/core/src/test/scala/org/apache/spark/storage/FallbackStorageSuite.scala
index 88197b6..7d648c9 100644
--- a/core/src/test/scala/org/apache/spark/storage/FallbackStorageSuite.scala
+++ b/core/src/test/scala/org/apache/spark/storage/FallbackStorageSuite.scala
@@ -17,6 +17,7 @@
 package org.apache.spark.storage
 
 import java.io.{DataOutputStream, File, FileOutputStream, IOException}
+import java.net.{InetAddress, UnknownHostException}
 import java.nio.file.Files
 
 import scala.concurrent.duration._
@@ -41,6 +42,13 @@ import org.apache.spark.util.Utils.tryWithResource
 class FallbackStorageSuite extends SparkFunSuite with LocalSparkContext {
 
   def getSparkConf(initialExecutor: Int = 1, minExecutor: Int = 1): SparkConf = {
+    // Some DNS always replies for all hostnames including unknown host names
+    try {
+      InetAddress.getByName(FallbackStorage.FALLBACK_BLOCK_MANAGER_ID.host)
+      assume(false)
+    } catch {
+      case _: UnknownHostException =>
+    }
     new SparkConf(false)
       .setAppName(getClass.getName)
       .set(SPARK_MASTER, s"local-cluster[$initialExecutor,1,1024]")

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org