Posted to user@spark.apache.org by Thomas Dudziak <to...@gmail.com> on 2015/10/14 23:40:30 UTC

IPv6 regression in Spark 1.5.1

It looks like Spark 1.5.1 does not work with IPv6. When
adding -Djava.net.preferIPv6Addresses=true on my dual-stack server, the
driver fails with:

15/10/14 14:36:01 ERROR SparkContext: Error initializing SparkContext.
java.lang.AssertionError: assertion failed: Expected hostname
at scala.Predef$.assert(Predef.scala:179)
at org.apache.spark.util.Utils$.checkHost(Utils.scala:805)
at org.apache.spark.storage.BlockManagerId.<init>(BlockManagerId.scala:48)
at org.apache.spark.storage.BlockManagerId$.apply(BlockManagerId.scala:107)
at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:190)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:528)
at
org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)

Looking at the checkHost method, it clearly does not work for IPv6, since it
assumes that ':' cannot be part of a hostname. I think this code should use
Guava's HostAndPort or related classes to handle both IPv4 and IPv6 properly
(other parts of Utils already use Guava).

cheers,
Tom

Re: IPv6 regression in Spark 1.5.1

Posted by Thomas Dudziak <to...@gmail.com>.
Specifically, something like this should probably do the trick:

  import com.google.common.net.HostAndPort

  def checkHost(host: String, message: String = "") {
    // A bare host (including an IPv6 literal) parses without a port.
    assert(!HostAndPort.fromString(host).hasPort, message)
  }

  def checkHostPort(hostPort: String, message: String = "") {
    // A host:port pair (including "[ipv6]:port") parses with a port.
    assert(HostAndPort.fromString(hostPort).hasPort, message)
  }

