Posted to issues@spark.apache.org by "Marco Capuccini (JIRA)" <ji...@apache.org> on 2016/06/14 09:56:01 UTC

[jira] [Created] (SPARK-15941) Spark executor address always binds to IP address when using Netty RPC implementation

Marco Capuccini created SPARK-15941:
---------------------------------------

             Summary: Spark executor address always binds to IP address when using Netty RPC implementation
                 Key: SPARK-15941
                 URL: https://issues.apache.org/jira/browse/SPARK-15941
             Project: Spark
          Issue Type: Bug
            Reporter: Marco Capuccini


When using the Netty RPC implementation, which is the default in Spark 1.6.x, the executor addresses shown in the Spark application UI (the one on port 4040) are the machines' IP addresses, even if I start the slaves with the -H option in order to bind each slave to the machine's hostname.
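
For reference, this is roughly how each slave is started; a minimal sketch, assuming the -H host option described above (SPARK_HOME and the master URL are placeholders):

    # Start a worker bound to the machine's hostname instead of its IP;
    # spark://master-host:7077 is a placeholder master URL
    $SPARK_HOME/sbin/start-slave.sh -H $(hostname) spark://master-host:7077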

This is a big deal when using Spark with HDFS, as the executor addresses need to match the DataNode hostnames to achieve data locality.

When setting spark.rpc=akka, everything works as expected, and the executor addresses in the Spark UI match the hostnames that the slaves are bound to.
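
For completeness, the workaround can be applied per application or cluster-wide; a minimal sketch, with a placeholder master URL and application jar:

    # Per application, at submit time:
    spark-submit --conf spark.rpc=akka --master spark://master-host:7077 my-app.jar

    # Or cluster-wide, by adding this line to conf/spark-defaults.conf:
    # spark.rpc    akka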



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
