Posted to issues@spark.apache.org by "Marco Capuccini (JIRA)" <ji...@apache.org> on 2016/06/20 11:30:05 UTC

[jira] [Commented] (SPARK-15941) Netty RPC implementation ignores the executor bind address

    [ https://issues.apache.org/jira/browse/SPARK-15941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15339367#comment-15339367 ] 

Marco Capuccini commented on SPARK-15941:
-----------------------------------------

[~srowen] is this a known issue? Or am I missing some configuration?

> Netty RPC implementation ignores the executor bind address
> ----------------------------------------------------------
>
>                 Key: SPARK-15941
>                 URL: https://issues.apache.org/jira/browse/SPARK-15941
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Marco Capuccini
>
> When using the Netty RPC implementation, which is the default in Spark 1.6.x, the executor addresses shown in the Spark application UI (the one on port 4040) are the IP addresses of the machines, even if I start the slaves with the -H option in order to bind each slave to its machine's hostname.
> This is a big deal when using Spark with HDFS, as the executor addresses need to match the hostnames of the DataNodes to achieve data locality.
> When setting spark.rpc=akka, everything works as expected, and the executor addresses in the Spark UI match the hostnames that the slaves are bound to.
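For reference, the setup and workaround described above can be sketched as follows. The hostnames, master URL, and Spark installation path are placeholders; the -H option and the spark.rpc property are taken from the report itself.

```shell
#!/bin/sh
# Sketch of the reported scenario (hostnames/paths are illustrative).

# Start a standalone worker bound to the DataNode's hostname, as the
# reporter does, so executor addresses should match HDFS hostnames:
#   ./sbin/start-slave.sh -H datanode1.example.com spark://master.example.com:7077

# Reported workaround: fall back to the Akka RPC backend in Spark 1.6.x,
# which makes the UI show the bound hostnames instead of IP addresses.
# Either pass it per-application:
#   spark-submit --conf spark.rpc=akka ...
# or set it once in the cluster-wide defaults:
echo "spark.rpc akka" >> conf/spark-defaults.conf
```

With spark.rpc left at its Netty default, the reporter observes IP addresses in the UI regardless of the bind option; only the Akka fallback restores hostname-based executor addresses.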



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org