Posted to issues@spark.apache.org by "Stavros Kontopoulos (JIRA)" <ji...@apache.org> on 2016/01/01 16:02:39 UTC

[jira] [Commented] (SPARK-11714) Make Spark on Mesos honor port restrictions

    [ https://issues.apache.org/jira/browse/SPARK-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15076309#comment-15076309 ] 

Stavros Kontopoulos commented on SPARK-11714:
---------------------------------------------

Before moving on with a PR, I was thinking of the following approach:

Check whether spark.executor.port is set. If it is, verify that the port falls within the
offered port range; otherwise decline the offer.
If spark.executor.port is empty, that means a random port (default 0); picking it is an
OS facility, not a Spark convention. In that case, pick a random port within the offered
range instead (see the sketch below).
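
A rough sketch of that check, just to make the idea concrete. It is plain Scala, not tied
to the exact Mesos protobuf accessors; the (begin, end) pairs stand in for the offered
"ports" ranges, and the names are illustrative:

{code:scala}
import scala.util.Random

object PortCheckSketch {
  // Returns Some(port) if the offer can satisfy the request, None => decline the offer.
  def pickExecutorPort(requestedPort: Int, offeredRanges: Seq[(Long, Long)]): Option[Long] = {
    if (requestedPort != 0) {
      // spark.executor.port is set: it must fall inside one of the offered ranges.
      if (offeredRanges.exists { case (b, e) => requestedPort >= b && requestedPort <= e }) {
        Some(requestedPort.toLong)
      } else {
        None  // port not offered, refuse this offer
      }
    } else {
      // spark.executor.port is empty (default 0, i.e. OS-assigned): instead of letting the
      // OS choose, pick a random port from the first offered range.
      offeredRanges.headOption.map { case (b, e) => b + Random.nextInt((e - b + 1).toInt) }
    }
  }
}
{code}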
MesosSchedulerBackend could then pass, along with the rest of the ExecutorInfo, some
information about the allowed ports, so that MesosExecutorBackend can initialize its port
to the specified value.
We could pass that value (the offered range of ports, or the chosen port) in the data
field (protobuf) of the ExecutorInfo structure, where the executor's command-line
arguments are currently passed (not so clean). We could then use it for the actual port
initialization (deep down the port is used by NettyRpcEnv) and remove it from that
argument list. A sketch of the encoding follows below.
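
A minimal sketch of that encode/decode step, assuming we simply serialize the chosen port
into the bytes that end up in ExecutorInfo's data field. The object and method names are
illustrative, not the actual Spark code; the real plumbing would live in
MesosSchedulerBackend (encode) and MesosExecutorBackend (decode):

{code:scala}
import java.nio.ByteBuffer

object ExecutorPortData {
  // Scheduler side: encode the port the executor should bind to.
  def encode(port: Long): Array[Byte] = {
    val buf = ByteBuffer.allocate(java.lang.Long.BYTES)
    buf.putLong(port)
    buf.array()
  }

  // Executor side: decode it before the RpcEnv (NettyRpcEnv) is created, so the
  // executor binds to the negotiated port instead of an OS-assigned one.
  def decode(data: Array[Byte]): Long = ByteBuffer.wrap(data).getLong
}
{code}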


> Make Spark on Mesos honor port restrictions
> -------------------------------------------
>
>                 Key: SPARK-11714
>                 URL: https://issues.apache.org/jira/browse/SPARK-11714
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>            Reporter: Charles Allen
>
> Currently the MesosSchedulerBackend does not make any effort to honor the "ports" resource in a Mesos offer. The ask is for the ports to which the executor binds to honor the limits of the "ports" resource of an offer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org