Posted to issues@spark.apache.org by "DB Tsai (JIRA)" <ji...@apache.org> on 2018/07/11 22:50:00 UTC

[jira] [Assigned] (SPARK-24764) Add ServiceLoader implementation for SparkHadoopUtil

     [ https://issues.apache.org/jira/browse/SPARK-24764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

DB Tsai reassigned SPARK-24764:
-------------------------------

    Assignee: Shruti Gumma

> Add ServiceLoader implementation for SparkHadoopUtil
> ----------------------------------------------------
>
>                 Key: SPARK-24764
>                 URL: https://issues.apache.org/jira/browse/SPARK-24764
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.3.0, 2.3.1
>            Reporter: Shruti Gumma
>            Assignee: Shruti Gumma
>            Priority: Major
>
> Currently, the SparkHadoopUtil instance is created statically and cannot be changed. This proposal moves the creation of SparkHadoopUtil behind a ServiceLoader so that external cluster managers can supply their own implementation.
> In the case of YARN, the YARN packages use a specific YarnSparkHadoopUtil, whereas SparkHadoopUtil is used everywhere else. SparkHadoopUtil was changed in v2.3.0 to work with YARN, leaving external cluster managers with no configurable way of doing the same.
> Utility classes such as SparkHadoopUtil should be made configurable, similar to how ExternalClusterManager was made pluggable through a ServiceLoader; a sketch of such a lookup follows.
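> As a rough illustration only (the trait SparkHadoopUtilProvider, the object SparkHadoopUtilLoader, and the methods canCreate/createHadoopUtil below are hypothetical names, not existing Spark APIs), a ServiceLoader-based lookup analogous to the ExternalClusterManager discovery might look like this:
> {code:scala}
> import java.util.ServiceLoader
> import scala.collection.JavaConverters._
>
> // Hypothetical SPI: an external cluster manager would register an implementation
> // of this trait in META-INF/services, mirroring how ExternalClusterManager
> // implementations are discovered today.
> trait SparkHadoopUtilProvider {
>   /** Returns true if this provider applies to the given master URL. */
>   def canCreate(master: String): Boolean
>   /** Builds the cluster-manager-specific SparkHadoopUtil-like instance. */
>   def createHadoopUtil(): AnyRef // stand-in for the real SparkHadoopUtil type
> }
>
> object SparkHadoopUtilLoader {
>   // Pick the first registered provider that claims the master URL; fall back
>   // to the default factory when none is registered.
>   def load(master: String, default: () => AnyRef): AnyRef = {
>     val loader = Thread.currentThread().getContextClassLoader
>     ServiceLoader.load(classOf[SparkHadoopUtilProvider], loader)
>       .asScala
>       .find(_.canCreate(master))
>       .map(_.createHadoopUtil())
>       .getOrElse(default())
>   }
> }
> {code}
> A provider jar would then ship a META-INF/services file named after the fully qualified provider trait, listing its implementation class, and Spark would call a loader like the one above at the point where it currently instantiates SparkHadoopUtil directly.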



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org