Posted to issues@spark.apache.org by "Patrick Wendell (JIRA)" <ji...@apache.org> on 2014/09/17 06:55:33 UTC

[jira] [Comment Edited] (SPARK-3561) Native Hadoop/YARN integration for batch/ETL workloads

    [ https://issues.apache.org/jira/browse/SPARK-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14136777#comment-14136777 ] 

Patrick Wendell edited comment on SPARK-3561 at 9/17/14 4:54 AM:
-----------------------------------------------------------------

Hey [~ozhurakousky] - could you provide a more complete description of the proposal here? Traditionally, the SparkContext API itself has not been designed to be a pluggable component. Instead we have a pluggable resource management subsystem, and that would be the natural place to add support for things like elastic scaling, etc.

There is also work going on in SPARK-3174 in this regard relating to YARN elasticity.

It would be good to have an end-to-end proposal of what you are envisioning, and a comparison to the work in SPARK-3174 (that's an active JIRA btw, so keep an eye out for new designs there).



> Native Hadoop/YARN integration for batch/ETL workloads
> ------------------------------------------------------
>
>                 Key: SPARK-3561
>                 URL: https://issues.apache.org/jira/browse/SPARK-3561
>             Project: Spark
>          Issue Type: New Feature
>          Components: core
>    Affects Versions: 1.1.0
>            Reporter: Oleg Zhurakousky
>              Labels: features
>             Fix For: 1.2.0
>
>         Attachments: SPARK-3561.pdf
>
>
> Currently Spark provides integration with external resource-managers such as Apache Hadoop YARN, Mesos etc. Specifically in the context of YARN, the current architecture of Spark-on-YARN can be enhanced to provide significantly better utilization of cluster resources for large scale, batch and/or ETL applications when run alongside other applications (Spark and others) and services in YARN. 
> Proposal:
> The proposed approach would introduce a pluggable JobExecutionContext (trait) - a gateway and a delegate to the Hadoop execution environment - as a non-public API (@DeveloperApi) not exposed to end users of Spark.
> The trait will define only 4 operations:
> * hadoopFile
> * newAPIHadoopFile
> * broadcast
> * runJob
> Each method directly maps to the corresponding method in the current version of SparkContext. The JobExecutionContext implementation will be accessed by SparkContext via the master URL, as in "execution-context:foo.bar.MyJobExecutionContext", with the default implementation containing the existing code from SparkContext, thus allowing the current (corresponding) methods of SparkContext to delegate to that implementation. An integrator will then have the option to provide a custom implementation, either by implementing the trait from scratch or by extending DefaultExecutionContext (see the illustrative sketch below).
> Please see the attached design doc for more details.
> A pull request will be posted shortly as well.
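
For illustration only, here is a minimal sketch of what such a JobExecutionContext trait might look like. The method names come from the list above; the parameter lists are assumptions, simplified from the corresponding SparkContext methods, and the actual trait is defined in the attached design doc and forthcoming pull request, not here.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.mapred.InputFormat
    import org.apache.hadoop.mapreduce.{InputFormat => NewInputFormat}
    import org.apache.spark.{SparkContext, TaskContext}
    import org.apache.spark.broadcast.Broadcast
    import org.apache.spark.rdd.RDD
    import scala.reflect.ClassTag

    // Hypothetical shape of the proposed trait; signatures are simplified
    // approximations of the matching SparkContext methods.
    trait JobExecutionContext {

      // Read a file using the old (mapred) Hadoop InputFormat API.
      def hadoopFile[K, V](
          sc: SparkContext,
          path: String,
          inputFormatClass: Class[_ <: InputFormat[K, V]],
          keyClass: Class[K],
          valueClass: Class[V],
          minPartitions: Int): RDD[(K, V)]

      // Read a file using the new (mapreduce) Hadoop InputFormat API.
      def newAPIHadoopFile[K, V, F <: NewInputFormat[K, V]](
          sc: SparkContext,
          path: String,
          fClass: Class[F],
          kClass: Class[K],
          vClass: Class[V],
          conf: Configuration): RDD[(K, V)]

      // Broadcast a read-only value to the executors.
      def broadcast[T: ClassTag](sc: SparkContext, value: T): Broadcast[T]

      // Run a job over the given partitions of an RDD, handing each
      // partition's result to resultHandler.
      def runJob[T, U: ClassTag](
          sc: SparkContext,
          rdd: RDD[T],
          func: (TaskContext, Iterator[T]) => U,
          partitions: Seq[Int],
          resultHandler: (Int, U) => Unit): Unit
    }

Per the description above, an application would select an implementation through the master URL (foo.bar.MyJobExecutionContext is a placeholder class name), for example:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("etl-job")
      .setMaster("execution-context:foo.bar.MyJobExecutionContext")
    val sc = new SparkContext(conf)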



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org