Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/01/27 11:12:40 UTC

[jira] [Resolved] (SPARK-6516) Coupling between default Hadoop versions in Spark build vs. ec2 scripts

     [ https://issues.apache.org/jira/browse/SPARK-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-6516.
------------------------------
    Resolution: Won't Fix

> Coupling between default Hadoop versions in Spark build vs. ec2 scripts
> -----------------------------------------------------------------------
>
>                 Key: SPARK-6516
>                 URL: https://issues.apache.org/jira/browse/SPARK-6516
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build, EC2
>    Affects Versions: 1.3.0
>            Reporter: Joseph K. Bradley
>            Priority: Minor
>
> When we change the default Hadoop version in the Spark build conf and/or in the EC2 scripts, we should keep the two in sync.  (When they are out of sync, users may be surprised if they create an EC2 cluster, compile Spark on it, and try to run that version of Spark.)  It would be great to have this defined in a single place.
> An even better fix might be for the Spark build to check which Hadoop version is available and adjust the default accordingly.
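A minimal sketch of what such auto-detection could look like, assuming a "hadoop" binary on the PATH and the build's standard -Dhadoop.version property; the helper name and parsing logic below are hypothetical, not part of the actual Spark build:

    # Hypothetical sketch: detect the locally installed Hadoop version and
    # pass it to the Spark build so the two defaults cannot drift apart.
    import re
    import subprocess

    def detect_hadoop_version():
        """Return the version reported by `hadoop version`, or None if unavailable."""
        try:
            out = subprocess.check_output(["hadoop", "version"]).decode("utf-8")
        except (OSError, subprocess.CalledProcessError):
            return None
        match = re.search(r"^Hadoop (\S+)", out, re.MULTILINE)
        return match.group(1) if match else None

    if __name__ == "__main__":
        version = detect_hadoop_version()
        if version:
            # e.g. build/mvn -Dhadoop.version=2.4.0 -DskipTests package
            subprocess.check_call(
                ["build/mvn", "-Dhadoop.version=" + version, "-DskipTests", "package"])
        else:
            print("No hadoop binary found; using the build's default Hadoop version.")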



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org