Posted to issues@spark.apache.org by "Colin Patrick McCabe (JIRA)" <ji...@apache.org> on 2014/05/31 02:41:01 UTC
[jira] [Comment Edited] (SPARK-1954) Make it easier to get Spark on YARN code to compile in IntelliJ
[ https://issues.apache.org/jira/browse/SPARK-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014425#comment-14014425 ]
Colin Patrick McCabe edited comment on SPARK-1954 at 5/31/14 12:40 AM:
-----------------------------------------------------------------------
I've run into this too. Is there a way to get the {{yarn}} profile to set a default for {{hadoop.version}}? Since the yarn profile (unlike the yarn-alpha profile) requires hadoop 2.2+ to build, it seems like this is the behavior everyone would want.
was (Author: cmccabe):
I've run into this too. Is there a way to get the {{yarn}} profile to set a default for {{hadoop.version}}? Since the yarn profile (unless the yarn-alpha profile) requires hadoop 2.2+ to build, it seems like this is the behavior everyone would want.
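A sketch of what the comment is asking about (version number illustrative, not from the issue): a profile can declare {{hadoop.version}} in its {{<properties>}} block, which acts as a default while the profile is active, and a {{-Dhadoop.version=...}} on the command line still takes precedence over a profile property:

```xml
<!-- Sketch only: a default hadoop.version supplied by the yarn profile itself. -->
<profile>
  <id>yarn</id>
  <properties>
    <hadoop.version>2.2.0</hadoop.version>
  </properties>
  <!-- ...existing modules/dependencies of the yarn profile... -->
</profile>
```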
> Make it easier to get Spark on YARN code to compile in IntelliJ
> ---------------------------------------------------------------
>
> Key: SPARK-1954
> URL: https://issues.apache.org/jira/browse/SPARK-1954
> Project: Spark
> Issue Type: Improvement
> Components: Build
> Affects Versions: 1.0.0
> Reporter: Sandy Ryza
>
> When loading a project through a Maven pom, IntelliJ allows switching on profiles, but, to my knowledge, doesn't provide a way to set arbitrary properties.
> To get Spark-on-YARN code to compile in IntelliJ, I need to manually change the hadoop.version in the root pom.xml to 2.2.0 or higher. This is very cumbersome when switching branches.
> It would be really helpful to add a profile that sets the Hadoop version that IntelliJ can switch on.
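One way the suggested profile could look (profile id and version are hypothetical, not from the issue): a profile that does nothing except pin {{hadoop.version}}, so IntelliJ's Maven tool window can toggle it alongside {{yarn}} instead of the root pom.xml being edited by hand:

```xml
<!-- Hypothetical profile: selecting it in IntelliJ pins hadoop.version
     to a YARN-compatible release without touching the root pom.xml. -->
<profile>
  <id>hadoop-2.2</id>
  <properties>
    <hadoop.version>2.2.0</hadoop.version>
  </properties>
</profile>
```

On the command line the same combination would be selected with something like {{mvn -Pyarn,hadoop-2.2 compile}}.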
--
This message was sent by Atlassian JIRA
(v6.2#6252)