Posted to issues@spark.apache.org by "Patrick Wendell (JIRA)" <ji...@apache.org> on 2015/04/14 00:12:12 UTC

[jira] [Comment Edited] (SPARK-6511) Publish "hadoop provided" build with instructions for different distros

    [ https://issues.apache.org/jira/browse/SPARK-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14493183#comment-14493183 ] 

Patrick Wendell edited comment on SPARK-6511 at 4/13/15 10:11 PM:
------------------------------------------------------------------

Just as an example, I tried to wire Spark to work with stock Hadoop 2.6. Here is how I got it running after doing a hadoop-provided build. This is pretty clunky, so I wonder if we should just support setting HADOOP_HOME or something similar, so we can automatically find and add the jar files present within that folder.

{code}
export SPARK_DIST_CLASSPATH=$(find /tmp/hadoop-2.6.0/ -name "*.jar" | tr "\n" ":")
./bin/spark-shell
{code}
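
For comparison, here is a rough sketch of what automatic HADOOP_HOME handling could look like in a launcher script. This is only an illustration of the idea, not a proposed patch; it assumes the {{hadoop}} launcher script is present under HADOOP_HOME, and otherwise falls back to the same jar-scanning approach as above.

{code}
# Sketch only: derive SPARK_DIST_CLASSPATH from HADOOP_HOME when it is set.
if [ -n "$HADOOP_HOME" ]; then
  if [ -x "$HADOOP_HOME/bin/hadoop" ]; then
    # "hadoop classpath" prints the colon-separated classpath for the install
    export SPARK_DIST_CLASSPATH=$("$HADOOP_HOME/bin/hadoop" classpath)
  else
    # Fall back to collecting every jar under the Hadoop install
    export SPARK_DIST_CLASSPATH=$(find "$HADOOP_HOME" -name "*.jar" | tr "\n" ":")
  fi
fi
./bin/spark-shell
{code}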

[~vanzin] for your CDH packages, what do you end up setting SPARK_DIST_CLASSPATH to?

/cc [~srowen]


was (Author: pwendell):
Just as an example, I tried to wire Spark to work with stock Hadoop 2.6. Here is how I got it running after doing a hadoop-provided build. This is pretty clunky, so I wonder if we should just support setting HADOOP_HOME or something similar, so we can automatically find and add the jar files present within that folder.

{code}
export SPARK_DIST_CLASSPATH=$(find /tmp/hadoop-2.6.0/ -name "*.jar" | tr "\n" ":")
./bin/spark-shell
{code}

[~vanzin] for your CDH packages, what do you end up setting SPARK_DIST_CLASSPATH to?

> Publish "hadoop provided" build with instructions for different distros
> -----------------------------------------------------------------------
>
>                 Key: SPARK-6511
>                 URL: https://issues.apache.org/jira/browse/SPARK-6511
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>            Reporter: Patrick Wendell
>
> Currently we publish a series of binaries with different Hadoop client jars. This mostly works, but some users have reported compatibility issues with different distributions.
> One improvement moving forward might be to publish a binary build that simply asks you to set HADOOP_HOME to pick up the Hadoop client location. That way it would work across multiple distributions, even if they have subtle incompatibilities with upstream Hadoop.
> I think a first step for this would be to produce such a build for the community and see how well it works. One potential issue is that our fancy excludes and dependency re-writing won't work with the simpler "append Hadoop's classpath to Spark" approach. Also, how we deal with the Hive dependency is unclear, i.e. should we continue to bundle Spark's Hive (which has some fixes for dependency conflicts), or should we allow linking against vanilla Hive at runtime?
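
For anyone who wants to try the hadoop-provided route described above, a rough sketch of producing and running such a build follows. The hadoop-provided Maven profile exists in Spark's pom.xml, but the exact make-distribution.sh flags may vary between Spark versions, and the snippet assumes a working {{hadoop}} command on the PATH.

{code}
# Sketch: build a Spark distribution without bundled Hadoop jars, then point
# it at an existing Hadoop install at runtime. Flags may vary by Spark version.
./make-distribution.sh --name hadoop-provided --tgz -Phadoop-provided

# At runtime, hand Spark the Hadoop classpath of the local install
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
./bin/spark-shell
{code}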



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
