Posted to commits@spark.apache.org by tg...@apache.org on 2014/03/27 17:54:56 UTC

git commit: SPARK-1330 removed extra echo from compute-classpath.sh

Repository: spark
Updated Branches:
  refs/heads/master 5b2d863e3 -> 426042ad2


SPARK-1330 removed extra echo from compute-classpath.sh

Remove the extra echo, which prevents spark-class from working. Note that I did not update the comment above it, which is also wrong, because I'm not sure what it should do.
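
For context: bin/spark-class assembles its classpath by capturing the stdout of bin/compute-classpath.sh, so anything else the script prints ends up inside the classpath string. A minimal sketch of the failure mode (simplified shell, not the exact Spark code):

  # compute-classpath.sh (simplified): its stdout is the contract
  CLASSPATH="$SPARK_CLASSPATH:$FWDIR/conf"
  echo "Hive assembly found, including hive support."  # stray message pollutes stdout
  echo "$CLASSPATH"                                    # the single line callers expect

  # spark-class (simplified): captures everything printed above
  CLASSPATH=$("$FWDIR/bin/compute-classpath.sh")
  # CLASSPATH now begins with "Hive assembly found, ...", so the JVM is
  # launched with a bogus classpath and spark-class stops working.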

Should Hive only be included if it is explicitly built with sbt hive/assembly, or should sbt assembly build it?

Author: Thomas Graves <tg...@apache.org>

Closes #241 from tgravescs/SPARK-1330 and squashes the following commits:

b10d708 [Thomas Graves] SPARK-1330 removed extra echo from compute-classpath.sh


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/426042ad
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/426042ad
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/426042ad

Branch: refs/heads/master
Commit: 426042ad24a54b4b776085cbf4e1896464efc613
Parents: 5b2d863
Author: Thomas Graves <tg...@apache.org>
Authored: Thu Mar 27 11:54:43 2014 -0500
Committer: Thomas Graves <tg...@apache.org>
Committed: Thu Mar 27 11:54:43 2014 -0500

----------------------------------------------------------------------
 bin/compute-classpath.sh | 1 -
 1 file changed, 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/426042ad/bin/compute-classpath.sh
----------------------------------------------------------------------
diff --git a/bin/compute-classpath.sh b/bin/compute-classpath.sh
index d6f1ff9..bef42df 100755
--- a/bin/compute-classpath.sh
+++ b/bin/compute-classpath.sh
@@ -36,7 +36,6 @@ CLASSPATH="$SPARK_CLASSPATH:$FWDIR/conf"
 # Hopefully we will find a way to avoid uber-jars entirely and deploy only the needed packages in
 # the future.
 if [ -f "$FWDIR"/sql/hive/target/scala-$SCALA_VERSION/spark-hive-assembly-*.jar ]; then
-  echo "Hive assembly found, including hive support.  If this isn't desired run sbt hive/clean."
 
   # Datanucleus jars do not work if only included in the uberjar as plugin.xml metadata is lost.
   DATANUCLEUSJARS=$(JARS=("$FWDIR/lib_managed/jars"/datanucleus-*.jar); IFS=:; echo "${JARS[*]}")
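
Side note on the surviving context above: the DATANUCLEUSJARS assignment uses a compact bash idiom, globbing the Datanucleus jars into an array and joining the elements with ":" by setting IFS inside a command substitution. A standalone sketch of the same idiom (illustrative only, paths as in the script):

  # Glob matching jars into an array, then join them with ":" for a classpath.
  # The array assignment and the IFS change run inside $(...), i.e. a subshell,
  # so the caller's IFS is left untouched.
  DATANUCLEUSJARS=$(JARS=("$FWDIR/lib_managed/jars"/datanucleus-*.jar); IFS=:; echo "${JARS[*]}")
  echo "$DATANUCLEUSJARS"   # e.g. /path/datanucleus-core.jar:/path/datanucleus-rdbms.jar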