Posted to user@spark.apache.org by jaykatukuri <jk...@apple.com> on 2015/03/18 19:09:58 UTC

Using different Spark jars than the ones on the cluster

Hi all,
I am trying to run a job that needs spark-sql_2.11-1.3.0.jar.
The cluster that I am running on is still on Spark 1.2.0.

I tried the following:

spark-submit --class class-name --num-executors 100 --master yarn
application_jar --jars hdfs:///path/spark-sql_2.11-1.3.0.jar
hdfs:///input_data

But this did not work; I get an error saying it cannot find a
class/method that is in spark-sql_2.11-1.3.0.jar:

org.apache.spark.sql.SQLContext.implicits()Lorg/apache/spark/sql/SQLContext$implicits$

The question in general is: how do we use a different version of the
Spark jars (spark-core, spark-sql, spark-ml, etc.) than the ones running
on the cluster?
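
One thing I noticed afterwards: spark-submit treats everything after the
application jar as arguments to the application itself, so --jars
presumably has to come before the jar. A sketch with the arguments
reordered (same placeholder paths as above):

# --jars moved in front of the application jar; anything after the jar
# is passed to the application, not to spark-submit
spark-submit --class class-name \
  --num-executors 100 \
  --master yarn \
  --jars hdfs:///path/spark-sql_2.11-1.3.0.jar \
  application_jar \
  hdfs:///input_data

But even with --jars first, I assume those jars are only appended to the
classpath, so the cluster's Spark 1.2.0 classes would still be found
first - which would explain the NoSuchMethodError above, given that
SQLContext.implicits only exists from Spark 1.3.0 on.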

Thanks,
Jay

Re: Using different Spark jars than the ones on the cluster

Posted by Denny Lee <de...@gmail.com>.
+1 - I am currently doing what Marcelo suggests: I have a CDH 5.2
cluster (with Spark 1.1) and I'm running Spark 1.3.0+ side-by-side on
the same cluster.
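
For reference, the layout on my side looks roughly like this (the parcel
path is the CDH default; the 1.3.0 path is simply wherever you unpack
the tarball):

# Spark 1.1, managed by the CDH 5.2 parcels
/opt/cloudera/parcels/CDH/lib/spark
# hand-installed Spark 1.3.0, unpacked from the Apache tarball
/opt/spark-1.3.0-bin-hadoop2.4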

On Wed, Mar 18, 2015 at 1:23 PM Marcelo Vanzin <va...@cloudera.com> wrote:

> Since you're using YARN, you should be able to download a Spark 1.3.0
> tarball from Spark's website and use spark-submit from that
> installation to launch your app against the YARN cluster.
>
> So effectively you would have 1.2.0 and 1.3.0 side-by-side in your cluster.

Re: Using different Spark jars than the ones on the cluster

Posted by Marcelo Vanzin <va...@cloudera.com>.
Since you're using YARN, you should be able to download a Spark 1.3.0
tarball from Spark's website and use spark-submit from that
installation to launch your app against the YARN cluster.

So effectively you would have 1.2.0 and 1.3.0 side-by-side in your cluster.
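
Concretely, something along these lines (the mirror URL and the Hadoop
build suffix are examples - pick whatever matches your environment):

# download an official 1.3.0 build; this does not touch the cluster's
# existing 1.2.0 installation
wget http://archive.apache.org/dist/spark/spark-1.3.0/spark-1.3.0-bin-hadoop2.4.tgz
tar xzf spark-1.3.0-bin-hadoop2.4.tgz

# point the new install at the cluster's Hadoop/YARN config and submit
export HADOOP_CONF_DIR=/etc/hadoop/conf
./spark-1.3.0-bin-hadoop2.4/bin/spark-submit \
  --class class-name \
  --num-executors 100 \
  --master yarn \
  application_jar \
  hdfs:///input_data

Optionally, upload lib/spark-assembly-1.3.0-hadoop2.4.0.jar from that
tarball to HDFS and pass
--conf spark.yarn.jar=hdfs:///path/spark-assembly-1.3.0-hadoop2.4.0.jar
so the 1.3.0 assembly isn't re-uploaded on every submit.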


-- 
Marcelo
