Posted to user@spark.apache.org by "Kali.tummala@gmail.com" <Ka...@gmail.com> on 2016/01/27 17:28:59 UTC

how to run latest version of spark in old version of spark in cloudera cluster ?

Hi All, 

Just realized the Cloudera version of Spark on my cluster is 1.2, while the jar
I built using Maven targets Spark 1.6, which is causing issues.

Is there a way to run Spark 1.6 jobs on a cluster that has Spark 1.2 installed?

Thanks
Sri 




--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/how-to-run-latest-version-of-spark-in-old-version-of-spark-in-cloudera-cluster-tp26087.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: how to run latest version of spark in old version of spark in cloudera cluster ?

Posted by sri hari kali charan Tummala <ka...@gmail.com>.
Thank you very much, well documented.

Thanks
Sri



-- 
Thanks & Regards
Sri Tummala

Re: how to run latest version of spark in old version of spark in cloudera cluster ?

Posted by Deenar Toraskar <de...@gmail.com>.
Sri

Look at the instructions here. They are for Spark 1.5.1, but should also work
for 1.6:

https://www.linkedin.com/pulse/running-spark-151-cdh-deenar-toraskar-cfa?trk=hp-feed-article-title-publish&trkSplashRedir=true&forceNoSplash=true

Deenar



Re: how to run latest version of spark in old version of spark in cloudera cluster ?

Posted by Koert Kuipers <ko...@tresata.com>.
You need to build Spark 1.6 for your Hadoop distro, put it on the proxy node,
and configure it correctly to find your cluster (HDFS and YARN). Then use the
spark-submit script from that Spark 1.6 version to launch your application on
YARN.
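
The steps above can be sketched as shell commands. This is a hedged sketch only: the CDH Hadoop version, install paths, and filenames are hypothetical and need to be adjusted for the actual cluster.

```shell
# From a Spark 1.6 source checkout, build a distribution matched to the
# cluster's Hadoop version (the CDH version shown is illustrative):
./make-distribution.sh --name cdh --tgz -Pyarn \
    -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.4.0

# Copy the resulting spark-1.6.*-bin-cdh.tgz to the proxy node, unpack
# it, and point it at the cluster's Hadoop/YARN configuration so its
# spark-submit can find HDFS and the YARN ResourceManager:
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
```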


Re: how to run latest version of spark in old version of spark in cloudera cluster ?

Posted by sri hari kali charan Tummala <ka...@gmail.com>.
Hi Koert,

I am submitting my code (a Spark jar) using spark-submit on the proxy node.
I checked the version on the cluster and the node and it says 1.2; I didn't
really understand what you mean.

Can I ask YARN to use a different version of Spark? Or should I override the
SPARK_HOME variable to point at the 1.6 Spark jars?

Thanks
Sri



-- 
Thanks & Regards
Sri Tummala

Re: how to run latest version of spark in old version of spark in cloudera cluster ?

Posted by Koert Kuipers <ko...@tresata.com>.
If you have YARN you can just launch your Spark 1.6 job from a single machine
with Spark 1.6 available on it and ignore the version of Spark (1.2) that is
installed on the cluster.
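
One way to do this from that single machine can be sketched as below. This is a hedged sketch under assumptions: the install path and application class are hypothetical, and it assumes a yarn-cluster submission so the 1.6 assembly jar is shipped to the YARN containers rather than using the cluster's installed Spark 1.2.

```shell
# Point SPARK_HOME at a local Spark 1.6 install so its spark-submit and
# assembly jar are used; the cluster's Spark 1.2 is never invoked.
export SPARK_HOME=/opt/spark-1.6.0
export HADOOP_CONF_DIR=/etc/hadoop/conf

# yarn-cluster mode uploads the Spark assembly with the job:
"$SPARK_HOME/bin/spark-submit" --master yarn-cluster \
    --class com.example.MyApp my-app-assembly.jar
```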

Re: how to run latest version of spark in old version of spark in cloudera cluster ?

Posted by honaink <ho...@gmail.com>.
Hi Sri,

Each node in the cluster where Spark can run has version 1.2 of Spark
installed. If you can, update the cluster to Spark 1.6; otherwise you can't
run 1.6 on those nodes.

-honain
 
