Posted to user@spark.apache.org by Abel Coronado Iruegas <ac...@gmail.com> on 2014/07/09 23:06:00 UTC

Understanding how to install in HDP

Hi everybody

We have a Hortonworks cluster with many nodes, and we want to test a
deployment of Spark. What's the recommended path to follow?

I mean, we can compile the sources on the Name Node, but I don't really
understand how to pass the executable jar and configuration to the rest
of the nodes.

Thanks!!!

Abel

Re: Understanding how to install in HDP

Posted by Andrew Or <an...@databricks.com>.
Hi Abel and Krishna,

You shouldn't have to do any manual rsync'ing. If you're using HDP, then
you can just change the configs through Ambari. As for passing the assembly
jar to all executor nodes, the Spark on YARN code automatically uploads the
jar to a distributed cache (HDFS) and all executors pull the jar from
there. That is, all you need to do is install Spark on one of the nodes and
launch your application from there. (Alternatively, you can launch your
application from outside the cluster, in which case you should use
yarn-cluster mode.)
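
For example, a minimal submission from the node where Spark is
installed might look like this (the class name and jar path are
placeholders, not anything from your setup):

  # YARN ships the application jar to the executors through the
  # distributed cache, so nothing has to be copied around by hand.
  ./bin/spark-submit \
    --master yarn-cluster \
    --class com.example.MyApp \
    /path/to/my-app-assembly.jar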

Andrew

Re: Understanding how to install in HDP

Posted by Krishna Sankar <ks...@gmail.com>.
Abel,
   I rsync the spark-1.0.1 directory to all the nodes. Then, whenever
the configuration changes, I rsync the conf directory.
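
For example, assuming Spark lives under /opt/spark-1.0.1 and the
workers are reachable as node1..node3 (both placeholders; adjust to
your own layout):

  # Push the full Spark build to each worker node once:
  for host in node1 node2 node3; do
    rsync -az /opt/spark-1.0.1/ $host:/opt/spark-1.0.1/
  done

  # After a config change, only the conf directory needs to go out:
  for host in node1 node2 node3; do
    rsync -az /opt/spark-1.0.1/conf/ $host:/opt/spark-1.0.1/conf/
  done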
Cheers
<k/>