Posted to user@livy.apache.org by kant kodali <ka...@gmail.com> on 2017/12/01 22:44:47 UTC

How to start livy in spark standalone mode?

Hi All,

I am wondering how to start the Livy server using Spark standalone mode.
Meaning, I currently don't use YARN or Mesos and don't plan to use them
anytime soon, so I am wondering if it is possible to start the Livy server
in Spark standalone mode, and if so, what do I need to do? I also don't
use HDFS or Hadoop. I just run Spark applications in standalone mode
with a local file system.

What should the following be set to in my case?

export SPARK_HOME=/usr/lib/spark

export HADOOP_CONF_DIR=/etc/hadoop/conf
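For a standalone cluster with no Hadoop at all, a minimal sketch of conf/livy-env.sh could look like the following (the SPARK_HOME path is the one from the question; adjust it to your installation):

```shell
# conf/livy-env.sh -- for Spark standalone mode only SPARK_HOME is needed
export SPARK_HOME=/usr/lib/spark   # point at your local Spark installation

# HADOOP_CONF_DIR is only relevant for YARN/HDFS deployments and can be
# omitted entirely when running against a standalone master.
# export HADOOP_CONF_DIR=/etc/hadoop/conf
```

In conf/livy.conf, livy.spark.master can then be set to the standalone master URL, e.g. spark://<master-host>:7077, so sessions are submitted to the cluster instead of running in local mode.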

Re: How to start livy in spark standalone mode?

Posted by kant kodali <ka...@gmail.com>.
Perfect! It worked for me as well after I whitelisted the directory with
livy.file.local-dir-whitelist = ~/.livy-sessions/ (this setting doesn't
seem to be in the documentation).
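For reference, a sketch of the corresponding conf/livy.conf entry (the value is taken from the message above; an absolute path is an equally valid form and may be safer depending on how the value is resolved):

```shell
# conf/livy.conf -- whitelist a local directory so Livy may read files
# (jars, pyfiles, etc.) from it when creating sessions or batches
livy.file.local-dir-whitelist = ~/.livy-sessions/
```

A batch can then reference a jar under that directory, e.g. a local: URI such as "file": "local:/home/<user>/.livy-sessions/app.jar" in a POST /batches request.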

On Sat, Dec 2, 2017 at 7:09 AM, Stefan Miklosovic <mi...@gmail.com>
wrote:

> I am using a Spark master instance and two slaves, and Livy points to that
> master, so when I submit the jar, the job is started on the master and
> distributed to the slaves. I am not using HDFS or Hadoop.

Re: How to start livy in spark standalone mode?

Posted by Stefan Miklosovic <mi...@gmail.com>.
I am using a Spark master instance and two slaves, and Livy points to that
master, so when I submit the jar, the job is started on the master and
distributed to the slaves. I am not using HDFS or Hadoop.
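With this layout, Livy simply forwards the submission to the standalone master. As a hypothetical illustration (the jar path, class name, and resource values are placeholders, not from this thread), the JSON body for creating a batch via Livy's REST API could be built like this:

```python
import json

# Illustrative payload for POST /batches on a Livy server backed by a
# Spark standalone cluster; the file path and class name are placeholders.
payload = {
    "file": "local:/home/user/.livy-sessions/app.jar",
    "className": "com.example.Main",
    "executorMemory": "1g",
}

# Serialize to the JSON body that would be sent to the /batches endpoint.
body = json.dumps(payload)
print(body)
```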


-- 
Stefan Miklosovic