Posted to users@zeppelin.apache.org by "Kessler, Stephan" <st...@sap.com> on 2015/03/30 14:29:34 UTC

Passing "--conf" to Spark

Hi Zeppelin List,

I am trying to run Zeppelin in yarn-client mode on my cluster. However, to get Spark running, I need to pass the parameter "--conf hdp.version=2.2.0.0-2041" to Spark. How can I do that?

Best,
Stephan


RE: Passing "--conf" to Spark

Posted by "Kessler, Stephan" <st...@sap.com>.
Thanks, that did the trick!

Best,
Stephan

From: RJ Nowling [mailto:rnowling@gmail.com]
Sent: Monday, 30 March 2015 16:26
To: users@zeppelin.incubator.apache.org
Subject: Re: Passing "--conf" to Spark

Just do:

export ZEPPELIN_INTP_JAVA_OPTS="-Dhdp.version=2.2.0.0-2041"






Re: Passing "--conf" to Spark

Posted by RJ Nowling <rn...@gmail.com>.
Just do:

export ZEPPELIN_INTP_JAVA_OPTS="-Dhdp.version=2.2.0.0-2041"
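To make the fix survive restarts, the export can live in conf/zeppelin-env.sh, which Zeppelin sources on startup. A minimal sketch (assuming a default binary install where bin/zeppelin-daemon.sh is the launcher):

```shell
# conf/zeppelin-env.sh -- sourced when Zeppelin starts its interpreter processes
export ZEPPELIN_INTP_JAVA_OPTS="-Dhdp.version=2.2.0.0-2041"
```

```shell
# restart so the Spark interpreter JVM is relaunched with the new option
bin/zeppelin-daemon.sh restart
```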



RE: Passing "--conf" to Spark

Posted by "Kessler, Stephan" <st...@sap.com>.
Unfortunately this does not make any difference.

I did something like this:
export ZEPPELIN_INTP_JAVA_OPTS="-Dconf='hdp.version=2.2.0.0-2041'"

Do you have any other ideas?

Thanks a lot.
Best,
Stephan




Re: Passing "--conf" to Spark

Posted by RJ Nowling <rn...@gmail.com>.
In conf/zeppelin-env.sh, you can add options in the form "-Dvariable=value"; I believe they will be passed to Spark as if you had used --conf.
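For comparison, outside Zeppelin the same JVM property would typically be attached to a plain spark-submit via Spark's extraJavaOptions settings. A sketch only; the exact config keys depend on your Spark version and deploy mode, and yourapp.jar is a placeholder:

```shell
# pass -Dhdp.version to both the driver and the YARN application master
spark-submit --master yarn-client \
  --conf "spark.driver.extraJavaOptions=-Dhdp.version=2.2.0.0-2041" \
  --conf "spark.yarn.am.extraJavaOptions=-Dhdp.version=2.2.0.0-2041" \
  yourapp.jar
```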
