Posted to users@zeppelin.apache.org by Brandon White <bw...@gmail.com> on 2015/07/04 08:11:02 UTC

Not Blocking Other Notebooks when running all on a notebook

Hello there,

Whenever I run all for one notebook, it blocks all queries and execution
for all the other notebooks. How do I turn this off? I need to be able to
run all and still have the other notebooks work.

I set zeppelin.spark.concurrentSQL to true but it is still blocking.

Is there any other field I need to set to true?

Brandon

RE: Spark jobs on Yarn are using 3 virtual cores

Posted by "Sambit Tripathy (RBEI/EDS1)" <Sa...@in.bosch.com>.
My bad,


Adding export ZEPPELIN_INTP_JAVA_OPTS="-Dspark.executor.instances=100" to zeppelin-env.sh did work for me.
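
For anyone hitting the same issue, a minimal zeppelin-env.sh sketch based on the fix above (the executor, core, and memory values are illustrative, not recommendations):

```shell
# zeppelin-env.sh -- pass Spark-on-YARN sizing to the interpreter process
# as JVM system properties (values below are illustrative)
export ZEPPELIN_INTP_JAVA_OPTS="-Dspark.executor.instances=100 -Dspark.executor.cores=4 -Dspark.executor.memory=4g"
```

After editing zeppelin-env.sh, restart the Zeppelin daemon so the interpreter process picks up the new options.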


From: Sambit Tripathy (RBEI/EDS1) [mailto:Sambit.Tripathy@in.bosch.com]
Sent: Monday, July 06, 2015 2:48 PM
To: users@zeppelin.incubator.apache.org
Subject: Spark jobs on Yarn are using 3 virtual cores

Hi,

I upgraded to Spark 1.3.1, and after running a job through the interpreter I see that only 3 virtual cores are used, even after specifying the number of cores in zeppelin-env.sh via "export ZEPPELIN_JAVA_OPTS".

Is there a place where I can set the number correctly? I am using Spark on Yarn.


Thanks in advance for your pointers.


Regards,
Sambit.


Re: Not Blocking Other Notebooks when running all on a notebook

Posted by moon soo Lee <mo...@apache.org>.
Hi,

Code execution (Scala/Python) is always blocked while another notebook is
using the same interpreter.

With zeppelin.spark.concurrentSQL set to true, the Spark SQL (%sql)
interpreter will run your SQL without blocking.

So, to run your notebooks in parallel, you need to either create multiple
interpreter settings or set zeppelin.spark.concurrentSQL to true and use
%sql statements.
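
As a concrete sketch of the two options described above (the property name comes from this thread; the per-notebook interpreter names are hypothetical):

```shell
# Option 1: in the Spark interpreter settings (Interpreter menu in the
# Zeppelin web UI), enable concurrent %sql execution:
#     zeppelin.spark.concurrentSQL = true
# Note this only parallelizes %sql paragraphs; Scala/Python paragraphs
# still queue on the shared interpreter.
#
# Option 2: create one interpreter setting per notebook that must run in
# parallel (e.g. "spark_reporting", "spark_adhoc" -- hypothetical names)
# and bind each notebook to its own setting, so each notebook gets a
# separate interpreter process and is not blocked by the others.
```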

Hope this helps

Best,
moon
On Fri, Jul 3, 2015 at 11:11 PM Brandon White <bw...@gmail.com>
wrote:

> Hello there,
>
> Whenever I run all for one notebook, it blocks all queries and execution
> for all the other notebooks. How do I turn this off? I need to be able to
> run all and still have the other notebooks work.
>
> I set zeppelin.spark.concurrentSQL to true but it is still blocking.
>
> Is there any other field I need to set to true?
>
> Brandon
>