Posted to dev@zeppelin.apache.org by Michael Marconi <mm...@du.co> on 2017/04/20 13:17:25 UTC

Access to Spark Context from a Custom Interpreter

Hi – I'd like to know if it's possible to access the SparkContext from a Custom Interpreter I am developing.  I need access to DataFrames created in earlier paragraphs, so I can transform them and make the results available to subsequent paragraphs.

I was expecting to find the Spark Context on the Interpreter Context that is handed to my Custom Interpreter but this is not the case.

I've come across this similar request on StackOverflow, but it doesn't answer the question:  http://stackoverflow.com/questions/37099590/zeppelin-how-to-create-dataframe-from-within-custom-interpreter

Can you advise whether this is possible with an externally contributed Custom Interpreter, or whether I would need to fork the Zeppelin codebase and pull a new interpreter into the Spark package?

Many thanks,
Michael

Re: Access to Spark Context from a Custom Interpreter

Posted by moon soo Lee <mo...@apache.org>.
Hi Michael,

The interpreter does not close / open between paragraphs. That means you can
simply keep any object as a member variable in your interpreter implementation.

For example, SparkInterpreter [1] keeps IMain as a member variable.

Interpreters shipped in the Zeppelin codebase are developed with the same API
that custom interpreters use. There is no difference between them.
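For illustration, a minimal custom interpreter could look roughly like the
sketch below. This is just a sketch, not code from Zeppelin; the class name,
the shared map, and its contents are placeholders. Anything stored in a
member variable during one paragraph's interpret() call is still there when
the next paragraph runs, because the same interpreter instance is reused.

  import java.util.HashMap;
  import java.util.Map;
  import java.util.Properties;

  import org.apache.zeppelin.interpreter.Interpreter;
  import org.apache.zeppelin.interpreter.InterpreterContext;
  import org.apache.zeppelin.interpreter.InterpreterResult;

  // Placeholder name; register it like any other custom interpreter.
  public class MyCustomInterpreter extends Interpreter {

    // Lives as long as the interpreter instance, so state written in one
    // paragraph is visible to later paragraphs.
    private Map<String, Object> sharedState = new HashMap<>();

    public MyCustomInterpreter(Properties properties) {
      super(properties);
    }

    @Override
    public void open() {
      // one-time initialization (e.g. creating or obtaining heavy objects)
    }

    @Override
    public void close() {
      // release resources held by member variables
    }

    @Override
    public InterpreterResult interpret(String st, InterpreterContext context) {
      // read and write sharedState here; it survives across paragraphs
      // because the interpreter is not re-created per paragraph
      sharedState.put("lastStatement", st);
      return new InterpreterResult(InterpreterResult.Code.SUCCESS,
          "state entries: " + sharedState.size());
    }

    @Override
    public void cancel(InterpreterContext context) {
    }

    @Override
    public FormType getFormType() {
      return FormType.SIMPLE;
    }

    @Override
    public int getProgress(InterpreterContext context) {
      return 0;
    }
  }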

Hope this helps!

Thanks,
moon

[1]
https://github.com/apache/zeppelin/blob/v0.7.1/spark/src/main/java/org/apache/zeppelin/spark/SparkInterpreter.java#L102
