Posted to users@zeppelin.apache.org by Deenar Toraskar <de...@gmail.com> on 2017/01/17 08:15:25 UTC

Accessing Zeppelin context in Pyspark interpreter

Hi

Is it possible to access the Zeppelin context via the Pyspark interpreter? Not
all the methods available via the Spark Scala interpreter seem to be
available in the Pyspark one (unless I am doing something wrong). I would
like to do something like this from the Pyspark interpreter:

z.show(df, 100)

or

z.run(z.listParagraphs.indexOf(z.getInterpreterContext().getParagraphId())+1)

Re: Accessing Zeppelin context in Pyspark interpreter

Posted by Andres Koitmäe <an...@gmail.com>.
Hi Deenar,

It is possible to use the Zeppelin context via the Pyspark interpreter.

Example (based on Zeppelin 0.6.0):

paragraph1
---------------
%spark

// do some stuff and store the result (a DataFrame) into the Zeppelin
// context, in this case as a SQL DataFrame
...
z.put("scala_df", scala_df: org.apache.spark.sql.DataFrame)

paragraph2
---------------

%spark.pyspark

from pyspark.sql import DataFrame

# take the dataframe from the Zeppelin context: z.get() returns the
# underlying Java object, which we wrap in a pyspark DataFrame
df_pyspark = DataFrame(z.get("scala_df"), sqlContext)

# display first 5 rows
df_pyspark.show(5)
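
The hand-off should also work in the other direction: a pyspark DataFrame can be
passed back to a Scala paragraph by putting its underlying JVM object into the
Zeppelin context. A minimal sketch, assuming the `df_pyspark` variable from the
paragraph above (the name "py_df" is just illustrative):

```python
%spark.pyspark

# z.put() stores a JVM object; pyspark exposes the Java DataFrame
# backing a Python DataFrame as the _jdf attribute
z.put("py_df", df_pyspark._jdf)
```

A later %spark (Scala) paragraph could then pick it up with
z.get("py_df").asInstanceOf[org.apache.spark.sql.DataFrame].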

Regards,

Andres Koitmäe

