Posted to dev@toree.apache.org by "Gino Bustelo (JIRA)" <ji...@apache.org> on 2016/04/04 21:21:25 UTC

[jira] [Resolved] (TOREE-166) sqlContext not shared with PySpark and sparkR

     [ https://issues.apache.org/jira/browse/TOREE-166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gino Bustelo resolved TOREE-166.
--------------------------------
    Resolution: Fixed

Resolved in PR #15 (https://github.com/apache/incubator-toree/pull/15) and PR #25 (https://github.com/apache/incubator-toree/pull/25)

> sqlContext not shared with PySpark and sparkR
> ---------------------------------------------
>
>                 Key: TOREE-166
>                 URL: https://issues.apache.org/jira/browse/TOREE-166
>             Project: TOREE
>          Issue Type: Bug
>            Reporter: nimbusgo
>            Assignee: Gino Bustelo
>             Fix For: 0.1.0
>
>
> The Scala interpreter and SQL interpreter appear to share the same sqlContext, so you can select tables in the SQL interpreter that were registered in the Scala interpreter. However, it appears that the PySpark and SparkR interpreters each create their own sqlContext on construction, and dataframes registered on those sqlContexts will not be shared with the sqlContext in other interpreters. Would it be possible to change it so that the Python and R interpreters are instantiated with the same sqlContext as the Scala interpreter?
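
The behavior described above can be sketched with Toree notebook cells. This is an illustrative fragment, not runnable standalone (it assumes a running Toree kernel with a Spark context, and the %%sql and %%pyspark cell magics); the table name "shared_table" is a hypothetical example.

```
// Cell 1 — Scala (default interpreter): register a temp table on sqlContext
val df = sqlContext.createDataFrame(Seq((1, "a"), (2, "b"))).toDF("id", "label")
df.registerTempTable("shared_table")

// Cell 2 — SQL interpreter: works, because it shares the Scala sqlContext
%%sql
SELECT * FROM shared_table

// Cell 3 — PySpark interpreter: before this fix, PySpark constructed its own
// sqlContext, so the table registered in Cell 1 was not visible here
%%pyspark
sqlContext.sql("SELECT * FROM shared_table").show()
```

After the fix in PR #15 and PR #25, the PySpark and SparkR interpreters are handed the same underlying sqlContext as the Scala interpreter, so Cell 3 can see tables registered in Cell 1.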



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)