Posted to issues@spark.apache.org by "Jeff Zhang (JIRA)" <ji...@apache.org> on 2015/10/20 11:01:27 UTC

[jira] [Commented] (SPARK-11205) Delegate to scala DataFrame API rather than print in python

    [ https://issues.apache.org/jira/browse/SPARK-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964821#comment-14964821 ] 

Jeff Zhang commented on SPARK-11205:
------------------------------------

Will create PR soon. 

> Delegate to scala DataFrame API rather than print in python
> -----------------------------------------------------------
>
>                 Key: SPARK-11205
>                 URL: https://issues.apache.org/jira/browse/SPARK-11205
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 1.5.1
>            Reporter: Jeff Zhang
>            Priority: Minor
>
> When I use DataFrame#explain(), I found that the output is slightly different from the Scala API's. Here's one example:
> {noformat}
> == Physical Plan ==    // this line is removed in pyspark API
> Scan JSONRelation[file:/Users/hadoop/github/spark/examples/src/main/resources/people.json][age#0L,name#1]
> {noformat}
> After looking at the code, I found that PySpark builds and prints the output itself rather than delegating to the Scala DataFrame API. This causes the difference between the Scala and Python APIs. Since both APIs just print to standard out, the Python side can delegate to the Scala side directly (see the sketch after this list). Here are the APIs I found that can be delegated directly:
> * printSchema()
> * explain()
> * show()
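> Just to illustrate the delegation idea for explain(), here is a rough sketch (not the actual PySpark source), using the py4j handle that PySpark already keeps as _jdf; the note about JVM stdout is an assumption worth checking:
> {noformat}
> def explain(self, extended=False):
>     # Delegate entirely to the Scala DataFrame: its explain() builds and
>     # prints the plan (including the "== Physical Plan ==" header) on the
>     # JVM side, so the Python and Scala output cannot drift apart.
>     # Caveat (assumption): JVM stdout may not be captured by every Python
>     # front end, so fetching the formatted string and printing it from
>     # Python is an alternative worth considering.
>     self._jdf.explain(extended)
> {noformat}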



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org