Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2015/10/20 18:41:27 UTC
[jira] [Resolved] (SPARK-11204) Delegate to scala DataFrame API rather than print in python
[ https://issues.apache.org/jira/browse/SPARK-11204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Josh Rosen resolved SPARK-11204.
--------------------------------
Resolution: Duplicate
> Delegate to scala DataFrame API rather than print in python
> -----------------------------------------------------------
>
> Key: SPARK-11204
> URL: https://issues.apache.org/jira/browse/SPARK-11204
> Project: Spark
> Issue Type: Improvement
> Components: PySpark
> Affects Versions: 1.5.1
> Reporter: Jeff Zhang
> Priority: Minor
>
> When I use DataFrame#explain(), I found that the output is slightly different from the Scala API. Here's one example.
> {noformat}
> == Physical Plan == // this line is removed in pyspark API
> Scan JSONRelation[file:/Users/hadoop/github/spark/examples/src/main/resources/people.json][age#0L,name#1]
> {noformat}
> After looking at the code, I found that PySpark builds and prints the output itself rather than delegating to Spark SQL. This causes the difference between the Scala API and the Python API. Since both the Python API and the Scala API print to standard out, the Python API can simply delegate to the Scala API. Here are some APIs I found that can be delegated to the Scala API directly:
> * printSchema()
> * explain()
> * show()
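
The delegation the reporter describes can be sketched roughly as follows. This is not the actual PySpark implementation: `FakeJVMDataFrame` and its `explain_string()` method are hypothetical stand-ins for the Py4J handle to the Scala DataFrame (what real PySpark holds in `self._jdf`). The point is only the pattern: ask the JVM side for the fully formatted text and print it verbatim, instead of re-implementing the formatting in Python.

```python
# Sketch only, not the real PySpark code. FakeJVMDataFrame is a
# hypothetical stand-in for the Py4J-wrapped Scala DataFrame.
class FakeJVMDataFrame:
    def explain_string(self):
        # The JVM side formats the complete plan, including the
        # "== Physical Plan ==" header that the hand-rolled Python
        # printer was dropping.
        return ("== Physical Plan ==\n"
                "Scan JSONRelation[...][age#0L,name#1]")

class DataFrame:
    def __init__(self, jdf):
        self._jdf = jdf

    def explain(self):
        # Delegate: print the JVM-formatted plan verbatim, so the
        # Python output matches the Scala API byte for byte.
        print(self._jdf.explain_string())

DataFrame(FakeJVMDataFrame()).explain()
```

With this pattern, `printSchema()` and `show()` would delegate the same way, each printing a string produced entirely on the Scala side.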
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org