Posted to issues@spark.apache.org by "Maciej Bryński (JIRA)" <ji...@apache.org> on 2015/12/22 20:44:47 UTC

[jira] [Comment Edited] (SPARK-11437) createDataFrame shouldn't .take() when provided schema

    [ https://issues.apache.org/jira/browse/SPARK-11437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068627#comment-15068627 ] 

Maciej Bryński edited comment on SPARK-11437 at 12/22/15 7:44 PM:
------------------------------------------------------------------

[~davies]
Are you sure that this patch is OK?

Right now, if I create a DataFrame from an RDD of Rows, there is no schema validation, so we can pass a schema with the wrong types.

{code}
from pyspark.sql import Row
from pyspark.sql.types import StructType, StructField, IntegerType

# "name" is declared as IntegerType but holds a string; instead of failing,
# the value is silently replaced with None.
schema = StructType([StructField("id", IntegerType()), StructField("name", IntegerType())])
sqlCtx.createDataFrame(sc.parallelize([Row(id=1, name="abc")]), schema).collect()

[Row(id=1, name=None)]
{code}

Even better: columns can change places.
{code}
from pyspark.sql import Row
from pyspark.sql.types import StructType, StructField, IntegerType

# The schema declares a single "name" field, yet the value of "id" lands in it.
schema = StructType([StructField("name", IntegerType())])
sqlCtx.createDataFrame(sc.parallelize([Row(id=1, name="abc")]), schema).collect()

[Row(name=1)]
{code}
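
Until some validation comes back, a caller can check Rows by hand before calling createDataFrame. A minimal sketch, assuming kwargs-style Rows; validate_row is an invented helper, not part of the PySpark API:

{code}
from pyspark.sql import Row
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

def validate_row(row, schema):
    # Invented helper: map a couple of Spark types to the Python types
    # we expect the Row fields to hold.
    expected = {IntegerType: int, StringType: str}
    for field in schema.fields:
        value = getattr(row, field.name, None)
        py_type = expected.get(type(field.dataType))
        if py_type is not None and not isinstance(value, py_type):
            raise TypeError("field %s: expected %s, got %r"
                            % (field.name, py_type.__name__, value))

schema = StructType([StructField("id", IntegerType()),
                     StructField("name", IntegerType())])
validate_row(Row(id=1, name="abc"), schema)  # raises TypeError for "name"
{code}

Nothing equivalent runs under the current patch, which is why the examples above return silently wrong Rows.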


was (Author: maver1ck):
[~davies]
Are you sure that this patch is OK?

Right now, if I create a DataFrame from an RDD of Rows, there is no schema validation, so we can pass a schema with the wrong types.

{code}
from pyspark.sql import Row
from pyspark.sql.types import *

schema = StructType([StructField("id", IntegerType()), StructField("name", IntegerType())])
sqlCtx.createDataFrame(sc.parallelize([Row(id=1, name="abc")]), schema).collect()

[Row(id=1, name=None)]
{code}

Even better: columns can change places.
{code}
from pyspark.sql import Row
from pyspark.sql.types import *

schema = StructType([StructField("name", IntegerType())])
sqlCtx.createDataFrame(sc.parallelize([Row(id=1, name="abc")]), schema).collect()

[Row(name=1)]
{code}

> createDataFrame shouldn't .take() when provided schema
> ------------------------------------------------------
>
>                 Key: SPARK-11437
>                 URL: https://issues.apache.org/jira/browse/SPARK-11437
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>            Reporter: Jason White
>            Assignee: Jason White
>             Fix For: 1.6.0
>
>
> When creating a DataFrame from an RDD in PySpark, `createDataFrame` calls `.take(10)` to verify that the first 10 rows of the RDD match the provided schema. This is similar to https://issues.apache.org/jira/browse/SPARK-8070, but that issue affected cases where no schema was provided.
> Verifying the first 10 rows is of limited utility and causes the DAG to be executed non-lazily. If necessary, I believe this verification should be done lazily on all rows. However, since the caller is providing a schema to follow, I think it's acceptable to simply fail if the schema is incorrect.
> https://github.com/apache/spark/blob/master/python/pyspark/sql/context.py#L321-L325
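
For illustration, here is a minimal sketch of the trade-off the description refers to. It is not the actual Spark source (that lives at the context.py link above); the function names are invented, and `verify` stands for any per-row check:

{code}
def create_df_verifying_eagerly(rdd, verify):
    # Pre-patch pattern: .take(10) launches a job immediately,
    # so building the DataFrame is no longer lazy.
    for row in rdd.take(10):
        verify(row)
    return rdd  # placeholder for the real RDD-to-DataFrame conversion

def create_df_verifying_lazily(rdd, verify):
    # Alternative the description suggests: wrap the check in a map(),
    # so every row is verified, but only when the result is evaluated.
    def check(row):
        verify(row)
        return row
    return rdd.map(check)  # placeholder for the real conversion
{code}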



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org