viirya commented on issue #24614: [SPARK-27712][PySpark][SQL] Returns correct schema even under different column order when creating dataframe
URL: https://github.com/apache/spark/pull/24614#issuecomment-492672560
 
 
   This is more interesting, as we allow something like:
   
   ```python
   from pyspark.sql import Row

   data = [Row(key=i, value=str(i)) for i in range(100)]
   rdd = spark.sparkContext.parallelize(data, 5)
   # The field names in the schema string can differ from the Row's
   # field names; values are matched to columns by position.
   df = rdd.toDF(" a: int, b: string ")
   ```
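   
   Concretely, assuming an active `SparkSession` named `spark`, the positional matching above gives something like:
   
   ```python
   df.first()
   # -> Row(a=0, b='0'): the Row's `key`/`value` fields land in the
   # `a`/`b` columns by position, not by name.
   ```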
   
   So the question is: in `createDataFrame`, should we respect the original Row's schema in the RDD?
   
   Currently:
   
   * When creating a DataFrame from a local list of Rows, we respect the Row's schema.
   * When creating one from an RDD of Rows, we don't, as shown in the example in the PR description.
   
   The two cases are obviously inconsistent, as the sketch below illustrates.
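   
   Here is a minimal sketch of that inconsistency, assuming an active `SparkSession` named `spark`; the commented results are what I'd expect from the current code paths, before this fix:
   
   ```python
   from pyspark.sql import Row

   rows = [Row(key="k", value="v")]

   # Local list of Rows: fields are matched to the schema by name,
   # so the values follow the schema's column order.
   spark.createDataFrame(rows, "value string, key string").first()
   # -> Row(value='v', key='k')

   # RDD of the same Rows: fields are matched positionally, so the
   # values stay in the Row's original order.
   rdd = spark.sparkContext.parallelize(rows)
   spark.createDataFrame(rdd, "value string, key string").first()
   # -> Row(value='k', key='v')
   ```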
   
   This difference also shows up in the following case: field names can't differ when creating from a local list of Rows.
   
   ```python
   >>> spark.createDataFrame([Row(A="1", B="2")], "B string, a string").first()
   Traceback (most recent call last):
     File "/Users/viirya/repos/spark-1/python/pyspark/sql/types.py", line 1527, in __getitem__
       idx = self.__fields__.index(item)
   ValueError: 'a' is not in list
   ```
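   
   For comparison, a sketch of the case where the names do match (case-sensitively, same session assumed): I'd expect the by-name lookup to succeed and the fields to be reordered to follow the schema.
   
   ```python
   from pyspark.sql import Row

   # Schema field names match the Row's field names, so the by-name
   # lookup succeeds and the columns follow the schema's order.
   spark.createDataFrame([Row(A="1", B="2")], "B string, A string").first()
   # -> Row(B='2', A='1')
   ```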
   
   cc @HyukjinKwon @cloud-fan 
   
   
