Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/05/28 09:49:52 UTC
[GitHub] [spark] HyukjinKwon edited a comment on issue #24724: User friendly dataset, dataframe generation for csv datasources without explicit StructType definitions.
URL: https://github.com/apache/spark/pull/24724#issuecomment-496446607
Why don't we just call
```scala
import org.apache.spark.sql.Encoders

// Hypothetical case class matching the CSV columns (not defined in the PR comment).
case class Person(name: String, age: Int)

// Derive the schema from the case class instead of spelling out a StructType by hand.
val schema = Encoders.product[Person].schema
spark.read.schema(schema).csv("/tmp/csv").as[Person]
```
?
Once we allow this, we would have to consider allowing it in every other place that accepts a schema: `createDataFrame`, `from_json`, `DataFrame[Stream]Reader.schema`, UDFs, etc. Is this really worth it?
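To illustrate the point, the same encoder-derived schema already works anywhere a `StructType` is accepted. The sketch below is a minimal, self-contained example, assuming a hypothetical `Person` case class and a local Spark session; it is not code from the PR:

```scala
import org.apache.spark.sql.{Encoders, SparkSession}
import org.apache.spark.sql.functions.{col, from_json}

// Hypothetical case class standing in for the PR's element type.
case class Person(name: String, age: Int)

object EncoderSchemaExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("encoder-schema")
      .getOrCreate()
    import spark.implicits._

    // Derive the StructType once from the case class.
    val schema = Encoders.product[Person].schema

    // Reuse it with from_json, one of the APIs named above,
    // without writing the StructType out by hand.
    val parsed = Seq("""{"name":"Ann","age":30}""").toDF("json")
      .select(from_json(col("json"), schema).as("person"))

    parsed.show(truncate = false)
    spark.stop()
  }
}
```

The same `schema` value could equally be passed to `spark.createDataFrame` or `DataStreamReader.schema`, which is the argument for not adding a special-cased helper only to the CSV reader.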
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
With regards,
Apache Git Services