Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/04/27 15:04:15 UTC

[GitHub] [spark] gengliangwang commented on a change in pull request #24327: [SPARK-27418][SQL] Migrate Parquet to File Data Source V2

URL: https://github.com/apache/spark/pull/24327#discussion_r279157958
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
 ##########
 @@ -1494,7 +1494,7 @@ object SQLConf {
       " register class names for which data source V2 write paths are disabled. Writes from these" +
       " sources will fall back to the V1 sources.")
     .stringConf
-    .createWithDefault("csv,json,orc,text")
+    .createWithDefault("csv,json,orc,text,parquet")
 
 Review comment:
   The current way (listing the short names) is simpler. We also use the short name in `df.write.format(format)` and `df.read.format(format)`.
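   For reference, a minimal sketch of the usage mentioned above, assuming an existing SparkSession named `spark` (e.g. from spark-shell); the paths are placeholders:

       // The short name "parquet" is the same identifier added to the
       // default fallback list in the diff above.
       val df = spark.read.format("parquet").load("/tmp/example/input")   // placeholder path
       df.write.format("parquet").save("/tmp/example/output")             // placeholder path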

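   More broadly, the diff above only changes the default value of this fallback list. A user who wants Parquet writes to go through the new File Data Source V2 path could override the list when building the session; the config key used below (`spark.sql.sources.write.useV1SourceList`) is not visible in the excerpt and is an assumption based on the surrounding SQLConf description:

       import org.apache.spark.sql.SparkSession

       // Sketch only: the config key is assumed, not shown in the diff excerpt.
       val spark = SparkSession.builder()
         .appName("parquet-v2-write")
         .master("local[*]")
         // Omit "parquet" from the list so Parquet writes do not fall back to V1.
         .config("spark.sql.sources.write.useV1SourceList", "csv,json,orc,text")
         .getOrCreate()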
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org