Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/03/05 14:53:42 UTC

[GitHub] [spark] haiboself edited a comment on issue #23944: [SPARK-26932][DOC]Hive 2.1.1 cannot read ORC table created by Spark 2.4.0 in default

haiboself edited a comment on issue #23944: [SPARK-26932][DOC]Hive 2.1.1 cannot read ORC table created by Spark 2.4.0 in default
URL: https://github.com/apache/spark/pull/23944#issuecomment-469167749
 
 
   @dongjoon-hyun Thanks for your proposal. I would like to change it to the following content, is it right?
   
   ```
   - Since Spark 2.4, Spark maximizes the usage of a vectorized ORC reader for ORC files by default. To do that, `spark.sql.orc.impl` and `spark.sql.orc.filterPushdown` change their default values to `native` and `true` respectively. ORC tables created by the Spark 2.4 native ORC writer cannot be read by Hive 2.1.1. Set `spark.sql.orc.impl=hive` to restore the previous behavior.
   ```
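   
   For context, here is a minimal sketch (not part of the proposed doc text) of how a user would apply that workaround from Scala; the app name and table name are made up for illustration:
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   // Build a session that writes ORC through the Hive implementation instead of
   // the native writer, which is the workaround described in the note above.
   val spark = SparkSession.builder()
     .appName("orc-hive-compat")            // hypothetical app name
     .config("spark.sql.orc.impl", "hive")  // restore the pre-2.4 ORC writer path
     .getOrCreate()
   
   // ORC tables written by this session use the Hive ORC writer, so they stay
   // readable by Hive 2.1.1. The table name below is only for illustration.
   spark.range(10).write.format("orc").saveAsTable("orc_hive_compat_example")
   ```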

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org