Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/12/11 02:32:07 UTC

[GitHub] [spark] ulysses-you edited a comment on issue #26831: [SPARK-30201][SQL] HiveOutputWriter standardOI should use ObjectInspectorCopyOption.DEFAULT

URL: https://github.com/apache/spark/pull/26831#issuecomment-564349554
 
 
   > This isn't really my area, but I don't quite understand how this arises in practice? what value does this hex string encode, and how would Spark write it? Spark and Hadoop generally always encode strings as UTF-8
   
   The hex string is just there to make the issue easy to reproduce.
   
   This is my scenario (sketched below):
   1. We write a file using the Hadoop API containing bytes that are not valid UTF-8, and create a Hive table at that location.
   2. We use Spark SQL to read the table using the Hive format and write it into another table, which corrupts the original bytes.
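   
   A minimal sketch of that scenario in Scala; the path, table names, and sample bytes are hypothetical, just chosen to illustrate the two steps:
   
   ```scala
   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.fs.{FileSystem, Path}
   import org.apache.spark.sql.SparkSession
   
   object NonUtf8Repro {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder()
         .appName("non-utf8-repro")
         .enableHiveSupport()
         .getOrCreate()
   
       // Step 1: write raw bytes with the Hadoop API. 0xBF is a UTF-8
       // continuation byte, so 0xBF 0xBF is not a valid UTF-8 sequence.
       val fs = FileSystem.get(new Configuration())
       val out = fs.create(new Path("/tmp/non_utf8_data/part-00000"))
       out.write(Array[Byte](0xBF.toByte, 0xBF.toByte, '\n'.toByte))
       out.close()
   
       // Create a Hive text table pointing at that location.
       spark.sql(
         """CREATE TABLE src (value STRING)
           |ROW FORMAT DELIMITED
           |STORED AS TEXTFILE
           |LOCATION '/tmp/non_utf8_data'""".stripMargin)
   
       // Step 2: read the table with the Hive format and write into another
       // table. Copying values through the Java object inspector re-encodes
       // them as UTF-8, so the bytes written to dst differ from src.
       spark.sql("CREATE TABLE dst STORED AS TEXTFILE AS SELECT * FROM src")
     }
   }
   ```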
