Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/09/22 09:46:37 UTC

[GitHub] [hudi] JoshuaZhuCN edited a comment on issue #3647: [SUPPORT] Failed to read parquet file during upsert

JoshuaZhuCN edited a comment on issue #3647:
URL: https://github.com/apache/hudi/issues/3647#issuecomment-924755424


   > This is because Spark bulk insert uses Spark's own parquet writer, which has its own decimal encoding. How did you set the writeLegacyFormat parameter?
   
   @danny0405  Setting this parameter in spark-defaults.conf does not work, even when it is also set in the Scala code at the same time.
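
   For reference, a minimal sketch of setting the key programmatically, assuming the parameter discussed here is Spark's spark.sql.parquet.writeLegacyFormat (the app name and the println check are illustrative only, not taken from the issue):

       import org.apache.spark.sql.SparkSession

       object WriteLegacyFormatExample {
         def main(args: Array[String]): Unit = {
           val spark = SparkSession.builder()
             .appName("hudi-write-legacy-format")
             // Set before any DataFrame writer is created; a value from
             // spark-defaults.conf can be shadowed by session-level settings.
             .config("spark.sql.parquet.writeLegacyFormat", "true")
             .getOrCreate()

           // Confirm what the active session actually resolved for the key.
           println(spark.conf.get("spark.sql.parquet.writeLegacyFormat"))
         }
       }

   Checking spark.conf.get on the running session is a quick way to see whether the value from spark-defaults.conf reached the session at all, or was overridden elsewhere.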


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org