Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/08/03 14:17:40 UTC

[GitHub] [hudi] dude0001 commented on issue #2409: [SUPPORT] Spark structured Streaming writes to Hudi and synchronizes Hive to create only read-optimized tables without creating real-time tables

dude0001 commented on issue #2409:
URL: https://github.com/apache/hudi/issues/2409#issuecomment-891885608


   @nsivabalan that is one difference in my reproduction steps. I am currently not using a Spark Streaming job. I'm reading from our raw zone in S3, which contains parquet files of change data capture events from transactional databases. I'm trying to upsert into our cleansed zone, also in S3, so that it contains the latest version of each row. If I turn off Hive sync, everything works fine.
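
   For reference, a minimal sketch of the kind of batch upsert with Hive sync described above. All paths, field names, database/table names, and the record key and precombine fields are placeholders, not the actual job; the Hudi write options shown are standard datasource options.

       from pyspark.sql import SparkSession

       # Batch-read CDC parquet from the raw zone and upsert into a Hudi table
       # in the cleansed zone with Hive sync enabled (the step that fails here).
       spark = SparkSession.builder.appName("cdc-upsert-sketch").getOrCreate()

       cdc_df = spark.read.parquet("s3://raw-zone/orders/")  # hypothetical raw-zone path

       hudi_options = {
           "hoodie.table.name": "orders",
           "hoodie.datasource.write.operation": "upsert",
           "hoodie.datasource.write.recordkey.field": "order_id",        # primary key of each row
           "hoodie.datasource.write.precombine.field": "cdc_timestamp",  # keeps the latest version per key
           # Hive sync; with this disabled the job reportedly succeeds
           "hoodie.datasource.hive_sync.enable": "true",
           "hoodie.datasource.hive_sync.database": "cleansed",
           "hoodie.datasource.hive_sync.table": "orders",
       }

       (cdc_df.write
           .format("hudi")
           .options(**hudi_options)
           .mode("append")
           .save("s3://cleansed-zone/orders/"))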


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org