Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/11/05 11:55:32 UTC

[GitHub] [hudi] nsivabalan commented on issue #3913: [SUPPORT] Hudi deltastreamer deployment Model

nsivabalan commented on issue #3913:
URL: https://github.com/apache/hudi/issues/3913#issuecomment-961834734


   My understanding is that you run your PySpark job via spark-submit?
   https://www.tutorialkart.com/apache-spark/submit-spark-application-python-example/
   Or am I getting your requirement wrong?
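   For illustration only, a minimal sketch of what such a submission might look like; the script name and bundle version below are placeholders, not something from this issue:
   ```sh
   # Hypothetical example: submitting a PySpark job that writes to a Hudi table.
   # my_hudi_job.py and the Hudi bundle version are placeholders.
   spark-submit \
     --packages org.apache.hudi:hudi-spark3-bundle_2.12:0.9.0 \
     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
     my_hudi_job.py
   ```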
   If you have multiple tables, yes, each one has to be handled by a separate job. But Hudi does provide a [MultiTableDeltaStreamer](https://hudi.apache.org/blog/2020/08/22/ingest-multiple-tables-using-hudi) that can ingest several tables from one job. Please check it out to see if it meets your needs; a sample invocation is sketched below.
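   As a rough sketch of how it is typically launched (paths, properties files, and table names here are placeholders; see the linked blog for a complete, version-specific example):
   ```sh
   # Hypothetical example: one spark-submit driving ingestion for multiple tables
   # via HoodieMultiTableDeltaStreamer; per-table configs live under --config-folder.
   spark-submit \
     --class org.apache.hudi.utilities.deltastreamer.HoodieMultiTableDeltaStreamer \
     hudi-utilities-bundle_2.12-0.9.0.jar \
     --props file:///path/to/kafka-source.properties \
     --config-folder file:///path/to/hudi-ingestion-config \
     --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
     --source-ordering-field ts \
     --base-path-prefix file:///tmp/multitable \
     --target-table dummy_table \
     --table-type COPY_ON_WRITE
   ```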
   
   If you can lay out your requirements here, we can chat through how to go about it.
   

