Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2021/08/17 04:08:24 UTC

[GitHub] [hudi] ziudu commented on issue #3344: [SUPPORT]Best way to ingest a large number of tables

ziudu commented on issue #3344:
URL: https://github.com/apache/hudi/issues/3344#issuecomment-899977801


   @nsivabalan Yes, we have 1000 tables, and each of them maps to a separate Hudi table: 1000 DB tables -> 1000 Hudi tables.
   
   We're writing a custom Spark application in Scala, which listens to a number of topics and writes to a number of Hudi tables. Something similar to
   
   https://github.com/apache/hudi/issues/2175
   
   The test results are satisfactory so far, and we will do some performance testing later on. I just want to know whether this is the recommended approach.
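
   For context, a minimal sketch of the approach described above: one Structured Streaming query subscribes to several Kafka topics and a `foreachBatch` sink routes each topic's records to its own Hudi table. This is an illustrative assumption of the design, not code from this issue; the topic names, broker address, base paths, and the `id`/`ts` field names are placeholders, and record deserialization (Avro/JSON parsing of the Kafka value) is omitted. It requires a running Spark cluster with the Hudi and Kafka connector jars, so it is a sketch rather than a runnable snippet.

   ```scala
   import org.apache.spark.sql.{DataFrame, SparkSession}
   import org.apache.spark.sql.streaming.Trigger

   object MultiTableIngest {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder()
         .appName("multi-table-hudi-ingest")
         .getOrCreate()

       // Placeholder topic names; in practice this could be the full list
       // (or a subscribe pattern) covering all source DB tables.
       val topics = Seq("db.table_001", "db.table_002")

       val stream = spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092") // placeholder
         .option("subscribe", topics.mkString(","))
         .load()

       stream.writeStream
         .trigger(Trigger.ProcessingTime("1 minute"))
         .foreachBatch { (batch: DataFrame, batchId: Long) =>
           batch.persist() // reused once per topic below
           topics.foreach { topic =>
             // Kafka source exposes the originating topic as a column,
             // which lets one query fan out to many Hudi tables.
             val rows = batch.filter(batch("topic") === topic)
             if (!rows.isEmpty) {
               val tableName = topic.replace('.', '_')
               rows.write
                 .format("hudi")
                 .option("hoodie.table.name", tableName)
                 .option("hoodie.datasource.write.recordkey.field", "id") // placeholder
                 .option("hoodie.datasource.write.precombine.field", "ts") // placeholder
                 .mode("append")
                 .save(s"/data/hudi/$tableName") // placeholder base path
             }
           }
           batch.unpersist()
         }
         .start()
         .awaitTermination()
     }
   }
   ```

   The trade-off in this design is one streaming query (and one Kafka consumer group) shared across many tables, instead of 1000 independent queries; the `foreachBatch` loop writes the tables sequentially per micro-batch, which is simpler but serializes the Hudi commits.
   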


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org