Posted to commits@hudi.apache.org by "newsbreak-tonglin (via GitHub)" <gi...@apache.org> on 2023/04/27 08:00:53 UTC
[GitHub] [hudi] newsbreak-tonglin opened a new issue, #8586: [SUPPORT] Hudi MOR with Flink SQL, sync ro table success, but sync rt table failed
newsbreak-tonglin opened a new issue, #8586:
URL: https://github.com/apache/hudi/issues/8586
I use Flink MongoDB CDC to fetch data from MongoDB into a Hudi MOR table. Syncing the ro table succeeds, but syncing the rt table fails.
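For context, a minimal sketch of the kind of Flink SQL sink DDL involved (the column list and table/host names are placeholders, not the actual job definition; option names follow Hudi's Flink configuration):

CREATE TABLE mongo_cdc_hudi_sink (          -- hypothetical sink table
  _id STRING PRIMARY KEY NOT ENFORCED,      -- record key from MongoDB
  payload STRING,                           -- document payload (placeholder column)
  part STRING                               -- partition field (placeholder)
) PARTITIONED BY (part) WITH (
  'connector' = 'hudi',
  'path' = 's3://xxxx/hudi_test25',         -- base path from the log below
  'table.type' = 'MERGE_ON_READ',
  'hive_sync.enable' = 'true',
  'hive_sync.mode' = 'hms',
  'hive_sync.metastore.uris' = 'thrift://xxxx:9083',
  'hive_sync.db' = 'default',
  'hive_sync.table' = 'mongo_cdc_hudi_xxxx_test25'
);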
The job fails with the following error message:
2023-04-27 07:46:27,967 INFO org.apache.hadoop.hive.metastore.HiveMetaStoreClient [] - Trying to connect to metastore with URI thrift://ip-xxx-xx-xxx-xxx:9083
2023-04-27 07:46:27,992 INFO org.apache.hadoop.hive.metastore.HiveMetaStoreClient [] - Opened a connection to metastore, current connections: 1
2023-04-27 07:46:28,001 INFO org.apache.hadoop.hive.metastore.HiveMetaStoreClient [] - Connected to metastore.
2023-04-27 07:46:28,001 INFO org.apache.hadoop.hive.metastore.RetryingMetaStoreClient [] - RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hadoop (auth:SIMPLE) retries=1 delay=1 lifetime=0
2023-04-27 07:46:28,176 INFO org.apache.hudi.hive.HiveSyncTool [] - Syncing target hoodie table with hive table(default.mongo_cdc_hudi_xxxx_test25). Hive metastore URL :thrift://xxx:9083, basePath :s3://xxxx/hudi_test25
2023-04-27 07:46:28,176 INFO org.apache.hudi.hive.HiveSyncTool [] - Trying to sync hoodie table mongo_cdc_hudi_xxx_test25_ro with base path s3://xxxx/hudi_test25 of type MERGE_ON_READ
2023-04-27 07:46:28,206 ERROR org.apache.hudi.hive.ddl.HMSDDLExecutor [] - Failed to create database default
org.apache.hadoop.hive.metastore.api.AlreadyExistsException: Database default already exists
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result$create_database_resultStandardScheme.read(ThriftHiveMetastore.java:39325) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result$create_database_resultStandardScheme.read(ThriftHiveMetastore.java:39311) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result.read(ThriftHiveMetastore.java:39245) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_create_database(ThriftHiveMetastore.java:1106) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.create_database(ThriftHiveMetastore.java:1093) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:809) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at com.sun.proxy.$Proxy121.createDatabase(Unknown Source) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at com.sun.proxy.$Proxy121.createDatabase(Unknown Source) ~[?:?]
at org.apache.hudi.hive.ddl.HMSDDLExecutor.createDatabase(HMSDDLExecutor.java:95) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HoodieHiveSyncClient.createDatabase(HoodieHiveSyncClient.java:224) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:187) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:158) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:142) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.sink.StreamWriteOperatorCoordinator.doSyncHive(StreamWriteOperatorCoordinator.java:335) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:0.12.1]
at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:130) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:0.12.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_352]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_352]
at java.lang.Thread.run(Thread.java:750) [?:1.8.0_352]
2023-04-27 07:46:28,207 WARN org.apache.hudi.hive.HiveSyncTool [] - Unable to create database
org.apache.hudi.hive.HoodieHiveSyncException: Failed to create database default
at org.apache.hudi.hive.ddl.HMSDDLExecutor.createDatabase(HMSDDLExecutor.java:98) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HoodieHiveSyncClient.createDatabase(HoodieHiveSyncClient.java:224) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:187) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:158) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:142) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.sink.StreamWriteOperatorCoordinator.doSyncHive(StreamWriteOperatorCoordinator.java:335) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:0.12.1]
at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:130) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:0.12.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_352]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_352]
at java.lang.Thread.run(Thread.java:750) [?:1.8.0_352]
Caused by: org.apache.hadoop.hive.metastore.api.AlreadyExistsException: Database default already exists
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result$create_database_resultStandardScheme.read(ThriftHiveMetastore.java:39325) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result$create_database_resultStandardScheme.read(ThriftHiveMetastore.java:39311) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result.read(ThriftHiveMetastore.java:39245) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_create_database(ThriftHiveMetastore.java:1106) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.create_database(ThriftHiveMetastore.java:1093) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:809) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at com.sun.proxy.$Proxy121.createDatabase(Unknown Source) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at com.sun.proxy.$Proxy121.createDatabase(Unknown Source) ~[?:?]
at org.apache.hudi.hive.ddl.HMSDDLExecutor.createDatabase(HMSDDLExecutor.java:95) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
... 9 more
2023-04-27 07:46:28,244 INFO com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem [] - Opening 's3://xxxx/hudi_test25/.hoodie/20230427074259124.deltacommit' for reading
2023-04-27 07:46:28,263 INFO com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem [] - Opening 's3://xxxx/hudi_test25/.hoodie/20230427074259124.deltacommit' for reading
2023-04-27 07:46:28,331 INFO com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem [] - Opening 's3://xxxx/hudi_test25/.245cbf6d-8d54-40ed-908a-6b28503cee9f_20230427074259124.log.3_0-5-0' for reading
2023-04-27 07:46:28,612 INFO com.ververica.cdc.connectors.base.source.enumerator.IncrementalSourceEnumerator [] - The enumerator receives finished split offsets FinishedSnapshotSplitsReportEvent{finishedOffsets={xxxx.xxxx:49={resumeToken={"_data": "82644A2839000000022B0229296E04"}, timestamp=7226632777347629058}}} from subtask 3.
2023-04-27 07:46:28,613 INFO org.apache.flink.runtime.source.coordinator.SourceCoordinator [] - Source Source: mongo_cdc_test[1] received split request from parallel task 3
2023-04-27 07:46:28,613 INFO com.ververica.cdc.connectors.base.source.enumerator.IncrementalSourceEnumerator [] - Assign split SnapshotSplit{tableId=xxxx.xxxx, splitId='xxxx.xxxx:54', splitKeyType=[`_id` INT], splitStart=[{"_id": 1.0}, {"_id": "xxxx"}], splitEnd=[{"_id": 1.0}, {"_id": "xxxx"}], highWatermark=null} to subtask 3
2023-04-27 07:46:28,614 INFO org.apache.hudi.hive.HiveSyncTool [] - Hive table mongo_cdc_hudi_xxxx_test25_ro is not found. Creating it
2023-04-27 07:46:28,823 INFO org.apache.hudi.hive.HiveSyncTool [] - Schema sync complete. Syncing partitions for mongo_cdc_hudi_xxxx_test25_ro
2023-04-27 07:46:28,823 INFO org.apache.hudi.hive.HiveSyncTool [] - Last commit time synced was found to be null
2023-04-27 07:46:28,823 INFO org.apache.hudi.sync.common.HoodieSyncClient [] - Last commit time synced is not known, listing all partitions in s3://xxxx/hudi_test25,FS :com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem@5e4e3478
2023-04-27 07:46:28,957 INFO org.apache.hudi.hive.HiveSyncTool [] - Storage partitions scan complete. Found 1
2023-04-27 07:46:28,970 INFO org.apache.hadoop.hive.metastore.HiveMetaStoreClient [] - Closed a connection to metastore, current connections: 0
2023-04-27 07:46:28,971 ERROR org.apache.hudi.sink.StreamWriteOperatorCoordinator [] - Executor executes action [sync hive metadata for instant 20230427074627270] error
org.apache.hudi.exception.HoodieException: Got runtime exception when hive syncing mongo_cdc_hudi_xxxx_test25
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:145) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.sink.StreamWriteOperatorCoordinator.doSyncHive(StreamWriteOperatorCoordinator.java:335) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:0.12.1]
at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:130) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:0.12.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_352]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_352]
at java.lang.Thread.run(Thread.java:750) [?:1.8.0_352]
Caused by: org.apache.hudi.hive.HoodieHiveSyncException: Failed to sync partitions for table mongo_cdc_hudi_xxxx_test25_ro
at org.apache.hudi.hive.HiveSyncTool.syncPartitions(HiveSyncTool.java:341) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:232) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:158) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:142) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
... 5 more
Caused by: org.apache.hudi.hive.HoodieHiveSyncException: Failed to get all partitions for table default.mongo_cdc_hudi_xxxx_test25_ro
at org.apache.hudi.hive.HoodieHiveSyncClient.getAllPartitions(HoodieHiveSyncClient.java:180) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncPartitions(HiveSyncTool.java:317) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:232) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:158) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:142) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
... 5 more
Caused by: org.apache.hadoop.hive.metastore.api.NoSuchObjectException: @hive#default.mongo_cdc_hudi_xxxx_test25_ro table not found
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result.read(ThriftHiveMetastore.java) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions(ThriftHiveMetastore.java:2958) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions(ThriftHiveMetastore.java:2943) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitions(HiveMetaStoreClient.java:1368) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitions(HiveMetaStoreClient.java:1362) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at com.sun.proxy.$Proxy121.listPartitions(Unknown Source) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_352]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_352]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_352]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_352]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2773) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at com.sun.proxy.$Proxy121.listPartitions(Unknown Source) ~[?:?]
at org.apache.hudi.hive.HoodieHiveSyncClient.getAllPartitions(HoodieHiveSyncClient.java:175) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncPartitions(HiveSyncTool.java:317) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:232) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:158) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:142) ~[blob_p-b71662d6c81e3e8943644ff10285f67a0c201af0-ff618a8f3d6ea6018fee8fd3b95760be:?]
... 5 more
2023-04-27 07:46:29,306 INFO com.ververica.cdc.connectors.base.source.enumerator.IncrementalSourceEnumerator [] - The enumerator receives finished split offsets FinishedSnapshotSplitsReportEvent{finishedOffsets={user_feature.user_secondcat_emb_v2:51={resumeToken={"_data": "82644A2848000000022B0229296E04"}, timestamp=7226632841772138498}}} from subtask 0.
**Environment Description**
* Hudi version : 0.12.1
* Spark version : N/A (using Flink 1.15.2)
* Hive version : 2.3.7
* Hadoop version : N/A (data on S3)
* Storage (HDFS/S3/GCS..) : S3
* Running on Docker? (yes/no) : no
[GitHub] [hudi] danny0405 commented on issue #8586: [SUPPORT] Hudi MOR with Flink SQL, sync ro table success, but sync rt table failed
Posted by "danny0405 (via GitHub)" <gi...@apache.org>.
danny0405 commented on issue #8586:
URL: https://github.com/apache/hudi/issues/8586#issuecomment-1541204330
You can see the sync info in the JM (JobManager) logging; my guess is there were errors thrown while syncing.
[GitHub] [hudi] danny0405 commented on issue #8586: [SUPPORT] Hudi MOR with Flink SQL, sync ro table success, but sync rt table failed
Posted by "danny0405 (via GitHub)" <gi...@apache.org>.
danny0405 commented on issue #8586:
URL: https://github.com/apache/hudi/issues/8586#issuecomment-1527505766
> @hive#default.mongo_cdc_hudi_xxxx_test25_ro table not found
The log shows the ro table cannot be found; do you mean the ro table cannot be synced here? Which catalog did you use for Flink SQL, the Hive catalog or the Hudi catalog?
[GitHub] [hudi] danny0405 commented on issue #8586: [SUPPORT] Hudi MOR with Flink SQL, sync ro table success, but sync rt table failed
Posted by "danny0405 (via GitHub)" <gi...@apache.org>.
danny0405 commented on issue #8586:
URL: https://github.com/apache/hudi/issues/8586#issuecomment-1540016984
The rt and ro tables are only queryable by Hive or Spark.
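As an illustration, reading the two synced views from Hive/Beeline might look like this (table names taken from the log above; the hive.input.format setting follows Hudi's documented guidance for MOR snapshot reads in Hive):

-- in Hive/Beeline:
SET hive.input.format = org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat;
SELECT COUNT(*) FROM default.mongo_cdc_hudi_xxxx_test25_ro;  -- read-optimized: base files only
SELECT COUNT(*) FROM default.mongo_cdc_hudi_xxxx_test25_rt;  -- real-time: base + log files merged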
[GitHub] [hudi] newsbreak-tonglin commented on issue #8586: [SUPPORT] Hudi MOR with Flink SQL, sync ro table success, but sync rt table failed
Posted by "newsbreak-tonglin (via GitHub)" <gi...@apache.org>.
newsbreak-tonglin commented on issue #8586:
URL: https://github.com/apache/hudi/issues/8586#issuecomment-1541198587
@danny0405 I use the configs below to sync the table to the Hive metastore, but I can only find the ro table in the metastore:
" 'hive_sync.enable' = 'true',\n" +
" 'hive_sync.mode' = 'hms',\n" +
" 'hive_sync.metastore.uris' = 'thrift://xxxx:9083',\n" +
" 'hive_sync.table'='mongo_cdc_hudi_xxxx_test25',\n" +
" 'hive_sync.auto_create_database'='false',\n"+
" 'hive_sync.table.strategy'='ALL',\n"+
" 'hive_sync.db'='default',\n" +
[GitHub] [hudi] newsbreak-tonglin commented on issue #8586: [SUPPORT] Hudi MOR with Flink SQL, sync ro table success, but sync rt table failed
Posted by "newsbreak-tonglin (via GitHub)" <gi...@apache.org>.
newsbreak-tonglin commented on issue #8586:
URL: https://github.com/apache/hudi/issues/8586#issuecomment-1539582076
@danny0405 I use the default GenericInMemoryCatalog.
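If the goal is to also see the synced tables from Flink SQL, one option (a sketch, not from this thread; option names follow the Hudi Flink catalog docs, and the paths are assumptions) is to register a Hudi catalog backed by the Hive metastore instead of the in-memory one:

CREATE CATALOG hudi_catalog WITH (
  'type' = 'hudi',
  'mode' = 'hms',                          -- back the catalog with the Hive metastore
  'catalog.path' = 's3://xxxx/warehouse',  -- assumed default root path
  'hive.conf.dir' = '/etc/hive/conf'       -- assumed location of hive-site.xml
);
USE CATALOG hudi_catalog;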