Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/07/10 13:04:37 UTC

[GitHub] [hudi] ToBeFinder opened a new issue, #6074: [SUPPORT] Flink Table Sql writes hudi data duplication in same partition

ToBeFinder opened a new issue, #6074:
URL: https://github.com/apache/hudi/issues/6074

   **Describe the problem you faced**
   
   When one Flink Table SQL job reads a single Kafka source and writes it into two Hudi tables partitioned by different time fields, the second table ends up with duplicate rows inside the same partition. Running the two inserts as two separate Flink jobs avoids the duplication.
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   A Flink Table SQL job queries a Kafka source table and partitions the data into two Hudi tables according to two different time fields. When both inserts run in a single Flink job, the data of the first table (rt table, ro table) is correct, but the data of the second table (rt table, ro table) is duplicated within the same partition.
   If the work is split into two Flink jobs, the data of both tables (rt table, ro table) is correct.
   
   
   querySql:
   
   CREATE TABLE kafka_source(
     id      VARCHAR(20),
     date1   STRING,
     date2   STRING,
     ts      TIMESTAMP(3) METADATA FROM 'timestamp'
   )
   PARTITIONED BY (date1)
   WITH (
        'connector' = 'kafka'
       ,'topic-pattern' = 'xx[a-z]'
       ,'scan.startup.mode' = 'earliest-offset'
       ,'value.format' = 'json'
       ,'json.ignore-parse-errors' = 'true'
       ,'properties.fetch.message.max.bytes' = '10485760'
       ,'properties.socket.receive.buffer.bytes' = '1048576'
       ,'properties.request.timeout.ms' = '60000'
       ,'properties.group.id' = 'xx'
       ,'properties.bootstrap.servers'='xx'
   );
   
   
   sinkSql1:
   
   CREATE TABLE t1(
     id      VARCHAR(20) PRIMARY KEY NOT ENFORCED,
     date1   STRING,
     date2   STRING,
     ts      TIMESTAMP(3)
   )
   PARTITIONED BY (date1)
   WITH (
         'hoodie.table.type' = 'COPY_ON_WRITE'
        ,'hoodie.datasource.write.recordkey.field' = 'id'
        ,'hoodie.datasource.write.precombine.field' = 'ts'
        ,'hoodie.datasource.write.partitionpath.field' = 'date1'
        ,'hoodie.parquet.compression.codec'= 'snappy'
        ,'connector' = 'hudi'
        ,'path' = '$hdfsPath'
        ,'hive_sync.partition_fields' = 'date1'
        ,'hive_sync.metastore.uris' = '$thrift://xxx'
        ,'hive_sync.db' = '$hiveDatabaseName'
        ,'hive_sync.table' = '$hiveTableName'
        ,'hive_sync.enable' = 'true'
        ,'hive_sync.use_jdbc' = 'false'
        ,'hive_sync.mode' = 'hms'
        ,'write.tasks'='20'
        ,'compaction.async.enabled'='true'
        ,'compaction.trigger.strategy'='num_commits'
        ,'compaction.delta_commits'='2'
        ,'write.precombine.field' = 'ts'
        ,'hive_sync.partition_extractor_class' = 'org.apache.hudi.hive.MultiPartKeysValueExtractor'
   );
   
   
   
   sinkSql2:
   
   CREATE TABLE t2(
     id      VARCHAR(20) PRIMARY KEY NOT ENFORCED,
     date1   STRING,
     date2   STRING,
     ts      TIMESTAMP(3)
   )
   PARTITIONED BY (date2)
   WITH (
         'hoodie.table.type' = 'COPY_ON_WRITE'
        ,'hoodie.datasource.write.recordkey.field' = 'id'
        ,'hoodie.datasource.write.precombine.field' = 'ts'
        ,'hoodie.datasource.write.partitionpath.field' = 'date2'
        ,'hoodie.parquet.compression.codec'= 'snappy'
        ,'connector' = 'hudi'
        ,'path' = '$hdfsPath'
        ,'hive_sync.partition_fields' = 'date2'
        ,'hive_sync.metastore.uris' = '$thrift://xxx'
        ,'hive_sync.db' = '$hiveDatabaseName'
        ,'hive_sync.table' = '$hiveTableName'
        ,'hive_sync.enable' = 'true'
        ,'hive_sync.use_jdbc' = 'false'
        ,'hive_sync.mode' = 'hms'
        ,'write.tasks'='20'
        ,'compaction.async.enabled'='true'
        ,'compaction.trigger.strategy'='num_commits'
        ,'compaction.delta_commits'='2'
        ,'write.precombine.field' = 'ts'
        ,'hive_sync.partition_extractor_class' = 'org.apache.hudi.hive.MultiPartKeysValueExtractor'
   
   );
   
   
    insertSql1:
   
     'insert into t1 select * from kafka_source';
   
    insertSql2:
   
     'insert into t2 select * from kafka_source';
   
   
     CASE1:
   
     one Flink job:
   
     TableEnv.executeSql(querySql)
     TableEnv.executeSql(sinkSql1)
     TableEnv.executeSql(sinkSql2)
   
     TableEnv.createStatementSet().addInsertSql(insertSql1).addInsertSql(insertSql2).execute()
   
     CASE2:
   
     two Flink jobs (each with a different Kafka consumer group id):
   
     Flink Job1
   
     TableEnv.executeSql(querySql)
     TableEnv.executeSql(sinkSql1)
     TableEnv.executeSql(insertSql1)
   
     Flink Job2
     TableEnv.executeSql(querySql)
     TableEnv.executeSql(sinkSql2)
     TableEnv.executeSql(insertSql2)
   
      As shown above, when querying through Hive:
      
      CASE1: table t2 has duplicate data within the same partition; table t1 is correct.
      
      CASE2: the data of both table t1 and table t2 is correct.
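   
   For reference, the CASE1 submission above is equivalent to a statement set in the Flink SQL client. A minimal sketch (assuming Flink 1.14+ `EXECUTE STATEMENT SET` syntax; on Flink 1.13 the equivalent is `BEGIN STATEMENT SET; ... END;`):
   
   ```sql
   -- Run the three CREATE TABLE statements (kafka_source, t1, t2) first,
   -- then submit both inserts together as one Flink job:
   EXECUTE STATEMENT SET
   BEGIN
     INSERT INTO t1 SELECT * FROM kafka_source;
     INSERT INTO t2 SELECT * FROM kafka_source;
   END;
   ```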
   
   **Expected behavior**
   
   Each Hudi table should contain at most one row per record key within a partition, regardless of whether the two inserts run in one Flink job or in two separate jobs.
   
   **Environment Description**
   
   * Hudi version : 0.11.1 and 0.11.0
   
   * Spark version :
   
   * Hive version : 2.1.1 (CDH 6)
   
   * Hadoop version : 3.0.0 (CDH 6)
   
   * Storage (HDFS/S3/GCS..) : HDFS
   
   * Running on Docker? (yes/no) : no
   
   
   **Additional context**
   
   None.
   
   **Stacktrace**
   
   No stacktrace; the job runs without errors but writes duplicate rows.
   
   



[GitHub] [hudi] nsivabalan commented on issue #6074: [SUPPORT] Flink Table Sql write hudi , data duplication in same partition

Posted by GitBox <gi...@apache.org>.
nsivabalan commented on issue #6074:
URL: https://github.com/apache/hudi/issues/6074#issuecomment-1229348554

   @yuzhaojing: any follow-ups here?



[GitHub] [hudi] ToBeFinder commented on issue #6074: [SUPPORT] Flink Table Sql write hudi , data duplication in same partition

Posted by GitBox <gi...@apache.org>.
ToBeFinder commented on issue #6074:
URL: https://github.com/apache/hudi/issues/6074#issuecomment-1184410251

   I found that the problem occurs when the partition date is '1990-01-01'.
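   
   To make the symptom concrete, the duplicates can be confirmed from Hive with a query along these lines (a sketch against the synced table (the ro or rt variant, written here as `t2`), using `id` as the record key and `date2` as the partition field from the DDL above):
   
   ```sql
   -- Any returned row means more than one copy of the same record key
   -- exists inside a single partition, i.e. the reported duplication.
   SELECT date2, id, COUNT(*) AS cnt
   FROM t2
   WHERE date2 = '1990-01-01'
   GROUP BY date2, id
   HAVING COUNT(*) > 1;
   ```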



[GitHub] [hudi] yuzhaojing commented on issue #6074: [SUPPORT] Flink Table Sql write hudi , data duplication in same partition

Posted by GitBox <gi...@apache.org>.
yuzhaojing commented on issue #6074:
URL: https://github.com/apache/hudi/issues/6074#issuecomment-1189196958

   Do you mean that data duplication only occurs if there is a partition date of '1900-01-01'?



[GitHub] [hudi] ToBeFinder commented on issue #6074: [SUPPORT] Flink Table Sql write hudi , data duplication in same partition

Posted by GitBox <gi...@apache.org>.
ToBeFinder commented on issue #6074:
URL: https://github.com/apache/hudi/issues/6074#issuecomment-1190163733

   > Do you mean that data duplication only occurs if there is a partition date of '1900-01-01'?
   
   Yes, that is what I observed in testing.

