Posted to commits@hudi.apache.org by "Ethan Guo (Jira)" <ji...@apache.org> on 2022/06/08 05:52:00 UTC
[jira] [Updated] (HUDI-3660) config hoodie.logfile.max.size not work
[ https://issues.apache.org/jira/browse/HUDI-3660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ethan Guo updated HUDI-3660:
----------------------------
Fix Version/s: 0.12.0
(was: 0.11.1)
> config hoodie.logfile.max.size not work
> ---------------------------------------
>
> Key: HUDI-3660
> URL: https://issues.apache.org/jira/browse/HUDI-3660
> Project: Apache Hudi
> Issue Type: Bug
> Components: configs
> Reporter: YuAngZhang
> Priority: Blocker
> Fix For: 0.12.0
>
> Attachments: log.jpg
>
>
> The log file does not roll over when its size exceeds 10 MB. It seems
> that the method HoodieLogFormatWriter.rolloverIfNeeded does not work: the
> file system is wrapped in HoodieWrapperFileSystem, and the pos of the
> FSDataInputStream is always set to 0.
> {code:java}
> SET 'execution.checkpointing.interval' = '30min';
> CREATE TABLE sink (
>   role_id VARCHAR(20),
>   log_id VARCHAR(10),
>   origin_json string,
>   ts TIMESTAMP(3),
>   `ds` date
> )
> PARTITIONED BY (`ds`)
> WITH (
>   'connector' = 'hudi',
>   'path' = 'hdfs:///user/dl/hudi_nsh/',
>   'table.type' = 'MERGE_ON_READ',
>   'compaction.trigger.strategy' = 'num_commits',
>   'compaction.delta_commits' = '5',
>   'hoodie.cleaner.commits.retained' = '1',
>   'hoodie.datasource.write.recordkey.field' = 'role_id,log_id,ts',
>   'write.batch.size' = '10',
>   'hoodie.logfile.max.size' = '10',
>   'hive_sync.enable' = 'true',
>   'hive_sync.mode' = 'hms',
>   'hive_sync.metastore.uris' = 'thrift://fuxi-luoge-148:9083',
>   'hive_sync.jdbc_url' = 'jdbc:hive2://',
>   'hive_sync.table' = 'sink5',
>   'hive_sync.db' = 'test',
>   'hive_sync.username' = '',
>   'hive_sync.password' = ''
> ); {code}
>
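The failure mode described above can be illustrated with a small, self-contained Java sketch. This is a hypothetical illustration (the class, interface, and method names below are invented, not Hudi's actual implementation): when a wrapping file system always reports a stream position of 0, a size-based rollover check can never fire, no matter what hoodie.logfile.max.size is set to.

```java
// Hypothetical sketch, not Hudi's actual code: a size-based log rollover
// check in the spirit of HoodieLogFormatWriter.rolloverIfNeeded.
public class RolloverSketch {

    // Stand-in for a stream that reports how many bytes have been written.
    public interface PositionedStream {
        long getPos();
    }

    // Rollover fires only when the reported position exceeds the limit.
    // If the wrapping file system always reports pos = 0 (the bug described
    // above), this condition is never true and the log file grows unbounded.
    public static boolean shouldRollover(PositionedStream out, long maxLogFileSize) {
        return out.getPos() > maxLogFileSize;
    }

    public static void main(String[] args) {
        long maxSize = 10L * 1024 * 1024;                    // 10 MB limit
        PositionedStream healthy = () -> 15L * 1024 * 1024;  // 15 MB written
        PositionedStream broken = () -> 0L;                  // wrapper stuck at 0

        System.out.println(shouldRollover(healthy, maxSize)); // true
        System.out.println(shouldRollover(broken, maxSize));  // false
    }
}
```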
--
This message was sent by Atlassian Jira
(v8.20.7#820007)