Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/02/21 08:44:38 UTC

[GitHub] [iceberg] yittg commented on issue #2575: Flakey flink unit tests TestFlinkTableSink#testHashDistributeMode

yittg commented on issue #2575:
URL: https://github.com/apache/iceberg/issues/2575#issuecomment-1046609038


   I have encountered this failing test case twice, and I think I have reproduced it locally.
   
   Before diving into it more deeply, I think it is better to share the log here;
   at first glance the failure looks different from [the conclusion](https://github.com/apache/iceberg/pull/4117#issuecomment-1042701849).
   
   The details follow:
   
   ```
   [Source: Values(tuples=[[{ 1, _UTF-16LE'aaa' }, { 1, _UTF-16LE'bbb' }, { 1, _UTF-16LE'ccc' }, { 2, _UTF-16LE'aaa' }, { 2, _UTF-16LE'bbb' }, { 2, _UTF-16LE'ccc' }, { 3, _UTF-16LE'aaa' }, { 3, _UTF-16LE'bbb' }, { 3, _UTF-16LE'ccc' }]]) (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - Source: Values(tuples=[[{ 1, _UTF-16LE'aaa' }, { 1, _UTF-16LE'bbb' }, { 1, _UTF-16LE'ccc' }, { 2, _UTF-16LE'aaa' }, { 2, _UTF-16LE'bbb' }, { 2, _UTF-16LE'ccc' }, { 3, _UTF-16LE'aaa' }, { 3, _UTF-16LE'bbb' }, { 3, _UTF-16LE'ccc' }]]) (1/1)#0 (c3d03556514594e8aff0175bbd12d35e) switched from INITIALIZING to RUNNING.
   [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Values(tuples=[[{ 1, _UTF-16LE'aaa' }, { 1, _UTF-16LE'bbb' }, { 1, _UTF-16LE'ccc' }, { 2, _UTF-16LE'aaa' }, { 2, _UTF-16LE'bbb' }, { 2, _UTF-16LE'ccc' }, { 3, _UTF-16LE'aaa' }, { 3, _UTF-16LE'bbb' }, { 3, _UTF-16LE'ccc' }]]) (1/1) (c3d03556514594e8aff0175bbd12d35e) switched from INITIALIZING to RUNNING.
   [IcebergStreamWriter (1/1)#0] INFO org.apache.flink.runtime.state.heap.HeapKeyedStateBackend - Initializing heap keyed state backend with stream factory.
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.iceberg.BaseMetastoreTableOperations - Refreshing table metadata from new version: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/metadata/00000-e773c6cb-c67a-422d-9c56-8c68d3d2d64b.metadata.json
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.iceberg.BaseMetastoreCatalog - Table loaded by catalog: testhive.db.test_hash_distribution_mode
   [IcebergStreamWriter (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - IcebergStreamWriter (1/1)#0 (e863df4f3c93498a6d45488a9898774b) switched from INITIALIZING to RUNNING.
   [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IcebergStreamWriter (1/1) (e863df4f3c93498a6d45488a9898774b) switched from INITIALIZING to RUNNING.
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0 (4154094a26a2ffc60623d0ec10172143) switched from INITIALIZING to RUNNING.
   [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1) (4154094a26a2ffc60623d0ec10172143) switched from INITIALIZING to RUNNING.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 1 (type=CHECKPOINT) @ 1645431273362 for job 32b28d9a2d686b0cb1ed6efb940781b5.
   [Source: Values(tuples=[[{ 1, _UTF-16LE'aaa' }, { 1, _UTF-16LE'bbb' }, { 1, _UTF-16LE'ccc' }, { 2, _UTF-16LE'aaa' }, { 2, _UTF-16LE'bbb' }, { 2, _UTF-16LE'ccc' }, { 3, _UTF-16LE'aaa' }, { 3, _UTF-16LE'bbb' }, { 3, _UTF-16LE'ccc' }]]) (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - Source: Values(tuples=[[{ 1, _UTF-16LE'aaa' }, { 1, _UTF-16LE'bbb' }, { 1, _UTF-16LE'ccc' }, { 2, _UTF-16LE'aaa' }, { 2, _UTF-16LE'bbb' }, { 2, _UTF-16LE'ccc' }, { 3, _UTF-16LE'aaa' }, { 3, _UTF-16LE'bbb' }, { 3, _UTF-16LE'ccc' }]]) (1/1)#0 (c3d03556514594e8aff0175bbd12d35e) switched from RUNNING to FINISHED.
   [Source: Values(tuples=[[{ 1, _UTF-16LE'aaa' }, { 1, _UTF-16LE'bbb' }, { 1, _UTF-16LE'ccc' }, { 2, _UTF-16LE'aaa' }, { 2, _UTF-16LE'bbb' }, { 2, _UTF-16LE'ccc' }, { 3, _UTF-16LE'aaa' }, { 3, _UTF-16LE'bbb' }, { 3, _UTF-16LE'ccc' }]]) (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Source: Values(tuples=[[{ 1, _UTF-16LE'aaa' }, { 1, _UTF-16LE'bbb' }, { 1, _UTF-16LE'ccc' }, { 2, _UTF-16LE'aaa' }, { 2, _UTF-16LE'bbb' }, { 2, _UTF-16LE'ccc' }, { 3, _UTF-16LE'aaa' }, { 3, _UTF-16LE'bbb' }, { 3, _UTF-16LE'ccc' }]]) (1/1)#0 (c3d03556514594e8aff0175bbd12d35e).
   [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FINISHED to JobManager for task Source: Values(tuples=[[{ 1, _UTF-16LE'aaa' }, { 1, _UTF-16LE'bbb' }, { 1, _UTF-16LE'ccc' }, { 2, _UTF-16LE'aaa' }, { 2, _UTF-16LE'bbb' }, { 2, _UTF-16LE'ccc' }, { 3, _UTF-16LE'aaa' }, { 3, _UTF-16LE'bbb' }, { 3, _UTF-16LE'ccc' }]]) (1/1)#0 c3d03556514594e8aff0175bbd12d35e.
   [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Values(tuples=[[{ 1, _UTF-16LE'aaa' }, { 1, _UTF-16LE'bbb' }, { 1, _UTF-16LE'ccc' }, { 2, _UTF-16LE'aaa' }, { 2, _UTF-16LE'bbb' }, { 2, _UTF-16LE'ccc' }, { 3, _UTF-16LE'aaa' }, { 3, _UTF-16LE'bbb' }, { 3, _UTF-16LE'ccc' }]]) (1/1) (c3d03556514594e8aff0175bbd12d35e) switched from RUNNING to FINISHED.
   [IcebergStreamWriter (1/1)#0] INFO org.apache.orc.impl.HadoopShimsPre2_7 - Can't get KeyProvider for ORC encryption from hadoop.security.key.provider.path.
   [IcebergStreamWriter (1/1)#0] INFO org.apache.orc.impl.PhysicalFsWriter - ORC writer created for path: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/data/data=aaa/00000-0-0e650bad-9e4f-4953-b2f6-3a99868aa38a-00001.orc with stripeSize: 67108864 blockSize: 268435456 compression: Compress: ZLIB buffer: 262144
   [IcebergStreamWriter (1/1)#0] INFO org.apache.orc.impl.WriterImpl - ORC writer created for path: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/data/data=aaa/00000-0-0e650bad-9e4f-4953-b2f6-3a99868aa38a-00001.orc with stripeSize: 67108864 options: Compress: ZLIB buffer: 262144
   [IcebergStreamWriter (1/1)#0] INFO org.apache.orc.impl.PhysicalFsWriter - ORC writer created for path: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/data/data=bbb/00000-0-0e650bad-9e4f-4953-b2f6-3a99868aa38a-00002.orc with stripeSize: 67108864 blockSize: 268435456 compression: Compress: ZLIB buffer: 262144
   [IcebergStreamWriter (1/1)#0] INFO org.apache.orc.impl.WriterImpl - ORC writer created for path: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/data/data=bbb/00000-0-0e650bad-9e4f-4953-b2f6-3a99868aa38a-00002.orc with stripeSize: 67108864 options: Compress: ZLIB buffer: 262144
   [IcebergStreamWriter (1/1)#0] INFO org.apache.orc.impl.PhysicalFsWriter - ORC writer created for path: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/data/data=ccc/00000-0-0e650bad-9e4f-4953-b2f6-3a99868aa38a-00003.orc with stripeSize: 67108864 blockSize: 268435456 compression: Compress: ZLIB buffer: 262144
   [IcebergStreamWriter (1/1)#0] INFO org.apache.orc.impl.WriterImpl - ORC writer created for path: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/data/data=ccc/00000-0-0e650bad-9e4f-4953-b2f6-3a99868aa38a-00003.orc with stripeSize: 67108864 options: Compress: ZLIB buffer: 262144
   [IcebergStreamWriter (1/1)#0] INFO org.apache.orc.impl.PhysicalFsWriter - ORC writer created for path: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/data/data=aaa/00000-0-0e650bad-9e4f-4953-b2f6-3a99868aa38a-00004.orc with stripeSize: 67108864 blockSize: 268435456 compression: Compress: ZLIB buffer: 262144
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.iceberg.flink.sink.IcebergFilesCommitter - Start to flush snapshot state to state backend, table: testhive.db.test_hash_distribution_mode, checkpointId: 1
   [IcebergStreamWriter (1/1)#0] INFO org.apache.orc.impl.WriterImpl - ORC writer created for path: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/data/data=aaa/00000-0-0e650bad-9e4f-4953-b2f6-3a99868aa38a-00004.orc with stripeSize: 67108864 options: Compress: ZLIB buffer: 262144
   [IcebergStreamWriter (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - IcebergStreamWriter (1/1)#0 (e863df4f3c93498a6d45488a9898774b) switched from RUNNING to FINISHED.
   [IcebergStreamWriter (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for IcebergStreamWriter (1/1)#0 (e863df4f3c93498a6d45488a9898774b).
   [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FINISHED to JobManager for task IcebergStreamWriter (1/1)#0 e863df4f3c93498a6d45488a9898774b.
   [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IcebergStreamWriter (1/1) (e863df4f3c93498a6d45488a9898774b) switched from RUNNING to FINISHED.
   [jobmanager-io-thread-5] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 1 for job 32b28d9a2d686b0cb1ed6efb940781b5 (3397 bytes, checkpointDuration=908 ms, finalizationTime=4 ms).
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.iceberg.flink.sink.IcebergFilesCommitter - Committing append with 4 data files and 0 delete files to table testhive.db.test_hash_distribution_mode
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore - 1: source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit - ugi=tangyi	ip=127.0.0.1	cmd=source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode	
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore - 1: source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit - ugi=tangyi	ip=127.0.0.1	cmd=source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode	
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore - 1: source:127.0.0.1 alter_table: db=db tbl=test_hash_distribution_mode newtbl=test_hash_distribution_mode
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit - ugi=tangyi	ip=127.0.0.1	cmd=source:127.0.0.1 alter_table: db=db tbl=test_hash_distribution_mode newtbl=test_hash_distribution_mode	
   [Checkpoint Timer] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Failed to trigger checkpoint for job 32b28d9a2d686b0cb1ed6efb940781b5 because Some tasks of the job have already finished and checkpointing with finished tasks is not enabled. Failure reason: Not all required tasks are currently running.
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.iceberg.BaseMetastoreTableOperations - Successfully committed to table testhive.db.test_hash_distribution_mode in 105 ms
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.iceberg.SnapshotProducer - Committed snapshot 1987697929173874507 (MergeAppend)
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore - 1: source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit - ugi=tangyi	ip=127.0.0.1	cmd=source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode	
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.iceberg.BaseMetastoreTableOperations - Refreshing table metadata from new version: file:/var/folders/t6/8l83wbcd52g29cp78v66n7rc0000gn/T/junit9733325594032949849/db.db/test_hash_distribution_mode/metadata/00001-6d7501d3-28cb-4ec2-bdb8-51c3b948589e.metadata.json
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore - 1: source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit - ugi=tangyi	ip=127.0.0.1	cmd=source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode	
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.iceberg.flink.sink.IcebergFilesCommitter - Committed in 225 ms
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0 (4154094a26a2ffc60623d0ec10172143) switched from RUNNING to FINISHED.
   [IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0] INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0 (4154094a26a2ffc60623d0ec10172143).
   [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FINISHED to JobManager for task IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1)#0 4154094a26a2ffc60623d0ec10172143.
   [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - IcebergFilesCommitter -> Sink: IcebergSink testhive.db.test_hash_distribution_mode (1/1) (4154094a26a2ffc60623d0ec10172143) switched from RUNNING to FINISHED.
   [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager - Clearing resource requirements of job 32b28d9a2d686b0cb1ed6efb940781b5
   [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job insert-into_testhive.db.test_hash_distribution_mode (32b28d9a2d686b0cb1ed6efb940781b5) switched from state RUNNING to FINISHED.
   [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Stopping checkpoint coordinator for job 32b28d9a2d686b0cb1ed6efb940781b5.
   [flink-akka.actor.default-dispatcher-7] INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Job 32b28d9a2d686b0cb1ed6efb940781b5 reached terminal state FINISHED.
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore - 1: source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode
   [pool-11-thread-1] INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit - ugi=tangyi	ip=127.0.0.1	cmd=source:127.0.0.1 get_table : db=db tbl=test_hash_distribution_mode	
   [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.jobmaster.JobMaster - Stopping the JobMaster for job 'insert-into_testhive.db.test_hash_distribution_mode' (32b28d9a2d686b0cb1ed6efb940781b5).
   [flink-akka.actor.default-dispatcher-8] INFO org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore - Shutting down
   ```
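   One detail that stands out in the log above is the run of "checkpointing with finished tasks is not enabled" messages: once the Values source finishes, no further checkpoint can be triggered, so any buffered writer output is only flushed at end-of-input rather than at a second checkpoint. If that is the cause here (an assumption on my side, not confirmed by the log alone), Flink 1.14 has a switch for checkpointing after tasks finish, off by default:
   
   ```yaml
   # flink-conf.yaml (or the equivalent Configuration entry) -- Flink 1.14+,
   # allows the checkpoint coordinator to keep triggering checkpoints even
   # after some tasks have reached FINISHED.
   execution.checkpointing.checkpoints-after-tasks-finish.enabled: true
   ```
   
   Whether the test should enable this, or instead tolerate the extra file, is a separate question.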
   
   And the test failed with the known assertion:
   ```
   java.lang.AssertionError: There should be 1 data file in partition 'aaa' expected:<1> but was:<2>
   ```
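   To make the failure mode concrete, here is a deliberately simplified model (my own sketch, not the actual Iceberg writer code): the stream writer rolls a new data file per partition at each checkpoint boundary, so if rows for the same partition value land on both sides of a checkpoint barrier, that partition ends up with one file per checkpoint instead of one file total.
   
   ```python
   # Hypothetical model of per-partition file counts. Each element of
   # `batches` is the list of (id, data) rows between two checkpoint
   # barriers; a partition gets one new file in every batch it appears in.
   from collections import defaultdict
   
   def files_per_partition(batches):
       counts = defaultdict(int)
       for batch in batches:
           seen = set()  # partitions already given a file in this batch
           for _id, data in batch:
               if data not in seen:
                   seen.add(data)
                   counts[data] += 1
       return dict(counts)
   
   # All rows arrive within one checkpoint: one file per partition.
   one = files_per_partition([[(1, "aaa"), (2, "aaa"), (1, "bbb")]])
   # Rows for 'aaa' straddle a checkpoint barrier: two files for 'aaa',
   # which is exactly the 'expected:<1> but was:<2>' shape of the failure.
   two = files_per_partition([[(1, "aaa")], [(2, "aaa")]])
   ```
   
   Under this model the flakiness would come down to timing: whether checkpoint 1 fires before or after all nine rows have been written.
   
   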


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org
