Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2022/06/17 10:16:09 UTC

[GitHub] [flink-table-store] JingsongLi commented on a diff in pull request #157: [FLINK-28035] Support rescale overwrite

JingsongLi commented on code in PR #157:
URL: https://github.com/apache/flink-table-store/pull/157#discussion_r899968686


##########
flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/sink/FlinkSinkBuilder.java:
##########
@@ -90,11 +104,13 @@ private Map<String, String> getCompactPartSpec() {
         return JsonSerdeUtil.fromJson(json, Map.class);
     }
 
-    public DataStreamSink<?> build() {
-        int numBucket = conf.get(FileStoreOptions.BUCKET);
+    private boolean rescaleBucket() {
+        return !isContinuous && conf.get(OVERWRITE_RESCALE_BUCKET) && overwritePartition != null;
+    }
 
+    public DataStreamSink<?> build() {
         BucketStreamPartitioner partitioner =
-                new BucketStreamPartitioner(numBucket, table.schema());
+                new BucketStreamPartitioner(getLatestNumOfBucket(), table.schema());

Review Comment:
   It seems that `numBucket` is already OK?
   The check for `numBucket` conflicts should be placed where the runtime creates the Writer for a bucket, because the insert may be a dynamic partition write, and the partition to be checked cannot be identified at compile time.
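   To illustrate the point above, a minimal hypothetical sketch (none of the class or method names below come from flink-table-store) of placing the bucket-number conflict check at writer-creation time, when the concrete partition is finally known:

   // Hypothetical sketch, not the flink-table-store API: the conflict check
   // runs when the runtime creates a writer for a concrete partition, so it
   // also covers partitions that are only discovered dynamically at runtime.
   public class BucketConflictCheck {

       /** Bucket number the job was configured with at compile time. */
       private final int configuredNumBucket;

       public BucketConflictCheck(int configuredNumBucket) {
           this.configuredNumBucket = configuredNumBucket;
       }

       /** Called right before a writer is created for a partition/bucket. */
       public void checkBeforeCreatingWriter(String partition, int numBucketOfPartition) {
           if (numBucketOfPartition != configuredNumBucket) {
               throw new IllegalStateException(
                       String.format(
                               "Partition %s was written with %d buckets, but the job is "
                                       + "configured with %d; rescale the partition first.",
                               partition, numBucketOfPartition, configuredNumBucket));
           }
       }
   }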



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org