Posted to gitbox@hive.apache.org by GitBox <gi...@apache.org> on 2021/12/03 14:19:37 UTC

[GitHub] [hive] szlta opened a new pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

szlta opened a new pull request #2842:
URL: https://github.com/apache/hive/pull/2842


   Currently Hive relies on PartitionedFanoutWriter to write records to Iceberg tables. This has a major disadvantage when writing to a large number of partitions, as it keeps a file handle open for each partition.
   
   For some file systems such as S3, keeping byte buffers for each of these handles can waste a significant amount of memory and may result in OOM scenarios.
   
   ClusteredWriter writes only one file at a time and does not keep other file handles open, but it expects the input data to arrive sorted by the partition keys.
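   To make the contrast concrete, below is a minimal, hypothetical sketch (not the actual Hive/Iceberg implementation; the Record/FileHandle names are made up for illustration): a fanout-style writer keeps one open handle per partition, while a clustered-style writer relies on input sorted by partition and keeps at most one handle open.

```java
// Illustrative sketch only -- assumes made-up Record/FileHandle types, not Iceberg APIs.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class WriterSketch {
  record Record(String partition, int value) {}

  static class FileHandle {
    final List<Record> buffer = new ArrayList<>(); // stands in for the per-file write buffer
    void write(Record r) { buffer.add(r); }
    void close() { /* flush the buffer to storage */ }
  }

  // Fanout style: one open handle (and buffer) per partition, so memory grows with partition count.
  static void fanoutWrite(List<Record> records) {
    Map<String, FileHandle> open = new HashMap<>();
    for (Record r : records) {
      open.computeIfAbsent(r.partition(), p -> new FileHandle()).write(r);
    }
    open.values().forEach(FileHandle::close);
  }

  // Clustered style: input must be sorted by partition; at most one handle is open at any time.
  static void clusteredWrite(List<Record> sortedRecords) {
    String currentPartition = null;
    FileHandle current = null;
    for (Record r : sortedRecords) {
      if (!r.partition().equals(currentPartition)) {
        if (current != null) {
          current.close(); // the previous partition's file is finished
        }
        currentPartition = r.partition();
        current = new FileHandle(); // only the new partition's file stays open
      }
      current.write(r);
    }
    if (current != null) {
      current.close();
    }
  }
}
```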


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscribe@hive.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: gitbox-unsubscribe@hive.apache.org
For additional commands, e-mail: gitbox-help@hive.apache.org


[GitHub] [hive] marton-bod commented on a change in pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
marton-bod commented on a change in pull request #2842:
URL: https://github.com/apache/hive/pull/2842#discussion_r798738861



##########
File path: iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out
##########
@@ -0,0 +1,220 @@
+PREHOOK: query: drop table if exists tbl_src
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table if exists tbl_src
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: drop table if exists tbl_target_identity
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table if exists tbl_target_identity
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: drop table if exists tbl_target_bucket
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table if exists tbl_target_bucket
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: create external table tbl_src (a int, b string) stored by iceberg stored as orc
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tbl_src
+POSTHOOK: query: create external table tbl_src (a int, b string) stored by iceberg stored as orc
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tbl_src
+PREHOOK: query: insert into tbl_src values (1, 'EUR'), (2, 'EUR'), (3, 'USD'), (4, 'EUR'), (5, 'HUF'), (6, 'USD'), (7, 'USD'), (8, 'PLN'), (9, 'PLN'), (10, 'CZK')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@tbl_src
+POSTHOOK: query: insert into tbl_src values (1, 'EUR'), (2, 'EUR'), (3, 'USD'), (4, 'EUR'), (5, 'HUF'), (6, 'USD'), (7, 'USD'), (8, 'PLN'), (9, 'PLN'), (10, 'CZK')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@tbl_src
+PREHOOK: query: insert into tbl_src values (10, 'EUR'), (20, 'EUR'), (30, 'USD'), (40, 'EUR'), (50, 'HUF'), (60, 'USD'), (70, 'USD'), (80, 'PLN'), (90, 'PLN'), (100, 'CZK')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@tbl_src
+POSTHOOK: query: insert into tbl_src values (10, 'EUR'), (20, 'EUR'), (30, 'USD'), (40, 'EUR'), (50, 'HUF'), (60, 'USD'), (70, 'USD'), (80, 'PLN'), (90, 'PLN'), (100, 'CZK')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@tbl_src
+PREHOOK: query: create external table tbl_target_identity (a int) partitioned by (ccy string) stored by iceberg stored as orc
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tbl_target_identity
+POSTHOOK: query: create external table tbl_target_identity (a int) partitioned by (ccy string) stored by iceberg stored as orc
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tbl_target_identity
+PREHOOK: query: explain insert overwrite table tbl_target_identity select * from tbl_src
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl_src
+PREHOOK: Output: default@tbl_target_identity
+POSTHOOK: query: explain insert overwrite table tbl_target_identity select * from tbl_src
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl_src
+POSTHOOK: Output: default@tbl_target_identity
+Plan optimized by CBO.
+
+Vertex dependency in root stage
+Reducer 2 <- Map 1 (SIMPLE_EDGE)
+Reducer 3 <- Map 1 (CUSTOM_SIMPLE_EDGE)
+
+Stage-3
+  Stats Work{}
+    Stage-0
+      Move Operator
+        table:{"name:":"default.tbl_target_identity"}
+        Stage-2
+          Dependency Collection{}
+            Stage-1
+              Reducer 2 vectorized
+              File Output Operator [FS_18]
+                table:{"name:":"default.tbl_target_identity"}
+                Select Operator [SEL_17]
+                  Output:["_col0","_col1"]
+                <-Map 1 [SIMPLE_EDGE] vectorized
+                  PARTITION_ONLY_SHUFFLE [RS_13]
+                    PartitionCols:_col1
+                    Select Operator [SEL_12] (rows=20 width=91)
+                      Output:["_col0","_col1"]
+                      TableScan [TS_0] (rows=20 width=91)
+                        default@tbl_src,tbl_src,Tbl:COMPLETE,Col:COMPLETE,Output:["a","b"]
+              Reducer 3 vectorized
+              File Output Operator [FS_21]
+                Select Operator [SEL_20] (rows=1 width=530)
+                  Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11"]
+                  Group By Operator [GBY_19] (rows=1 width=332)
+                    Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8"],aggregations:["min(VALUE._col0)","max(VALUE._col1)","count(VALUE._col2)","count(VALUE._col3)","compute_bit_vector_hll(VALUE._col4)","max(VALUE._col5)","avg(VALUE._col6)","count(VALUE._col7)","compute_bit_vector_hll(VALUE._col8)"]
+                  <-Map 1 [CUSTOM_SIMPLE_EDGE] vectorized
+                    PARTITION_ONLY_SHUFFLE [RS_16]
+                      Group By Operator [GBY_15] (rows=1 width=400)
+                        Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8"],aggregations:["min(a)","max(a)","count(1)","count(a)","compute_bit_vector_hll(a)","max(length(ccy))","avg(COALESCE(length(ccy),0))","count(ccy)","compute_bit_vector_hll(ccy)"]
+                        Select Operator [SEL_14] (rows=20 width=91)
+                          Output:["a","ccy"]
+                           Please refer to the previous Select Operator [SEL_12]
+
+PREHOOK: query: insert overwrite table tbl_target_identity select * from tbl_src
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl_src
+PREHOOK: Output: default@tbl_target_identity
+POSTHOOK: query: insert overwrite table tbl_target_identity select * from tbl_src
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl_src
+POSTHOOK: Output: default@tbl_target_identity
+PREHOOK: query: select * from tbl_target_identity
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl_target_identity
+PREHOOK: Output: hdfs://### HDFS PATH ###
+POSTHOOK: query: select * from tbl_target_identity
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl_target_identity
+POSTHOOK: Output: hdfs://### HDFS PATH ###
+100	CZK
+10	CZK
+10	EUR
+20	EUR
+40	EUR
+2	EUR
+4	EUR
+1	EUR
+50	HUF
+5	HUF
+90	PLN
+8	PLN
+9	PLN
+80	PLN
+30	USD
+3	USD
+6	USD
+7	USD
+60	USD
+70	USD
+PREHOOK: query: create external table tbl_target_bucket (a int, ccy string) partitioned by spec (bucket (2, ccy)) stored by iceberg stored as orc
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tbl_target_bucket
+POSTHOOK: query: create external table tbl_target_bucket (a int, ccy string) partitioned by spec (bucket (2, ccy)) stored by iceberg stored as orc
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tbl_target_bucket
+PREHOOK: query: explain insert into table tbl_target_bucket select * from tbl_src
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl_src
+PREHOOK: Output: default@tbl_target_bucket
+POSTHOOK: query: explain insert into table tbl_target_bucket select * from tbl_src
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl_src
+POSTHOOK: Output: default@tbl_target_bucket
+Plan optimized by CBO.
+
+Vertex dependency in root stage
+Reducer 2 <- Map 1 (SIMPLE_EDGE)
+Reducer 3 <- Map 1 (CUSTOM_SIMPLE_EDGE)
+
+Stage-3

Review comment:
       just out of curiosity: which operator in the explain output is the one related to sorting by the partition values?






[GitHub] [hive] marton-bod commented on a change in pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
marton-bod commented on a change in pull request #2842:
URL: https://github.com/apache/hive/pull/2842#discussion_r798741568



##########
File path: iceberg/patched-iceberg-core/src/main/java/org/apache/iceberg/io/ClusteredWriter.java
##########
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.io;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.Comparator;
+import java.util.Set;
+import org.apache.iceberg.PartitionSpec;
+import org.apache.iceberg.StructLike;
+import org.apache.iceberg.encryption.EncryptedOutputFile;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.types.Comparators;
+import org.apache.iceberg.types.Types.StructType;
+import org.apache.iceberg.util.StructLikeSet;
+
+/**
+ * Copied over from Iceberg 0.13.0 and has a difference of allowing bucket transform partitions to arrive without
+ * any ordering. This is a temporary convenience solution until a permanent one probably using UDFs is done.
+ *
+ * Original description:
+ *
+ * A writer capable of writing to multiple specs and partitions that requires the incoming records
+ * to be clustered by partition spec and by partition within each spec.
+ * <p>
+ * As opposed to {@link FanoutWriter}, this writer keeps at most one file open to reduce
+ * the memory consumption. Prefer using this writer whenever the incoming records can be clustered
+ * by spec/partition.
+ */
+abstract class ClusteredWriter<T, R> implements PartitioningWriter<T, R> {
+
+  private static final Class<?> BUCKET_TRANSFORM_CLAZZ;
+
+  static {
+    try {
+      BUCKET_TRANSFORM_CLAZZ = Class.forName("org.apache.iceberg.transforms.Bucket");
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Couldn't find Bucket transform class", e);
+    }
+  }
+
+  private static final String NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE =
+      "Incoming records violate the writer assumption that records are clustered by spec and " +
+          "by partition within each spec. Either cluster the incoming records or switch to fanout writers.\n" +
+          "Encountered records that belong to already closed files:\n";
+
+  private final Set<Integer> completedSpecIds = Sets.newHashSet();
+
+  private PartitionSpec currentSpec = null;
+  private Comparator<StructLike> partitionComparator = null;
+  private Set<StructLike> completedPartitions = null;
+  private StructLike currentPartition = null;
+  private FileWriter<T, R> currentWriter = null;
+
+  private boolean closed = false;
+
+  protected abstract FileWriter<T, R> newWriter(PartitionSpec spec, StructLike partition);
+
+  protected abstract void addResult(R result);
+
+  protected abstract R aggregatedResult();
+
+  @Override
+  public void write(T row, PartitionSpec spec, StructLike partition) {
+    if (!spec.equals(currentSpec)) {
+      if (currentSpec != null) {
+        closeCurrentWriter();
+        completedSpecIds.add(currentSpec.specId());
+        completedPartitions.clear();
+      }
+
+      if (completedSpecIds.contains(spec.specId())) {
+        String errorCtx = String.format("spec %s", spec);
+        throw new IllegalStateException(NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE + errorCtx);
+      }
+
+      StructType partitionType = spec.partitionType();
+
+      this.currentSpec = spec;
+      this.partitionComparator = Comparators.forType(partitionType);
+      this.completedPartitions = StructLikeSet.create(partitionType);
+      // copy the partition key as the key object may be reused
+      this.currentPartition = StructCopy.copy(partition);
+      this.currentWriter = newWriter(currentSpec, currentPartition);
+
+    } else if (partition != currentPartition && partitionComparator.compare(partition, currentPartition) != 0) {
+      closeCurrentWriter();
+      completedPartitions.add(currentPartition);
+
+      if (completedPartitions.contains(partition) && !hasBucketTransform(currentSpec)) {

Review comment:
       `!hasBucketTransform(currentSpec)`: this means that if the table has a spec like `(a identity, date(b), bucket(16, c))`, then even if an out-of-order record comes in for `a` or `b`, we will still act leniently?






[GitHub] [hive] szlta merged pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
szlta merged pull request #2842:
URL: https://github.com/apache/hive/pull/2842


   




[GitHub] [hive] szlta commented on pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
szlta commented on pull request #2842:
URL: https://github.com/apache/hive/pull/2842#issuecomment-985558067


   For testing only at this time...




[GitHub] [hive] marton-bod commented on a change in pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
marton-bod commented on a change in pull request #2842:
URL: https://github.com/apache/hive/pull/2842#discussion_r798739932



##########
File path: iceberg/patched-iceberg-core/src/main/java/org/apache/iceberg/io/ClusteredWriter.java
##########
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.io;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.Comparator;
+import java.util.Set;
+import org.apache.iceberg.PartitionSpec;
+import org.apache.iceberg.StructLike;
+import org.apache.iceberg.encryption.EncryptedOutputFile;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.types.Comparators;
+import org.apache.iceberg.types.Types.StructType;
+import org.apache.iceberg.util.StructLikeSet;
+
+/**
+ * Copied over from Iceberg 0.13.0 and has a difference of allowing bucket transform partitions to arrive without
+ * any ordering. This is a temporary convenience solution until a permanent one probably using UDFs is done.
+ *
+ * Original description:
+ *
+ * A writer capable of writing to multiple specs and partitions that requires the incoming records
+ * to be clustered by partition spec and by partition within each spec.
+ * <p>
+ * As opposed to {@link FanoutWriter}, this writer keeps at most one file open to reduce
+ * the memory consumption. Prefer using this writer whenever the incoming records can be clustered
+ * by spec/partition.
+ */
+abstract class ClusteredWriter<T, R> implements PartitioningWriter<T, R> {
+
+  private static final Class<?> BUCKET_TRANSFORM_CLAZZ;
+
+  static {
+    try {
+      BUCKET_TRANSFORM_CLAZZ = Class.forName("org.apache.iceberg.transforms.Bucket");
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Couldn't find Bucket transform class", e);
+    }
+  }
+
+  private static final String NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE =
+      "Incoming records violate the writer assumption that records are clustered by spec and " +
+          "by partition within each spec. Either cluster the incoming records or switch to fanout writers.\n" +
+          "Encountered records that belong to already closed files:\n";
+
+  private final Set<Integer> completedSpecIds = Sets.newHashSet();
+
+  private PartitionSpec currentSpec = null;
+  private Comparator<StructLike> partitionComparator = null;
+  private Set<StructLike> completedPartitions = null;
+  private StructLike currentPartition = null;
+  private FileWriter<T, R> currentWriter = null;
+
+  private boolean closed = false;
+
+  protected abstract FileWriter<T, R> newWriter(PartitionSpec spec, StructLike partition);
+
+  protected abstract void addResult(R result);
+
+  protected abstract R aggregatedResult();
+
+  @Override
+  public void write(T row, PartitionSpec spec, StructLike partition) {
+    if (!spec.equals(currentSpec)) {
+      if (currentSpec != null) {
+        closeCurrentWriter();
+        completedSpecIds.add(currentSpec.specId());
+        completedPartitions.clear();
+      }
+
+      if (completedSpecIds.contains(spec.specId())) {
+        String errorCtx = String.format("spec %s", spec);
+        throw new IllegalStateException(NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE + errorCtx);
+      }
+
+      StructType partitionType = spec.partitionType();
+
+      this.currentSpec = spec;
+      this.partitionComparator = Comparators.forType(partitionType);
+      this.completedPartitions = StructLikeSet.create(partitionType);
+      // copy the partition key as the key object may be reused
+      this.currentPartition = StructCopy.copy(partition);
+      this.currentWriter = newWriter(currentSpec, currentPartition);
+
+    } else if (partition != currentPartition && partitionComparator.compare(partition, currentPartition) != 0) {
+      closeCurrentWriter();
+      completedPartitions.add(currentPartition);
+
+      if (completedPartitions.contains(partition) && !hasBucketTransform(currentSpec)) {
+        String errorCtx = String.format("partition '%s' in spec %s", spec.partitionToPath(partition), spec);
+        throw new IllegalStateException(NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE + errorCtx);
+      }
+
+      // copy the partition key as the key object may be reused
+      this.currentPartition = StructCopy.copy(partition);
+      this.currentWriter = newWriter(currentSpec, currentPartition);
+    }
+
+    currentWriter.write(row);
+  }
+
+  @Override
+  public void close() throws IOException {
+    if (!closed) {
+      closeCurrentWriter();
+      this.closed = true;
+    }
+  }
+
+  private static boolean hasBucketTransform(PartitionSpec spec) {
+    return spec.fields().stream().filter(f ->
+        BUCKET_TRANSFORM_CLAZZ.isAssignableFrom(f.transform().getClass())).findAny().isPresent();

Review comment:
       nit: you can also use `anyMatch`
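       For reference, a sketch of that simplification (same semantics, just short-circuiting with `anyMatch`):

```java
private static boolean hasBucketTransform(PartitionSpec spec) {
  // anyMatch replaces the filter(...).findAny().isPresent() chain and stops at the first match
  return spec.fields().stream()
      .anyMatch(f -> BUCKET_TRANSFORM_CLAZZ.isAssignableFrom(f.transform().getClass()));
}
```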






[GitHub] [hive] marton-bod commented on a change in pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
marton-bod commented on a change in pull request #2842:
URL: https://github.com/apache/hive/pull/2842#discussion_r798751310



##########
File path: iceberg/patched-iceberg-core/src/main/java/org/apache/iceberg/io/ClusteredWriter.java
##########
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.io;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.Comparator;
+import java.util.Set;
+import org.apache.iceberg.PartitionSpec;
+import org.apache.iceberg.StructLike;
+import org.apache.iceberg.encryption.EncryptedOutputFile;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.types.Comparators;
+import org.apache.iceberg.types.Types.StructType;
+import org.apache.iceberg.util.StructLikeSet;
+
+/**
+ * Copied over from Iceberg 0.13.0 and has a difference of allowing bucket transform partitions to arrive without
+ * any ordering. This is a temporary convenience solution until a permanent one probably using UDFs is done.
+ *
+ * Original description:
+ *
+ * A writer capable of writing to multiple specs and partitions that requires the incoming records
+ * to be clustered by partition spec and by partition within each spec.
+ * <p>
+ * As opposed to {@link FanoutWriter}, this writer keeps at most one file open to reduce
+ * the memory consumption. Prefer using this writer whenever the incoming records can be clustered
+ * by spec/partition.
+ */
+abstract class ClusteredWriter<T, R> implements PartitioningWriter<T, R> {
+
+  private static final Class<?> BUCKET_TRANSFORM_CLAZZ;
+
+  static {
+    try {
+      BUCKET_TRANSFORM_CLAZZ = Class.forName("org.apache.iceberg.transforms.Bucket");
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Couldn't find Bucket transform class", e);
+    }
+  }
+
+  private static final String NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE =
+      "Incoming records violate the writer assumption that records are clustered by spec and " +
+          "by partition within each spec. Either cluster the incoming records or switch to fanout writers.\n" +
+          "Encountered records that belong to already closed files:\n";
+
+  private final Set<Integer> completedSpecIds = Sets.newHashSet();
+
+  private PartitionSpec currentSpec = null;
+  private Comparator<StructLike> partitionComparator = null;
+  private Set<StructLike> completedPartitions = null;
+  private StructLike currentPartition = null;
+  private FileWriter<T, R> currentWriter = null;
+
+  private boolean closed = false;
+
+  protected abstract FileWriter<T, R> newWriter(PartitionSpec spec, StructLike partition);
+
+  protected abstract void addResult(R result);
+
+  protected abstract R aggregatedResult();
+
+  @Override
+  public void write(T row, PartitionSpec spec, StructLike partition) {
+    if (!spec.equals(currentSpec)) {
+      if (currentSpec != null) {
+        closeCurrentWriter();
+        completedSpecIds.add(currentSpec.specId());
+        completedPartitions.clear();
+      }
+
+      if (completedSpecIds.contains(spec.specId())) {
+        String errorCtx = String.format("spec %s", spec);
+        throw new IllegalStateException(NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE + errorCtx);
+      }
+
+      StructType partitionType = spec.partitionType();
+
+      this.currentSpec = spec;
+      this.partitionComparator = Comparators.forType(partitionType);
+      this.completedPartitions = StructLikeSet.create(partitionType);
+      // copy the partition key as the key object may be reused
+      this.currentPartition = StructCopy.copy(partition);
+      this.currentWriter = newWriter(currentSpec, currentPartition);
+
+    } else if (partition != currentPartition && partitionComparator.compare(partition, currentPartition) != 0) {
+      closeCurrentWriter();
+      completedPartitions.add(currentPartition);
+
+      if (completedPartitions.contains(partition) && !hasBucketTransform(currentSpec)) {

Review comment:
       If so, is there a way to verify that the problem is with `c` being out of order, using the partition object?






[GitHub] [hive] github-actions[bot] commented on pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on pull request #2842:
URL: https://github.com/apache/hive/pull/2842#issuecomment-1027414666


   This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the dev@hive.apache.org list if the patch is in need of reviews.




[GitHub] [hive] szlta commented on a change in pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
szlta commented on a change in pull request #2842:
URL: https://github.com/apache/hive/pull/2842#discussion_r799286107



##########
File path: iceberg/patched-iceberg-core/src/main/java/org/apache/iceberg/io/ClusteredWriter.java
##########
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.io;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.Comparator;
+import java.util.Set;
+import org.apache.iceberg.PartitionSpec;
+import org.apache.iceberg.StructLike;
+import org.apache.iceberg.encryption.EncryptedOutputFile;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.types.Comparators;
+import org.apache.iceberg.types.Types.StructType;
+import org.apache.iceberg.util.StructLikeSet;
+
+/**
+ * Copied over from Iceberg 0.13.0 and has a difference of allowing bucket transform partitions to arrive without
+ * any ordering. This is a temporary convenience solution until a permanent one probably using UDFs is done.
+ *
+ * Original description:
+ *
+ * A writer capable of writing to multiple specs and partitions that requires the incoming records
+ * to be clustered by partition spec and by partition within each spec.
+ * <p>
+ * As opposed to {@link FanoutWriter}, this writer keeps at most one file open to reduce
+ * the memory consumption. Prefer using this writer whenever the incoming records can be clustered
+ * by spec/partition.
+ */
+abstract class ClusteredWriter<T, R> implements PartitioningWriter<T, R> {
+
+  private static final Class<?> BUCKET_TRANSFORM_CLAZZ;
+
+  static {
+    try {
+      BUCKET_TRANSFORM_CLAZZ = Class.forName("org.apache.iceberg.transforms.Bucket");
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Couldn't find Bucket transform class", e);
+    }
+  }
+
+  private static final String NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE =
+      "Incoming records violate the writer assumption that records are clustered by spec and " +
+          "by partition within each spec. Either cluster the incoming records or switch to fanout writers.\n" +
+          "Encountered records that belong to already closed files:\n";
+
+  private final Set<Integer> completedSpecIds = Sets.newHashSet();
+
+  private PartitionSpec currentSpec = null;
+  private Comparator<StructLike> partitionComparator = null;
+  private Set<StructLike> completedPartitions = null;
+  private StructLike currentPartition = null;
+  private FileWriter<T, R> currentWriter = null;
+
+  private boolean closed = false;
+
+  protected abstract FileWriter<T, R> newWriter(PartitionSpec spec, StructLike partition);
+
+  protected abstract void addResult(R result);
+
+  protected abstract R aggregatedResult();
+
+  @Override
+  public void write(T row, PartitionSpec spec, StructLike partition) {
+    if (!spec.equals(currentSpec)) {
+      if (currentSpec != null) {
+        closeCurrentWriter();
+        completedSpecIds.add(currentSpec.specId());
+        completedPartitions.clear();
+      }
+
+      if (completedSpecIds.contains(spec.specId())) {
+        String errorCtx = String.format("spec %s", spec);
+        throw new IllegalStateException(NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE + errorCtx);
+      }
+
+      StructType partitionType = spec.partitionType();
+
+      this.currentSpec = spec;
+      this.partitionComparator = Comparators.forType(partitionType);
+      this.completedPartitions = StructLikeSet.create(partitionType);
+      // copy the partition key as the key object may be reused
+      this.currentPartition = StructCopy.copy(partition);
+      this.currentWriter = newWriter(currentSpec, currentPartition);
+
+    } else if (partition != currentPartition && partitionComparator.compare(partition, currentPartition) != 0) {
+      closeCurrentWriter();
+      completedPartitions.add(currentPartition);
+
+      if (completedPartitions.contains(partition) && !hasBucketTransform(currentSpec)) {

Review comment:
       I think if there's any bucket transform in the partition spec, then we cannot rely on any ordering produced by the DP sort optimization. Unfortunately we have to let these records through here and, yes, act leniently.
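       To illustrate with made-up bucket numbers (the real ones come from Iceberg's hash-based bucket transform, so they are effectively unordered with respect to the column value): even when the rows arrive sorted by `ccy`, the derived bucket partition can re-appear after the writer has already closed it.

```java
// Hypothetical illustration only: the bucket assignments below are invented;
// real values come from Iceberg's murmur3-based bucket transform.
public class BucketOrderingExample {
  public static void main(String[] args) {
    String[] sortedByValue = {"CZK", "EUR", "HUF", "PLN", "USD"}; // DP sort orders by raw value
    int[] hypotheticalBucket = {1, 0, 1, 0, 1};                   // derived buckets interleave
    for (int i = 0; i < sortedByValue.length; i++) {
      System.out.println(sortedByValue[i] + " -> bucket " + hypotheticalBucket[i]);
    }
    // A writer keyed on the bucket partition sees 1, 0, 1, ... -- a partition that was already
    // closed shows up again, which is why strict clustering cannot be enforced for bucket specs.
  }
}
```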
   






[GitHub] [hive] szlta commented on a change in pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
szlta commented on a change in pull request #2842:
URL: https://github.com/apache/hive/pull/2842#discussion_r799282617



##########
File path: iceberg/iceberg-handler/src/test/results/positive/dynamic_partition_writes.q.out
##########
@@ -0,0 +1,220 @@
+PREHOOK: query: drop table if exists tbl_src
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table if exists tbl_src
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: drop table if exists tbl_target_identity
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table if exists tbl_target_identity
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: drop table if exists tbl_target_bucket
+PREHOOK: type: DROPTABLE
+POSTHOOK: query: drop table if exists tbl_target_bucket
+POSTHOOK: type: DROPTABLE
+PREHOOK: query: create external table tbl_src (a int, b string) stored by iceberg stored as orc
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tbl_src
+POSTHOOK: query: create external table tbl_src (a int, b string) stored by iceberg stored as orc
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tbl_src
+PREHOOK: query: insert into tbl_src values (1, 'EUR'), (2, 'EUR'), (3, 'USD'), (4, 'EUR'), (5, 'HUF'), (6, 'USD'), (7, 'USD'), (8, 'PLN'), (9, 'PLN'), (10, 'CZK')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@tbl_src
+POSTHOOK: query: insert into tbl_src values (1, 'EUR'), (2, 'EUR'), (3, 'USD'), (4, 'EUR'), (5, 'HUF'), (6, 'USD'), (7, 'USD'), (8, 'PLN'), (9, 'PLN'), (10, 'CZK')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@tbl_src
+PREHOOK: query: insert into tbl_src values (10, 'EUR'), (20, 'EUR'), (30, 'USD'), (40, 'EUR'), (50, 'HUF'), (60, 'USD'), (70, 'USD'), (80, 'PLN'), (90, 'PLN'), (100, 'CZK')
+PREHOOK: type: QUERY
+PREHOOK: Input: _dummy_database@_dummy_table
+PREHOOK: Output: default@tbl_src
+POSTHOOK: query: insert into tbl_src values (10, 'EUR'), (20, 'EUR'), (30, 'USD'), (40, 'EUR'), (50, 'HUF'), (60, 'USD'), (70, 'USD'), (80, 'PLN'), (90, 'PLN'), (100, 'CZK')
+POSTHOOK: type: QUERY
+POSTHOOK: Input: _dummy_database@_dummy_table
+POSTHOOK: Output: default@tbl_src
+PREHOOK: query: create external table tbl_target_identity (a int) partitioned by (ccy string) stored by iceberg stored as orc
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tbl_target_identity
+POSTHOOK: query: create external table tbl_target_identity (a int) partitioned by (ccy string) stored by iceberg stored as orc
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tbl_target_identity
+PREHOOK: query: explain insert overwrite table tbl_target_identity select * from tbl_src
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl_src
+PREHOOK: Output: default@tbl_target_identity
+POSTHOOK: query: explain insert overwrite table tbl_target_identity select * from tbl_src
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl_src
+POSTHOOK: Output: default@tbl_target_identity
+Plan optimized by CBO.
+
+Vertex dependency in root stage
+Reducer 2 <- Map 1 (SIMPLE_EDGE)
+Reducer 3 <- Map 1 (CUSTOM_SIMPLE_EDGE)
+
+Stage-3
+  Stats Work{}
+    Stage-0
+      Move Operator
+        table:{"name:":"default.tbl_target_identity"}
+        Stage-2
+          Dependency Collection{}
+            Stage-1
+              Reducer 2 vectorized
+              File Output Operator [FS_18]
+                table:{"name:":"default.tbl_target_identity"}
+                Select Operator [SEL_17]
+                  Output:["_col0","_col1"]
+                <-Map 1 [SIMPLE_EDGE] vectorized
+                  PARTITION_ONLY_SHUFFLE [RS_13]
+                    PartitionCols:_col1
+                    Select Operator [SEL_12] (rows=20 width=91)
+                      Output:["_col0","_col1"]
+                      TableScan [TS_0] (rows=20 width=91)
+                        default@tbl_src,tbl_src,Tbl:COMPLETE,Col:COMPLETE,Output:["a","b"]
+              Reducer 3 vectorized
+              File Output Operator [FS_21]
+                Select Operator [SEL_20] (rows=1 width=530)
+                  Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11"]
+                  Group By Operator [GBY_19] (rows=1 width=332)
+                    Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8"],aggregations:["min(VALUE._col0)","max(VALUE._col1)","count(VALUE._col2)","count(VALUE._col3)","compute_bit_vector_hll(VALUE._col4)","max(VALUE._col5)","avg(VALUE._col6)","count(VALUE._col7)","compute_bit_vector_hll(VALUE._col8)"]
+                  <-Map 1 [CUSTOM_SIMPLE_EDGE] vectorized
+                    PARTITION_ONLY_SHUFFLE [RS_16]
+                      Group By Operator [GBY_15] (rows=1 width=400)
+                        Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8"],aggregations:["min(a)","max(a)","count(1)","count(a)","compute_bit_vector_hll(a)","max(length(ccy))","avg(COALESCE(length(ccy),0))","count(ccy)","compute_bit_vector_hll(ccy)"]
+                        Select Operator [SEL_14] (rows=20 width=91)
+                          Output:["a","ccy"]
+                           Please refer to the previous Select Operator [SEL_12]
+
+PREHOOK: query: insert overwrite table tbl_target_identity select * from tbl_src
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl_src
+PREHOOK: Output: default@tbl_target_identity
+POSTHOOK: query: insert overwrite table tbl_target_identity select * from tbl_src
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl_src
+POSTHOOK: Output: default@tbl_target_identity
+PREHOOK: query: select * from tbl_target_identity
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl_target_identity
+PREHOOK: Output: hdfs://### HDFS PATH ###
+POSTHOOK: query: select * from tbl_target_identity
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl_target_identity
+POSTHOOK: Output: hdfs://### HDFS PATH ###
+100	CZK
+10	CZK
+10	EUR
+20	EUR
+40	EUR
+2	EUR
+4	EUR
+1	EUR
+50	HUF
+5	HUF
+90	PLN
+8	PLN
+9	PLN
+80	PLN
+30	USD
+3	USD
+6	USD
+7	USD
+60	USD
+70	USD
+PREHOOK: query: create external table tbl_target_bucket (a int, ccy string) partitioned by spec (bucket (2, ccy)) stored by iceberg stored as orc
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@tbl_target_bucket
+POSTHOOK: query: create external table tbl_target_bucket (a int, ccy string) partitioned by spec (bucket (2, ccy)) stored by iceberg stored as orc
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@tbl_target_bucket
+PREHOOK: query: explain insert into table tbl_target_bucket select * from tbl_src
+PREHOOK: type: QUERY
+PREHOOK: Input: default@tbl_src
+PREHOOK: Output: default@tbl_target_bucket
+POSTHOOK: query: explain insert into table tbl_target_bucket select * from tbl_src
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@tbl_src
+POSTHOOK: Output: default@tbl_target_bucket
+Plan optimized by CBO.
+
+Vertex dependency in root stage
+Reducer 2 <- Map 1 (SIMPLE_EDGE)
+Reducer 3 <- Map 1 (CUSTOM_SIMPLE_EDGE)
+
+Stage-3

Review comment:
       If there were no sort, you would see only one Reducer task. I believe in this case the sorting is done by Reducer 2.
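       For reference, in the first explain plan above (the identity-partitioned insert), the clustering shows up as the shuffle edge that feeds Reducer 2:

```
Reducer 2 <- Map 1 (SIMPLE_EDGE)
...
<-Map 1 [SIMPLE_EDGE] vectorized
  PARTITION_ONLY_SHUFFLE [RS_13]
    PartitionCols:_col1
```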






[GitHub] [hive] marton-bod commented on a change in pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
marton-bod commented on a change in pull request #2842:
URL: https://github.com/apache/hive/pull/2842#discussion_r799413716



##########
File path: iceberg/patched-iceberg-core/src/main/java/org/apache/iceberg/io/ClusteredWriter.java
##########
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.io;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.Comparator;
+import java.util.Set;
+import org.apache.iceberg.PartitionSpec;
+import org.apache.iceberg.StructLike;
+import org.apache.iceberg.encryption.EncryptedOutputFile;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.types.Comparators;
+import org.apache.iceberg.types.Types.StructType;
+import org.apache.iceberg.util.StructLikeSet;
+
+/**
+ * Copied over from Iceberg 0.13.0 and has a difference of allowing bucket transform partitions to arrive without
+ * any ordering. This is a temporary convenience solution until a permanent one probably using UDFs is done.
+ *
+ * Original description:
+ *
+ * A writer capable of writing to multiple specs and partitions that requires the incoming records
+ * to be clustered by partition spec and by partition within each spec.
+ * <p>
+ * As opposed to {@link FanoutWriter}, this writer keeps at most one file open to reduce
+ * the memory consumption. Prefer using this writer whenever the incoming records can be clustered
+ * by spec/partition.
+ */
+abstract class ClusteredWriter<T, R> implements PartitioningWriter<T, R> {
+
+  private static final Class<?> BUCKET_TRANSFORM_CLAZZ;
+
+  static {
+    try {
+      BUCKET_TRANSFORM_CLAZZ = Class.forName("org.apache.iceberg.transforms.Bucket");
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Couldn't find Bucket transform class", e);
+    }
+  }
+
+  private static final String NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE =
+      "Incoming records violate the writer assumption that records are clustered by spec and " +
+          "by partition within each spec. Either cluster the incoming records or switch to fanout writers.\n" +
+          "Encountered records that belong to already closed files:\n";
+
+  private final Set<Integer> completedSpecIds = Sets.newHashSet();
+
+  private PartitionSpec currentSpec = null;
+  private Comparator<StructLike> partitionComparator = null;
+  private Set<StructLike> completedPartitions = null;
+  private StructLike currentPartition = null;
+  private FileWriter<T, R> currentWriter = null;
+
+  private boolean closed = false;
+
+  protected abstract FileWriter<T, R> newWriter(PartitionSpec spec, StructLike partition);
+
+  protected abstract void addResult(R result);
+
+  protected abstract R aggregatedResult();
+
+  @Override
+  public void write(T row, PartitionSpec spec, StructLike partition) {
+    if (!spec.equals(currentSpec)) {
+      if (currentSpec != null) {
+        closeCurrentWriter();
+        completedSpecIds.add(currentSpec.specId());
+        completedPartitions.clear();
+      }
+
+      if (completedSpecIds.contains(spec.specId())) {
+        String errorCtx = String.format("spec %s", spec);
+        throw new IllegalStateException(NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE + errorCtx);
+      }
+
+      StructType partitionType = spec.partitionType();
+
+      this.currentSpec = spec;
+      this.partitionComparator = Comparators.forType(partitionType);
+      this.completedPartitions = StructLikeSet.create(partitionType);
+      // copy the partition key as the key object may be reused
+      this.currentPartition = StructCopy.copy(partition);
+      this.currentWriter = newWriter(currentSpec, currentPartition);
+
+    } else if (partition != currentPartition && partitionComparator.compare(partition, currentPartition) != 0) {
+      closeCurrentWriter();
+      completedPartitions.add(currentPartition);
+
+      if (completedPartitions.contains(partition) && !hasBucketTransform(currentSpec)) {
+        String errorCtx = String.format("partition '%s' in spec %s", spec.partitionToPath(partition), spec);
+        throw new IllegalStateException(NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE + errorCtx);
+      }
+
+      // copy the partition key as the key object may be reused
+      this.currentPartition = StructCopy.copy(partition);
+      this.currentWriter = newWriter(currentSpec, currentPartition);
+    }
+
+    currentWriter.write(row);
+  }
+
+  @Override
+  public void close() throws IOException {
+    if (!closed) {
+      closeCurrentWriter();
+      this.closed = true;
+    }
+  }
+
+  private static boolean hasBucketTransform(PartitionSpec spec) {
+    return spec.fields().stream().filter(f ->
+        BUCKET_TRANSFORM_CLAZZ.isAssignableFrom(f.transform().getClass())).findAny().isPresent();

Review comment:
       As discussed, we can't use IcebergTableUtil here. We should adjust the logic in `IcebergTableUtil` in a later PR to use the same class-loading logic as here.






[GitHub] [hive] marton-bod commented on a change in pull request #2842: HIVE-25772: Use ClusteredWriter when writing to Iceberg tables

Posted by GitBox <gi...@apache.org>.
marton-bod commented on a change in pull request #2842:
URL: https://github.com/apache/hive/pull/2842#discussion_r798738313



##########
File path: iceberg/patched-iceberg-core/src/main/java/org/apache/iceberg/io/ClusteredWriter.java
##########
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.io;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.Comparator;
+import java.util.Set;
+import org.apache.iceberg.PartitionSpec;
+import org.apache.iceberg.StructLike;
+import org.apache.iceberg.encryption.EncryptedOutputFile;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.types.Comparators;
+import org.apache.iceberg.types.Types.StructType;
+import org.apache.iceberg.util.StructLikeSet;
+
+/**
+ * Copied over from Iceberg 0.13.0 and has a difference of allowing bucket transform partitions to arrive without
+ * any ordering. This is a temporary convenience solution until a permanent one probably using UDFs is done.
+ *
+ * Original description:
+ *
+ * A writer capable of writing to multiple specs and partitions that requires the incoming records
+ * to be clustered by partition spec and by partition within each spec.
+ * <p>
+ * As opposed to {@link FanoutWriter}, this writer keeps at most one file open to reduce
+ * the memory consumption. Prefer using this writer whenever the incoming records can be clustered
+ * by spec/partition.
+ */
+abstract class ClusteredWriter<T, R> implements PartitioningWriter<T, R> {
+
+  private static final Class<?> BUCKET_TRANSFORM_CLAZZ;
+
+  static {
+    try {
+      BUCKET_TRANSFORM_CLAZZ = Class.forName("org.apache.iceberg.transforms.Bucket");
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Couldn't find Bucket transform class", e);
+    }
+  }
+
+  private static final String NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE =
+      "Incoming records violate the writer assumption that records are clustered by spec and " +
+          "by partition within each spec. Either cluster the incoming records or switch to fanout writers.\n" +
+          "Encountered records that belong to already closed files:\n";
+
+  private final Set<Integer> completedSpecIds = Sets.newHashSet();
+
+  private PartitionSpec currentSpec = null;
+  private Comparator<StructLike> partitionComparator = null;
+  private Set<StructLike> completedPartitions = null;
+  private StructLike currentPartition = null;
+  private FileWriter<T, R> currentWriter = null;
+
+  private boolean closed = false;
+
+  protected abstract FileWriter<T, R> newWriter(PartitionSpec spec, StructLike partition);
+
+  protected abstract void addResult(R result);
+
+  protected abstract R aggregatedResult();
+
+  @Override
+  public void write(T row, PartitionSpec spec, StructLike partition) {
+    if (!spec.equals(currentSpec)) {
+      if (currentSpec != null) {
+        closeCurrentWriter();
+        completedSpecIds.add(currentSpec.specId());
+        completedPartitions.clear();
+      }
+
+      if (completedSpecIds.contains(spec.specId())) {
+        String errorCtx = String.format("spec %s", spec);
+        throw new IllegalStateException(NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE + errorCtx);
+      }
+
+      StructType partitionType = spec.partitionType();
+
+      this.currentSpec = spec;
+      this.partitionComparator = Comparators.forType(partitionType);
+      this.completedPartitions = StructLikeSet.create(partitionType);
+      // copy the partition key as the key object may be reused
+      this.currentPartition = StructCopy.copy(partition);
+      this.currentWriter = newWriter(currentSpec, currentPartition);
+
+    } else if (partition != currentPartition && partitionComparator.compare(partition, currentPartition) != 0) {
+      closeCurrentWriter();
+      completedPartitions.add(currentPartition);
+
+      if (completedPartitions.contains(partition) && !hasBucketTransform(currentSpec)) {
+        String errorCtx = String.format("partition '%s' in spec %s", spec.partitionToPath(partition), spec);
+        throw new IllegalStateException(NOT_CLUSTERED_ROWS_ERROR_MSG_TEMPLATE + errorCtx);
+      }
+
+      // copy the partition key as the key object may be reused
+      this.currentPartition = StructCopy.copy(partition);
+      this.currentWriter = newWriter(currentSpec, currentPartition);
+    }
+
+    currentWriter.write(row);
+  }
+
+  @Override
+  public void close() throws IOException {
+    if (!closed) {
+      closeCurrentWriter();
+      this.closed = true;
+    }
+  }
+
+  private static boolean hasBucketTransform(PartitionSpec spec) {
+    return spec.fields().stream().filter(f ->
+        BUCKET_TRANSFORM_CLAZZ.isAssignableFrom(f.transform().getClass())).findAny().isPresent();

Review comment:
       We might want to synchronize this bucketedness check with `IcebergTableUtil.isBucketed()`.



