Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/08/12 22:39:11 UTC

[GitHub] [spark] dtenedor commented on a diff in pull request #37501: [SPARK-39926][SQL] Fix bug in column DEFAULT support for non-vectorized Parquet scans

dtenedor commented on code in PR #37501:
URL: https://github.com/apache/spark/pull/37501#discussion_r944913477


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala:
##########
@@ -282,14 +239,17 @@ private[parquet] class ParquetRowConverter(
       // Create a RowUpdater instance for converting Parquet objects to Catalyst rows. If any fields
       // in the Catalyst result schema have associated existence default values, maintain a boolean
       // array to track which fields have been explicitly assigned for each row.
-      val rowUpdater: RowUpdater =
-        if (catalystType.hasExistenceDefaultValues) {
-          resetExistenceDefaultsBitmask(catalystType)
-          new RowUpdaterWithBitmask(
-            currentRow, catalystFieldIndex, catalystType.existenceDefaultsBitmask)
-        } else {
-          new RowUpdater(currentRow, catalystFieldIndex)
+      val rowUpdater: RowUpdater = new RowUpdater(currentRow, catalystFieldIndex)
+      if (catalystType.hasExistenceDefaultValues) {
+        for (i <- 0 until catalystType.existenceDefaultValues.size) {
+          catalystType.existenceDefaultsBitmask(i) =
+            if (i < parquetType.getFieldCount) {

Review Comment:
   We discussed this offline. I moved this code out of the `parquetType.getFields.asScala.map { parquetField => ...` loop, and also ported the explanation into a comment here:
   
   ```
   // Assume the schema for a Parquet file-based table contains N fields. Then if we later
   // run a command "ALTER TABLE t ADD COLUMN c DEFAULT <value>" on the Parquet table, this
   // adds one field to the Catalyst schema. Then if we query the old files with the new
   // Catalyst schema, we should only apply the existence default value to all columns > N.
   ```
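
   Not part of the review thread: below is a minimal standalone Scala sketch of the bitmask initialization that the truncated hunk above appears to describe. The names (`initBitmask`, `numCatalystFields`, `numParquetFields`, `ExistenceDefaultsSketch`) are hypothetical and do not appear in the PR; the true/false assignment is inferred from the quoted comment, i.e. only columns added after the Parquet file was written receive the existence DEFAULT.

   ```
   object ExistenceDefaultsSketch {
     // Sketch: decide which fields of the Catalyst read schema should receive
     // their existence DEFAULT value when reading an older Parquet file.
     def initBitmask(numCatalystFields: Int, numParquetFields: Int): Array[Boolean] = {
       val bitmask = new Array[Boolean](numCatalystFields)
       for (i <- 0 until numCatalystFields) {
         // Fields with i < numParquetFields are physically present in the file and
         // will be filled by the converter, so they do not get the existence
         // default; fields added later (i >= numParquetFields) do.
         bitmask(i) = i >= numParquetFields
       }
       bitmask
     }

     def main(args: Array[String]): Unit = {
       // Example: the file was written with 3 columns; the table now has 5 after
       // two "ALTER TABLE ... ADD COLUMN ... DEFAULT ..." commands.
       val bitmask = initBitmask(numCatalystFields = 5, numParquetFields = 3)
       println(bitmask.mkString(", "))  // false, false, false, true, true
     }
   }
   ```

   Computing the bitmask once per file schema, as in the hunk above, avoids the per-row bookkeeping that the removed `RowUpdaterWithBitmask` performed.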



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org