Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2021/06/28 23:50:05 UTC

[GitHub] [iceberg] samarthjain opened a new pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

samarthjain opened a new pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749


   This change adds vectorized read support for Parquet data written in the V2 format.
   Only the plain and dictionary encodings are supported; vectorized reads of data
   written with Delta, RLE, and other encodings are not. Note that as of this commit,
   Spark's own Parquet vectorized reader does not support those encodings either.
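   As a rough illustration of the encoding gate this description implies (a sketch only: the enum and class names below are simplified stand-ins, not the actual Iceberg or Parquet types), a reader can test whether a column's encoding is vectorizable before initializing a data reader:

   ```java
   import java.util.EnumSet;
   import java.util.Set;

   // Sketch of the "plain and dictionary only" rule described above.
   // Encoding names mirror Parquet's org.apache.parquet.column.Encoding values,
   // but this enum is a local stand-in for illustration.
   public class VectorizedReadSupport {
     enum Encoding { PLAIN, PLAIN_DICTIONARY, RLE_DICTIONARY, DELTA_BINARY_PACKED, DELTA_BYTE_ARRAY, RLE }

     // Only plain and dictionary encodings can be read vectorized per this change.
     private static final Set<Encoding> SUPPORTED =
         EnumSet.of(Encoding.PLAIN, Encoding.PLAIN_DICTIONARY, Encoding.RLE_DICTIONARY);

     static boolean isVectorizable(Encoding encoding) {
       return SUPPORTED.contains(encoding);
     }

     static void checkEncoding(String column, Encoding encoding) {
       if (!isVectorizable(encoding)) {
         throw new UnsupportedOperationException(
             "Cannot vectorize reads for column " + column + " with encoding " + encoding +
             "; disable vectorized reads to read this file");
       }
     }

     public static void main(String[] args) {
       System.out.println(isVectorizable(Encoding.PLAIN));               // true
       System.out.println(isVectorizable(Encoding.DELTA_BINARY_PACKED)); // false
     }
   }
   ```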


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org


[GitHub] [iceberg] samarthjain commented on a change in pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
samarthjain commented on a change in pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#discussion_r660291902



##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/BaseVectorizedParquetValuesReader.java
##########
@@ -80,17 +80,23 @@ public BaseVectorizedParquetValuesReader(int maxDefLevel, boolean setValidityVec
     this.setArrowValidityVector = setValidityVector;
   }
 
-  public BaseVectorizedParquetValuesReader(
-      int bitWidth,
-      int maxDefLevel,
-      boolean setValidityVector) {
+  public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean setValidityVector) {
     this.fixedWidth = true;
     this.readLength = bitWidth != 0;
     this.maxDefLevel = maxDefLevel;
     this.setArrowValidityVector = setValidityVector;
     init(bitWidth);
   }
 
+  public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean readLength,
+                                           boolean setValidityVector) {
+    this.fixedWidth = true;
+    this.readLength = readLength;

Review comment:
       Thanks! Good suggestion. 






[GitHub] [iceberg] RussellSpitzer merged pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
RussellSpitzer merged pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749


   




[GitHub] [iceberg] RussellSpitzer commented on a change in pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
RussellSpitzer commented on a change in pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#discussion_r660208053



##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/VectorizedPageIterator.java
##########
@@ -512,6 +512,9 @@ protected void initDataReader(Encoding dataEncoding, ByteBufferInputStream in, i
         throw new ParquetDecodingException("could not read page in col " + desc, e);
       }
     } else {
+      if (dataEncoding != Encoding.PLAIN) {
+        throw new UnsupportedOperationException("Unsupported encoding: " + dataEncoding);

Review comment:
       I would like to emphasize that a user can use non-vectorized reads to handle this file, so maybe something like:
   
   "Cannot perform a vectorized read of ParquetV2 File with encoding %s, disable vectorized reading with $param to read this table/file"

##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/Parquet.java
##########
@@ -217,6 +222,7 @@ WriteBuilder withWriterVersion(WriterVersion version) {
       String compressionLevel = config.getOrDefault(
           PARQUET_COMPRESSION_LEVEL, PARQUET_COMPRESSION_LEVEL_DEFAULT);
 
+

Review comment:
       nit: added whitespace

##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/Parquet.java
##########
@@ -170,6 +170,11 @@ public WriteBuilder overwrite(boolean enabled) {
       return this;
     }
 
+    public WriteBuilder writerVersion(WriterVersion version) {

Review comment:
       Is this mostly for testing? Or is it something we want folks to be using in general? Just wondering if this should be public

##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/BaseVectorizedParquetValuesReader.java
##########
@@ -80,17 +80,23 @@ public BaseVectorizedParquetValuesReader(int maxDefLevel, boolean setValidityVec
     this.setArrowValidityVector = setValidityVector;
   }
 
-  public BaseVectorizedParquetValuesReader(
-      int bitWidth,
-      int maxDefLevel,
-      boolean setValidityVector) {
+  public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean setValidityVector) {
     this.fixedWidth = true;
     this.readLength = bitWidth != 0;
     this.maxDefLevel = maxDefLevel;
     this.setArrowValidityVector = setValidityVector;
     init(bitWidth);
   }
 
+  public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean readLength,
+                                           boolean setValidityVector) {
+    this.fixedWidth = true;
+    this.readLength = readLength;

Review comment:
       It seems a little strange to me that we have this constructor which we only use when readLength is false. Perhaps we should swap the original constructor's code to call this constructor?
   
   ```java
     public BaseVectorizedParquetValuesReader(int bitWidth, int maxDefLevel, boolean setValidityVector) {
       this(bitWidth, maxDefLevel, bitWidth != 0, setValidityVector);
     }
   ```
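   A self-contained sketch of the delegation pattern suggested above (class and field names are simplified stand-ins for the Iceberg reader, not its real code): the three-arg constructor derives `readLength` from `bitWidth` and forwards to the four-arg constructor, so the field assignments live in exactly one place.

   ```java
   public class ValuesReader {
     private final boolean fixedWidth;
     private final boolean readLength;
     private final int maxDefLevel;
     private final boolean setValidityVector;

     // Original three-arg constructor now delegates instead of duplicating the assignments.
     public ValuesReader(int bitWidth, int maxDefLevel, boolean setValidityVector) {
       this(bitWidth, maxDefLevel, bitWidth != 0, setValidityVector);
     }

     public ValuesReader(int bitWidth, int maxDefLevel, boolean readLength, boolean setValidityVector) {
       this.fixedWidth = true;
       this.readLength = readLength;
       this.maxDefLevel = maxDefLevel;
       this.setValidityVector = setValidityVector;
     }

     boolean readsLength() {
       return readLength;
     }

     public static void main(String[] args) {
       System.out.println(new ValuesReader(0, 1, true).readsLength()); // false: bitWidth == 0
       System.out.println(new ValuesReader(3, 1, true).readsLength()); // true: bitWidth != 0
     }
   }
   ```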

##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/BasePageIterator.java
##########
@@ -77,7 +77,8 @@ protected void reset() {
   protected abstract void initDefinitionLevelsReader(DataPageV1 dataPageV1, ColumnDescriptor descriptor,
                                                      ByteBufferInputStream in, int count) throws IOException;
 
-  protected abstract void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor descriptor);
+  protected abstract void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor descriptor)
+          throws IOException;

Review comment:
       I didn't see where the IOException can get thrown, is this just to match the V1 reader?

##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/VectorizedPageIterator.java
##########
@@ -512,6 +512,9 @@ protected void initDataReader(Encoding dataEncoding, ByteBufferInputStream in, i
         throw new ParquetDecodingException("could not read page in col " + desc, e);
       }
     } else {
+      if (dataEncoding != Encoding.PLAIN) {
+        throw new UnsupportedOperationException("Unsupported encoding: " + dataEncoding);

Review comment:
       Actually, since we may get users who read some columns successfully but fail on others, we probably should be specific about which column failed in the error message as well. Just so someone doesn't say,
   "When I do this projection it works, but when I do this projection it doesn't."

##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/VectorizedPageIterator.java
##########
@@ -512,6 +512,9 @@ protected void initDataReader(Encoding dataEncoding, ByteBufferInputStream in, i
         throw new ParquetDecodingException("could not read page in col " + desc, e);
       }
     } else {
+      if (dataEncoding != Encoding.PLAIN) {
+        throw new UnsupportedOperationException("Unsupported encoding: " + dataEncoding);

Review comment:
       Sounds good to me. I know most of the time we have errors styled as "Cannot X", but I think your content suggestion for the error message is solid. I would just add that so it fits with the other messages.






[GitHub] [iceberg] RussellSpitzer commented on pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
RussellSpitzer commented on pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#issuecomment-872410101


   Thanks @samarthjain for writing and @kbendick for reviewing!






[GitHub] [iceberg] samarthjain commented on a change in pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
samarthjain commented on a change in pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#discussion_r660293425



##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/BasePageIterator.java
##########
@@ -77,7 +77,8 @@ protected void reset() {
   protected abstract void initDefinitionLevelsReader(DataPageV1 dataPageV1, ColumnDescriptor descriptor,
                                                      ByteBufferInputStream in, int count) throws IOException;
 
-  protected abstract void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor descriptor);
+  protected abstract void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor descriptor)
+          throws IOException;

Review comment:
       Calling `dataPageV2.getDefinitionLevels().toInputStream()` below throws an IOException.
   
   ```java
   @Override
   protected void initDefinitionLevelsReader(DataPageV2 dataPageV2, ColumnDescriptor desc) throws IOException {
     int bitWidth = BytesUtils.getWidthFromMaxInt(desc.getMaxDefinitionLevel());
     // do not read the length from the stream. v2 pages handle dividing the page bytes.
     this.vectorizedDefinitionLevelReader = new VectorizedParquetDefinitionLevelReader(bitWidth,
         desc.getMaxDefinitionLevel(), false, setArrowValidityVector);
     this.vectorizedDefinitionLevelReader.initFromPage(
         dataPageV2.getValueCount(), dataPageV2.getDefinitionLevels().toInputStream());
   }
   ```
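   For background on why the abstract signature needs the widened `throws` clause (a generic Java illustration, not Iceberg's actual classes): an overriding method may only throw checked exceptions declared by the method it overrides, so any subclass implementation that calls an IOException-throwing API forces `throws IOException` onto the abstract declaration.

   ```java
   import java.io.IOException;

   abstract class PageIterator {
     // Must declare IOException so subclasses whose implementations
     // perform I/O are allowed to throw it.
     protected abstract void initReader(byte[] page) throws IOException;
   }

   public class VectorizedIterator extends PageIterator {
     @Override
     protected void initReader(byte[] page) throws IOException {
       if (page == null) {
         // Legal only because the overridden method declares IOException.
         throw new IOException("missing page bytes");
       }
       // ... decode definition levels from the page bytes ...
     }

     public static void main(String[] args) throws IOException {
       new VectorizedIterator().initReader(new byte[0]); // non-null page: no exception
     }
   }
   ```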








[GitHub] [iceberg] samarthjain commented on a change in pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
samarthjain commented on a change in pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#discussion_r660294135



##########
File path: arrow/src/main/java/org/apache/iceberg/arrow/vectorized/parquet/VectorizedPageIterator.java
##########
@@ -512,6 +512,9 @@ protected void initDataReader(Encoding dataEncoding, ByteBufferInputStream in, i
         throw new ParquetDecodingException("could not read page in col " + desc, e);
       }
     } else {
+      if (dataEncoding != Encoding.PLAIN) {
+        throw new UnsupportedOperationException("Unsupported encoding: " + dataEncoding);

Review comment:
       I like the idea of specifying the column name and being more descriptive about why we are failing. However, there are different ways to disable vectorization (table properties, Spark session properties, etc.). For now, I am going with something like this:
   ```java
   if (dataEncoding != Encoding.PLAIN) {
     throw new UnsupportedOperationException("Vectorized reads are not supported for column " + desc +
         " with encoding " + dataEncoding + ". Disable vectorized reads to read this table/file");
   }
   ```






[GitHub] [iceberg] RussellSpitzer commented on pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
RussellSpitzer commented on pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#issuecomment-872396124


   Ah, sorry! I didn't notice that you had updated the PR. Let me do a quick once-over and merge.




[GitHub] [iceberg] samarthjain commented on pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
samarthjain commented on pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#issuecomment-872395354


   @RussellSpitzer - is this good to be merged now? 




[GitHub] [iceberg] RussellSpitzer commented on pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
RussellSpitzer commented on pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#issuecomment-872408492


   Solves #2692 






[GitHub] [iceberg] samarthjain commented on a change in pull request #2749: [Spark] Add limited support for vectorized reads for Parquet V2

Posted by GitBox <gi...@apache.org>.
samarthjain commented on a change in pull request #2749:
URL: https://github.com/apache/iceberg/pull/2749#discussion_r660291702



##########
File path: parquet/src/main/java/org/apache/iceberg/parquet/Parquet.java
##########
@@ -170,6 +170,11 @@ public WriteBuilder overwrite(boolean enabled) {
       return this;
     }
 
+    public WriteBuilder writerVersion(WriterVersion version) {

Review comment:
       This gives users a way to create Parquet files in different format versions. Currently it is only used for testing, though, and I don't see harm in leaving it public.


