Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/10/08 08:51:36 UTC

[GitHub] [spark] dcoliversun commented on a diff in pull request #38160: [SPARK-40710][DOCS] Supplement undocumented parquet configurations in documentation

dcoliversun commented on code in PR #38160:
URL: https://github.com/apache/spark/pull/38160#discussion_r990613064


##########
docs/sql-data-sources-parquet.md:
##########
@@ -454,6 +454,28 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.3.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.int96TimestampConversion</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L899-L905



##########
docs/sql-data-sources-parquet.md:
##########
@@ -493,6 +526,17 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.5.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.respectSummaryFiles</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L872-L879
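
   For context, a minimal sketch of how this flag could be exercised (assuming a live `SparkSession` named `spark`; the path is illustrative):

   ```scala
   // Assumes the dataset was written with Parquet summary files
   // (_metadata / _common_metadata) that are consistent with all part-files.
   spark.conf.set("spark.sql.parquet.respectSummaryFiles", "true")
   // Schema merging can then rely on the summary files instead of reading
   // the footer of every part-file.
   val df = spark.read.option("mergeSchema", "true").parquet("/data/events")
   ```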



##########
docs/sql-data-sources-parquet.md:
##########
@@ -505,6 +549,84 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.6.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.enableVectorizedReader</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L1033-L1038
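
   As a hedged illustration of toggling this reader (assuming a `SparkSession` named `spark`; the path is illustrative):

   ```scala
   // On by default; turning it off falls back to the row-based parquet-mr
   // record reader, e.g. for debugging or schemas the vectorized path
   // cannot handle.
   spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
   spark.read.parquet("/data/events").show()
   ```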



##########
docs/sql-data-sources-parquet.md:
##########
@@ -505,6 +549,84 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.6.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.enableVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding.
+  </td>
+  <td>2.0.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.enableNestedColumnVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding for nested columns (e.g., struct, list, map). 
+    Requires <code>spark.sql.parquet.enableVectorizedReader</code> to be enabled.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.recordLevelFilter.enabled</code></td>
+  <td>false</td>
+  <td>
+    If true, enables Parquet's native record-level filtering using the pushed-down filters.
+    This configuration only has an effect when <code>spark.sql.parquet.filterPushdown</code> 
+    is enabled and the vectorized reader is not used. You can ensure the vectorized reader 
+    is not used by setting <code>spark.sql.parquet.enableVectorizedReader</code> to false.
+  </td>
+  <td>2.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.columnarReaderBatchSize</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L1058-L1063
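
   A minimal sketch of tuning the batch size (the value 8192 is purely illustrative, not a recommendation):

   ```scala
   // Larger batches amortize per-batch overhead but pin more rows in memory
   // at once; 4096 is the default.
   spark.conf.set("spark.sql.parquet.columnarReaderBatchSize", "8192")
   ```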



##########
docs/sql-data-sources-parquet.md:
##########
@@ -505,6 +549,84 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.6.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.enableVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding.
+  </td>
+  <td>2.0.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.enableNestedColumnVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding for nested columns (e.g., struct, list, map). 
+    Requires <code>spark.sql.parquet.enableVectorizedReader</code> to be enabled.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.recordLevelFilter.enabled</code></td>
+  <td>false</td>
+  <td>
+    If true, enables Parquet's native record-level filtering using the pushed-down filters.
+    This configuration only has an effect when <code>spark.sql.parquet.filterPushdown</code> 
+    is enabled and the vectorized reader is not used. You can ensure the vectorized reader 
+    is not used by setting <code>spark.sql.parquet.enableVectorizedReader</code> to false.
+  </td>
+  <td>2.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.columnarReaderBatchSize</code></td>
+  <td>4096</td>
+  <td>
+    The number of rows to include in a Parquet vectorized reader batch. The value should
+    be chosen carefully to minimize overhead and avoid OOMs when reading data.
+  </td>
+  <td>2.4.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.fieldId.write.enabled</code></td>
+  <td>true</td>
+  <td>
+    Field ID is a native field of the Parquet schema spec. When enabled,
+    Parquet writers will propagate the field ID metadata (if present) from the Spark schema to the Parquet schema.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.fieldId.read.enabled</code></td>
+  <td>false</td>
+  <td>
+    Field ID is a native field of the Parquet schema spec. When enabled, Parquet readers 
+    will use field IDs (if present) in the requested Spark schema to look up Parquet 
+    fields instead of using column names.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.fieldId.read.ignoreMissing</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L1083-L1090
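
   A small sketch of how this flag interacts with ID-based reads (assuming a `SparkSession` named `spark`):

   ```scala
   spark.conf.set("spark.sql.parquet.fieldId.read.enabled", "true")
   // Files that carry no field IDs would otherwise fail an ID-based read;
   // with ignoreMissing on, the ID-matched columns come back as nulls instead.
   spark.conf.set("spark.sql.parquet.fieldId.read.ignoreMissing", "true")
   ```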



##########
docs/sql-data-sources-parquet.md:
##########
@@ -473,6 +495,17 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   <td>Enables Parquet filter push-down optimization when set to true.</td>
   <td>1.2.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.aggregatePushdown</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L1003-L1010
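
   A hedged sketch of what this enables (assuming a `SparkSession` named `spark`; depending on the Spark version, the v2 Parquet source may also be required for the pushdown to take effect):

   ```scala
   // MIN/MAX/COUNT over supported column types can then be answered from
   // Parquet footer statistics instead of scanning row data.
   spark.conf.set("spark.sql.parquet.aggregatePushdown", "true")
   spark.read.parquet("/data/events").createOrReplaceTempView("events")
   spark.sql("SELECT MIN(ts), MAX(ts), COUNT(*) FROM events").show()
   ```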



##########
docs/sql-data-sources-parquet.md:
##########
@@ -454,6 +454,28 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.3.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.int96TimestampConversion</code></td>
+  <td>false</td>
+  <td>
+    This controls whether timestamp adjustments should be applied to INT96 data when
+    converting to timestamps, for data written by Impala. This is necessary because Impala
+    stores INT96 data with a timezone offset different from that of Hive and Spark.
+  </td>
+  <td>2.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.outputTimestampType</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L911-L921
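
   To make the two timestamp knobs concrete, a minimal sketch (paths are illustrative; `spark` is an assumed `SparkSession`):

   ```scala
   // Reading INT96 timestamps written by Impala: apply the timezone adjustment.
   spark.conf.set("spark.sql.parquet.int96TimestampConversion", "true")
   val impalaDf = spark.read.parquet("/warehouse/impala_table")

   // Writing: store timestamps as TIMESTAMP_MICROS rather than legacy INT96.
   spark.conf.set("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MICROS")
   impalaDf.write.mode("overwrite").parquet("/warehouse/migrated_table")
   ```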



##########
docs/sql-data-sources-parquet.md:
##########
@@ -505,6 +549,84 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.6.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.enableVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding.
+  </td>
+  <td>2.0.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.enableNestedColumnVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding for nested columns (e.g., struct, list, map). 
+    Requires <code>spark.sql.parquet.enableVectorizedReader</code> to be enabled.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.recordLevelFilter.enabled</code></td>
+  <td>false</td>
+  <td>
+    If true, enables Parquet's native record-level filtering using the pushed-down filters.
+    This configuration only has an effect when <code>spark.sql.parquet.filterPushdown</code> 
+    is enabled and the vectorized reader is not used. You can ensure the vectorized reader 
+    is not used by setting <code>spark.sql.parquet.enableVectorizedReader</code> to false.
+  </td>
+  <td>2.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.columnarReaderBatchSize</code></td>
+  <td>4096</td>
+  <td>
+    The number of rows to include in a Parquet vectorized reader batch. The value should
+    be chosen carefully to minimize overhead and avoid OOMs when reading data.
+  </td>
+  <td>2.4.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.fieldId.write.enabled</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L1065-L1072
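
   A sketch of attaching field IDs on the write side (the metadata key "parquet.field.id" follows Spark's ParquetUtils constant; treat the exact key as an assumption):

   ```scala
   import org.apache.spark.sql.Row
   import org.apache.spark.sql.types._

   // Field IDs travel in per-column metadata of the Spark schema.
   val schema = StructType(Seq(
     StructField("id", LongType, nullable = false,
       new MetadataBuilder().putLong("parquet.field.id", 1L).build()),
     StructField("name", StringType, nullable = true,
       new MetadataBuilder().putLong("parquet.field.id", 2L).build())
   ))
   val df = spark.createDataFrame(
     spark.sparkContext.parallelize(Seq(Row(1L, "a"))), schema)
   df.write.mode("overwrite").parquet("/tmp/with_field_ids")
   ```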



##########
docs/sql-data-sources-parquet.md:
##########
@@ -505,6 +549,84 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.6.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.enableVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding.
+  </td>
+  <td>2.0.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.enableNestedColumnVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding for nested columns (e.g., struct, list, map). 
+    Requires <code>spark.sql.parquet.enableVectorizedReader</code> to be enabled.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.recordLevelFilter.enabled</code></td>
+  <td>false</td>
+  <td>
+    If true, enables Parquet's native record-level filtering using the pushed-down filters.
+    This configuration only has an effect when <code>spark.sql.parquet.filterPushdown</code> 
+    is enabled and the vectorized reader is not used. You can ensure the vectorized reader 
+    is not used by setting <code>spark.sql.parquet.enableVectorizedReader</code> to false.
+  </td>
+  <td>2.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.columnarReaderBatchSize</code></td>
+  <td>4096</td>
+  <td>
+    The number of rows to include in a parquet vectorized reader batch. The number should 
+    be carefully chosen to minimize overhead and avoid OOMs in reading data.
+  </td>
+  <td>2.4.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.fieldId.write.enabled</code></td>
+  <td>true</td>
+  <td>
+    Field ID is a native field of the Parquet schema spec. When enabled,
+    Parquet writers will propagate the field ID metadata (if present) from the Spark schema to the Parquet schema.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.fieldId.read.enabled</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L1074-L1081
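
   A minimal read-side sketch (`schemaWithFieldIds` is a hypothetical schema carrying the same "parquet.field.id" metadata as in the write-side example above):

   ```scala
   // Resolve Parquet columns by field ID instead of by name; useful when
   // columns were renamed but their IDs stayed stable.
   spark.conf.set("spark.sql.parquet.fieldId.read.enabled", "true")
   val df = spark.read.schema(schemaWithFieldIds).parquet("/tmp/with_field_ids")
   ```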



##########
docs/sql-data-sources-parquet.md:
##########
@@ -505,6 +549,84 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.6.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.enableVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding.
+  </td>
+  <td>2.0.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.enableNestedColumnVectorizedReader</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L1040-L1046
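
   A small sketch of the dependency between the two readers (assuming `spark` and an illustrative path):

   ```scala
   // Nested-column vectorization only takes effect while the top-level
   // vectorized reader is also enabled.
   spark.conf.set("spark.sql.parquet.enableVectorizedReader", "true")
   spark.conf.set("spark.sql.parquet.enableNestedColumnVectorizedReader", "true")
   val nested = spark.read.parquet("/data/events_nested") // struct/list/map columns
   ```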



##########
docs/sql-data-sources-parquet.md:
##########
@@ -505,6 +549,84 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.6.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.enableVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding.
+  </td>
+  <td>2.0.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.enableNestedColumnVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding for nested columns (e.g., struct, list, map). 
+    Requires <code>spark.sql.parquet.enableVectorizedReader</code> to be enabled.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.recordLevelFilter.enabled</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L1048-L1056
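
   The preconditions spelled out in the description, as a sketch (path and predicate are illustrative):

   ```scala
   // Record-level filtering only applies with filter pushdown on and the
   // vectorized reader off.
   spark.conf.set("spark.sql.parquet.filterPushdown", "true")
   spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
   spark.conf.set("spark.sql.parquet.recordLevelFilter.enabled", "true")
   spark.read.parquet("/data/events").where("status = 'ok'").count()
   ```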



##########
docs/sql-data-sources-parquet.md:
##########
@@ -505,6 +549,84 @@ Configuration of Parquet can be done using the `setConf` method on `SparkSession
   </td>
   <td>1.6.0</td>
 </tr>
+<tr>
+  <td><code>spark.sql.parquet.enableVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding.
+  </td>
+  <td>2.0.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.enableNestedColumnVectorizedReader</code></td>
+  <td>true</td>
+  <td>
+    Enables vectorized Parquet decoding for nested columns (e.g., struct, list, map). 
+    Requires <code>spark.sql.parquet.enableVectorizedReader</code> to be enabled.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.recordLevelFilter.enabled</code></td>
+  <td>false</td>
+  <td>
+    If true, enables Parquet's native record-level filtering using the pushed-down filters.
+    This configuration only has an effect when <code>spark.sql.parquet.filterPushdown</code> 
+    is enabled and the vectorized reader is not used. You can ensure the vectorized reader 
+    is not used by setting <code>spark.sql.parquet.enableVectorizedReader</code> to false.
+  </td>
+  <td>2.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.columnarReaderBatchSize</code></td>
+  <td>4096</td>
+  <td>
+    The number of rows to include in a Parquet vectorized reader batch. The value should
+    be chosen carefully to minimize overhead and avoid OOMs when reading data.
+  </td>
+  <td>2.4.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.fieldId.write.enabled</code></td>
+  <td>true</td>
+  <td>
+    Field ID is a native field of the Parquet schema spec. When enabled,
+    Parquet writers will propagate the field ID metadata (if present) from the Spark schema to the Parquet schema.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.fieldId.read.enabled</code></td>
+  <td>false</td>
+  <td>
+    Field ID is a native field of the Parquet schema spec. When enabled, Parquet readers 
+    will use field IDs (if present) in the requested Spark schema to look up Parquet 
+    fields instead of using column names.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.fieldId.read.ignoreMissing</code></td>
+  <td>false</td>
+  <td>
+    When the Parquet files don't contain any field IDs but the
+    Spark read schema uses field IDs to read, Spark will silently return nulls
+    when this flag is enabled, and fail the read otherwise.
+  </td>
+  <td>3.3.0</td>
+</tr>
+<tr>
+  <td><code>spark.sql.parquet.timestampNTZ.enabled</code></td>

Review Comment:
   https://github.com/apache/spark/blob/309638eeefbfb13dae8dbded0279bf44390389ee/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L1092-L1101
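
   A hedged round-trip sketch based on the SQLConf description linked above (TIMESTAMP_NTZ literals assume a Spark version that supports the type):

   ```scala
   // When enabled, Parquet timestamp columns annotated with
   // isAdjustedToUTC = false round-trip as TIMESTAMP_NTZ.
   spark.conf.set("spark.sql.parquet.timestampNTZ.enabled", "true")
   spark.sql("SELECT TIMESTAMP_NTZ '2022-10-08 08:51:36' AS ts")
     .write.mode("overwrite").parquet("/tmp/ntz")
   spark.read.parquet("/tmp/ntz").printSchema() // ts: timestamp_ntz
   ```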



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

