Posted to commits@spark.apache.org by rx...@apache.org on 2016/02/27 07:36:34 UTC
spark git commit: [SPARK-13518][SQL] Enable vectorized parquet scanner by default
Repository: spark
Updated Branches:
refs/heads/master 59e3e10be -> 7a0cb4e58
[SPARK-13518][SQL] Enable vectorized parquet scanner by default
## What changes were proposed in this pull request?
Change the default of the flag to enable this feature now that the implementation is complete.
## How was this patch tested?
The new parquet reader should be a drop-in replacement, so it will be exercised by the existing tests.
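With this change the vectorized reader is on by default, but the flag can still be overridden explicitly. A minimal sketch of opting back out, assuming the Spark 2.x `SparkSession` API (on the 1.x line the equivalent would be `sqlContext.setConf`); the application name is illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Build a session that opts out of vectorized parquet decoding,
// restoring the pre-SPARK-13518 default of false.
val spark = SparkSession.builder()
  .appName("parquet-reader-example") // hypothetical app name
  .config("spark.sql.parquet.enableVectorizedReader", "false")
  .getOrCreate()

// The flag can also be toggled at runtime on an existing session:
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "true")
```

This is a configuration sketch only; which reader is actually used for a given scan still depends on the schema being supported by the vectorized path.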
Author: Nong Li <no...@databricks.com>
Closes #11397 from nongli/spark-13518.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7a0cb4e5
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7a0cb4e5
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/7a0cb4e5
Branch: refs/heads/master
Commit: 7a0cb4e58728834b49050ce4fae418acc18a601f
Parents: 59e3e10
Author: Nong Li <no...@databricks.com>
Authored: Fri Feb 26 22:36:32 2016 -0800
Committer: Reynold Xin <rx...@databricks.com>
Committed: Fri Feb 26 22:36:32 2016 -0800
----------------------------------------------------------------------
.../src/main/scala/org/apache/spark/sql/internal/SQLConf.scala | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/spark/blob/7a0cb4e5/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index 9a50ef7..1d1e288 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -345,12 +345,9 @@ object SQLConf {
defaultValue = Some(true),
doc = "Enables using the custom ParquetUnsafeRowRecordReader.")
- // Note: this can not be enabled all the time because the reader will not be returning UnsafeRows.
- // Doing so is very expensive and we should remove this requirement instead of fixing it here.
- // Initial testing seems to indicate only sort requires this.
val PARQUET_VECTORIZED_READER_ENABLED = booleanConf(
key = "spark.sql.parquet.enableVectorizedReader",
- defaultValue = Some(false),
+ defaultValue = Some(true),
doc = "Enables vectorized parquet decoding.")
val ORC_FILTER_PUSHDOWN_ENABLED = booleanConf("spark.sql.orc.filterPushdown",
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org