Posted to reviews@spark.apache.org by cloud-fan <gi...@git.apache.org> on 2018/11/14 01:36:12 UTC
[GitHub] spark pull request #21957: [SPARK-24994][SQL] When the data type of the fiel...
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21957#discussion_r233287374
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala ---
@@ -269,7 +269,8 @@ case class FileSourceScanExec(
   }

   @transient
-  private val pushedDownFilters = dataFilters.flatMap(DataSourceStrategy.translateFilter)
+  private val pushedDownFilters = dataFilters.flatMap(DataSourceStrategy.
+    translateFilter(_, !relation.fileFormat.isInstanceOf[ParquetSource]))
--- End diff --
I don't think we accept changes like this. If this is specific to parquet, do it in `ParquetFilters`.
And I still prefer to normalize the filters and remove the unnecessary cast before pushing filters down to data sources.
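To illustrate the normalization the comment suggests, here is a minimal, self-contained sketch (not Spark's actual `Expression` or `DataSourceStrategy` classes; all names are illustrative). The idea: when a column cast to a wider type is compared against a literal that actually fits the column's own type, drop the cast and narrow the literal instead, so the predicate can be pushed down on the raw column.

```scala
// Hypothetical mini expression tree, standing in for Catalyst expressions.
sealed trait Expr
case class Col(name: String, dataType: String) extends Expr
case class Lit(value: Any, dataType: String) extends Expr
case class Cast(child: Expr, dataType: String) extends Expr
case class EqualTo(left: Expr, right: Expr) extends Expr

// Normalize one pattern: Cast(intCol as long) = longLiteral.
// If the literal fits in an Int, rewrite to intCol = intLiteral,
// which a data source can push down; otherwise leave the predicate alone.
def removeUnnecessaryCast(e: Expr): Expr = e match {
  case EqualTo(Cast(c @ Col(_, "int"), "long"), Lit(v: Long, "long"))
      if v >= Int.MinValue && v <= Int.MaxValue =>
    EqualTo(c, Lit(v.toInt, "int"))
  case other => other
}
```

For example, `removeUnnecessaryCast(EqualTo(Cast(Col("a", "int"), "long"), Lit(5L, "long")))` rewrites to `EqualTo(Col("a", "int"), Lit(5, "int"))`, while a literal outside the Int range leaves the expression unchanged. Doing this rewrite centrally, before filter translation, is format-agnostic, which is the advantage over special-casing one file format at the call site.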
---
---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org