Posted to issues@spark.apache.org by "Yin Huai (JIRA)" <ji...@apache.org> on 2016/01/04 02:18:39 UTC

[jira] [Commented] (SPARK-11661) We should still pushdown filters returned by a data source's unhandledFilters

    [ https://issues.apache.org/jira/browse/SPARK-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15080605#comment-15080605 ] 

Yin Huai commented on SPARK-11661:
----------------------------------

Can you create a jira? Do you also want to create a PR to fix it? (DataSourceStrategy is the file that needs to be updated.)

> We should still pushdown filters returned by a data source's unhandledFilters
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-11661
>                 URL: https://issues.apache.org/jira/browse/SPARK-11661
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Yin Huai
>            Assignee: Yin Huai
>            Priority: Blocker
>             Fix For: 1.6.0
>
>
> We added the unhandledFilters interface in SPARK-10978. It gives a data source a chance to tell Spark SQL that it may not apply the returned filters to every row, so Spark SQL should use a Filter operator to re-evaluate them. However, even when a filter is part of the returned unhandledFilters, we should still push it down to the source. For example, our internal data sources do not override this method, so the default implementation reports every pushed filter as unhandled; if we do not push those filters down, we are effectively turning off the filter pushdown feature for them.
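> A minimal sketch (the relation and names below are hypothetical, not Spark code) of how the contract should behave once this is fixed: unhandledFilters only declares which pushed filters Spark SQL must re-evaluate in a Filter operator, while buildScan should still receive all of them for best-effort pruning:
>
>     import org.apache.spark.rdd.RDD
>     import org.apache.spark.sql.{Row, SQLContext}
>     import org.apache.spark.sql.sources.{BaseRelation, EqualTo, Filter, PrunedFilteredScan}
>     import org.apache.spark.sql.types.StructType
>
>     class ExampleRelation(override val sqlContext: SQLContext,
>                           override val schema: StructType)
>       extends BaseRelation with PrunedFilteredScan {
>
>       // Everything except EqualTo is reported as unhandled, so Spark SQL
>       // keeps a Filter operator above the scan to re-check those rows.
>       override def unhandledFilters(filters: Array[Filter]): Array[Filter] =
>         filters.filterNot(_.isInstanceOf[EqualTo])
>
>       // The point of this issue: `filters` should still contain the
>       // unhandled ones, so the source can use them for best-effort pruning.
>       override def buildScan(
>           requiredColumns: Array[String],
>           filters: Array[Filter]): RDD[Row] =
>         sqlContext.sparkContext.emptyRDD[Row]  // stub scan for the sketch
>     }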



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org