Posted to issues@beam.apache.org by "Andrew Pilloud (Jira)" <ji...@apache.org> on 2021/05/11 01:53:00 UTC

[jira] [Updated] (BEAM-6874) HCatalogTableProvider supports filter pushdown

     [ https://issues.apache.org/jira/browse/BEAM-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Pilloud updated BEAM-6874:
---------------------------------
    Summary: HCatalogTableProvider supports filter pushdown  (was: HCatalogTableProvider always read all rows)

> HCatalogTableProvider supports filter pushdown
> ----------------------------------------------
>
>                 Key: BEAM-6874
>                 URL: https://issues.apache.org/jira/browse/BEAM-6874
>             Project: Beam
>          Issue Type: Bug
>          Components: dsl-sql, io-java-hcatalog
>    Affects Versions: 2.11.0
>            Reporter: Near
>            Priority: P3
>         Attachments: limit.png
>
>
> Hi,
> I'm using HCatalogTableProvider while doing SqlTransform.query. The query is something like "select * from `hive`.`table_name` limit 10". Despite the limit clause, the data source still reads far more rows (the data of the Hive table are files on S3), even more than the number of rows in one file (or partition).
>  
> Some more details:
>  # It is running on Flink.
>  # I actually implemented my own HiveTableProvider, because HCatalogBeamSchema only supports primitive types. However, that table provider works when I query a small table with ~1k rows.
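
A minimal sketch of the setup described above. This is an assumption-based illustration, not code from the report: the metastore URI, table name, and the `HCatalogTableProvider.create(Map)` factory call are taken from the Beam SQL HCatalog extension APIs and may need adjusting for your Beam version.

```java
// Sketch: a Beam SQL query with a LIMIT against a Hive table registered
// via HCatalogTableProvider. The reporter observed that, despite LIMIT 10,
// the source read far more rows than one file/partition (no pushdown).
import java.util.HashMap;
import java.util.Map;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.extensions.sql.SqlTransform;
import org.apache.beam.sdk.extensions.sql.meta.provider.hcatalog.HCatalogTableProvider;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PBegin;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;

public class HiveLimitQuery {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Placeholder metastore configuration.
    Map<String, String> configProperties = new HashMap<>();
    configProperties.put("hive.metastore.uris", "thrift://metastore-host:9083");

    // Register the HCatalog-backed tables under the "hive" name used in the query.
    PCollection<Row> rows =
        PBegin.in(pipeline)
            .apply(
                SqlTransform.query("SELECT * FROM `hive`.`table_name` LIMIT 10")
                    .withTableProvider("hive", HCatalogTableProvider.create(configProperties)));

    pipeline.run().waitUntilFinish();
  }
}
```

Note this sketch requires the `beam-sdks-java-extensions-sql` and `beam-sdks-java-extensions-sql-hcatalog` dependencies plus a reachable Hive metastore, so it is not runnable standalone.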



--
This message was sent by Atlassian Jira
(v8.3.4#803005)