Posted to issues@iceberg.apache.org by "zhongyujiang (via GitHub)" <gi...@apache.org> on 2023/03/10 13:14:58 UTC

[GitHub] [iceberg] zhongyujiang commented on issue #7022: Function filters() is useless on flink datastream api when iceberg table is stored in parquet format

zhongyujiang commented on issue #7022:
URL: https://github.com/apache/iceberg/issues/7022#issuecomment-1463789508

   I don't know why this returns the correct result when the table is stored as ORC, but I think Iceberg always leaves row-level filtering to a post-scan step: pushed-down filters are only used to prune data files (and, for Parquet, row groups via column statistics), not to drop individual non-matching rows. Also, the Javadoc for `FlinkSource#forRowData` says it is equivalent to `TableScan`, which does not do row-level filtering either, so I think this is expected behavior (see the sketches after the quoted doc below).
   
   > Initialize a FlinkSource.Builder to read the data from iceberg table. Equivalent to TableScan. See more options in ScanContext.
   > The Source can read static data in bounded mode. It can also continuously check the arrival of new data and read records incrementally.
   > Without startSnapshotId: Bounded
   > With startSnapshotId and with endSnapshotId: Bounded
   > With startSnapshotId (-1 means unbounded preceding) and Without endSnapshotId: Unbounded
   > 
   > Returns:
   > FlinkSource.Builder to connect the iceberg table
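
   For illustration, here is a minimal sketch of what "leaving row-level filtering to a post-scan step" looks like on the DataStream API. The warehouse path, table location, the `id` column, and the literal `42` are all hypothetical; `filters()` is pushed down to narrow which files (and Parquet row groups) get scanned, while the explicit `.filter(...)` after the source is what actually guarantees only matching rows come out:

   ```java
   import java.util.Collections;

   import org.apache.flink.streaming.api.datastream.DataStream;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
   import org.apache.flink.table.data.RowData;
   import org.apache.iceberg.expressions.Expressions;
   import org.apache.iceberg.flink.TableLoader;
   import org.apache.iceberg.flink.source.FlinkSource;

   public class PostScanFilterSketch {
     public static void main(String[] args) throws Exception {
       StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

       // Hypothetical Hadoop-table location.
       TableLoader tableLoader =
           TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/db/events");

       DataStream<RowData> stream =
           FlinkSource.forRowData()
               .env(env)
               .tableLoader(tableLoader)
               // Pushed down to scan planning: prunes data files (and, for
               // Parquet, row groups via column statistics). Rows in the
               // surviving files are NOT guaranteed to match the predicate.
               .filters(Collections.singletonList(Expressions.equal("id", 42)))
               .build();

       // The explicit row-level filter; assumes field 0 is a non-null int "id".
       DataStream<RowData> filtered = stream.filter(row -> row.getInt(0) == 42);

       filtered.print();
       env.execute("iceberg-post-scan-filter-sketch");
     }
   }
   ```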
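
   And, continuing with `env` and `tableLoader` from the sketch above, a sketch of the bounded vs. unbounded modes the quoted doc describes; the snapshot id is a placeholder:

   ```java
   // Bounded: no startSnapshotId, so the source reads the table's current
   // snapshot and then finishes.
   DataStream<RowData> bounded =
       FlinkSource.forRowData()
           .env(env)
           .tableLoader(tableLoader)
           .build();

   // Unbounded: streaming mode with a startSnapshotId and no endSnapshotId
   // keeps checking for new snapshots and reads records incrementally.
   DataStream<RowData> unbounded =
       FlinkSource.forRowData()
           .env(env)
           .tableLoader(tableLoader)
           .streaming(true)
           .startSnapshotId(1234567890L) // placeholder snapshot id
           .build();
   ```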


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

