Posted to issues@beam.apache.org by "Anonymous (Jira)" <ji...@apache.org> on 2023/04/13 11:04:00 UTC

[jira] [Updated] (BEAM-11913) Add support for Hadoop configuration on ParquetIO

     [ https://issues.apache.org/jira/browse/BEAM-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anonymous updated BEAM-11913:
-----------------------------
    Status: Triage Needed  (was: Resolved)

> Add support for Hadoop configuration on ParquetIO
> -------------------------------------------------
>
>                 Key: BEAM-11913
>                 URL: https://issues.apache.org/jira/browse/BEAM-11913
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-parquet
>            Reporter: Ismaël Mejía
>            Assignee: Ismaël Mejía
>            Priority: P2
>             Fix For: 2.29.0
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This is a common user request that we avoided in the past in order to keep Hadoop objects out of ParquetIO's public API. However, there are valid reasons to do it:
> 1. Many Parquet features are configurable via public helper methods that write their settings into Hadoop's Configuration object, e.g. column projection via `AvroReadSupport.setRequestedProjection(conf, projectionSchema);` or predicate filters via `ParquetInputFormat.setFilterPredicate(sc.hadoopConfiguration(), filterPredicate);`. Giving access to the Configuration would let power users do advanced tuning without any extra maintenance on the IO side.
> 2. The main reason to avoid the Hadoop Configuration object was to align with a future Hadoop-free Parquet API (see PARQUET-1126 for details), but that does not seem likely to happen soon.
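
As a sketch of the kind of Configuration-based setup point 1 refers to, here is how the two cited helpers are used directly against parquet-avro and parquet-hadoop (this assumes those libraries on the classpath and is independent of whatever API ParquetIO ends up exposing; the `age` column and the projection schema parameter are hypothetical examples):

```java
import org.apache.avro.Schema;
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.avro.AvroReadSupport;
import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.filter2.predicate.FilterPredicate;
import org.apache.parquet.hadoop.ParquetInputFormat;

public class ParquetConfSketch {
  /** Builds a Hadoop Configuration carrying Parquet read options. */
  public static Configuration buildReadConf(Schema projectionSchema) {
    Configuration conf = new Configuration();
    // Column projection: only the fields in projectionSchema are read.
    AvroReadSupport.setRequestedProjection(conf, projectionSchema);
    // Predicate push-down: skip row groups that cannot contain age > 18.
    FilterPredicate predicate = FilterApi.gt(FilterApi.intColumn("age"), 18);
    ParquetInputFormat.setFilterPredicate(conf, predicate);
    return conf;
  }
}
```

Both helpers simply serialize their arguments into Configuration keys, which is why passing a user-supplied Configuration (or key/value map) through ParquetIO is enough to unlock these features without new IO-level methods for each one.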



--
This message was sent by Atlassian Jira
(v8.20.10#820010)