Posted to issues@nifi.apache.org by "Bryan Bende (JIRA)" <ji...@apache.org> on 2017/04/27 21:35:04 UTC

[jira] [Updated] (NIFI-3724) Add Put/Fetch Parquet Processors

     [ https://issues.apache.org/jira/browse/NIFI-3724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bryan Bende updated NIFI-3724:
------------------------------
    Status: Patch Available  (was: Open)

> Add Put/Fetch Parquet Processors
> --------------------------------
>
>                 Key: NIFI-3724
>                 URL: https://issues.apache.org/jira/browse/NIFI-3724
>             Project: Apache NiFi
>          Issue Type: Improvement
>            Reporter: Bryan Bende
>            Assignee: Bryan Bende
>            Priority: Minor
>
> Now that the record reader/writer services are in master, it would be nice to have readers and writers for Parquet. Since Parquet's API is based on the Hadoop Path object rather than InputStreams/OutputStreams, we can't implement direct conversions to and from Parquet in the middle of a flow, but we can perform the conversion by taking any record format and writing it to a Path as Parquet, or reading Parquet from a Path and writing it out as another record format (a sketch of the Path-based API follows below).
> We should add a PutParquet that uses a record reader and writes records to a Path as Parquet, and a FetchParquet that reads Parquet from a Path and writes out records to a flow file using a record writer.
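
For illustration, here is a minimal sketch of the Path-based Parquet API the description refers to, using parquet-avro's AvroParquetWriter and AvroParquetReader with generic Avro records. The schema, file location, and field values are hypothetical placeholders; the actual processors would derive the Path and Hadoop Configuration from processor properties and flow file attributes rather than hard-coding them.

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.avro.AvroParquetReader;
    import org.apache.parquet.avro.AvroParquetWriter;
    import org.apache.parquet.hadoop.ParquetReader;
    import org.apache.parquet.hadoop.ParquetWriter;

    public class ParquetPathSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical record schema for the sketch
            Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                + "{\"name\":\"name\",\"type\":\"string\"},"
                + "{\"name\":\"age\",\"type\":\"int\"}]}");

            // Hypothetical target location; note this is a Hadoop Path, not a stream
            Path path = new Path("/tmp/users.parquet");
            Configuration conf = new Configuration();

            // Write side (roughly what PutParquet would do per record)
            try (ParquetWriter<GenericRecord> writer = AvroParquetWriter
                    .<GenericRecord>builder(path)
                    .withSchema(schema)
                    .withConf(conf)
                    .build()) {
                GenericRecord record = new GenericData.Record(schema);
                record.put("name", "bryan");
                record.put("age", 30);
                writer.write(record);
            }

            // Read side (roughly what FetchParquet would do before handing
            // records to a record writer)
            try (ParquetReader<GenericRecord> reader = AvroParquetReader
                    .<GenericRecord>builder(path)
                    .withConf(conf)
                    .build()) {
                GenericRecord record;
                while ((record = reader.read()) != null) {
                    System.out.println(record);
                }
            }
        }
    }

Because both the writer and reader builders take a Path, the conversion has to go through a filesystem location (local, HDFS, etc.) rather than the flow file content stream, which is why the issue proposes Put/Fetch processors instead of in-flow ConvertRecord-style processors.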



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)