Posted to commits@nifi.apache.org by "Tony Kurc (JIRA)" <ji...@apache.org> on 2015/11/07 03:56:10 UTC

[jira] [Updated] (NIFI-710) Enumerate available Hadoop library versions for GetHDFS and PutHDFS

     [ https://issues.apache.org/jira/browse/NIFI-710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tony Kurc updated NIFI-710:
---------------------------
    Fix Version/s:     (was: 0.4.0)

> Enumerate available Hadoop library versions for GetHDFS and PutHDFS
> -------------------------------------------------------------------
>
>                 Key: NIFI-710
>                 URL: https://issues.apache.org/jira/browse/NIFI-710
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Extensions
>         Environment: Unix, Hadoop
>            Reporter: Nathan Gough
>            Assignee: Oleg Zhurakousky
>            Priority: Minor
>              Labels: hadoop, hdfs, library
>
> As far as I'm aware, it is only possible to use a single Hadoop library version per NiFi instance/cluster.
> Would it be possible to enumerate the available Hadoop library versions in the /lib directory and let the user select, per HDFS processor, which Hadoop version the cluster they are reading from or writing to is running? The intent is to allow a single NiFi instance/cluster to write to multiple HDFS clusters that may be running different versions, e.g. QA and Prod. This would eliminate the need to run multiple HDFS feeder NiFi instances solely because the Hadoop versions differ.
> I would try to figure this out myself, but I don't know much about class loading in NiFi or how I would get started.
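> As a rough illustration only (not an existing NiFi API), one way to picture it: keep each Hadoop version's jars in its own subdirectory of lib/ and build an isolated classloader per HDFS processor. The directory layout (lib/hadoop-<version>/) and the factory class below are assumptions made for the sketch:
> {code:java}
> import java.io.File;
> import java.net.URL;
> import java.net.URLClassLoader;
> import java.util.ArrayList;
> import java.util.List;
>
> // Hypothetical helper, sketched for discussion; names and layout are not from NiFi.
> public class HadoopVersionClassLoaderFactory {
>
>     // Build an isolated classloader over the jars for one Hadoop version,
>     // e.g. lib/hadoop-2.7.1. Passing a null parent keeps the Hadoop classes
>     // from leaking between processors that selected different versions.
>     public static URLClassLoader forVersion(File libDir, String version) throws Exception {
>         File versionDir = new File(libDir, "hadoop-" + version);
>         File[] jars = versionDir.listFiles((dir, name) -> name.endsWith(".jar"));
>         if (jars == null || jars.length == 0) {
>             throw new IllegalArgumentException("No jars found under " + versionDir);
>         }
>         List<URL> urls = new ArrayList<>();
>         for (File jar : jars) {
>             urls.add(jar.toURI().toURL());
>         }
>         return new URLClassLoader(urls.toArray(new URL[0]), null);
>     }
> }
> {code}
> Enumerating the selectable versions would then just be listing the hadoop-* subdirectories of /lib and exposing them as allowable values on the processor property.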



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)