Posted to issues@camel.apache.org by "Marius Cornescu (Jira)" <ji...@apache.org> on 2019/10/17 07:15:00 UTC

[jira] [Commented] (CAMEL-14076) camel-hdfs - Make the HdfsProducer compatible with RemoteFileConsumer (from(hdfs) -> to(sftp))

    [ https://issues.apache.org/jira/browse/CAMEL-14076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953462#comment-16953462 ] 

Marius Cornescu commented on CAMEL-14076:
-----------------------------------------

Will work on this.

> camel-hdfs - Make the HdfsProducer compatible with RemoteFileConsumer (from(hdfs) -> to(sftp))
> ----------------------------------------------------------------------------------------------
>
>                 Key: CAMEL-14076
>                 URL: https://issues.apache.org/jira/browse/CAMEL-14076
>             Project: Camel
>          Issue Type: Improvement
>          Components: camel-hdfs
>            Reporter: Marius Cornescu
>            Priority: Major
>
>   Users should be able to consume files from Hadoop and publish them to other destinations such as sftp/ftp/...
>   The *RemoteFileComponent*s look to me like the gold standard, so it would be nice to make this component compatible with them.
>   Currently, we have to set *chunkSize* to a large enough value so that only one message is produced per file, or perform some aggregation after the consumer.
>  
>   This work should lay the groundwork for adding a *streamDownload* parameter for the consumer.
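For context, the workaround mentioned in the description can be sketched as a Camel route like the one below. This is a minimal sketch, not the proposed fix: the host names, paths, and credentials are illustrative placeholders, and the chunkSize value (in bytes) is an assumed figure chosen to exceed the size of the files being consumed so each file arrives as a single message.

```java
import org.apache.camel.builder.RouteBuilder;

public class HdfsToSftpRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Workaround: set chunkSize larger than any expected file so the
        // HDFS consumer emits one message per file instead of splitting
        // each file into multiple chunked messages.
        // (namenode/host/credentials below are illustrative placeholders)
        from("hdfs://namenode:8020/data/reports?chunkSize=134217728")
            .to("sftp://user@ftp.example.com/upload?password=secret");
    }
}
```

Without the oversized chunkSize (or a subsequent aggregation step), each HDFS file would be split into multiple messages, and the SFTP producer would overwrite the target file chunk by chunk rather than receiving it whole.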



--
This message was sent by Atlassian Jira
(v8.3.4#803005)