Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2014/12/11 21:55:13 UTC

[jira] [Resolved] (SPARK-2201) Improve FlumeInputDStream's stability and make it scalable

     [ https://issues.apache.org/jira/browse/SPARK-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-2201.
------------------------------
    Resolution: Won't Fix

I hope I understood this right, but the PR discussion seemed to end with the suggestion that this would not go into Spark itself, but possibly a contrib repo, and that it was partly already implemented by other changes.

> Improve FlumeInputDStream's stability and make it scalable
> ----------------------------------------------------------
>
>                 Key: SPARK-2201
>                 URL: https://issues.apache.org/jira/browse/SPARK-2201
>             Project: Spark
>          Issue Type: Improvement
>          Components: Streaming
>            Reporter: sunsc
>
> Currently:
> FlumeUtils.createStream(ssc, "localhost", port); 
> This means that only one Flume receiver can work with a FlumeInputDStream, so the solution is not scalable. 
> I use ZooKeeper to solve this problem.
> Spark Flume receivers register themselves under a ZooKeeper path when started, and a Flume agent gets the physical hosts and pushes events to them.
> Some work needs to be done here (a rough sketch follows below): 
> 1. Receivers create temporary (ephemeral) nodes in ZooKeeper; listeners just watch those nodes.
> 2. When the Spark FlumeReceivers start, each acquires a physical address (the local host's IP and an idle port) and registers itself in ZooKeeper.
> 3. A new Flume sink: in its appendEvents method, it gets the physical hosts and pushes data to them in a round-robin manner.
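
Below is a minimal, hypothetical Scala sketch of the registration/discovery flow described above, assuming Apache Curator for ZooKeeper access; the base path, class name, and method names are illustrative only and do not come from the actual PR.

import java.util.concurrent.atomic.AtomicLong
import scala.collection.JavaConverters._

import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry
import org.apache.zookeeper.CreateMode

// Hypothetical helper illustrating steps 1-3: receivers publish ephemeral
// nodes under a known path; the Flume sink lists those nodes and picks
// targets round-robin.
class ZkReceiverRegistry(zkQuorum: String,
                         basePath: String = "/spark/flume-receivers") {
  private val client = CuratorFrameworkFactory.newClient(
    zkQuorum, new ExponentialBackoffRetry(1000, 3))
  client.start()

  // Receiver side (steps 1 and 2): register "host:port" as an ephemeral
  // node so the entry vanishes automatically if the receiver dies.
  def register(host: String, port: Int): String =
    client.create()
      .creatingParentsIfNeeded()
      .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
      .forPath(s"$basePath/receiver-", s"$host:$port".getBytes("UTF-8"))

  // Sink side (step 3): list the currently registered receivers. A real
  // sink would also watch the path and refresh on membership changes.
  def receivers(): Seq[String] =
    client.getChildren.forPath(basePath).asScala.toSeq
      .map(child => new String(client.getData.forPath(s"$basePath/$child"), "UTF-8"))

  // Simple round-robin selection over the registered receivers.
  private val counter = new AtomicLong(0)
  def nextReceiver(): Option[String] = {
    val hosts = receivers()
    if (hosts.isEmpty) None
    else Some(hosts((counter.getAndIncrement() % hosts.size).toInt))
  }
}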



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org