Posted to user@spark.apache.org by Tobias Pfeiffer <tg...@preferred.jp> on 2014/12/01 04:44:44 UTC

Re: kafka pipeline exactly once semantics

Josh,

On Sun, Nov 30, 2014 at 10:17 PM, Josh J <jo...@gmail.com> wrote:
>
> I would like to set up a Kafka pipeline whereby I write my data to a single
> topic1, then I continue to process using Spark Streaming and write the
> transformed results to topic2, and finally I read the results from topic2.
>
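The pipeline described above can be sketched with the Spark Streaming Kafka integration current at the time (the receiver-based API in spark-streaming-kafka and the old Scala producer). This is a minimal illustration, not code from the thread: the broker/ZooKeeper addresses, group id, and the uppercase transformation are placeholder assumptions. Note that as written it gives at-least-once, not exactly-once, delivery to topic2.

```scala
// Hedged sketch: topic1 -> Spark Streaming transform -> topic2.
// Hosts, group id, and the map() transformation are illustrative assumptions.
import java.util.Properties

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object Topic1ToTopic2 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-pipeline")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Receiver-based stream from topic1; values only (keys dropped).
    val lines = KafkaUtils.createStream(
      ssc, "zkhost:2181", "pipeline-group", Map("topic1" -> 1)).map(_._2)

    // Placeholder transformation -- replace with the real processing.
    val transformed = lines.map(_.toUpperCase)

    // Write to topic2. One producer per partition, since Kafka producers
    // are not serializable and must be created on the executors.
    transformed.foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        val props = new Properties()
        props.put("metadata.broker.list", "kafkahost:9092")
        props.put("serializer.class", "kafka.serializer.StringEncoder")
        val producer = new Producer[String, String](new ProducerConfig(props))
        records.foreach(r =>
          producer.send(new KeyedMessage[String, String]("topic2", r)))
        producer.close()
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

If a batch is retried after a partial write, records can be sent to topic2 twice, which is why the exactly-once question in the subject line is nontrivial with this setup.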

Not really related to your question, but you may also want to look into
Samza <http://samza.incubator.apache.org/>, which was built exactly for this
kind of processing.

Tobias