Posted to user@spark.apache.org by Maurin Lenglart <ma...@cuberonlabs.com> on 2017/03/22 19:49:09 UTC

Spark streaming to kafka exactly once

Hi,
we are trying to build a Spark Streaming solution that subscribes to and pushes to Kafka.
But we are running into the problem of duplicate events.
Right now, I am doing a “foreachRDD”, looping over the messages of each partition, and sending those messages to Kafka.
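
For reference, the pattern looks roughly like this (simplified; the topic
name, producer config, and the "stream" variable are placeholders for our
actual setup):

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    // stream is a DStream[String] read from the input topic
    stream.foreachRDD { rdd =>
      rdd.foreachPartition { partition =>
        // one producer per partition/task; config values are placeholders
        val props = new Properties()
        props.put("bootstrap.servers", "broker1:9092")
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        val producer = new KafkaProducer[String, String](props)
        partition.foreach { msg =>
          // if the batch fails after some sends succeed, the retry re-sends
          // everything, which is where the duplicates show up
          producer.send(new ProducerRecord[String, String]("output-topic", msg))
        }
        producer.close()
      }
    }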

Is there any good way of solving that issue?

thanks

Re: Spark streaming to kafka exactly once

Posted by Maurin Lenglart <ma...@cuberonlabs.com>.
Ok,
Thanks for your answers

On 3/22/17, 1:34 PM, "Cody Koeninger" <co...@koeninger.org> wrote:

    If you're talking about reading the same message multiple times in a
    failure situation, see
    
    https://github.com/koeninger/kafka-exactly-once
    
    If you're talking about producing the same message multiple times in a
    failure situation, keep an eye on
    
    https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging
    
    If you're talking about producers just misbehaving and sending
    different copies of what is essentially the same message from a domain
    perspective, you have to dedupe that with your own logic.
    
    On Wed, Mar 22, 2017 at 2:52 PM, Matt Deaver <ma...@gmail.com> wrote:
    > You have to handle de-duplication upstream or downstream. It might
    > technically be possible to handle this in Spark but you'll probably have a
    > better time handling duplicates in the service that reads from Kafka.
    >
    > On Wed, Mar 22, 2017 at 1:49 PM, Maurin Lenglart <ma...@cuberonlabs.com>
    > wrote:
    >>
    >> Hi,
    >> we are trying to build a Spark Streaming solution that subscribes to and
    >> pushes to Kafka.
    >>
    >> But we are running into the problem of duplicate events.
    >>
    >> Right now, I am doing a “foreachRDD”, looping over the messages of each
    >> partition, and sending those messages to Kafka.
    >>
    >>
    >>
    >> Is there any good way of solving that issue?
    >>
    >>
    >>
    >> thanks
    >
    >
    >
    >
    > --
    > Regards,
    >
    > Matt
    > Data Engineer
    > https://www.linkedin.com/in/mdeaver
    > http://mattdeav.pythonanywhere.com/
    


Re: Spark streaming to kafka exactly once

Posted by Cody Koeninger <co...@koeninger.org>.
If you're talking about reading the same message multiple times in a
failure situation, see

https://github.com/koeninger/kafka-exactly-once
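
The rough shape of the idea there is to save your results and the consumed
offsets in the same transaction in your own store, something like this
(sketch only; the store calls are left as comments since they depend
entirely on what you write to):

    import org.apache.spark.TaskContext
    import org.apache.spark.streaming.kafka010.HasOffsetRanges
    // (use org.apache.spark.streaming.kafka.HasOffsetRanges with the 0.8 integration)

    stream.foreachRDD { rdd =>
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      rdd.foreachPartition { partition =>
        val osr = offsetRanges(TaskContext.get.partitionId)
        // begin a transaction in your store, then:
        //   - write the results for this partition
        //   - write osr.untilOffset for (osr.topic, osr.partition)
        // commit; on restart, start the stream from the stored offsets
      }
    }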

If you're talking about producing the same message multiple times in a
failure situation, keep an eye on

https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging

If you're talking about producers just misbehaving and sending
different copies of what is essentially the same message from a domain
perspective, you have to dedupe that with your own logic.
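
As a rough sketch of that last case (the Event class and eventId field are
only placeholders for whatever uniquely identifies an event in your domain):

    case class Event(eventId: String, payload: String)

    // given a DStream[Event], keep one copy per domain key within each batch
    val deduped = events.transform { rdd =>
      rdd
        .map(e => (e.eventId, e))
        .reduceByKey((first, _) => first)
        .values
    }

Note that this only collapses duplicates within a single batch; deduping
across batches means keeping state (e.g. mapWithState) or checking a store
on the consuming side.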

On Wed, Mar 22, 2017 at 2:52 PM, Matt Deaver <ma...@gmail.com> wrote:
> You have to handle de-duplication upstream or downstream. It might
> technically be possible to handle this in Spark but you'll probably have a
> better time handling duplicates in the service that reads from Kafka.
>
> On Wed, Mar 22, 2017 at 1:49 PM, Maurin Lenglart <ma...@cuberonlabs.com>
> wrote:
>>
>> Hi,
>> we are trying to build a Spark Streaming solution that subscribes to and
>> pushes to Kafka.
>>
>> But we are running into the problem of duplicate events.
>>
>> Right now, I am doing a “foreachRDD”, looping over the messages of each
>> partition, and sending those messages to Kafka.
>>
>>
>>
>> Is there any good way of solving that issue?
>>
>>
>>
>> thanks
>
>
>
>
> --
> Regards,
>
> Matt
> Data Engineer
> https://www.linkedin.com/in/mdeaver
> http://mattdeav.pythonanywhere.com/



Re: Spark streaming to kafka exactly once

Posted by Matt Deaver <ma...@gmail.com>.
You have to handle de-duplication upstream or downstream. It might
technically be possible to handle this in Spark but you'll probably have a
better time handling duplicates in the service that reads from Kafka.
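
Roughly, the reading service can do something like this (a sketch only; in
practice "seen" would be a durable store with some expiry, and process() is
a stand-in for your real handler):

    import org.apache.kafka.clients.consumer.ConsumerRecord
    import scala.collection.mutable

    val seen = mutable.Set[String]()   // a durable store in practice

    def process(value: String): Unit = println(value)   // stand-in handler

    def handle(record: ConsumerRecord[String, String]): Unit = {
      val key = record.key()           // a deterministic domain key set by the producer
      if (seen.add(key)) {
        process(record.value())        // first time we see this key
      }                                // else: duplicate, drop it
    }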

On Wed, Mar 22, 2017 at 1:49 PM, Maurin Lenglart <ma...@cuberonlabs.com>
wrote:

> Hi,
> we are trying to build a Spark Streaming solution that subscribes to and
> pushes to Kafka.
>
> But we are running into the problem of duplicate events.
>
> Right now, I am doing a “foreachRDD”, looping over the messages of each
> partition, and sending those messages to Kafka.
>
>
>
> Is there any good way of solving that issue?
>
>
>
> thanks
>



-- 
Regards,

Matt
Data Engineer
https://www.linkedin.com/in/mdeaver
http://mattdeav.pythonanywhere.com/