Posted to issues@flink.apache.org by "Aljoscha Krettek (Jira)" <ji...@apache.org> on 2020/07/21 07:59:00 UTC

[jira] [Commented] (FLINK-18649) Add a MongoDB Connector with Exactly-Once Semantics

    [ https://issues.apache.org/jira/browse/FLINK-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161840#comment-17161840 ] 

Aljoscha Krettek commented on FLINK-18649:
------------------------------------------

I think this could be implemented either like the Cassandra connector, where we use a write-ahead log, or potentially like the ES connector, where we don't use a log but instead rely on idempotent writes and overwrite in case of recovery.
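
For the idempotent-write option, a minimal sketch could look roughly like the following. This is not an existing connector; the connection URI, database/collection names, and the Tuple2 record layout are placeholders, and the key in f0 stands in for whatever deterministic ID a real job would derive from its records.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOptions;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.bson.Document;

/**
 * Sketch of an idempotent MongoDB sink: each record carries a deterministic
 * key (f0) that is used as the document _id, so a replay after recovery
 * overwrites the same document instead of creating a duplicate.
 */
public class IdempotentMongoSink extends RichSinkFunction<Tuple2<String, String>> {

    private transient MongoClient client;
    private transient MongoCollection<Document> collection;

    @Override
    public void open(Configuration parameters) {
        // One client per parallel sink instance; URI, database, and
        // collection names here are placeholders.
        client = MongoClients.create("mongodb://localhost:27017");
        collection = client.getDatabase("mydb").getCollection("events");
    }

    @Override
    public void invoke(Tuple2<String, String> record, Context context) {
        Document doc = new Document("_id", record.f0).append("payload", record.f1);
        // Upsert keyed on the deterministic _id: Flink's at-least-once replay
        // after a failure rewrites the same document instead of adding a new one.
        collection.replaceOne(Filters.eq("_id", record.f0), doc,
                new ReplaceOptions().upsert(true));
    }

    @Override
    public void close() {
        if (client != null) {
            client.close();
        }
    }
}

The WAL-based alternative would instead buffer records per checkpoint and only write them once the checkpoint completes, along the lines of GenericWriteAheadSink as used by the Cassandra connector.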

> Add a MongoDB Connector with Exactly-Once Semantics
> ---------------------------------------------------
>
>                 Key: FLINK-18649
>                 URL: https://issues.apache.org/jira/browse/FLINK-18649
>             Project: Flink
>          Issue Type: Wish
>          Components: Connectors / Common
>            Reporter: Eric Holsinger
>            Priority: Minor
>
> Before taking the Flink plunge, and per the recommendation in the following Stack Overflow thread, I'm opening a Jira ticket to see if someone can provide a formal recommendation for obtaining exactly-once semantics with MongoDB:
>  
> [https://stackoverflow.com/questions/35158683/kafka-flink-datastream-mongodb]
>  
> FYI, we cannot use Kafka or any framework other than Flink and MongoDB, and we have constraints on what can be installed in production.
>  
> Thanks in advance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)