Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2019/06/18 06:40:33 UTC

[GitHub] [flink] becketqin commented on issue #8535: [FLINK-11693] Add KafkaSerializationSchema that uses ProducerRecord

becketqin commented on issue #8535: [FLINK-11693] Add KafkaSerializationSchema that uses ProducerRecord
URL: https://github.com/apache/flink/pull/8535#issuecomment-502969101
 
 
   @alexeyt820 Multicast and broadcast are an interesting use case. I am not sure the serializer is the best place to do that; it seems the processing node should handle this rather than the sink node. The problem, however, is that the processing node may not know how many partitions there are, and the sink node probably does not know which partitions a record should be sent to. 
   
   For broadcast, this could be resolved fairly easily with a special flag on the record, so that the sink node simply sends the record to every partition. Multicast is a little trickier. In any case, putting this into the serializer seems to mix things up. I'd suggest designing the multicast and broadcast mechanisms in a more generic and explicit way that could be reused by all the sink connectors.
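   The "special flag on the record" idea above could be sketched roughly as follows. This is purely illustrative: none of these class or interface names (`BroadcastableRecord`, `FanOutSink`, `PartitionWriter`) are Flink or Kafka API; they stand in for a record wrapper and a sink that fans out when the flag is set.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical record wrapper: the upstream processing node sets the flag,
// without needing to know how many partitions the sink has.
class BroadcastableRecord<T> {
    final T value;
    final boolean broadcast; // if true, the sink sends to every partition

    BroadcastableRecord(T value, boolean broadcast) {
        this.value = value;
        this.broadcast = broadcast;
    }
}

// Hypothetical sink-side fan-out logic; a real sink would wrap a producer.
class FanOutSink<T> {
    interface PartitionWriter<T> {
        void write(int partition, T value);
    }

    private final PartitionWriter<T> writer;
    private final int numPartitions; // only the sink knows this, as noted above

    FanOutSink(PartitionWriter<T> writer, int numPartitions) {
        this.writer = writer;
        this.numPartitions = numPartitions;
    }

    void emit(BroadcastableRecord<T> record) {
        if (record.broadcast) {
            // Broadcast path: one copy of the record to every partition.
            for (int p = 0; p < numPartitions; p++) {
                writer.write(p, record.value);
            }
        } else {
            // Normal path: pick a single partition, e.g. by hashing the value.
            writer.write(Math.abs(record.value.hashCode() % numPartitions), record.value);
        }
    }
}
```

   This keeps the flag out of the serializer entirely: serialization stays a pure value-to-bytes concern, while the fan-out decision lives in the sink, where the partition count is actually known.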

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services