Posted to dev@flink.apache.org by dhurandar S <dh...@gmail.com> on 2020/04/29 17:46:48 UTC

Doing demultiplexing using Apache Flink

Hi,

We have a use case where we need to demultiplex an incoming stream into
multiple output streams.

We read from one Kafka topic and, as output, generate multiple Kafka
topics. The logic for producing each new topic is different and not
known beforehand. Users of the system keep adding new logic, and the
system then needs to generate data in a new topic with that logic
applied to the incoming stream.

Input to the system would be the logic (code or a SQL statement) and a
destination (a Kafka topic or an S3 location). The system should be able
to read this configuration and emit the corresponding output, ideally at
runtime.

Is this possible in Flink? Any guidance or pointers on how this can be
achieved would be appreciated.
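Stripped of the Flink specifics, the routing we have in mind looks roughly
like this (a plain-Python sketch, not Flink API code; the rule names,
destinations, and record shape are made up for illustration). In Flink we
imagine this mapping onto side outputs or a sink that picks the topic per
record, but the core is a rule table that can grow at runtime:

```python
# Each rule pairs a hypothetical destination with a match predicate and a
# transform. New user-supplied logic would be added by appending a rule.
RULES = [
    ("topic_errors",
     lambda rec: rec.get("level") == "ERROR",
     lambda rec: rec),
    ("topic_user_events",
     lambda rec: "user_id" in rec,
     lambda rec: {"user": rec["user_id"], "event": rec.get("event")}),
]

def demultiplex(record):
    """Return (destination, transformed_record) pairs for one input record.

    A single input record may fan out to several destinations if more
    than one rule matches it.
    """
    outputs = []
    for destination, matches, transform in RULES:
        if matches(record):
            outputs.append((destination, transform(record)))
    return outputs

# One record can match multiple rules and be routed to both destinations.
rec = {"level": "ERROR", "user_id": 42, "event": "login_failed"}
routed = demultiplex(rec)
```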

regards,
Dhurandar
