Posted to user@flink.apache.org by Yuval Itzchakov <yu...@gmail.com> on 2020/08/19 16:35:57 UTC

Table API Kafka Connector Sink Key Definition

Hi,

I'm running Flink 1.9.0 and I'm trying to set the key to be published by
the Table API's Kafka Connector. I've searched the documentation but
could find no reference for such an ability.

Additionally, while browsing the code of the KafkaTableSink, it looks like
it creates a KeyedSerializationSchemaWrapper which just sets the key to
null?

Would love some help.

-- 
Best Regards,
Yuval Itzchakov.

Re: Table API Kafka Connector Sink Key Definition

Posted by Till Rohrmann <tr...@apache.org>.
This is indeed not optimal. Could you file a JIRA issue to add this
functionality? Thanks a lot Yuval.

Cheers,
Till

On Thu, Aug 20, 2020 at 9:47 AM Yuval Itzchakov <yu...@gmail.com> wrote:

> Hi Till,
> KafkaSerializationSchema is only pluggable for the DataStream API, not for
> the Table API. KafkaTableSink hard codes a KeyedSerializationSchema that
> uses a null key, and this behavior can't be overridden.
>
> I have to say I was quite surprised by this behavior, as publishing events
> to Kafka using a key to keep order inside a given partition is usually a
> very common requirement.
>
> On Thu, Aug 20, 2020 at 10:26 AM Till Rohrmann <tr...@apache.org>
> wrote:
>
>> Hi Yuval,
>>
>> it looks as if the KafkaTableSink only supports writing out rows without
>> a key. Pulling in Timo for verification.
>>
>> If you want to use a Kafka producer which writes the records out with a
>> key, then please take a look at KafkaSerializationSchema. It supports this
>> functionality.
>>
>> Cheers,
>> Till
>>
>> On Wed, Aug 19, 2020 at 6:36 PM Yuval Itzchakov <yu...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I'm running Flink 1.9.0 and I'm trying to set the key to be published by
>>> the Table API's Kafka Connector. I've searched the documentation but
>>> could find no reference for such an ability.
>>>
>>> Additionally, while browsing the code of the KafkaTableSink, it looks
>>> like it creates a KeyedSerializationSchemaWrapper which just sets the key
>>> to null?
>>>
>>> Would love some help.
>>>
>>> --
>>> Best Regards,
>>> Yuval Itzchakov.
>>>
>>
>
> --
> Best Regards,
> Yuval Itzchakov.
>

Re: Table API Kafka Connector Sink Key Definition

Posted by Yuval Itzchakov <yu...@gmail.com>.
Hi Till,
KafkaSerializationSchema is only pluggable for the DataStream API, not for
the Table API. KafkaTableSink hard codes a KeyedSerializationSchema that
uses a null key, and this behavior can't be overridden.

I have to say I was quite surprised by this behavior, as publishing events
to Kafka using a key to keep order inside a given partition is usually a
very common requirement.
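
The ordering guarantee mentioned above comes from Kafka's partitioner: records with the same non-null key are hashed to the same partition, and Kafka preserves order within a partition. Below is a toy, self-contained illustration of that idea; Kafka's real producer uses murmur2 over the serialized key bytes, so `Arrays.hashCode` here is just a hypothetical stand-in:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Toy sketch of key-based partitioning: the same key always maps to the
// same partition, so per-key ordering is preserved. Kafka's actual
// partitioner uses murmur2 hashing; Arrays.hashCode is a stand-in here.
public class KeyPartitioning {
    static int partitionFor(byte[] key, int numPartitions) {
        // A null key would instead be distributed across partitions
        // (round-robin in older clients), losing per-key ordering.
        return (Arrays.hashCode(key) & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        byte[] key = "user-42".getBytes(StandardCharsets.UTF_8);
        int p1 = partitionFor(key, 6);
        int p2 = partitionFor(key, 6);
        System.out.println(p1 == p2); // prints "true": same key, same partition
    }
}
```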

On Thu, Aug 20, 2020 at 10:26 AM Till Rohrmann <tr...@apache.org> wrote:

> Hi Yuval,
>
> it looks as if the KafkaTableSink only supports writing out rows without a
> key. Pulling in Timo for verification.
>
> If you want to use a Kafka producer which writes the records out with a
> key, then please take a look at KafkaSerializationSchema. It supports this
> functionality.
>
> Cheers,
> Till
>
> On Wed, Aug 19, 2020 at 6:36 PM Yuval Itzchakov <yu...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm running Flink 1.9.0 and I'm trying to set the key to be published by
>> the Table API's Kafka Connector. I've searched the documentation but
>> could find no reference for such an ability.
>>
>> Additionally, while browsing the code of the KafkaTableSink, it looks
>> like it creates a KeyedSerializationSchemaWrapper which just sets the key
>> to null?
>>
>> Would love some help.
>>
>> --
>> Best Regards,
>> Yuval Itzchakov.
>>
>

-- 
Best Regards,
Yuval Itzchakov.

Re: Table API Kafka Connector Sink Key Definition

Posted by Dawid Wysakowicz <dw...@apache.org>.
Hi Yuval,

Unfortunately setting the key or timestamp (or other metadata) from the
SQL API is not supported yet. There is an ongoing discussion to support
it[1].

Right now your option would be to change the code of KafkaTableSink and
write your own version of KafkaSerializationSchema as Till mentioned.

Best,

Dawid


[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-107-Reading-table-columns-from-different-parts-of-source-records-td38277.html
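
To illustrate the kind of serialize logic such a custom schema needs, here is a self-contained sketch. `KeyedRecord` and the `Map`-based row are hypothetical stand-ins for Kafka's `ProducerRecord<byte[], byte[]>` and Flink's `Row`, and the field name `userId` is made up; the real interface to implement is `KafkaSerializationSchema.serialize(element, timestamp)`:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Sketch of a keyed serialization schema. KeyedRecord is a hypothetical
// stand-in for Kafka's ProducerRecord<byte[], byte[]>; a real implementation
// would implement KafkaSerializationSchema and return a ProducerRecord.
public class KeyedSchemaSketch {

    static final class KeyedRecord {
        final String topic;
        final byte[] key;
        final byte[] value;
        KeyedRecord(String topic, byte[] key, byte[] value) {
            this.topic = topic;
            this.key = key;
            this.value = value;
        }
    }

    private final String topic;
    private final String keyField;

    KeyedSchemaSketch(String topic, String keyField) {
        this.topic = topic;
        this.keyField = keyField;
    }

    // Analogous to KafkaSerializationSchema.serialize(element, timestamp):
    // extract the key field from the row and serialize it separately,
    // instead of hard-coding the key to null the way the
    // KeyedSerializationSchemaWrapper used by KafkaTableSink does.
    KeyedRecord serialize(Map<String, String> row) {
        byte[] key = row.get(keyField).getBytes(StandardCharsets.UTF_8);
        byte[] value = row.toString().getBytes(StandardCharsets.UTF_8);
        return new KeyedRecord(topic, key, value);
    }

    public static void main(String[] args) {
        KeyedSchemaSketch schema = new KeyedSchemaSketch("orders", "userId");
        KeyedRecord record = schema.serialize(Map.of("userId", "42"));
        System.out.println(new String(record.key, StandardCharsets.UTF_8)); // prints "42"
    }
}
```

In a DataStream job the schema would then be passed to the FlinkKafkaProducer constructor, which the Table API's KafkaTableSink does not currently expose.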

On 20/08/2020 09:26, Till Rohrmann wrote:
> Hi Yuval,
>
> it looks as if the KafkaTableSink only supports writing out rows
> without a key. Pulling in Timo for verification.
>
> If you want to use a Kafka producer which writes the records out with
> a key, then please take a look at KafkaSerializationSchema. It
> supports this functionality.
>
> Cheers,
> Till
>
> On Wed, Aug 19, 2020 at 6:36 PM Yuval Itzchakov <yuvalos@gmail.com
> <ma...@gmail.com>> wrote:
>
>     Hi,
>
>     I'm running Flink 1.9.0 and I'm trying to set the key to be
>     published by the Table API's Kafka Connector. I've searched the
>     documentation but could find no reference for such an ability.
>
>     Additionally, while browsing the code of the KafkaTableSink, it
>     looks like it creates a KeyedSerializationSchemaWrapper which just
>     sets the key to null?
>
>     Would love some help.
>
>     -- 
>     Best Regards,
>     Yuval Itzchakov.
>

Re: Table API Kafka Connector Sink Key Definition

Posted by Till Rohrmann <tr...@apache.org>.
Hi Yuval,

it looks as if the KafkaTableSink only supports writing out rows without a
key. Pulling in Timo for verification.

If you want to use a Kafka producer which writes the records out with a
key, then please take a look at KafkaSerializationSchema. It supports this
functionality.

Cheers,
Till

On Wed, Aug 19, 2020 at 6:36 PM Yuval Itzchakov <yu...@gmail.com> wrote:

> Hi,
>
> I'm running Flink 1.9.0 and I'm trying to set the key to be published by
> the Table API's Kafka Connector. I've searched the documentation but
> could find no reference for such an ability.
>
> Additionally, while browsing the code of the KafkaTableSink, it looks like
> it creates a KeyedSerializationSchemaWrapper which just sets the key to
> null?
>
> Would love some help.
>
> --
> Best Regards,
> Yuval Itzchakov.
>