Posted to dev@kafka.apache.org by "Matthias J. Sax (Jira)" <ji...@apache.org> on 2020/07/07 04:53:00 UTC
[jira] [Reopened] (KAFKA-6453) Reconsider timestamp propagation semantics
[ https://issues.apache.org/jira/browse/KAFKA-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Matthias J. Sax reopened KAFKA-6453:
------------------------------------
IMHO, the PR only covers part of this ticket. We should also document how timestamps are computed for output records of aggregations and joins.
> Reconsider timestamp propagation semantics
> ------------------------------------------
>
> Key: KAFKA-6453
> URL: https://issues.apache.org/jira/browse/KAFKA-6453
> Project: Kafka
> Issue Type: Improvement
> Components: streams
> Reporter: Matthias J. Sax
> Assignee: Victoria Bialas
> Priority: Major
> Labels: needs-kip
>
> At the moment, Kafka Streams only has a defined "contract" about timestamp propagation at the Processor API level: all processors within a sub-topology see the timestamp of the input topic record, and this timestamp is also used for all result records when writing them to a topic.
> The DSL inherits this "contract" atm.
> From a DSL point of view, it would be desirable to provide a different contract to the user. To allow this, we need to do the following:
> - extend the Processor API to allow manipulating timestamps (i.e., a Processor can set a new timestamp for downstream records)
> - define a DSL "contract" for timestamp propagation for each DSL operator
> - document the DSL "contract"
> - implement the DSL "contract" using the new/extended Processor API
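The contract the ticket describes — result records inherit the input record's timestamp unless a processor explicitly overrides it — can be sketched with a toy processor chain. This is a minimal illustration in plain Java with no Kafka dependency; all class and method names (`Rec`, `Proc`, `runChain`, `stampNow`) are invented for the sketch and are not part of the Kafka Streams API:

```java
import java.util.List;

// Toy model of the timestamp-propagation "contract": a record carries a
// timestamp; by default each processor forwards its result with the input
// record's timestamp, but a processor may set a new one for downstream.
public class TimestampPropagationSketch {

    record Rec(String value, long timestamp) {}

    interface Proc {
        Rec process(Rec input);
    }

    // Default contract: the result inherits the input record's timestamp.
    static final Proc upperCase =
        in -> new Rec(in.value().toUpperCase(), in.timestamp());

    // Overriding processor: sets a new timestamp for downstream records
    // (the capability the ticket asks the Processor API to expose).
    static Proc stampWith(long newTimestamp) {
        return in -> new Rec(in.value(), newTimestamp);
    }

    // Pushes one record through a chain of processors, mimicking a
    // sub-topology where each processor feeds the next.
    static Rec runChain(Rec input, List<Proc> chain) {
        Rec r = input;
        for (Proc p : chain) {
            r = p.process(r);
        }
        return r;
    }

    public static void main(String[] args) {
        Rec in = new Rec("hello", 42L);

        // Without overrides, the input timestamp flows through the chain.
        Rec out1 = runChain(in, List.of(upperCase, upperCase));
        System.out.println(out1.value() + " @ " + out1.timestamp());

        // With an override, downstream records see the new timestamp.
        Rec out2 = runChain(in, List.of(upperCase, stampWith(100L)));
        System.out.println(out2.value() + " @ " + out2.timestamp());
    }
}
```

In the real API, the override step corresponds to a processor forwarding records with an explicitly chosen timestamp rather than the inherited one; the DSL work proposed here would then pin down, per operator (e.g. aggregations and joins), which timestamp the output record carries.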
--
This message was sent by Atlassian Jira
(v8.3.4#803005)