Posted to issues@flink.apache.org by "Robert Metzger (JIRA)" <ji...@apache.org> on 2016/04/04 16:04:25 UTC

[jira] [Commented] (FLINK-3679) DeserializationSchema should handle zero or more outputs for every input

    [ https://issues.apache.org/jira/browse/FLINK-3679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15224182#comment-15224182 ] 

Robert Metzger commented on FLINK-3679:
---------------------------------------

I had a quick offline chat about this with [~StephanEwen]. Changing the semantics of the DeserializationSchema to use an OutputCollector would be possible, but it would break existing code, introduce a new class, and make the locking / operator chaining of the Kafka consumer code more complicated.
I wonder whether the problems you've mentioned couldn't be solved with a flatMap() operator instead. When the Kafka consumer and the flatMap() are executed with the same parallelism, they'll be chained together and then run in the same thread with almost no overhead.
If one Kafka message results in two or more logical messages, that "splitting" can be done in the flatMap() as well. For invalid records, this can also be reflected in the returned record, for example with a failure flag (an id set to -1 or a boolean set to false) or a special field in a JSON record, and then handled accordingly in the flatMap() call.
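
For illustration, here is a minimal sketch of that approach, assuming the Kafka 0.9 connector and a source that just emits raw strings via SimpleStringSchema; the topic name, properties, and the newline-splitting logic are placeholder assumptions, not part of this issue:

import java.util.Properties;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import org.apache.flink.util.Collector;

public class KafkaFlatMapSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "example-group");

        // The schema only hands out the raw string; the real parsing happens below.
        DataStream<String> raw = env.addSource(
                new FlinkKafkaConsumer09<>("events", new SimpleStringSchema(), props));

        // Same parallelism as the source, so the flatMap is chained into the same thread.
        DataStream<String> parsed = raw.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String value, Collector<String> out) {
                if (value == null || value.isEmpty()) {
                    return; // drop invalid records instead of failing the job
                }
                // one Kafka message may carry several logical records (here: newline-separated)
                for (String part : value.split("\n")) {
                    out.collect(part);
                }
            }
        });

        parsed.print();
        env.execute("Kafka source with chained flatMap");
    }
}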

If you want, we can keep the JIRA issue open and see if more users run into this. If so, we can reconsider fixing it (I'm not saying I've decided against fixing it).

> DeserializationSchema should handle zero or more outputs for every input
> ------------------------------------------------------------------------
>
>                 Key: FLINK-3679
>                 URL: https://issues.apache.org/jira/browse/FLINK-3679
>             Project: Flink
>          Issue Type: Bug
>          Components: DataStream API
>            Reporter: Jamie Grier
>
> There are a couple of issues with the DeserializationSchema API that I think should be improved.  This request has come to me via an existing Flink user.
> The main issue is simply that the API assumes a one-to-one mapping between inputs and outputs.  In reality there are scenarios where one input message (say from Kafka) might actually map to zero or more logical elements in the pipeline.
> Particularly important here is the case where you receive a message from a source (such as Kafka) and the raw bytes don't deserialize properly.  Right now the only recourse is to throw an IOException and therefore fail the job.
> This is definitely not good, since bad data is a reality and failing the job is not the right option.  If the job fails, we'll just end up replaying the bad data and the whole thing will start again.
> Instead, in this case it would be best if the user could just return an empty set.
> The other case is where one input message should logically be multiple output messages.  This case is probably less important since there are other ways to do this, but in general it might be good to make the DeserializationSchema.deserialize() method return a collection rather than a single element.
> Maybe we need to support a DeserializationSchema variant that has semantics more like that of FlatMap.
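
For illustration only, a hypothetical Collector-based variant of the schema that would give deserialization flatMap-like semantics; the interface name and method below are made up for discussion, not an existing Flink API:

import java.io.IOException;
import java.io.Serializable;

import org.apache.flink.util.Collector;

// Hypothetical interface, sketched for discussion; not part of the Flink API referenced in this issue.
public interface CollectingDeserializationSchema<T> extends Serializable {

    /**
     * Deserializes one raw Kafka message and emits zero or more elements:
     * nothing for corrupt input, several elements for batched input.
     */
    void deserialize(byte[] message, Collector<T> out) throws IOException;
}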



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)