Posted to user@flink.apache.org by 陈卓 <zh...@yirendai.com> on 2018/12/04 03:54:41 UTC
FlinkKafkaProducer011 fails when kafka broker crash
Hi
My Flink version is 1.6.0 and the connector is FlinkKafkaProducer011; the job fails when a Kafka broker crashes.
The exception info:
[inline image: image001.png — exception screenshot not reproduced in the plain-text archive]
--
Thanks
zhuo chen
Re: FlinkKafkaProducer011 fails when kafka broker crash
Posted by Dawid Wysakowicz <dw...@apache.org>.
Hi,
Yes, this is expected behavior: the Flink Kafka producer has to maintain
a consistent view of all available partitions (and their leaders) to align
with the checkpointing mechanism. In such a situation the expected behavior is
to fail the job, restart from the last checkpoint, and continue processing with
the newly discovered leaders.
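To make the job recover from such broker failures without manual intervention, checkpointing can be combined with a restart strategy. A minimal sketch against the Flink 1.6 API (the intervals and the job name are assumptions, not values from this thread; the pipeline itself is elided):

```java
import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestartOnBrokerFailure {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 10s so the job has a recent consistent state to restart from.
        env.enableCheckpointing(10_000);

        // Retry the job up to 5 times with 30s between attempts, which gives
        // Kafka time to elect new partition leaders after a broker crash.
        env.setRestartStrategy(
                RestartStrategies.fixedDelayRestart(5, Time.of(30, TimeUnit.SECONDS)));

        // ... build the pipeline with FlinkKafkaProducer011 as the sink ...

        env.execute("kafka-producer-job");
    }
}
```

With this in place, the failure Dawid describes becomes a transient restart rather than a terminal job failure, as long as a leader is re-elected within the configured retry budget.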
Best,
Dawid
On 04/12/2018 07:06, hzyuemeng1 wrote:
> This is normal behavior for Flink; the root cause is more likely on the Kafka side.
> You can add some settings to the Kafka producer properties to make it more tolerant of broker failures:
>
> Properties properties = new Properties();
> properties.setProperty("request.timeout.ms", "120000");
> properties.setProperty("timeout.ms", "120000");
> properties.setProperty("retries", "10");
> properties.setProperty("max.request.size", "5773890");
> properties.setProperty("refresh.leader.backoff.ms", "10000");
> properties.setProperty("batch.size", "163480");
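One caveat on the snippet above: `timeout.ms` and `refresh.leader.backoff.ms` are legacy configs that the 0.11 Kafka producer does not recognize (unknown keys are ignored with a warning), so the settings that actually help here are the retry- and timeout-related producer configs. A minimal sketch of just those, with `retry.backoff.ms` added as an assumption to space out retry attempts:

```java
import java.util.Properties;

public class ProducerRetryConfig {

    // Producer properties tuned to ride out a short broker outage.
    static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("request.timeout.ms", "120000"); // wait longer for broker responses
        props.setProperty("retries", "10");                // retry sends instead of failing fast
        props.setProperty("retry.backoff.ms", "1000");     // assumed value: pause 1s between retries
        props.setProperty("max.request.size", "5773890");  // value from the thread above
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProps();
        System.out.println(props.getProperty("retries")); // prints 10
    }
}
```

The resulting `Properties` object can then be passed to the `FlinkKafkaProducer011` constructor that takes a producer-config argument.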
>
> hzyuemeng1
> hzyuemeng1@corp.netease.com
>
> On 12/4/2018 11:55, 陈卓 <zh...@yirendai.com> wrote: