Posted to user-zh@flink.apache.org by Wz <32...@qq.com> on 2020/11/27 08:43:43 UTC

FlinkKafkaProducer seems to reconnect the producer on every transaction commit, which is printing a huge volume of logs

Here is the addSink code:

result.addSink(new FlinkKafkaProducer<>(
        DataSourceConfig.ResultTopic,                                 // default target topic
        new MyKafkaSerializationSchema(DataSourceConfig.ResultTopic),
        ConnectToKafka.getKafKaProducerProperties(),
        FlinkKafkaProducer.Semantic.EXACTLY_ONCE,                     // transactional, two-phase commit
        3))                                                           // kafkaProducersPoolSize
    .setParallelism(1);
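
For reference, MyKafkaSerializationSchema is along these lines (a simplified sketch; the String element type here stands in for my actual record type):

import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.nio.charset.StandardCharsets;

// Sketch of the schema passed to the sink; it only decides which topic a
// record goes to and how the record is turned into bytes.
public class MyKafkaSerializationSchema implements KafkaSerializationSchema<String> {
    private final String topic;

    public MyKafkaSerializationSchema(String topic) {
        this.topic = topic;
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(String element, Long timestamp) {
        return new ProducerRecord<>(topic, element.getBytes(StandardCharsets.UTF_8));
    }
}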

KafkaProducer configuration:

props_Producer.put("bootstrap.servers", DataSourceConfig.bootstrapServersIPAddress);
props_Producer.put("acks", "all");
props_Producer.put("request.timeout.ms", 3000);
    

In short, I can't figure out why the logs below, which should only appear while a connection is being established, are printed over and over. My guess is that the producer keeps reconnecting; the logs below are emitted almost without pause and have filled up the disk. Could anyone suggest likely causes?
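
As a stop-gap while I debug, I'm considering raising the Kafka client loggers to WARN so the disk stops filling (assuming the log4j2 properties format that Flink 1.11+ ships with); this only hides the noise rather than fixing the reconnects:

# conf/log4j.properties (log4j2 properties syntax; the logger names are arbitrary labels)
logger.kafka.name = org.apache.kafka
logger.kafka.level = WARN
logger.flinkconnector.name = org.apache.flink.streaming.connectors.kafka
logger.flinkconnector.level = WARN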


2020-11-20 15:55:56,672 INFO  org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2, transactionalId=CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2] Instantiated a transactional producer.
2020-11-20 15:55:56,672 INFO  org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2, transactionalId=CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2] Overriding the default retries config to the recommended value of 2147483647 since the idempotent producer is enabled.
2020-11-20 15:55:56,676 INFO  org.apache.kafka.common.utils.AppInfoParser [] - Kafka version: 2.4.1
2020-11-20 15:55:56,676 INFO  org.apache.kafka.common.utils.AppInfoParser [] - Kafka commitId: c57222ae8cd7866b
2020-11-20 15:55:56,676 INFO  org.apache.kafka.common.utils.AppInfoParser [] - Kafka startTimeMs: 1605858956676
2020-11-20 15:55:56,676 INFO  org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Starting FlinkKafkaInternalProducer (1/1) to produce into default topic spc_testResult
2020-11-20 15:55:56,676 INFO  org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2, transactionalId=CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2] ProducerId set to -1 with epoch -1
2020-11-20 15:55:56,678 INFO  org.apache.kafka.clients.Metadata [] - [Producer clientId=producer-CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2, transactionalId=CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2] Cluster ID: 8IUUMEvGQLKWsQRfKWc9Hw
2020-11-20 15:55:56,779 INFO  org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2, transactionalId=CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-2] ProducerId set to 50001 with epoch 1753
2020-11-20 15:55:56,793 INFO  org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction [] - FlinkKafkaProducer 1/1 - checkpoint 5383 complete, committing transaction TransactionHolder{handle=KafkaTransactionState [transactionalId=CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-0, producerId=50002, epoch=1752], transactionStartTime=1605858953780} from checkpoint 5383
2020-11-20 15:55:56,793 INFO  org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer [] - Flushing new partitions
2020-11-20 15:55:56,793 INFO  org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-0, transactionalId=CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-0] Closing the Kafka producer with timeoutMillis = 0 ms.
2020-11-20 15:55:56,793 INFO  org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-0, transactionalId=CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-0] Proceeding to force close the producer since pending requests could not be completed within timeout 0 ms.
2020-11-20 15:55:59,670 INFO  org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer [] - Flushing new partitions
2020-11-20 15:55:59,671 INFO  org.apache.kafka.clients.producer.ProducerConfig [] - ProducerConfig values:
	acks = all
	batch.size = 16384
	bootstrap.servers = [192.168.81.128:9092]
	buffer.memory = 33554432
	client.dns.lookup = default
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 3000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 3600000
	transactional.id = CepOperator -> Sink: Unnamed-1848139bf30d999062379bb9e1d14fd8-1
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer


