Posted to jira@kafka.apache.org by "Tim Tattersall (Jira)" <ji...@apache.org> on 2020/11/18 02:30:00 UTC
[jira] [Created] (KAFKA-10735) Kafka producer producing corrupted Avro values when Confluent cluster is recreated and producer application is not restarted
Tim Tattersall created KAFKA-10735:
--------------------------------------
Summary: Kafka producer producing corrupted Avro values when Confluent cluster is recreated and producer application is not restarted
Key: KAFKA-10735
URL: https://issues.apache.org/jira/browse/KAFKA-10735
Project: Kafka
Issue Type: Bug
Reporter: Tim Tattersall
*Our Environment (AWS):*
1 x EC2 instance running 4 docker containers (using docker-compose)
* cp-kafka 5.5.1
* cp-zookeeper 5.5.1
* cp-schema-registry 5.5.1
* cp-enterprise-control-center 5.5.1
1 x ECS service running a single Java application with a spring-kafka producer
Topics use a String key and an Avro value
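For context, a producer set up this way normally uses the Confluent Avro serializer for values. A minimal sketch of the assumed configuration (broker and schema-registry addresses are placeholders, not taken from the report):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Assumed spring-kafka/Confluent producer configuration for a
    // String-keyed, Avro-valued topic. URLs below are placeholders.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        // The Avro serializer registers schemas with this registry and caches
        // the returned schema IDs for the lifetime of the producer instance.
        props.put("schema.registry.url", "http://localhost:8081");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps());
    }
}
```

The serializer-side caching noted in the comment is what makes a long-lived producer sensitive to the registry being wiped and recreated underneath it.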
*Problem:*
* Avro values published after the Confluent cluster is recreated are corrupted. Expected an Avro JSON structure; received a string value containing corrupted Avro details
** Expected: {"metadata":{"nabEventVersion":"1.0","type":"Kafka IBMMQ sink connector","schemaUrl": ...*ongoing*
** Actual: 1.08Kafka IBMMQ source connector^kafka-conector-ibm-mq-source-entitlements-check\Kafka IBMMQ source connector - sourced....*ongoing*
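One possible reading of the "Actual" value above (an interpretation, not confirmed in the report): it looks like the raw Avro binary body rendered as text. Avro's binary encoding prefixes each string with its length as a zigzag-encoded varint, so a 28-character string such as "Kafka IBMMQ source connector" gets the single prefix byte 0x38, which displays as the character '8'; that matches the "1.08Kafka IBMMQ source connector" fragment. A minimal sketch of that encoding:

```java
public class AvroLengthPrefix {
    // Avro binary encoding writes a string as a zigzag-encoded varint
    // length followed by the UTF-8 bytes.
    static byte[] zigzagVarint(long n) {
        long z = (n << 1) ^ (n >> 63); // zigzag: maps signed to unsigned
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        while ((z & ~0x7FL) != 0) {    // emit 7 bits at a time, LSB first
            out.write((int) ((z & 0x7F) | 0x80));
            z >>>= 7;
        }
        out.write((int) z);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // "Kafka IBMMQ source connector" is 28 characters; zigzag(28) = 56
        // = 0x38, so the length prefix prints as '8'.
        String s = "Kafka IBMMQ source connector";
        byte[] prefix = zigzagVarint(s.length());
        System.out.println((char) prefix[0]); // prints '8'
    }
}
```

If this reading is right, the value was produced (or consumed) without the expected framing/conversion step, consistent with the producer holding stale serializer state after the cluster rebuild.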
*How to Reproduce*
# Start with an existing Confluent cluster
# Start a Kafka producer Java application (ours runs with spring-kafka)
# Destroy the existing Confluent cluster (using docker-compose down)
# Recreate the Confluent cluster (using docker-compose up)
# Add the topic back onto the new cluster
# Trigger a message from the still-running Kafka producer
*Current Workaround*
* Kill the running tasks on the ECS service and allow AWS to start new ones
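The restart works because discarding the producer process also discards the serializer's per-instance caches, so the fresh instance re-registers its schemas against the new registry. The same principle can be sketched in-process (all names here are hypothetical, not spring-kafka or Confluent API):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class RecreateOnDemand {
    // Hypothetical stand-in for a producer whose serializer caches
    // schema-registry IDs for its entire lifetime.
    static class CachingProducer {
        final long createdAt = System.nanoTime();
    }

    private final Supplier<CachingProducer> factory;
    private final AtomicReference<CachingProducer> current;

    RecreateOnDemand(Supplier<CachingProducer> factory) {
        this.factory = factory;
        this.current = new AtomicReference<>(factory.get());
    }

    CachingProducer get() {
        return current.get();
    }

    // In-process equivalent of the ECS task restart: drop the instance
    // (and with it any cached schema IDs) and build a fresh one.
    void recreate() {
        current.set(factory.get());
    }
}
```

This only mirrors the workaround's effect; it does not address the underlying issue of the producer silently continuing with stale state.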
--
This message was sent by Atlassian Jira
(v8.3.4#803005)