Posted to dev@kafka.apache.org by "Kunal Verma (Jira)" <ji...@apache.org> on 2019/11/28 12:44:00 UTC

[jira] [Created] (KAFKA-9245) The server experienced an unexpected error when processing the request

Kunal Verma created KAFKA-9245:
----------------------------------

             Summary: The server experienced an unexpected error when processing the request
                 Key: KAFKA-9245
                 URL: https://issues.apache.org/jira/browse/KAFKA-9245
             Project: Kafka
          Issue Type: Bug
          Components: compression, replication
    Affects Versions: 2.3.0
            Reporter: Kunal Verma


Hi,

I have a 3-broker Kafka cluster. On one broker machine the disk holding log.dirs became full, and the broker eventually shut down. After that I cleaned up some logs and started the Kafka server again.
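
For reference, remaining space on the log directory can be checked in isolation with a small sketch like the one below (the /var/kafka-logs path is an assumption; substitute the broker's configured log.dirs value):

    import java.io.File;

    public class LogDirSpaceCheck {
        public static void main(String[] args) {
            // Assumption: replace with the broker's actual log.dirs path.
            File logDir = new File("/var/kafka-logs");
            long usableMb = logDir.getUsableSpace() / (1024 * 1024);
            long totalMb = logDir.getTotalSpace() / (1024 * 1024);
            System.out.printf("log.dirs usable space: %d MB of %d MB%n",
                    usableMb, totalMb);
        }
    }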

The Kafka broker is now running; however, whenever I produce a record, I get the following error:

[2019-11-28 16:09:52,659] ERROR [ReplicaManager broker=48] Error processing append operation on partition TEST-TOPIC (kafka.server.ReplicaManager)
[2019-11-28 16:09:52,659] ERROR [ReplicaManager broker=48] Error processing append operation on partition staging-honeybee-input-1 (kafka.server.ReplicaManager)
java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
	at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:435)
	at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:466)
	at java.io.DataInputStream.readByte(DataInputStream.java:265)
	at org.apache.kafka.common.utils.ByteUtils.readVarint(ByteUtils.java:168)
	at org.apache.kafka.common.record.DefaultRecord.readFrom(DefaultRecord.java:293)
	at org.apache.kafka.common.record.DefaultRecordBatch$1.readNext(DefaultRecordBatch.java:264)
	at org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next(DefaultRecordBatch.java:569)
	at org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next(DefaultRecordBatch.java:538)
	at org.apache.kafka.common.record.DefaultRecordBatch.iterator(DefaultRecordBatch.java:327)
	at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:55)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
	at kafka.log.LogValidator$.$anonfun$validateMessagesAndAssignOffsetsCompressed$1(LogValidator.scala:269)
	at kafka.log.LogValidator$.$anonfun$validateMessagesAndAssignOffsetsCompressed$1$adapted(LogValidator.scala:261)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
	at kafka.log.LogValidator$.validateMessagesAndAssignOffsetsCompressed(LogValidator.scala:261)
	at kafka.log.LogValidator$.validateMessagesAndAssignOffsets(LogValidator.scala:73)
	at kafka.log.Log.liftedTree1$1(Log.scala:881)
	at kafka.log.Log.$anonfun$append$2(Log.scala:868)
	at kafka.log.Log.maybeHandleIOException(Log.scala:2065)
	at kafka.log.Log.append(Log.scala:850)
	at kafka.log.Log.appendAsLeader(Log.scala:819)
	at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:772)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
	at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
	at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:759)
	at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:763)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
	at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
	at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
	at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
	at scala.collection.TraversableLike.map(TraversableLike.scala:237)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
	at scala.collection.AbstractTraversable.map(Traversable.scala:108)
	at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:751)
	at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:492)
	at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:544)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:113)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
	at java.lang.Thread.run(Thread.java:748)
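
The "Could not initialize class" form of NoClassDefFoundError means static initialization of org.xerial.snappy.Snappy already failed once; a common cause is that snappy-java could not extract its native library into the JVM temp directory, e.g. while the disk was full. Whether Snappy can initialize at all on this machine can be tested outside Kafka with a minimal sketch like this (assumes only the snappy-java jar on the classpath):

    import org.xerial.snappy.Snappy;

    public class SnappyInitCheck {
        public static void main(String[] args) throws Exception {
            // If the native library cannot be loaded, Snappy's static
            // initializer fails here with an error similar to the broker log.
            byte[] compressed = Snappy.compress("hello snappy".getBytes("UTF-8"));
            byte[] restored = Snappy.uncompress(compressed);
            System.out.println(new String(restored, "UTF-8"));
        }
    }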

I have even tried deleting the topic and setting the topic's compression.type to gzip (applied roughly as sketched below), but with no success. Kindly help me recover from this issue.
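
For reference, the topic-level compression.type override can be applied along these lines (a sketch using the AdminClient API available in Kafka 2.3; the bootstrap address is an assumption):

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    import java.util.Collections;
    import java.util.Properties;

    public class SetTopicCompression {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption: broker address
            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic =
                        new ConfigResource(ConfigResource.Type.TOPIC, "TEST-TOPIC");
                AlterConfigOp setGzip = new AlterConfigOp(
                        new ConfigEntry("compression.type", "gzip"),
                        AlterConfigOp.OpType.SET);
                // Apply the topic-level compression.type override.
                admin.incrementalAlterConfigs(
                        Collections.singletonMap(topic,
                                Collections.singletonList(setGzip)))
                        .all().get();
            }
        }
    }

Note that topic-level compression.type only controls how the broker stores batches; during validation the broker still decompresses producer batches with whatever codec the producer used, which may explain why switching the topic to gzip did not help while Snappy remained broken.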

--
This message was sent by Atlassian Jira
(v8.3.4#803005)