Posted to dev@kafka.apache.org by "Ismael Juma (JIRA)" <ji...@apache.org> on 2016/05/27 20:44:13 UTC
[jira] [Commented] (KAFKA-3764) Error processing append operation on partition
[ https://issues.apache.org/jira/browse/KAFKA-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15304742#comment-15304742 ]
Ismael Juma commented on KAFKA-3764:
------------------------------------
Is there any chance you could test with a different client (librdkafka, Java client, kafka-python)? There's a chance that the issue lies with ruby-kafka.
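To make the suggested cross-check concrete, here is a minimal kafka-python sketch of a producer configured the same way as the report (snappy compression, topic `m2m` from the log below); the broker address is an assumption, and the helper name is ours:

```python
def make_snappy_producer(bootstrap="localhost:9092"):
    """Build a kafka-python producer with snappy compression enabled.

    Requires the third-party kafka-python and python-snappy packages;
    the import is deferred so this sketch loads without them installed.
    """
    from kafka import KafkaProducer
    return KafkaProducer(bootstrap_servers=bootstrap,
                         compression_type="snappy")

# Settings mirrored here for reference; "m2m" is the topic from the report.
SNAPPY_SETTINGS = {"bootstrap_servers": "localhost:9092",
                   "compression_type": "snappy"}

# Usage against a live 0.10.0.0 broker:
#   producer = make_snappy_producer()
#   producer.send("m2m", b"test message")
#   producer.flush()
```

If the same append error appears with this client, the broker side is suspect; if it does not, that points at ruby-kafka's snappy path.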
> Error processing append operation on partition
> ----------------------------------------------
>
> Key: KAFKA-3764
> URL: https://issues.apache.org/jira/browse/KAFKA-3764
> Project: Kafka
> Issue Type: Bug
> Affects Versions: 0.10.0.0
> Reporter: Martin Nowak
>
> After updating Kafka from 0.9.0.1 to 0.10.0.0 I'm getting plenty of `Error processing append operation on partition` errors. This happens with ruby-kafka as the producer and snappy compression enabled.
> {noformat}
> [2016-05-27 20:00:11,074] ERROR [Replica Manager on Broker 2]: Error processing append operation on partition m2m-0 (kafka.server.ReplicaManager)
> kafka.common.KafkaException:
> at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:159)
> at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:85)
> at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
> at kafka.message.ByteBufferMessageSet$$anon$2.makeNextOuter(ByteBufferMessageSet.scala:357)
> at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:369)
> at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:324)
> at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
> at kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:427)
> at kafka.log.Log.liftedTree1$1(Log.scala:339)
> at kafka.log.Log.append(Log.scala:338)
> at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
> at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
> at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
> at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
> at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
> at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:405)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: failed to read chunk
> at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:433)
> at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:167)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readLong(DataInputStream.java:416)
> at kafka.message.ByteBufferMessageSet$$anon$1.readMessageFromStream(ByteBufferMessageSet.scala:118)
> at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:153)
> {noformat}
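The `Caused by` section shows the broker failing inside `DataInputStream.readLong` while re-reading the decompressed stream: `SnappyInputStream` cannot produce a complete chunk, so a fixed-size read comes up short. A small stdlib sketch of that read pattern (the helper name is ours, mirroring `readLong`'s contract) shows why a truncated or undecodable compressed payload surfaces as an IOException:

```python
import io
import struct

def read_long(stream):
    # Mirror Java's DataInputStream.readLong: read exactly 8 big-endian bytes.
    data = stream.read(8)
    if len(data) < 8:
        # DataInputStream.readFully raises EOFException at this point; the
        # broker wraps the failure in the KafkaException seen in the trace.
        raise IOError("stream ended mid-field: wanted 8 bytes, got %d" % len(data))
    return struct.unpack(">q", data)[0]

# An intact 8-byte offset field decodes fine.
assert read_long(io.BytesIO(struct.pack(">q", 42))) == 42

# A payload cut short (e.g. a snappy chunk the broker cannot decode)
# fails at the fixed-size read, like readLong in the stack trace.
try:
    read_long(io.BytesIO(b"\x00\x00\x00"))
    raise AssertionError("expected IOError")
except IOError:
    pass
```

This does not identify whether the bad bytes come from the producer's snappy framing or from the broker's decoding, only why the error shows up at that frame of the trace.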
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)