Posted to dev@kafka.apache.org by "Evan Huus (JIRA)" <ji...@apache.org> on 2013/07/31 16:43:50 UTC
[jira] [Updated] (KAFKA-993) Offset Management API is either broken or mis-documented
[ https://issues.apache.org/jira/browse/KAFKA-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Evan Huus updated KAFKA-993:
----------------------------
Description:
I am in the process of building a set of Go client bindings for the new 0.8 protocol (https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol). Everything works but the Offset Commit/Fetch APIs. Fetch never returns any data, and trying to Commit results in the broker forcibly disconnecting my client. I have double-checked the bytes on the wire using Wireshark, and my client is obeying the protocol spec.
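For concreteness, here is roughly how my client frames a request. This is a minimal Go sketch of the layout described on the wiki page (size prefix, api key, api version, correlation id, client id as a length-prefixed "short string"); the api key value of 8 for OffsetCommit is what the wiki documents, which may itself be part of the problem:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encodeHeader builds the common 0.8 request header as documented on the
// protocol wiki: Size (int32, excluding itself), ApiKey (int16),
// ApiVersion (int16), CorrelationId (int32), ClientId (int16 length +
// bytes). The request body would follow this header.
func encodeHeader(apiKey, apiVersion int16, correlationID int32, clientID string) []byte {
	body := new(bytes.Buffer)
	binary.Write(body, binary.BigEndian, apiKey)
	binary.Write(body, binary.BigEndian, apiVersion)
	binary.Write(body, binary.BigEndian, correlationID)
	binary.Write(body, binary.BigEndian, int16(len(clientID)))
	body.WriteString(clientID)

	out := new(bytes.Buffer)
	binary.Write(out, binary.BigEndian, int32(body.Len())) // size prefix excludes itself
	out.Write(body.Bytes())
	return out.Bytes()
}

func main() {
	b := encodeHeader(8, 0, 1, "go-client") // 8 = OffsetCommit per the wiki
	fmt.Println(len(b))                     // 4-byte size + 19-byte header = 23
}
```

These are exactly the bytes I see in Wireshark, so as far as I can tell the framing matches the documented spec.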
After some digging, I found KAFKA-852 which seems related, but I have tried my client against the 0.8 beta, 0.8 branch, and even trunk with the same results.
When I try to commit, the broker produces this stack trace:
[2013-07-31 10:34:14,423] ERROR Closing socket for /192.168.12.71 because of error (kafka.network.Processor)
java.nio.BufferUnderflowException
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:127)
at java.nio.ByteBuffer.get(ByteBuffer.java:675)
at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:38)
at kafka.api.UpdateMetadataRequest$$anonfun$readFrom$1.apply(UpdateMetadataRequest.scala:42)
at kafka.api.UpdateMetadataRequest$$anonfun$readFrom$1.apply(UpdateMetadataRequest.scala:41)
at scala.collection.immutable.Range$ByOne$class.foreach(Range.scala:282)
at scala.collection.immutable.Range$$anon$2.foreach(Range.scala:265)
at kafka.api.UpdateMetadataRequest$.readFrom(UpdateMetadataRequest.scala:41)
at kafka.api.RequestKeys$$anonfun$7.apply(RequestKeys.scala:42)
at kafka.api.RequestKeys$$anonfun$7.apply(RequestKeys.scala:42)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:49)
at kafka.network.Processor.read(SocketServer.scala:345)
at kafka.network.Processor.run(SocketServer.scala:245)
at java.lang.Thread.run(Thread.java:680)
Is this a bug, or is the protocol spec wrong? Also, since I can't seem to find a straight answer anywhere else: is offset fetch/commit expected to be in 0.8, 0.8.1, or some later release?
Thanks,
Evan
> Offset Management API is either broken or mis-documented
> --------------------------------------------------------
>
> Key: KAFKA-993
> URL: https://issues.apache.org/jira/browse/KAFKA-993
> Project: Kafka
> Issue Type: Bug
> Components: network
> Affects Versions: 0.8, 0.8.1
> Reporter: Evan Huus
> Assignee: Jun Rao
>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira