Posted to dev@kafka.apache.org by "Upendra Yadav (JIRA)" <ji...@apache.org> on 2017/04/10 07:29:41 UTC

[jira] [Updated] (KAFKA-4967) java.io.EOFException Error while committing offsets

     [ https://issues.apache.org/jira/browse/KAFKA-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Upendra Yadav updated KAFKA-4967:
---------------------------------
    Description: 
Kafka server and client: 0.10.0.1
The consumer and producer sides use the latest Kafka jars mentioned above, but still use the old consumer APIs.

Kafka server-side configuration:
listeners=PLAINTEXT://:9092
# The configuration below was for the old clients that existed before; all clients have since moved to the latest Kafka client, 0.10.0.1.
log.message.format.version=0.8.2.1
broker.id.generation.enable=false
unclean.leader.election.enable=false

Some of the Kafka consumer configurations:
auto.commit.enable is overridden to false
auto.offset.reset is overridden to smallest
consumer.timeout.ms is overridden to 100
dual.commit.enabled is overridden to true
fetch.message.max.bytes is overridden to 209715200
group.id is overridden to crm_172_19_255_187_hadoop_tables
offsets.storage is overridden to kafka
rebalance.backoff.ms is overridden to 6000
zookeeper.session.timeout.ms is overridden to 23000
zookeeper.sync.time.ms is overridden to 2000
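For reference, a minimal sketch of how these overrides plug into the old high-level consumer API that the stack trace below comes from (the zookeeper.connect value is an assumption; it is not given in this report):

    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class ConsumerSetupSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // assumed; not shown in the report
            props.put("group.id", "crm_172_19_255_187_hadoop_tables");
            props.put("auto.commit.enable", "false");          // offsets are committed manually
            props.put("auto.offset.reset", "smallest");
            props.put("consumer.timeout.ms", "100");
            props.put("dual.commit.enabled", "true");          // commit to both Kafka and ZooKeeper
            props.put("offsets.storage", "kafka");
            props.put("fetch.message.max.bytes", "209715200");
            props.put("rebalance.backoff.ms", "6000");
            props.put("zookeeper.session.timeout.ms", "23000");
            props.put("zookeeper.sync.time.ms", "2000");

            // Returns the kafka.javaapi.consumer.ZookeeperConsumerConnector seen in the trace.
            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        }
    }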

Below is the exception I get when committing offsets.
The consumer process is still running after this exception,
but when I check the offset position through the Kafka shell scripts, it shows the old position ("Could not fetch offset from topic1_group1 partition [topic1,0] due to missing offset data in zookeeper"). After some time, when the second commit comes, the position gets updated.
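The "missing offset data in zookeeper" message quoted above is what the old ConsumerOffsetChecker shell tool prints when no offset has been stored in ZooKeeper yet; the check was presumably something like the following (ZooKeeper host assumed):

    bin/kafka-consumer-offset-checker.sh --zookeeper localhost:2181 --group topic1_group1 --topic topic1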

Because dual commit is enabled, I think the Kafka-side position gets updated successfully both times.

ERROR kafka.consumer.ZookeeperConsumerConnector: [********], Error while committing offsets.
java.io.EOFException
        at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
        at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
        at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
        at kafka.consumer.ZookeeperConsumerConnector.liftedTree2$1(ZookeeperConsumerConnector.scala:354)
        at kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:351)
        at kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:331)
        at kafka.javaapi.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:111)
        at com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.commitOffset(KafkaHLConsumer.java:173)
        at com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.run(KafkaHLConsumer.java:271)
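The frames above show the EOFException being caught and logged inside the old consumer's own commit path (liftedTree2$1 is the compiler-generated try block around the offsets-channel send/receive), which is why the consumer process keeps running: the exception never reaches the application. KafkaHLConsumer is the reporter's own class; the following reconstruction of its commit call is an assumption, not the actual code:

    import kafka.javaapi.consumer.ConsumerConnector;

    // Hypothetical reconstruction of KafkaHLConsumer.commitOffset().
    class KafkaHLConsumerSketch {
        private final ConsumerConnector connector; // created as in the setup sketch above

        KafkaHLConsumerSketch(ConsumerConnector connector) {
            this.connector = connector;
        }

        void commitOffset() {
            // With auto.commit.enable=false the application commits explicitly.
            // Passing true enables retry-on-failure (bounded by offsets.commit.max.retries).
            // A broken offsets channel is logged by the connector and re-established,
            // which matches the observation that the next commit succeeds.
            connector.commitOffsets(true);
        }
    }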


> java.io.EOFException Error while committing offsets
> ---------------------------------------------------
>
>                 Key: KAFKA-4967
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4967
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.10.0.1
>         Environment: OS : CentOS
>            Reporter: Upendra Yadav
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)