Posted to users@kafka.apache.org by Rodrigo Queiroz Saramago <de...@gmail.com> on 2017/01/20 14:18:36 UTC

NullPointerException on consumer - kafka 0.10.1.1

Hello,

I have a test environment with 3 broker nodes and 1 ZooKeeper node, in which
clients connect using two-way SSL authentication. I recently upgraded Kafka
from version 0.10.1.0 to 0.10.1.1, and now the consumers throw the
following error on startup:

$ bin/kafka-console-consumer.sh \
    --bootstrap-server broker001-node.aws.zup.com.br:9092,broker002-node.aws.zup.com.br:9092,broker003-node.aws.zup.com.br:9092 \
    --topic gateway-topic --new-consumer --consumer.config client.properties
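For reference, client.properties contains the standard two-way SSL client
settings. A representative sketch is below; the paths and passwords are
placeholders, not my real values:

security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore-password>
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>

With this configuration in place, the consumer fails as soon as it starts: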


[2017-01-19 18:40:34,902] ERROR Error processing message, terminating consumer process:  (kafka.tools.ConsoleConsumer$)
java.lang.NullPointerException
    at org.apache.kafka.common.record.ByteBufferInputStream.read(ByteBufferInputStream.java:34)
    at java.util.zip.CheckedInputStream.read(CheckedInputStream.java:59)
    at java.util.zip.GZIPInputStream.readUByte(GZIPInputStream.java:266)
    at java.util.zip.GZIPInputStream.readUShort(GZIPInputStream.java:258)
    at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:164)
    at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
    at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
    at org.apache.kafka.common.record.Compressor.wrapForInput(Compressor.java:280)
    at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.<init>(MemoryRecords.java:247)
    at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(MemoryRecords.java:316)
    at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(MemoryRecords.java:222)
    at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
    at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(Fetcher.java:685)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:424)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1045)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
    at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:100)
    at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:120)
    at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:75)
    at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
    at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Processed a total of 0 messages
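
Judging by the stack, the NPE is thrown while the consumer wraps a
gzip-compressed message set for decompression: the GZIPInputStream
constructor reads the gzip header immediately, and the read from the
underlying buffer is what fails. Below is a minimal, self-contained Java
sketch of that code path (my own reconstruction, using ByteArrayInputStream
in place of Kafka's internal ByteBufferInputStream; class and variable
names are mine, not Kafka's):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipWrapSketch {
    public static void main(String[] args) throws Exception {
        // Build a gzip-compressed payload, standing in for a fetched message set.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(baos)) {
            gz.write("hello".getBytes("UTF-8"));
        }
        ByteBuffer payload = ByteBuffer.wrap(baos.toByteArray());

        // Equivalent of the Compressor.wrapForInput step in the stack trace:
        // the GZIPInputStream constructor reads the gzip header right away,
        // which is the point where the consumer above hits the NPE.
        InputStream in = new GZIPInputStream(new ByteArrayInputStream(
                payload.array(), payload.position(), payload.remaining()));
        byte[] out = new byte[16];
        int n = in.read(out);
        System.out.println(new String(out, 0, n, "UTF-8")); // prints "hello"
    }
}

On a healthy payload this prints "hello"; in my environment the equivalent
step inside the consumer throws the NullPointerException while reading the
header.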


The brokers also report the following error when the consumer tries to
connect to them:

[2017-01-19 18:01:36,631] WARN Failed to send SSL Close message (org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:195)
    at org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:150)
    at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:690)
    at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:47)
    at org.apache.kafka.common.network.Selector.close(Selector.java:487)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:368)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
    at kafka.network.Processor.poll(SocketServer.scala:476)
    at kafka.network.Processor.run(SocketServer.scala:416)
    at java.lang.Thread.run(Thread.java:745)
[2017-01-19 18:04:50,616] INFO [Group Metadata Manager on Broker 1003]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)

I have already checked the certificates and they are correct; in fact, they
work perfectly with the previous version of Kafka (0.10.1.0).

Any idea what might be wrong?

Thanks.

-- 
Rodrigo Q. Saramago