Posted to dev@kafka.apache.org by "dan norwood (JIRA)" <ji...@apache.org> on 2017/05/10 21:58:04 UTC

[jira] [Commented] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

    [ https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16005525#comment-16005525 ] 

dan norwood commented on KAFKA-5213:
------------------------------------

Turns out I was running a trunk client against 0.10.2.1 brokers. Currently rebuilding trunk locally to try all-trunk everything.
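For readers unfamiliar with the exception, the pattern behind it is a batch builder that refuses appends once the batch has been sealed for sending. The sketch below is a hypothetical simplification (class and method names are illustrative, not the actual MemoryRecordsBuilder source) showing how such a guard produces exactly this IllegalStateException:

```java
// Hypothetical, minimal sketch of a closed-for-appends guard; NOT the real
// org.apache.kafka.common.record.MemoryRecordsBuilder implementation.
import java.nio.ByteBuffer;

public class BuilderSketch {
    static class RecordsBuilder {
        private final ByteBuffer buffer = ByteBuffer.allocate(1024);
        private boolean closedForAppends = false;

        // Appends are only legal while the batch is still open.
        void append(byte[] record) {
            ensureOpenForRecordAppend();
            buffer.put(record);
        }

        // Once the batch is handed off (e.g. queued for the wire),
        // further appends must fail rather than corrupt the batch.
        void closeForRecordAppends() {
            closedForAppends = true;
        }

        private void ensureOpenForRecordAppend() {
            if (closedForAppends)
                throw new IllegalStateException(
                    "Tried to append a record, but MemoryRecordsBuilder is closed for record appends");
        }
    }

    public static void main(String[] args) {
        RecordsBuilder builder = new RecordsBuilder();
        builder.append(new byte[]{1, 2, 3});  // fine while open
        builder.closeForRecordAppends();      // batch sealed
        try {
            builder.append(new byte[]{4});    // reproduces the guard failure
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

In the report above, the append and the close raced through the producer's RecordAccumulator path; mixing a trunk client with 0.10.2.1 brokers is what exposed it here.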

> IllegalStateException in ensureOpenForRecordAppend
> --------------------------------------------------
>
>                 Key: KAFKA-5213
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5213
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: dan norwood
>            Assignee: Apurva Mehta
>            Priority: Critical
>             Fix For: 0.11.0.0
>
>
> I have a streams app that was working recently while pointing at trunk. This morning I ran it and now get:
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4] Streams application error during processing: {} (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but MemoryRecordsBuilder is closed for record appends
> 	at org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
> 	at org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
> 	at org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
> 	at org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
> 	at org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
> 	at org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
> 	at org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
> 	at org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
> 	at org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
> 	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
> 	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
> 	at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
> 	at org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
> 	at org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
> 	at org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
> 	at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
> 	at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
> 	at org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
> 	at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
> 	at org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
> 	at org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
> 	at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
> 	at org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
> 	at org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
> 	at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
> 	at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
> 	at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
> 	at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
> 	at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
> 	at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
> 	at org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
> 	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
> 	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)