Posted to jira@kafka.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2017/12/14 10:19:00 UTC
[jira] [Commented] (KAFKA-6360) RocksDB segments not removed when store is closed causes re-initialization to fail
[ https://issues.apache.org/jira/browse/KAFKA-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290670#comment-16290670 ]
ASF GitHub Bot commented on KAFKA-6360:
---------------------------------------
GitHub user dguy opened a pull request:
https://github.com/apache/kafka/pull/4324
KAFKA-6360: Clear RocksDB Segments when store is closed
Now that we support re-initializing state stores, we need to clear the segments when the store is closed so that they can be re-opened.
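The idea can be sketched in a few lines. The `Segment`/`Segments` classes below are a hypothetical, minimal illustration of the bug and the fix, not the actual `org.apache.kafka.streams.state.internals.Segments` implementation: on `close()`, each segment is closed and the list is also cleared, so a later re-initialization creates fresh open segments instead of reusing stale closed ones.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a RocksDB-backed segment.
class Segment {
    boolean open = true;

    void close() {
        open = false;
    }
}

// Hypothetical stand-in for the Segments container.
class Segments {
    private final List<Segment> segments = new ArrayList<>();

    // Return the segment for the given index, creating it if absent.
    Segment getOrCreate(int id) {
        while (segments.size() <= id) {
            segments.add(new Segment());
        }
        return segments.get(id);
    }

    void close() {
        for (Segment s : segments) {
            s.close();
        }
        // The fix: drop the closed segments so re-initialization
        // re-creates them rather than handing back closed ones.
        segments.clear();
    }
}
```

Without the `segments.clear()` call, `getOrCreate` after `close()` would return the old closed segment, which is exactly the `InvalidStateStoreException` scenario described in the issue.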
### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/dguy/kafka kafka-6360
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/kafka/pull/4324.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #4324
----
commit 4c84af7522d847f82b14e5b3b4c589b0223a5bd8
Author: Damian Guy <da...@gmail.com>
Date: 2017-12-14T10:13:44Z
clear segments on close
----
> RocksDB segments not removed when store is closed causes re-initialization to fail
> ----------------------------------------------------------------------------------
>
> Key: KAFKA-6360
> URL: https://issues.apache.org/jira/browse/KAFKA-6360
> Project: Kafka
> Issue Type: Bug
> Components: streams
> Affects Versions: 1.1.0
> Reporter: Damian Guy
> Assignee: Damian Guy
> Priority: Blocker
> Fix For: 1.1.0
>
>
> When a store is re-initialized, it is first closed before being opened again. When this happens, the segments in the {{Segments}} class are closed but not removed from the list of segments, so when the store is re-initialized the old, closed segments are used. This results in:
> {code}
> [2017-12-13 09:29:32,037] ERROR [streams-saak-test-client-StreamThread-3] task [1_3] Failed to flush state store KSTREAM-AGGREGATE-STATE-STORE-0000000024: (org.apache.kafka.streams.processor.internals.ProcessorStateManager)
> org.apache.kafka.streams.errors.InvalidStateStoreException: Store KSTREAM-AGGREGATE-STATE-STORE-0000000024.1513080000000 is currently closed
> at org.apache.kafka.streams.state.internals.RocksDBStore.validateStoreOpen(RocksDBStore.java:241)
> at org.apache.kafka.streams.state.internals.RocksDBStore.put(RocksDBStore.java:289)
> at org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:102)
> at org.apache.kafka.streams.state.internals.RocksDBSessionStore.put(RocksDBSessionStore.java:122)
> at org.apache.kafka.streams.state.internals.ChangeLoggingSessionBytesStore.put(ChangeLoggingSessionBytesStore.java:78)
> at org.apache.kafka.streams.state.internals.ChangeLoggingSessionBytesStore.put(ChangeLoggingSessionBytesStore.java:33)
> at org.apache.kafka.streams.state.internals.CachingSessionStore.putAndMaybeForward(CachingSessionStore.java:179)
> at org.apache.kafka.streams.state.internals.CachingSessionStore.access$000(CachingSessionStore.java:38)
> at org.apache.kafka.streams.state.internals.CachingSessionStore$1.apply(CachingSessionStore.java:88)
> at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:142)
> at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:100)
> at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:127)
> at org.apache.kafka.streams.state.internals.CachingSessionStore.flush(CachingSessionStore.java:196)
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)