Posted to issues@nifi.apache.org by "Daniel (JIRA)" <ji...@apache.org> on 2018/08/08 14:34:00 UTC
[jira] [Updated] (NIFI-5498) QueryDatabaseTable unable to store state in Zookeeper
[ https://issues.apache.org/jira/browse/NIFI-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Daniel updated NIFI-5498:
-------------------------
Description:
Running NiFi 1.7.1 in a clustered environment with an external ZooKeeper. I get the following stack trace when running any of my QueryDatabaseTable processors:
{code:java}
QueryDatabaseTable[id=a26cafd1-56e4-3f52-99f5-8143abe3ce3a] failed to update State Manager, maximum observed values will not be recorded: java.io.IOException: Failed to set cluster-wide state in ZooKeeper for component with ID a26cafd1-56e4-3f52-99f5-8143abe3ce3a
java.io.IOException: Failed to set cluster-wide state in ZooKeeper for component with ID a26cafd1-56e4-3f52-99f5-8143abe3ce3a
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:343)
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:283)
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:228)
at org.apache.nifi.controller.state.manager.StandardStateManagerProvider$1.setState(StandardStateManagerProvider.java:271)
at org.apache.nifi.controller.state.StandardStateManager.setState(StandardStateManager.java:79)
at org.apache.nifi.controller.lifecycle.TaskTerminationAwareStateManager.setState(TaskTerminationAwareStateManager.java:64)
at org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:430)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /nifi/components/a26cafd1-56e4-3f52-99f5-8143abe3ce3a
at org.apache.zookeeper.KeeperException.create(KeeperException.java:121)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.createNode(ZooKeeperStateProvider.java:360)
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.setState(ZooKeeperStateProvider.java:321)
... 16 common frames omitted
{code}
The processors do work, but their inability to store state limits their usefulness, and the error message floods the log file. Any suggestions would be welcome.
Other things to note:
All my other processors (AMQP, Kafka, GetHTTP) work okay. I've reproduced this in several of my environments, including single-node, two-node, and three-node clusters.
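For context, this InvalidACL error is commonly seen when the ZooKeeperStateProvider in conf/state-management.xml has its Access Control property set to CreatorOnly while the NiFi nodes connect to ZooKeeper without an authenticated (SASL/Kerberos) session. A minimal sketch of the relevant provider entry, assuming an unsecured external ZooKeeper (host names are placeholders):
{code:xml}
<stateManagement>
    <cluster-provider>
        <id>zk-provider</id>
        <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
        <!-- Comma-separated list of ZooKeeper servers; placeholder host names -->
        <property name="Connect String">zk1.example.com:2181,zk2.example.com:2181</property>
        <property name="Root Node">/nifi</property>
        <property name="Session Timeout">10 seconds</property>
        <!-- "CreatorOnly" requires an authenticated session; with an unsecured
             ZooKeeper connection it can yield KeeperErrorCode = InvalidACL.
             "Open" allows any client to read/write the state znodes. -->
        <property name="Access Control">Open</property>
    </cluster-provider>
</stateManagement>
{code}
If the cluster is meant to be secured, the alternative is to keep CreatorOnly and configure SASL/Kerberos for NiFi's ZooKeeper client; the effective ACL on an affected znode can be inspected from zkCli.sh with getAcl /nifi/components/<component-id>.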
> QueryDatabaseTable unable to store state in Zookeeper
> -----------------------------------------------------
>
> Key: NIFI-5498
> URL: https://issues.apache.org/jira/browse/NIFI-5498
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Affects Versions: 1.7.1
> Reporter: Daniel
> Priority: Critical
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)