Posted to dev@nifi.apache.org by nifi-san <na...@gmail.com> on 2017/06/28 13:45:52 UTC

Not able to start nifi nodes when clustered

I am trying to set up a fresh NiFi cluster with 2 nodes. Details below:

OS: 104-Ubuntu
NiFi version: 1.3.0
JDK: Oracle JDK 1.8.0_131

I tried to start the NiFi nodes in non-clustered mode and both of them
started up fine.
After that, I have been trying to set up the 2-node NiFi cluster, and it fails
every time with the below error:

2017-06-28 18:44:12,769 WARN [main] o.a.n.d.html.HtmlDocumentationWriter Could not link to org.apache.nifi.couchbase.CouchbaseClusterControllerService because no bundles were found
2017-06-28 18:44:12,857 WARN [main] o.a.n.d.html.HtmlDocumentationWriter Could not link to org.apache.nifi.couchbase.CouchbaseClusterControllerService because no bundles were found
2017-06-28 18:44:12,894 WARN [main] o.a.n.d.html.HtmlDocumentationWriter Could not link to org.apache.nifi.distributed.cache.server.map.DistributedMapCacheClient because no bundles were found
2017-06-28 18:44:12,912 INFO [main] org.eclipse.jetty.server.Server
jetty-9.4.3.v20170317
2017-06-28 18:44:13,068 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=81ms
2017-06-28 18:44:13,210 INFO [main] org.eclipse.jetty.server.session
DefaultSessionIdManager workerName=node0
2017-06-28 18:44:13,211 INFO [main] org.eclipse.jetty.server.session No
SessionScavenger set, using defaults
2017-06-28 18:44:13,212 INFO [main] org.eclipse.jetty.server.session
Scavenging every 600000ms
2017-06-28 18:44:13,246 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@34009349{/nifi-image-viewer-1.3.0,file:///opt/nifi/nifi-1.3.0/work/jetty/nifi-image-viewer-1.3.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-media-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-image-viewer-1.3.0.war}
2017-06-28 18:44:14,113 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=733ms
2017-06-28 18:44:14,261 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@44fd7ba4{/nifi-update-attribute-ui-1.3.0,file:///opt/nifi/nifi-1.3.0/work/jetty/nifi-update-attribute-ui-1.3.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-update-attribute-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-update-attribute-ui-1.3.0.war}
2017-06-28 18:44:14,814 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=464ms
2017-06-28 18:44:14,858 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@167a21b{/nifi-standard-content-viewer-1.3.0,file:///opt/nifi/nifi-1.3.0/work/jetty/nifi-standard-content-viewer-1.3.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-standard-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-standard-content-viewer-1.3.0.war}
2017-06-28 18:44:16,299 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=1235ms
2017-06-28 18:44:16,363 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@3dc39459{/nifi-jolt-transform-json-ui-1.3.0,file:///opt/nifi/nifi-1.3.0/work/jetty/nifi-jolt-transform-json-ui-1.3.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-standard-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-jolt-transform-json-ui-1.3.0.war}
2017-06-28 18:44:16,539 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=55ms
2017-06-28 18:44:16,549 INFO [main] org.eclipse.jetty.ContextHandler./nifi
No Spring WebApplicationInitializer types detected on classpath
2017-06-28 18:44:16,591 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@2bdab835{/nifi,file:///opt/nifi/nifi-1.3.0/work/jetty/nifi-web-ui-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-ui-1.3.0.war}
2017-06-28 18:44:16,713 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=77ms
2017-06-28 18:44:16,755 INFO [main] o.eclipse.jetty.ContextHandler./nifi-api
No Spring WebApplicationInitializer types detected on classpath
2017-06-28 18:44:16,792 INFO [main] o.eclipse.jetty.ContextHandler./nifi-api
Initializing Spring root WebApplicationContext
2017-06-28 18:44:19,047 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Determined default nifi.properties path to be '/opt/nifi/nifi-1.3.0/./conf/nifi.properties'
2017-06-28 18:44:19,048 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Determined default nifi.properties path to be '/opt/nifi/nifi-1.3.0/./conf/nifi.properties'
2017-06-28 18:44:19,049 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader Loaded 125 properties from /opt/nifi/nifi-1.3.0/./conf/nifi.properties
2017-06-28 18:44:20,511 INFO [main] o.a.nifi.util.FileBasedVariableRegistry Loaded 86 properties from system properties and environment variables
2017-06-28 18:44:20,511 INFO [main] o.a.nifi.util.FileBasedVariableRegistry Loaded a total of 86 properties.  Including precedence overrides effective accessible registry key size is 86
2017-06-28 18:44:20,558 INFO [main] o.a.n.c.r.WriteAheadFlowFileRepository
Initialized FlowFile Repository using 256 partitions
2017-06-28 18:44:20,741 INFO [main] o.a.n.p.lucene.SimpleIndexManager Index Writer for ./provenance_repository/index-1498647000000 has been returned to Index Manager and is no longer in use. Closing Index Writer
2017-06-28 18:44:20,745 INFO [main] o.a.n.p.PersistentProvenanceRepository
Recovered 0 records
2017-06-28 18:44:20,753 INFO [main] o.a.n.p.PersistentProvenanceRepository
Created new Provenance Event Writers for events starting with ID 0
2017-06-28 18:44:20,757 INFO [main] o.a.n.c.repository.FileSystemRepository Maximum Threshold for Container default set to 15975036846 bytes; if volume exceeds this size, archived data will be deleted until it no longer exceeds this size
2017-06-28 18:44:20,757 INFO [main] o.a.n.c.repository.FileSystemRepository
Initializing FileSystemRepository with 'Always Sync' set to false
2017-06-28 18:44:20,839 INFO [main] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@468eff41 finished recovering records. Performing Checkpoint to ensure proper state of Partitions before updates
2017-06-28 18:44:20,839 INFO [main] org.wali.MinimalLockingWriteAheadLog
Successfully recovered 0 records in 3 milliseconds
2017-06-28 18:44:20,850 INFO [main] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@468eff41 checkpointed with 0 Records and 0 Swap Files in 10 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit Logs time = 2 millis), max Transaction ID -1
2017-06-28 18:44:20,894 ERROR [main] o.a.z.server.quorum.QuorumPeerConfig  does not have the form host:port or host:port:port  or host:port:port:type
2017-06-28 18:44:20,897 WARN [main] org.eclipse.jetty.webapp.WebAppContext Failed startup of context o.e.j.w.WebAppContext@7e764e5c{/nifi-api,file:///opt/nifi/nifi-1.3.0/work/jetty/nifi-web-api-1.3.0.war/webapp/,UNAVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-api-1.3.0.war}
org.apache.nifi.web.NiFiCoreException: Unable to start Flow Controller.
        at
org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:88)
        at
org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:876)
        at
org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:532)
        at
org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:839)
        at
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:344)
        at
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1480)
        at
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1442)
        at
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:799)
        at
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
        at
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:540)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
        at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
        at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at
org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:290)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
        at org.eclipse.jetty.server.Server.start(Server.java:452)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
        at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.Server.doStart(Server.java:419)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at
org.apache.nifi.web.server.JettyServer.start(JettyServer.java:705)
        at org.apache.nifi.NiFi.<init>(NiFi.java:160)
        at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flowService': FactoryBean threw exception on object creation; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flowController': FactoryBean threw exception on object creation; nested exception is java.lang.ArrayIndexOutOfBoundsException: 1
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1585)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:317)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
        at
org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1060)
        at
org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:55)
        ... 28 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flowController': FactoryBean threw exception on object creation; nested exception is java.lang.ArrayIndexOutOfBoundsException: 1
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1585)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:317)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
        at
org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1060)
        at
org.apache.nifi.spring.StandardFlowServiceFactoryBean.getObject(StandardFlowServiceFactoryBean.java:48)
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
        ... 34 common frames omitted
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
        at
org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:188)
        at
org.apache.nifi.controller.state.server.ZooKeeperStateServer.<init>(ZooKeeperStateServer.java:53)
        at
org.apache.nifi.controller.state.server.ZooKeeperStateServer.create(ZooKeeperStateServer.java:176)
        at
org.apache.nifi.controller.FlowController.<init>(FlowController.java:575)
        at
org.apache.nifi.controller.FlowController.createClusteredInstance(FlowController.java:417)
        at
org.apache.nifi.spring.FlowControllerFactoryBean.getObject(FlowControllerFactoryBean.java:61)
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
        ... 41 common frames omitted
2017-06-28 18:44:21,436 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=408ms
2017-06-28 18:44:21,465 INFO [main] o.e.j.C./nifi-content-viewer No Spring
WebApplicationInitializer types detected on classpath
2017-06-28 18:44:21,468 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@7ed5cc8c{/nifi-content-viewer,file:///opt/nifi/nifi-1.3.0/work/jetty/nifi-web-content-viewer-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-content-viewer-1.3.0.war}
2017-06-28 18:44:21,470 INFO [main] o.e.jetty.server.handler.ContextHandler
Started o.e.j.s.h.ContextHandler@374bf34b{/nifi-docs,null,AVAILABLE}
2017-06-28 18:44:21,500 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=19ms
2017-06-28 18:44:21,502 INFO [main] o.e.jetty.ContextHandler./nifi-docs No
Spring WebApplicationInitializer types detected on classpath
2017-06-28 18:44:21,529 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@67aaf882{/nifi-docs,file:///opt/nifi/nifi-1.3.0/work/jetty/nifi-web-docs-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-docs-1.3.0.war}
2017-06-28 18:44:21,566 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=21ms
2017-06-28 18:44:21,581 INFO [main] org.eclipse.jetty.ContextHandler./ No
Spring WebApplicationInitializer types detected on classpath
2017-06-28 18:44:21,584 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@65b1693c{/,file:///opt/nifi/nifi-1.3.0/work/jetty/nifi-web-error-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.3.0.war}
2017-06-28 18:44:21,600 INFO [main] o.eclipse.jetty.server.AbstractConnector
Started ServerConnector@1c09bb7a{HTTP/1.1,[http/1.1]}{hostname-1:8080}
2017-06-28 18:44:21,601 INFO [main] org.eclipse.jetty.server.Server Started
@16693ms
2017-06-28 18:44:21,601 WARN [main] org.apache.nifi.web.server.JettyServer
Failed to start web server... shutting down.
org.apache.nifi.web.NiFiCoreException: Unable to start Flow Controller.
        at
org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:88)
        at
org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:876)
        at
org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:532)
        at
org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:839)
        at
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:344)
        at
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1480)
        at
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1442)
        at
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:799)
        at
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
        at
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:540)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
        at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
        at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at
org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:290)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
        at org.eclipse.jetty.server.Server.start(Server.java:452)
        at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
        at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.Server.doStart(Server.java:419)
        at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at
org.apache.nifi.web.server.JettyServer.start(JettyServer.java:705)
        at org.apache.nifi.NiFi.<init>(NiFi.java:160)
        at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flowService': FactoryBean threw exception on object creation; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flowController': FactoryBean threw exception on object creation; nested exception is java.lang.ArrayIndexOutOfBoundsException: 1
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1585)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:317)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
        at
org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1060)
        at
org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:55)
        ... 28 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flowController': FactoryBean threw exception on object creation; nested exception is java.lang.ArrayIndexOutOfBoundsException: 1
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1585)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:317)
        at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
        at
org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1060)
        at
org.apache.nifi.spring.StandardFlowServiceFactoryBean.getObject(StandardFlowServiceFactoryBean.java:48)
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
        ... 34 common frames omitted
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
        at
org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:188)
        at
org.apache.nifi.controller.state.server.ZooKeeperStateServer.<init>(ZooKeeperStateServer.java:53)
        at
org.apache.nifi.controller.state.server.ZooKeeperStateServer.create(ZooKeeperStateServer.java:176)
        at
org.apache.nifi.controller.FlowController.<init>(FlowController.java:575)
        at
org.apache.nifi.controller.FlowController.createClusteredInstance(FlowController.java:417)
        at
org.apache.nifi.spring.FlowControllerFactoryBean.getObject(FlowControllerFactoryBean.java:61)
        at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
        ... 41 common frames omitted
2017-06-28 18:44:21,602 INFO [Thread-1] org.apache.nifi.NiFi Initiating
shutdown of Jetty web server...
2017-06-28 18:44:21,613 INFO [Thread-1]
o.eclipse.jetty.server.AbstractConnector Stopped
ServerConnector@1c09bb7a{HTTP/1.1,[http/1.1]}{hostname-1:8080}
2017-06-28 18:44:21,613 INFO [Thread-1] org.eclipse.jetty.server.session
Stopped scavenging

I have validated all the configurations and below are my nifi.properties and
zookeeper.properties files:

nifi.properties:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Core Properties #
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.enabled=true
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flow.configuration.archive.max.time=30 days
nifi.flow.configuration.archive.max.storage=500 MB
nifi.flow.configuration.archive.max.count=
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis

nifi.authorizer.configuration.file=./conf/authorizers.xml
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components

####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
#nifi.state.management.embedded.zookeeper.start=false
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties


# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE

# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.always.sync=false

nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
nifi.swap.in.period=5 sec
nifi.swap.in.threads=1
nifi.swap.out.period=5 sec
nifi.swap.out.threads=4

# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=/nifi-content-viewer/

# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
nifi.provenance.repository.debug.frequency=1_000_000
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=

# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.journal.count=16
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable.  Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536

# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000

# Component Status Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min

# Site to Site properties
nifi.remote.input.host=hostname-1
nifi.remote.input.secure=false
nifi.remote.input.socket.port=9998
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec

# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=hostname-1
nifi.web.http.port=8080
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200

# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=

nifi.security.keystore=
nifi.security.keystoreType=
nifi.security.keystorePasswd=
nifi.security.keyPasswd=
nifi.security.truststore=
nifi.security.truststoreType=
nifi.security.truststorePasswd=
nifi.security.needClientAuth=
nifi.security.user.authorizer=file-provider
nifi.security.user.login.identity.provider=
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=

# Identity Mapping Properties #
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates
# normalizing DNs from certificates and principals from Kerberos into a common identity string:
#
# nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.security.identity.mapping.value.dn=$1@$2
# nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance@(.*?)$
# nifi.security.identity.mapping.value.kerb=$1@$2

# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false

# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=hostname-1
nifi.cluster.node.protocol.port=9999
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=

# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=hostname-1:2181,hostname-2:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi

# kerberos #
nifi.kerberos.krb5.file=

# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=

# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours

# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=

zookeeper.properties:

#
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
#
#

clientPort=2181
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=2000
dataDir=./state/zookeeper
autopurge.snapRetainCount=30

#
# Specifies the servers that are part of this zookeeper ensemble. For
# every NiFi instance running an embedded zookeeper, there needs to be
# a server entry below. For instance:
#
server.1=hostname-1:2888:3888
server.2=hostname-2:2888:3888
# server.2=nifi-node2-hostname:2888:3888
# server.3=nifi-node3-hostname:2888:3888
#
# The index of the server corresponds to the myid file that gets created
# in the dataDir of each node running an embedded zookeeper. See the
# administration guide for more details.
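For reference, the myid convention the comment above describes means each node writes its own server index into a file named myid under the configured dataDir. A minimal sketch of setting that up, assuming the dataDir above:

    # on hostname-1
    mkdir -p ./state/zookeeper && echo 1 > ./state/zookeeper/myid
    # on hostname-2
    mkdir -p ./state/zookeeper && echo 2 > ./state/zookeeper/myid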


NOTE: I have not yet secured the cluster; I plan to do that after this.

Any help is appreciated.






Re: Not able to start nifi nodes when clustered

Posted by Mark Payne <ma...@hotmail.com>.
Hello,

We see in the attached log the following line:

2017-06-28 18:44:20,894 ERROR [main] o.a.z.server.quorum.QuorumPeerConfig  does not have the form host:port or host:port:port  or host:port:port:type

The code in ZooKeeper that produces this line looks like this:

            } else if (key.startsWith("server.")) {
                int dot = key.indexOf('.');
                long sid = Long.parseLong(key.substring(dot + 1));
                String parts[] = value.split(":");
                if ((parts.length != 2) && (parts.length != 3) && (parts.length !=4)) {
                    LOG.error(value
                       + " does not have the form host:port or host:port:port " +
                       " or host:port:port:type");
                }

So we can see here that it is in fact logging the invalid value. But since we don't see any value in the log message, it must be some
sort of whitespace. From what you pasted, all looks good, but can you double-check your zookeeper.properties file again and
make sure that there's not something in it like:

server.3=

Or perhaps some sort of non-printable character in one of the two server.1=, server.2= lines?
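To make the failure mode concrete, here is a minimal standalone sketch (my own illustration, not the actual ZooKeeper code) of why an empty server value ends in exactly this exception: after logging the malformed entry, the parser still goes on to read parts[1], which does not exist.

    public class ServerEntryCheck {
        public static void main(String[] args) {
            String value = "";                      // e.g. what a stray "server.3=" line yields
            String[] parts = value.split(":");      // "" splits to [""], so parts.length == 1
            System.out.println(parts.length);       // prints 1, so the form check above logs an error
            int port = Integer.parseInt(parts[1]);  // on Java 8, throws ArrayIndexOutOfBoundsException: 1
        }
    }

One quick way to spot stray whitespace or non-printable characters is cat -A conf/zookeeper.properties (GNU coreutils), which shows line endings as $ and tabs as ^I.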

Thanks
-Mark




On Jun 28, 2017, at 9:45 AM, nifi-san <na...@gmail.com>> wrote:

I am trying to set up a fresh nifi cluster with 2 nodes.Details below:-

OS-104-Ubuntu
Nifi version 1.3.0
jdk - Oracle jdk-1.8.0_131

I tried to start the nifi nodes in a non clustered mode and both of them
started up fine.
After that,I have been trying to set up the 2 node nifi cluster and it fails
everytime with the below error:-

2017-06-28 18:44:12,769 WARN [main] o.a.n.d.html.HtmlDocumentationWriter
Could not link to
org.apache.nifi.couchbase.CouchbaseClusterControllerService becaus
e no bundles were found
2017-06-28 18:44:12,857 WARN [main] o.a.n.d.html.HtmlDocumentationWriter
Could not link to
org.apache.nifi.couchbase.CouchbaseClusterControllerService becaus
e no bundles were found
2017-06-28 18:44:12,894 WARN [main] o.a.n.d.html.HtmlDocumentationWriter
Could not link to
org.apache.nifi.distributed.cache.server.map.DistributedMapCacheCl
ient because no bundles were found
2017-06-28 18:44:12,912 INFO [main] org.eclipse.jetty.server.Server
jetty-9.4.3.v20170317
2017-06-28 18:44:13,068 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=81ms
2017-06-28 18:44:13,210 INFO [main] org.eclipse.jetty.server.session
DefaultSessionIdManager workerName=node0
2017-06-28 18:44:13,211 INFO [main] org.eclipse.jetty.server.session No
SessionScavenger set, using defaults
2017-06-28 18:44:13,212 INFO [main] org.eclipse.jetty.server.session
Scavenging every 600000ms
2017-06-28 18:44:13,246 INFO [main] o.e.jetty.server.handler.ContextHandler
Started
o.e.j.w.WebAppContext@34009349{/nifi-image-viewer-1.3.0,file:///opt/nifi/
nifi-1.3.0/work/jetty/nifi-image-viewer-1.3.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-media-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/ni
fi-image-viewer-1.3.0.war}
2017-06-28 18:44:14,113 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=733ms
2017-06-28 18:44:14,261 INFO [main] o.e.jetty.server.handler.ContextHandler
Started
o.e.j.w.WebAppContext@44fd7ba4{/nifi-update-attribute-ui-1.3.0,file:///op
t/nifi/nifi-1.3.0/work/jetty/nifi-update-attribute-ui-1.3.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-update-attribute-nar-1.3.0.nar-unpacked/META-IN
F/bundled-dependencies/nifi-update-attribute-ui-1.3.0.war}
2017-06-28 18:44:14,814 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=464ms
2017-06-28 18:44:14,858 INFO [main] o.e.jetty.server.handler.ContextHandler
Started
o.e.j.w.WebAppContext@167a21b{/nifi-standard-content-viewer-1.3.0,file://
/opt/nifi/nifi-1.3.0/work/jetty/nifi-standard-content-viewer-1.3.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-standard-nar-1.3.0.nar-unpacked/META-INF
/bundled-dependencies/nifi-standard-content-viewer-1.3.0.war}
2017-06-28 18:44:16,299 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=1235ms
2017-06-28 18:44:16,363 INFO [main] o.e.jetty.server.handler.ContextHandler
Started
o.e.j.w.WebAppContext@3dc39459{/nifi-jolt-transform-json-ui-1.3.0,file://
/opt/nifi/nifi-1.3.0/work/jetty/nifi-jolt-transform-json-ui-1.3.0.war/webapp/,AVAILABLE}{./work/nar/extensions/nifi-standard-nar-1.3.0.nar-unpacked/META-INF/
bundled-dependencies/nifi-jolt-transform-json-ui-1.3.0.war}
2017-06-28 18:44:16,539 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=55ms
2017-06-28 18:44:16,549 INFO [main] org.eclipse.jetty.ContextHandler./nifi
No Spring WebApplicationInitializer types detected on classpath
2017-06-28 18:44:16,591 INFO [main] o.e.jetty.server.handler.ContextHandler
Started
o.e.j.w.WebAppContext@2bdab835{/nifi,file:///opt/nifi/nifi-1.3.0/work/jet
ty/nifi-web-ui-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-ui-1.3.0.war}
2017-06-28 18:44:16,713 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=77ms
2017-06-28 18:44:16,755 INFO [main] o.eclipse.jetty.ContextHandler./nifi-api
No Spring WebApplicationInitializer types detected on classpath
2017-06-28 18:44:16,792 INFO [main] o.eclipse.jetty.ContextHandler./nifi-api
Initializing Spring root WebApplicationContext
2017-06-28 18:44:19,047 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader
Determined default nifi.properties path to be
'/opt/nifi/nifi-1.3.0/./conf/nifi.
properties'
2017-06-28 18:44:19,048 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader
Determined default nifi.properties path to be
'/opt/nifi/nifi-1.3.0/./conf/nifi.
properties'
2017-06-28 18:44:19,049 INFO [main] o.a.nifi.properties.NiFiPropertiesLoader
Loaded 125 properties from /opt/nifi/nifi-1.3.0/./conf/nifi.properties
2017-06-28 18:44:20,511 INFO [main] o.a.nifi.util.FileBasedVariableRegistry
Loaded 86 properties from system properties and environment variables
2017-06-28 18:44:20,511 INFO [main] o.a.nifi.util.FileBasedVariableRegistry
Loaded a total of 86 properties.  Including precedence overrides effective
access
ible registry key size is 86
2017-06-28 18:44:20,558 INFO [main] o.a.n.c.r.WriteAheadFlowFileRepository
Initialized FlowFile Repository using 256 partitions
2017-06-28 18:44:20,741 INFO [main] o.a.n.p.lucene.SimpleIndexManager Index
Writer for ./provenance_repository/index-1498647000000 has been returned to
Index
Manager and is no longer in use. Closing Index Writer
2017-06-28 18:44:20,745 INFO [main] o.a.n.p.PersistentProvenanceRepository
Recovered 0 records
2017-06-28 18:44:20,753 INFO [main] o.a.n.p.PersistentProvenanceRepository
Created new Provenance Event Writers for events starting with ID 0
2017-06-28 18:44:20,757 INFO [main] o.a.n.c.repository.FileSystemRepository
Maximum Threshold for Container default set to 15975036846 bytes; if volume
excee
ds this size, archived data will be deleted until it no longer exceeds this
size
2017-06-28 18:44:20,757 INFO [main] o.a.n.c.repository.FileSystemRepository
Initializing FileSystemRepository with 'Always Sync' set to false
2017-06-28 18:44:20,839 INFO [main] org.wali.MinimalLockingWriteAheadLog
org.wali.MinimalLockingWriteAheadLog@468eff41 finished recovering records.
Performin
g Checkpoint to ensure proper state of Partitions before updates
2017-06-28 18:44:20,839 INFO [main] org.wali.MinimalLockingWriteAheadLog
Successfully recovered 0 records in 3 milliseconds
2017-06-28 18:44:20,850 INFO [main] org.wali.MinimalLockingWriteAheadLog
org.wali.MinimalLockingWriteAheadLog@468eff41 checkpointed with 0 Records
and 0 Swap
Files in 10 milliseconds (Stop-the-world time = 1 milliseconds, Clear Edit
Logs time = 2 millis), max Transaction ID -1
2017-06-28 18:44:20,894 ERROR [main] o.a.z.server.quorum.QuorumPeerConfig
does not have the form host:port or host:port:port  or host:port:port:type
2017-06-28 18:44:20,897 WARN [main] org.eclipse.jetty.webapp.WebAppContext
Failed startup of context
o.e.j.w.WebAppContext@7e764e5c{/nifi-api,file:///opt/nif
i/nifi-1.3.0/work/jetty/nifi-web-api-1.3.0.war/webapp/,UNAVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/
nifi-web-api-1.3.0.war}
org.apache.nifi.web.NiFiCoreException: Unable to start Flow Controller.
       at
org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:88)
       at
org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:876)
       at
org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:532)
       at
org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:839)
       at
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:344)
       at
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1480)
       at
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1442)
       at
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:799)
       at
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
       at
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:540)
       at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
       at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
       at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
       at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
       at
org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:290)
       at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
       at org.eclipse.jetty.server.Server.start(Server.java:452)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
       at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
       at org.eclipse.jetty.server.Server.doStart(Server.java:419)
       at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
       at
org.apache.nifi.web.server.JettyServer.start(JettyServer.java:705)
       at org.apache.nifi.NiFi.<init>(NiFi.java:160)
       at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name 'flowService': FactoryBean threw exception on object
creati
on; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'flowController': FactoryBean threw exception
on object creation; nested exception is
java.lang.ArrayIndexOutOfBoundsException: 1
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1585)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:317)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
       at
org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1060)
       at
org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:55)
       ... 28 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name 'flowController': FactoryBean threw exception on
object cre
ation; nested exception is java.lang.ArrayIndexOutOfBoundsException: 1
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1585)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:317)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
       at
org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1060)
       at
org.apache.nifi.spring.StandardFlowServiceFactoryBean.getObject(StandardFlowServiceFactoryBean.java:48)
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
       ... 34 common frames omitted
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
       at
org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:188)
       at
org.apache.nifi.controller.state.server.ZooKeeperStateServer.<init>(ZooKeeperStateServer.java:53)
       at
org.apache.nifi.controller.state.server.ZooKeeperStateServer.create(ZooKeeperStateServer.java:176)
       at
org.apache.nifi.controller.FlowController.<init>(FlowController.java:575)
       at
org.apache.nifi.controller.FlowController.createClusteredInstance(FlowController.java:417)
       at
org.apache.nifi.spring.FlowControllerFactoryBean.getObject(FlowControllerFactoryBean.java:61)
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
       ... 41 common frames omitted
2017-06-28 18:44:21,436 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=408ms
2017-06-28 18:44:21,465 INFO [main] o.e.j.C./nifi-content-viewer No Spring
WebApplicationInitializer types detected on classpath
2017-06-28 18:44:21,468 INFO [main] o.e.jetty.server.handler.ContextHandler
Started
o.e.j.w.WebAppContext@7ed5cc8c{/nifi-content-viewer,file:///opt/nifi/nifi
-1.3.0/work/jetty/nifi-web-content-viewer-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependenci
es/nifi-web-content-viewer-1.3.0.war}
2017-06-28 18:44:21,470 INFO [main] o.e.jetty.server.handler.ContextHandler
Started o.e.j.s.h.ContextHandler@374bf34b{/nifi-docs,null,AVAILABLE}
2017-06-28 18:44:21,500 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=19ms
2017-06-28 18:44:21,502 INFO [main] o.e.jetty.ContextHandler./nifi-docs No
Spring WebApplicationInitializer types detected on classpath
2017-06-28 18:44:21,529 INFO [main] o.e.jetty.server.handler.ContextHandler
Started
o.e.j.w.WebAppContext@67aaf882{/nifi-docs,file:///opt/nifi/nifi-1.3.0/wor
k/jetty/nifi-web-docs-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-docs-1.3
.0.war}
2017-06-28 18:44:21,566 INFO [main] o.e.j.a.AnnotationConfiguration Scanning
elapsed time=21ms
2017-06-28 18:44:21,581 INFO [main] org.eclipse.jetty.ContextHandler./ No
Spring WebApplicationInitializer types detected on classpath
2017-06-28 18:44:21,584 INFO [main] o.e.jetty.server.handler.ContextHandler
Started
o.e.j.w.WebAppContext@65b1693c{/,file:///opt/nifi/nifi-1.3.0/work/jetty/n
ifi-web-error-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.3.0.war}
2017-06-28 18:44:21,600 INFO [main] o.eclipse.jetty.server.AbstractConnector
Started ServerConnector@1c09bb7a{HTTP/1.1,[http/1.1]}{hostname-1:8080}
2017-06-28 18:44:21,601 INFO [main] org.eclipse.jetty.server.Server Started
@16693ms
2017-06-28 18:44:21,601 WARN [main] org.apache.nifi.web.server.JettyServer
Failed to start web server... shutting down.
org.apache.nifi.web.NiFiCoreException: Unable to start Flow Controller.
       at
org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:88)
       at
org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:876)
       at
org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:532)
       at
org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:839)
       at
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:344)
       at
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1480)
       at
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1442)
       at
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:799)
       at
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
       at
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:540)
       at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
       at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
       at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
       at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
       at
org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:290)
       at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
       at org.eclipse.jetty.server.Server.start(Server.java:452)
       at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
       at
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
       at org.eclipse.jetty.server.Server.doStart(Server.java:419)
       at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
       at
org.apache.nifi.web.server.JettyServer.start(JettyServer.java:705)
       at org.apache.nifi.NiFi.<init>(NiFi.java:160)
       at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name 'flowService': FactoryBean threw exception on object
creati
on; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'flowController': FactoryBean threw exception
on object creation; nested exception is
java.lang.ArrayIndexOutOfBoundsException: 1
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1585)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:317)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
       at
org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1060)
       at
org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:55)
       ... 28 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name 'flowController': FactoryBean threw exception on
object cre
ation; nested exception is java.lang.ArrayIndexOutOfBoundsException: 1
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1585)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:317)
       at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
       at
org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1060)
       at
org.apache.nifi.spring.StandardFlowServiceFactoryBean.getObject(StandardFlowServiceFactoryBean.java:48)
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
       ... 34 common frames omitted
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
       at
org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:188)
       at
org.apache.nifi.controller.state.server.ZooKeeperStateServer.<init>(ZooKeeperStateServer.java:53)
       at
org.apache.nifi.controller.state.server.ZooKeeperStateServer.create(ZooKeeperStateServer.java:176)
       at
org.apache.nifi.controller.FlowController.<init>(FlowController.java:575)
       at
org.apache.nifi.controller.FlowController.createClusteredInstance(FlowController.java:417)
       at
org.apache.nifi.spring.FlowControllerFactoryBean.getObject(FlowControllerFactoryBean.java:61)
       at
org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
       ... 41 common frames omitted
2017-06-28 18:44:21,602 INFO [Thread-1] org.apache.nifi.NiFi Initiating
shutdown of Jetty web server...
2017-06-28 18:44:21,613 INFO [Thread-1]
o.eclipse.jetty.server.AbstractConnector Stopped
ServerConnector@1c09bb7a{HTTP/1.1,[http/1.1]}{hostname-1:8080}
2017-06-28 18:44:21,613 INFO [Thread-1] org.eclipse.jetty.server.session
Stopped scavenging

I have validated all the configurations and below are my nifi.properties and
zookeeper.properties files:-

nifi.properties
more nifi.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Core Properties #
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.enabled=true
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flow.configuration.archive.max.time=30 days
nifi.flow.configuration.archive.max.storage=500 MB
nifi.flow.configuration.archive.max.count=
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis

nifi.authorizer.configuration.file=./conf/authorizers.xml
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components

####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
#nifi.state.management.embedded.zookeeper.start=false
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties


# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE

# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.always.sync=false

nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
nifi.swap.in.period=5 sec
nifi.swap.in.threads=1
nifi.swap.out.period=5 sec
nifi.swap.out.threads=4

# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=/nifi-content-viewer/

# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
nifi.provenance.repository.debug.frequency=1_000_000
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key=

# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.journal.count=16
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable. Some examples to consider are filename, uuid, mime.type
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536

# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000

# Component Status Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min

# Site to Site properties
nifi.remote.input.host=hostname-1
nifi.remote.input.secure=false
nifi.remote.input.socket.port=9998
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec

# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=hostname-1
nifi.web.http.port=8080
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200

# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=

nifi.security.keystore=
nifi.security.keystoreType=
nifi.security.keystorePasswd=
nifi.security.keyPasswd=
nifi.security.truststore=
nifi.security.truststoreType=
nifi.security.truststorePasswd=
nifi.security.needClientAuth=
nifi.security.user.authorizer=file-provider
nifi.security.user.login.identity.provider=
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=

# Identity Mapping Properties #
# These properties allow normalizing user identities such that identities coming from different identity providers
# (certificates, LDAP, Kerberos) can be treated the same internally in NiFi. The following example demonstrates normalizing
# DNs from certificates and principals from Kerberos into a common identity string:
#
# nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), O=(.*?), L=(.*?), ST=(.*?), C=(.*?)$
# nifi.security.identity.mapping.value.dn=$1@$2
# nifi.security.identity.mapping.pattern.kerb=^(.*?)/instance@(.*?)$
# nifi.security.identity.mapping.value.kerb=$1@$2

# cluster common properties (all nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false

# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=hostname-1
nifi.cluster.node.protocol.port=9999
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=

# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=hostname-1:2181,hostname-2:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi

# kerberos #
nifi.kerberos.krb5.file=

# kerberos service principal #
nifi.kerberos.service.principal=
nifi.kerberos.service.keytab.location=

# kerberos spnego principal #
nifi.kerberos.spnego.principal=
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.authentication.expiration=12 hours

# external properties files for variable registry
# supports a comma delimited list of file locations
nifi.variable.registry.properties=

zookeeper.properties:-

#
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
#
#

clientPort=2181
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=2000
dataDir=./state/zookeeper
autopurge.snapRetainCount=30

#
# Specifies the servers that are part of this zookeeper ensemble. For
# every NiFi instance running an embedded zookeeper, there needs to be
# a server entry below. For instance:
#
server.1=hostname-1:2888:3888
server.2=hostname-2:2888:3888
# server.2=nifi-node2-hostname:2888:3888
# server.3=nifi-node3-hostname:2888:3888
#
# The index of the server corresponds to the myid file that gets created
# in the dataDir of each node running an embedded zookeeper. See the
# administration guide for more details.


NOTE: I have not yet secured the cluster; I plan to do that after this.

Any help is appreciated.







Re: Not able to start nifi nodes when clustered

Posted by nifi-san <na...@gmail.com>.
Thanks Mark.

I did the same and now have the embedded ZooKeeper running on only one of the nodes.
Both nodes are up now.
However, I am not able to connect to the UI, which runs on the default port 8080.
root@hostname-1:/opt/nifi/test/nifi-1.3.0/logs# netstat -an | grep 9999
tcp        0      0 0.0.0.0:9999            0.0.0.0:*               LISTEN
tcp        0      0 172.18.2.129:55222      172.18.2.135:9999       TIME_WAIT
tcp        0      0 172.18.2.129:55220      172.18.2.135:9999       TIME_WAIT
tcp        0      0 172.18.2.129:55218      172.18.2.135:9999       TIME_WAIT
tcp        0      0 172.18.2.129:55224      172.18.2.135:9999       TIME_WAIT
tcp        0      0 172.18.2.129:55216      172.18.2.135:9999       TIME_WAIT
root@hostname-1:/opt/nifi/test/nifi-1.3.0/logs# netstat -an | grep 9998
tcp        0      0 0.0.0.0:9998            0.0.0.0:*               LISTEN
root@hostname-1:/opt/nifi/test/nifi-1.3.0/logs# netstat -an | grep 2181
tcp        0      0 172.18.2.129:56590      172.18.2.135:2181       ESTABLISHED
root@hostname-1:/opt/nifi/test/nifi-1.3.0/logs# netstat -an | grep 2888   -- not listening
root@hostname-1:/opt/nifi/test/nifi-1.3.0/logs# netstat -an | grep 3888   -- not listening
root@hostname-1:/opt/nifi/test/nifi-1.3.0/logs# netstat -an | grep 8080
tcp        0      0 127.0.1.1:8080          0.0.0.0:*               LISTEN


There are no errors in the logs either:

:/opt/nifi/test/nifi-1.3.0/logs# grep -i error nifi-app.log
:/opt/nifi/test/nifi-1.3.0/logs# grep -i error nifi-bootstrap.log

The browser just displays:
The page cannot be displayed

I tried accessing it through curl too, with the same result.
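
One detail worth noting in the netstat output above: Jetty is listening on 127.0.1.1:8080, not on a routable interface. Ubuntu commonly maps the machine's hostname to 127.0.1.1 in /etc/hosts, so nifi.web.http.host=hostname-1 resolves there and the UI is then reachable only from the node itself, which would produce exactly this "page cannot be displayed" symptom. A sketch of the check and a possible fix, assuming 172.18.2.129 is this node's real address (taken from the netstat output above; adjust to your network):

root@hostname-1:~# grep hostname-1 /etc/hosts
127.0.1.1       hostname-1

# point the hostname at the routable address instead, then restart NiFi:
172.18.2.129    hostname-1

After the change, netstat should show 172.18.2.129:8080 (or 0.0.0.0:8080) in LISTEN state rather than 127.0.1.1:8080.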

2017-06-30 17:53:36,306 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@10ecdd85{/nifi-content-viewer,file:///opt/nifi/test/nifi-1.3.0/work/jetty/nifi-web-content-viewer-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-content-viewer-1.3.0.war}
2017-06-30 17:53:36,308 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.s.h.ContextHandler@605a10fd{/nifi-docs,null,AVAILABLE}
2017-06-30 17:53:36,335 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=16ms
2017-06-30 17:53:36,336 INFO [main] o.e.jetty.ContextHandler./nifi-docs No Spring WebApplicationInitializer types detected on classpath
2017-06-30 17:53:36,357 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@16acf986{/nifi-docs,file:///opt/nifi/test/nifi-1.3.0/work/jetty/nifi-web-docs-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-docs-1.3.0.war}
2017-06-30 17:53:36,382 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=15ms
2017-06-30 17:53:36,399 INFO [main] org.eclipse.jetty.ContextHandler./ No Spring WebApplicationInitializer types detected on classpath
2017-06-30 17:53:36,401 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@526ae1fc{/,file:///opt/nifi/test/nifi-1.3.0/work/jetty/nifi-web-error-1.3.0.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.3.0.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.3.0.war}
2017-06-30 17:53:36,410 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector@3cf8926d{HTTP/1.1,[http/1.1]}{hdp-poc-02:8080}
2017-06-30 17:53:36,410 INFO [main] org.eclipse.jetty.server.Server Started @25446ms
2017-06-30 17:53:37,029 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2017-06-30 17:53:37,041 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 9999
2017-06-30 17:53:37,122 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2017-06-30 17:53:37,141 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: hdp-poc-02:8080
2017-06-30 17:53:37,151 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at hdp-poc-01:9999; will use this address for sending heartbeat messages
2017-06-30 17:53:37,243 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator Resetting cluster node statuses from {} to {hdp-poc-02:8080=NodeConnectionStatus[nodeId=hdp-poc-02:8080, state=CONNECTING, updateId=60], hdp-poc-01:8080=NodeConnectionStatus[nodeId=hdp-poc-01:8080, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=1]}
2017-06-30 17:53:37,271 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2017-06-30 17:53:37,281 INFO [main] o.a.nifi.controller.StandardFlowService Setting Flow Controller's Node ID: hdp-poc-02:8080
2017-06-30 17:53:37,285 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator This node is now connected to the cluster. Will no longer require election of DataFlow.
2017-06-30 17:53:37,286 INFO [main] o.apache.nifi.controller.FlowController Cluster State changed from Not Clustered to Clustered
2017-06-30 17:53:37,288 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Primary Node; this node is an active participant in the election.
2017-06-30 17:53:37,289 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2017-06-30 17:53:37,329 WARN [Leader Election Notification Thread-1] org.apache.curator.utils.ZKPaths The version of ZooKeeper being used doesn't support Container nodes. CreateMode.PERSISTENT will be used instead.
2017-06-30 17:53:37,337 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@64151f8f This node has been elected Leader for Role 'Primary Node'
2017-06-30 17:53:37,348 INFO [Leader Election Notification Thread-1] o.apache.nifi.controller.FlowController This node has been elected Primary Node
2017-06-30 17:53:37,359 INFO [main] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@178cfe5e finished recovering records. Performing Checkpoint to ensure proper state of Partitions before updates
2017-06-30 17:53:37,359 INFO [main] org.wali.MinimalLockingWriteAheadLog Successfully recovered 0 records in 69 milliseconds
2017-06-30 17:53:37,416 INFO [main] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@178cfe5e checkpointed with 0 Records and 0 Swap Files in 56 milliseconds (Stop-the-world time = 29 milliseconds, Clear Edit Logs time = 18 millis), max Transaction ID -1
2017-06-30 17:53:37,416 INFO [main] o.a.n.c.r.WriteAheadFlowFileRepository Successfully restored 0 FlowFiles
2017-06-30 17:53:37,433 INFO [main] o.apache.nifi.controller.FlowController Starting 0 processors/ports/funnels
2017-06-30 17:53:37,433 INFO [main] o.apache.nifi.controller.FlowController Started 0 Remote Group Ports transmitting
2017-06-30 17:53:37,462 INFO [main] org.apache.nifi.web.server.JettyServer Flow loaded successfully.
2017-06-30 17:53:37,462 INFO [main] org.apache.nifi.web.server.JettyServer NiFi has started. The UI is available at the following URLs:
2017-06-30 17:53:37,462 INFO [main] org.apache.nifi.web.server.JettyServer http://hdp-poc-02:8080/nifi
2017-06-30 17:53:37,464 INFO [main] org.apache.nifi.BootstrapListener Successfully initiated communication with Bootstrap
2017-06-30 17:53:37,465 INFO [main] org.apache.nifi.NiFi Controller initialization took 14279054394 nanoseconds.
2017-06-30 17:53:37,519 INFO [Clustering Tasks Thread-1] o.a.n.c.c.ClusterProtocolHeartbeater After receiving heartbeat response, updated status of hdp-poc-01:8080 to NodeConnectionStatus[nodeId=hdp-poc-01:8080, state=CONNECTING, updateId=62]
2017-06-30 17:53:37,519 INFO [Clustering Tasks Thread-1] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2017-06-30 17:53:37,486 and sent to hdp-poc-01:9999 at 2017-06-30 17:53:37,519; send took 32 millis
2017-06-30 17:53:40,387 INFO [Process Cluster Protocol Request-1] o.a.n.c.c.node.NodeClusterCoordinator Status of hdp-poc-02:8080 changed from NodeConnectionStatus[nodeId=hdp-poc-02:8080, state=CONNECTING, updateId=60] to NodeConnectionStatus[nodeId=hdp-poc-02:8080, state=CONNECTED, updateId=64]
2017-06-30 17:53:40,387 INFO [Process Cluster Protocol Request-2] o.a.n.c.c.node.NodeClusterCoordinator Status of hdp-poc-01:8080 changed from NodeConnectionStatus[nodeId=hdp-poc-01:8080, state=CONNECTING, updateId=62] to NodeConnectionStatus[nodeId=hdp-poc-01:8080, state=CONNECTED, updateId=65]
2017-06-30 17:53:40,396 INFO [Process Cluster Protocol Request-2] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 5b5a8ce8-2ba9-4bcc-994f-2f8b06dd2413 (type=NODE_STATUS_CHANGE, length=927 bytes) from hdp-poc-01 in 20 millis
2017-06-30 17:53:40,399 INFO [Process Cluster Protocol Request-1] o.a.n.c.p.impl.SocketProtocolListener Finished processing request b210a132-6001-4cc8-a7cb-615fc18bd104 (type=NODE_STATUS_CHANGE, length=927 bytes) from hdp-poc-01 in 26 millis
2017-06-30 17:53:42,526 INFO [Clustering Tasks Thread-1] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2017-06-30 17:53:42,520 and sent to hdp-poc-01:9999 at 2017-06-30 17:53:42,526; send took 5 millis

[... identical ClusterProtocolHeartbeater entries repeat every ~5 seconds, interleaved with periodic Write-Ahead Log and FlowFile Repository checkpoints, through 2017-06-30 17:59:58 ...]

2017-06-30 17:59:37,539 INFO [pool-12-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2017-06-30 17:59:37,589 INFO [pool-12-thread-1] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@178cfe5e checkpointed with 0 Records and 0 Swap Files in 49 milliseconds (Stop-the-world time = 23 milliseconds, Clear Edit Logs time = 14 millis), max Transaction ID -1
2017-06-30 17:59:37,589 INFO [pool-12-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 0 records in 49 milliseconds
2017-06-30 17:59:58,852 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2017-06-30 17:59:58,847 and sent to hdp-poc-01:9999 at 2017-06-30 17:59:58,852; send took 4 millis








Re: Not able to start nifi nodes when clustered

Posted by Mark Payne <ma...@hotmail.com>.
The problem that you're running into is that when NiFi starts, it tries to connect to ZooKeeper, but
you have a two-node cluster, both running ZooKeeper. As a result, you need both nodes up and
running in order to establish a ZooKeeper quorum. So when you start up, you'll have trouble connecting
to ZooKeeper because there is no quorum.

For production use, I would highly recommend using an external ZooKeeper instead of the embedded
instance. For a simple cluster for testing/integration/etc. the embedded ZooKeeper is fine, but I would recommend
you run only one ZooKeeper instance or run three nodes. If you only need two NiFi nodes, you will want to remove the
"server.2" line from the zookeeper.properties file on Node 1, and then on Node 2 set the
"nifi.state.management.embedded.zookeeper.start" property to false. At that point, as long as Node 1 is started
first, Node 2 should have no problem joining. This is partially why we recommend an external ZK for any sort
of production use.
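
For a concrete picture of why two is the worst count: a ZooKeeper quorum needs floor(n/2) + 1 servers, so a two-server ensemble has a quorum of 2 and cannot form until both nodes are up, while a one-server ensemble has a quorum of 1 and a three-server ensemble keeps quorum with any one server down. A minimal sketch of the single-embedded-ZooKeeper layout described above, reusing the hostnames from the original post; presumably the connect string should then also point only at the node that runs ZooKeeper (illustrative values, not a drop-in config):

# on hostname-1 -- conf/zookeeper.properties: keep only this node's server entry
server.1=hostname-1:2888:3888

# on hostname-2 -- conf/nifi.properties: do not start an embedded ZooKeeper
nifi.state.management.embedded.zookeeper.start=false

# on BOTH nodes -- conf/nifi.properties: point at the single ZooKeeper
nifi.zookeeper.connect.string=hostname-1:2181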

Thanks
-Mark


> On Jun 29, 2017, at 3:47 AM, nifi-san <na...@gmail.com> wrote:
> 
> I was able to get over this. There was a typo, and I can now start the two
> clustered NiFi nodes.
> 
> However, I keep on getting the below messages on both the nodes when I try
> to start them.
> 
> 2017-06-29 13:03:58,537 WARN [main] o.a.nifi.controller.StandardFlowService
> There is currently no Cluster Coordinator. This often happens upon restart
> of NiFi when running an embedded ZooKeeper. Will register this node to
> become the active Cluster Coordinator and will attempt to connect to cluster
> again
> 2017-06-29 13:03:58,538 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager
> CuratorLeaderElectionManager[stopped=false] Attempted to register Leader
> Election for role 'Cluster Coordinator' but this role is already registered
> 2017-06-29 13:04:20,867 WARN [main] o.a.nifi.controller.StandardFlowService
> There is currently no Cluster Coordinator. This often happens upon restart
> of NiFi when running an embedded ZooKeeper. Will register this node to
> become the active Cluster Coordinator and will attempt to connect to cluster
> again
> 2017-06-29 13:04:20,867 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager
> CuratorLeaderElectionManager[stopped=false] Attempted to register Leader
> Election for role 'Cluster Coordinator' but this role is already registered
> 2017-06-29 13:04:28,871 INFO [Curator-Framework-0]
> o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
> 2017-06-29 13:04:28,872 INFO [Curator-ConnectionStateManager-0]
> o.a.n.c.l.e.CuratorLeaderElectionManager
> org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@25c5f7c5
> Connection State changed to SUSPENDED
> 2017-06-29 13:04:28,878 ERROR [Curator-Framework-0]
> o.a.c.f.imps.CuratorFrameworkImpl Background operation retry gave up
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss
>        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>        at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:728)
>        at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:857)
>        at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809)
>        at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64)
>        at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>        at java.lang.Thread.run(Thread.java:748)
> 
> 
> The above error comes in both the nodes.
> 
> I tried to modify the Java security file to use /dev/urandom on both nodes
> in the cluster, but it did not help.
> Also modified the properties below in nifi.properties on both nodes:-
> 
> nifi.cluster.flow.election.max.wait.time=5 mins
> nifi.cluster.flow.election.max.candidates=2
> 
> Still it does not work.
> 
> The only ports established are :-
> 
> root@hostname-1:/opt/nifi/nifi-1.3.0/conf# netstat -an | grep 8080
> tcp        0      0 127.0.1.1:8080          0.0.0.0:*               LISTEN
> root@hostname-1:/opt/nifi/nifi-1.3.0/conf# netstat -an | grep 9999
> tcp        0      0 0.0.0.0:9999            0.0.0.0:*               LISTEN
> root@hostname-1:/opt/nifi/nifi-1.3.0/conf# netstat -an | grep 9998  -- not running
> root@hostname-1:/opt/nifi/nifi-1.3.0/conf# netstat -an | grep 2888  -- not running
> root@hostname-1:/opt/nifi/nifi-1.3.0/conf# netstat -an | grep 3888
> tcp        0      0 127.0.1.1:3888          0.0.0.0:*               LISTEN
> 
> Tried to ping the hostnames from each of the two nodes and they look to be
> fine.
> Firewall has been disabled.
> 
> Any pointers ?
> 
> 
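
On the /dev/urandom attempt quoted above: rather than editing the JDK's java.security file, it is usually simpler with NiFi to pass the entropy source as a JVM argument in conf/bootstrap.conf. A sketch, assuming java.arg.20 is an unused argument slot in that file:

java.arg.20=-Djava.security.egd=file:/dev/urandom

That only addresses slow SecureRandom seeding at startup, though; the ConnectionLoss errors quoted above stem from the missing ZooKeeper quorum described at the top of this message.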

