Posted to users@nifi.apache.org by Chakrader Dewaragatla <Ch...@lifelock.com> on 2015/09/30 00:56:07 UTC
nifi Cluster setup issue
Hi – We are exploring NiFi for our workflow management. I have a cluster setup with 3 nodes: one as master and the rest as slaves.
I see the following error when I try to access the NiFi workflow web page.
2015-09-29 22:46:13,263 WARN [NiFi Web Server-23] o.a.n.c.m.impl.HttpRequestReplicatorImpl Node request for [id=7481fca5-930c-4d4b-84a3-66cc62b4e2d3, apiAddress=localhost, apiPort=8080, socketAddress=localhost, socketPort=3002] encountered exception: java.util.concurrent.ExecutionException: com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused
2015-09-29 22:46:13,263 WARN [NiFi Web Server-23] o.a.n.c.m.impl.HttpRequestReplicatorImpl Node request for [id=0abd8295-34a3-4bf7-ab06-1b6b94014740, apiAddress=localhost, apiPort=8080, socketAddress=10.233.2.42, socketPort=3002] encountered exception: java.util.concurrent.ExecutionException: com.sun.jersey.api.client.ClientHandlerException: java.net.ConnectException: Connection refused
2015-09-29 22:46:13,264 INFO [NiFi Web Server-23] o.a.n.c.m.e.NoConnectedNodesException org.apache.nifi.cluster.manager.exception.NoResponseFromNodesException: No nodes were able to process this request.. Returning Conflict response.
The master is not a hybrid node, so I wonder why it is trying to connect to itself on port 3002.
Master settings:
# cluster manager properties (only configure for cluster manager) #
nifi.cluster.is.manager=true
nifi.cluster.manager.address=10.233.2.40
nifi.cluster.manager.protocol.port=3001
nifi.cluster.manager.node.firewall.file=
nifi.cluster.manager.node.event.history.size=10
nifi.cluster.manager.node.api.connection.timeout=30 sec
nifi.cluster.manager.node.api.read.timeout=30 sec
nifi.cluster.manager.node.api.request.threads=10
nifi.cluster.manager.flow.retrieval.delay=5 sec
nifi.cluster.manager.protocol.threads=10
nifi.cluster.manager.safemode.duration=0 sec
Slave settings:
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=10.233.2.42
nifi.cluster.node.protocol.port=3002
nifi.cluster.node.protocol.threads=2
# if multicast is not used, nifi.cluster.node.unicast.xxx must have same values as nifi.cluster.manager.xxx #
nifi.cluster.node.unicast.manager.address=10.233.2.40
nifi.cluster.node.unicast.manager.protocol.port=3001
________________________________
The information contained in this transmission may contain privileged and confidential information. It is intended only for the use of the person(s) named above. If you are not the intended recipient, you are hereby notified that any review, dissemination, distribution or duplication of this communication is strictly prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message.
________________________________
Re: nifi Cluster setup issue
Posted by Aldrin Piri <al...@gmail.com>.
Chakrader,
You certainly can have hybrid nodes, although we tend to avoid that when
possible. The key point to keep in mind is your allocation of ports for
the HTTP server and the clustering protocol, since two processes are living
on one system. From the configurations you previously shared, your
clustering ports look okay; just be sure to confirm the same for your
web server.
In terms of the authority provider, the only currently provided
implementation in this release is that of PKI via two-way SSL as outlined
in the Administrator Guide [1]. Additional providers are an area under
current development as per NIFI-655 [2].
[1]
http://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#controlling-levels-of-access
[2] https://issues.apache.org/jira/browse/NIFI-655
Re: nifi Cluster setup issue
Posted by Chakrader Dewaragatla <Ch...@lifelock.com>.
Thanks Corey, the settings you suggested on the Master node work.
Question: can I have master and slave on one node (hybrid)?
Can you send me details on setting up the authority provider? In the meantime I will
go through the documentation and give it a try.
Re: nifi Cluster setup issue
Posted by Corey Flowers <cf...@onyxpoint.com>.
Did you try setting the
nifi.cluster.node.unicast.manager.address=
nifi.cluster.manager.address=
To
nifi.cluster.node.unicast.manager.address=10.233.2.40
nifi.cluster.manager.address=10.233.2.40
Also if you are running the cluster you should setup the authority
provider. Let me know if you need help with that.
Sent from my iPhone
> On Sep 29, 2015, at 8:49 PM, Chakrader Dewaragatla <Ch...@lifelock.com> wrote:
>
> 10.233.2.40
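Putting Corey's suggestion together, the relevant nifi.properties lines would look something like the following sketch (addresses taken from this thread; adjust to your environment):

```properties
# On the manager (NCM), 10.233.2.40:
nifi.cluster.is.manager=true
nifi.cluster.manager.address=10.233.2.40
nifi.cluster.manager.protocol.port=3001

# On each node (slave), e.g. 10.233.2.42:
nifi.cluster.is.node=true
nifi.cluster.node.address=10.233.2.42
nifi.cluster.node.protocol.port=3002
# Without multicast, these must match the manager's values:
nifi.cluster.node.unicast.manager.address=10.233.2.40
nifi.cluster.node.unicast.manager.protocol.port=3001
```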
Re: nifi Cluster setup issue
Posted by Chakrader Dewaragatla <Ch...@lifelock.com>.
Thanks Aldrin/Corey. I will try it tomorrow morning.
From: Aldrin Piri <al...@gmail.com>
Reply-To: users@nifi.apache.org
Date: Tuesday, September 29, 2015 at 6:44 PM
To: users@nifi.apache.org
Subject: Re: nifi Cluster setup issue
Oops, definitely missed what Corey sent out. Please specify the nifi.cluster.manager.address as he suggests.
On Tue, Sep 29, 2015 at 9:40 PM, Aldrin Piri <al...@gmail.com> wrote:
Chakrader,
You would also need to set nifi.web.http.host for the manager as well. Each member of the cluster advertises how it can be accessed as part of the clustering protocol, which would explain what you are seeing on the node from the master/manager. Please try setting it on the manager too, and let us know if this gets your cluster up and running.
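For example, on the manager at 10.233.2.40 (a sketch using the address from this thread):

```properties
# conf/nifi.properties on the manager:
nifi.web.http.host=10.233.2.40
nifi.web.http.port=8080
```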
On Tue, Sep 29, 2015 at 8:48 PM, Chakrader Dewaragatla <Ch...@lifelock.com> wrote:
Aldrin - I redeployed NiFi with default settings and modified the settings required for cluster setup as documented in https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html.
I tried setting the nifi.web.http.host property on the node (slave) to its IP. On the slave I notice the following:
2015-09-30 00:44:21,855 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: [id=75baec43-adf1-4e17-98fd-49111a5a0c76, apiAddress=10.233.2.42, apiPort=8080, socketAddress=10.233.2.42, socketPort=3002]
On Master:
As usual:
2015-09-30 00:44:51,036 INFO [Process Pending Heartbeats] org.apache.nifi.cluster.heartbeat Received heartbeat for node [id=644370b1-4d8f-4004-ac6c-8bd614a1890b, apiAddress=localhost, apiPort=8080, socketAddress=10.233.2.42, socketPort=3002].
Here are my complete conf files:
Master conf file:
# Core Properties #
nifi.version=0.3.0
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis
nifi.authority.provider.configuration.file=./conf/authority-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.always.sync=false
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
nifi.swap.in.period=5 sec
nifi.swap.in.threads=1
nifi.swap.out.period=5 sec
nifi.swap.out.threads=4
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=/nifi-content-viewer/
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=1
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.journal.count=16
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, ContentType, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000
# Component Status Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# Site to Site properties
nifi.remote.input.socket.host=
nifi.remote.input.socket.port=
nifi.remote.input.secure=true
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.security.keystore=
nifi.security.keystoreType=
nifi.security.keystorePasswd=
nifi.security.keyPasswd=
nifi.security.truststore=
nifi.security.truststoreType=
nifi.security.truststorePasswd=
nifi.security.needClientAuth=
nifi.security.user.credential.cache.duration=24 hours
nifi.security.user.authority.provider=file-provider
nifi.security.support.new.account.requests=
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# cluster common properties (cluster manager and nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false
nifi.cluster.protocol.socket.timeout=30 sec
nifi.cluster.protocol.connection.handshake.timeout=45 sec
# if multicast is used, then nifi.cluster.protocol.multicast.xxx properties must be configured #
nifi.cluster.protocol.use.multicast=false
nifi.cluster.protocol.multicast.address=
nifi.cluster.protocol.multicast.port=
nifi.cluster.protocol.multicast.service.broadcast.delay=500 ms
nifi.cluster.protocol.multicast.service.locator.attempts=3
nifi.cluster.protocol.multicast.service.locator.attempts.delay=1 sec
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=false
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=
nifi.cluster.node.protocol.threads=2
# if multicast is not used, nifi.cluster.node.unicast.xxx must have same values as nifi.cluster.manager.xxx #
nifi.cluster.node.unicast.manager.address=
nifi.cluster.node.unicast.manager.protocol.port=
# cluster manager properties (only configure for cluster manager) #
nifi.cluster.is.manager=true
nifi.cluster.manager.address=
nifi.cluster.manager.protocol.port=3001
nifi.cluster.manager.node.firewall.file=
nifi.cluster.manager.node.event.history.size=10
nifi.cluster.manager.node.api.connection.timeout=30 sec
nifi.cluster.manager.node.api.read.timeout=30 sec
nifi.cluster.manager.node.api.request.threads=10
nifi.cluster.manager.flow.retrieval.delay=5 sec
nifi.cluster.manager.protocol.threads=10
nifi.cluster.manager.safemode.duration=0 sec
# kerberos #
nifi.kerberos.krb5.file=
Slave conf file:
# Core Properties #
nifi.version=0.3.0
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis
nifi.authority.provider.configuration.file=./conf/authority-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.always.sync=false
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
nifi.swap.in.period=5 sec
nifi.swap.in.threads=1
nifi.swap.out.period=5 sec
nifi.swap.out.threads=4
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=/nifi-content-viewer/
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=1
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.journal.count=16
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, ContentType, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000
# Component Status Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# Site to Site properties
nifi.remote.input.socket.host=
nifi.remote.input.socket.port=
nifi.remote.input.secure=true
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=10.233.2.42
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.security.keystore=
nifi.security.keystoreType=
nifi.security.keystorePasswd=
nifi.security.keyPasswd=
nifi.security.truststore=
nifi.security.truststoreType=
nifi.security.truststorePasswd=
nifi.security.needClientAuth=
nifi.security.user.credential.cache.duration=24 hours
nifi.security.user.authority.provider=file-provider
nifi.security.support.new.account.requests=
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# cluster common properties (cluster manager and nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false
nifi.cluster.protocol.socket.timeout=30 sec
nifi.cluster.protocol.connection.handshake.timeout=45 sec
# if multicast is used, then nifi.cluster.protocol.multicast.xxx properties must be configured #
nifi.cluster.protocol.use.multicast=false
nifi.cluster.protocol.multicast.address=
nifi.cluster.protocol.multicast.port=
nifi.cluster.protocol.multicast.service.broadcast.delay=500 ms
nifi.cluster.protocol.multicast.service.locator.attempts=3
nifi.cluster.protocol.multicast.service.locator.attempts.delay=1 sec
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=10.233.2.42
nifi.cluster.node.protocol.port=3002
nifi.cluster.node.protocol.threads=2
# if multicast is not used, nifi.cluster.node.unicast.xxx must have same values as nifi.cluster.manager.xxx #
nifi.cluster.node.unicast.manager.address=10.233.2.40
nifi.cluster.node.unicast.manager.protocol.port=3001
# cluster manager properties (only configure for cluster manager) #
nifi.cluster.is.manager=false
nifi.cluster.manager.address=
nifi.cluster.manager.protocol.port=
nifi.cluster.manager.node.firewall.file=
nifi.cluster.manager.node.event.history.size=10
nifi.cluster.manager.node.api.connection.timeout=30 sec
nifi.cluster.manager.node.api.read.timeout=30 sec
nifi.cluster.manager.node.api.request.threads=10
nifi.cluster.manager.flow.retrieval.delay=5 sec
nifi.cluster.manager.protocol.threads=10
nifi.cluster.manager.safemode.duration=0 sec
# kerberos #
nifi.kerberos.krb5.file=
From: Aldrin Piri <al...@gmail.com>
Reply-To: users@nifi.apache.org
Date: Tuesday, September 29, 2015 at 4:26 PM
To: users@nifi.apache.org
Subject: Re: nifi Cluster setup issue
Chakrader,
I suspect that the nifi.web.http.host property is not set to the address you specified elsewhere, so the node is transmitting "localhost" (the system's answer to a local hostname lookup from Java). While the clustering protocol communicates via the properties you listed, the actual command-and-control and replication of requests from master to slave nodes is carried out via the REST API, which runs on the web tier. The system's hostname, as determined above, is transmitted as part of the clustering handshake.
Either the system needs to report a valid hostname, or a host needs to be specified via nifi.web.http.host. In either case, that hostname or specified host must be network-reachable from the master and bindable locally on your server.
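To illustrate the first option: Java resolves the machine's own hostname, so if /etc/hosts maps that hostname only to a loopback address, NiFi ends up advertising "localhost". A minimal sketch of a corrected /etc/hosts on the manager, assuming its hostname is nifi-master (a hypothetical name):

```
# /etc/hosts (sketch; "nifi-master" is a hypothetical hostname)
127.0.0.1     localhost
10.233.2.40   nifi-master
```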
Let us know if you need additional direction and we'd be happy to help you through the process.
Thanks!
Re: nifi Cluster setup issue
Posted by Aldrin Piri <al...@gmail.com>.
Oops, definitely missed what Corey sent out. Please specify the
nifi.cluster.manager.address as he suggests.
On Tue, Sep 29, 2015 at 9:40 PM, Aldrin Piri <al...@gmail.com> wrote:
> Chakrader,
>
> You would also need to set the nifi.web.http.host for the manager as
> well. Each member of the cluster provides how they can be accessed in the
> protocol. This would explain what you are seeing in the node from the
> master/manager. Please try also setting the manager and let us know if
> this gets your cluster up and running.
>
> On Tue, Sep 29, 2015 at 8:48 PM, Chakrader Dewaragatla <
> Chakrader.Dewaragatla@lifelock.com> wrote:
>
>> Aldrin - I redeployed with nifi with default settings and modified the
>> required settings needed for cluster setup documented in
>> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html.
>>
>> I tried to change nifi.web.http.host property on Node (slave) with its
>> ip. On slave I notice following error:
>>
>> 2015-09-30 00:44:21,855 INFO [main]
>> o.a.nifi.controller.StandardFlowService Connecting Node:
>> [id=75baec43-adf1-4e17-98fd-49111a5a0c76, apiAddress=10.233.2.42,
>> apiPort=8080, socketAddress=10.233.2.42, socketPort=3002]
>>
>>
>>
>> On Master:
>>
>> As usual:
>>
>>
>> 2015-09-30 00:44:51,036 INFO [Process Pending Heartbeats]
>> org.apache.nifi.cluster.heartbeat Received heartbeat for node
>> [id=644370b1-4d8f-4004-ac6c-8bd614a1890b, apiAddress=localhost,
>> apiPort=8080, socketAddress=10.233.2.42, socketPort=3002].
>>
>>
>> Here is my complete conf file :
>>
>>
>> Master conf file:
>>
>>
>>
>> # Core Properties #
>>
>> nifi.version=0.3.0
>>
>> nifi.flow.configuration.file=./conf/flow.xml.gz
>>
>> nifi.flow.configuration.archive.dir=./conf/archive/
>>
>> nifi.flowcontroller.autoResumeState=true
>>
>> nifi.flowcontroller.graceful.shutdown.period=10 sec
>>
>> nifi.flowservice.writedelay.interval=500 ms
>>
>> nifi.administrative.yield.duration=30 sec
>>
>> # If a component has no work to do (is "bored"), how long should we wait
>> before checking again for work?
>>
>> nifi.bored.yield.duration=10 millis
>>
>>
>> nifi.authority.provider.configuration.file=./conf/authority-providers.xml
>>
>> nifi.templates.directory=./conf/templates
>>
>> nifi.ui.banner.text=
>>
>> nifi.ui.autorefresh.interval=30 sec
>>
>> nifi.nar.library.directory=./lib
>>
>> nifi.nar.working.directory=./work/nar/
>>
>> nifi.documentation.working.directory=./work/docs/components
>>
>>
>> # H2 Settings
>>
>> nifi.database.directory=./database_repository
>>
>> nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
>>
>>
>> # FlowFile Repository
>>
>>
>> nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
>>
>> nifi.flowfile.repository.directory=./flowfile_repository
>>
>> nifi.flowfile.repository.partitions=256
>>
>> nifi.flowfile.repository.checkpoint.interval=2 mins
>>
>> nifi.flowfile.repository.always.sync=false
>>
>>
>>
>> nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
>>
>> nifi.queue.swap.threshold=20000
>>
>> nifi.swap.in.period=5 sec
>>
>> nifi.swap.in.threads=1
>>
>> nifi.swap.out.period=5 sec
>>
>> nifi.swap.out.threads=4
>>
>>
>> # Content Repository
>>
>>
>> nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
>>
>> nifi.content.claim.max.appendable.size=10 MB
>>
>> nifi.content.claim.max.flow.files=100
>>
>> nifi.content.repository.directory.default=./content_repository
>>
>> nifi.content.repository.archive.max.retention.period=12 hours
>>
>> nifi.content.repository.archive.max.usage.percentage=50%
>>
>> nifi.content.repository.archive.enabled=true
>>
>> nifi.content.repository.always.sync=false
>>
>> nifi.content.viewer.url=/nifi-content-viewer/
>>
>>
>> # Provenance Repository Properties
>>
>>
>> nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
>>
>>
>> # Persistent Provenance Repository Properties
>>
>> nifi.provenance.repository.directory.default=./provenance_repository
>>
>> nifi.provenance.repository.max.storage.time=24 hours
>>
>> nifi.provenance.repository.max.storage.size=1 GB
>>
>> nifi.provenance.repository.rollover.time=30 secs
>>
>> nifi.provenance.repository.rollover.size=100 MB
>>
>> nifi.provenance.repository.query.threads=2
>>
>> nifi.provenance.repository.index.threads=1
>>
>> nifi.provenance.repository.compress.on.rollover=true
>>
>> nifi.provenance.repository.always.sync=false
>>
>> nifi.provenance.repository.journal.count=16
>>
>> # Comma-separated list of fields. Fields that are not indexed will not be
>> # searchable. Valid fields are:
>>
>> # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID,
>> # AlternateIdentifierURI, ContentType, Relationship, Details
>>
>> nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
>>
>> # FlowFile Attributes that should be indexed and made searchable
>>
>> nifi.provenance.repository.indexed.attributes=
>>
>> # Large values for the shard size will result in more Java heap usage
>> # when searching the Provenance Repository,
>> # but should provide better performance
>>
>> nifi.provenance.repository.index.shard.size=500 MB
>>
>> # Indicates the maximum length that a FlowFile attribute can be when
>> # retrieving a Provenance Event from the repository. If the length of
>> # any attribute exceeds this value, it will be truncated when the event
>> # is retrieved.
>>
>> nifi.provenance.repository.max.attribute.length=65536
>>
>>
>> # Volatile Provenance Repository Properties
>>
>> nifi.provenance.repository.buffer.size=100000
>>
>>
>> # Component Status Repository
>>
>>
>> nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
>>
>> nifi.components.status.repository.buffer.size=1440
>>
>> nifi.components.status.snapshot.frequency=1 min
>>
>>
>> # Site to Site properties
>>
>> nifi.remote.input.socket.host=
>>
>> nifi.remote.input.socket.port=
>>
>> nifi.remote.input.secure=true
>>
>>
>> # web properties #
>>
>> nifi.web.war.directory=./lib
>>
>> nifi.web.http.host=
>>
>> nifi.web.http.port=8080
>>
>> nifi.web.https.host=
>>
>> nifi.web.https.port=
>>
>> nifi.web.jetty.working.directory=./work/jetty
>>
>> nifi.web.jetty.threads=200
>>
>>
>> # security properties #
>>
>> nifi.sensitive.props.key=
>>
>> nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
>>
>> nifi.sensitive.props.provider=BC
>>
>>
>> nifi.security.keystore=
>>
>> nifi.security.keystoreType=
>>
>> nifi.security.keystorePasswd=
>>
>> nifi.security.keyPasswd=
>>
>> nifi.security.truststore=
>>
>> nifi.security.truststoreType=
>>
>> nifi.security.truststorePasswd=
>>
>> nifi.security.needClientAuth=
>>
>> nifi.security.user.credential.cache.duration=24 hours
>>
>> nifi.security.user.authority.provider=file-provider
>>
>> nifi.security.support.new.account.requests=
>>
>> nifi.security.ocsp.responder.url=
>>
>> nifi.security.ocsp.responder.certificate=
>>
>>
>> # cluster common properties (cluster manager and nodes must have same
>> # values) #
>>
>> nifi.cluster.protocol.heartbeat.interval=5 sec
>>
>> nifi.cluster.protocol.is.secure=false
>>
>> nifi.cluster.protocol.socket.timeout=30 sec
>>
>> nifi.cluster.protocol.connection.handshake.timeout=45 sec
>>
>> # if multicast is used, then nifi.cluster.protocol.multicast.xxx
>> # properties must be configured #
>>
>> nifi.cluster.protocol.use.multicast=false
>>
>> nifi.cluster.protocol.multicast.address=
>>
>> nifi.cluster.protocol.multicast.port=
>>
>> nifi.cluster.protocol.multicast.service.broadcast.delay=500 ms
>>
>> nifi.cluster.protocol.multicast.service.locator.attempts=3
>>
>> nifi.cluster.protocol.multicast.service.locator.attempts.delay=1 sec
>>
>>
>> # cluster node properties (only configure for cluster nodes) #
>>
>> nifi.cluster.is.node=false
>>
>> nifi.cluster.node.address=
>>
>> nifi.cluster.node.protocol.port=
>>
>> nifi.cluster.node.protocol.threads=2
>>
>> # if multicast is not used, nifi.cluster.node.unicast.xxx must have same
>> # values as nifi.cluster.manager.xxx #
>>
>> nifi.cluster.node.unicast.manager.address=
>>
>> nifi.cluster.node.unicast.manager.protocol.port=
>>
>>
>> # cluster manager properties (only configure for cluster manager) #
>>
>> nifi.cluster.is.manager=true
>>
>> nifi.cluster.manager.address=
>>
>> nifi.cluster.manager.protocol.port=3001
>>
>> nifi.cluster.manager.node.firewall.file=
>>
>> nifi.cluster.manager.node.event.history.size=10
>>
>> nifi.cluster.manager.node.api.connection.timeout=30 sec
>>
>> nifi.cluster.manager.node.api.read.timeout=30 sec
>>
>> nifi.cluster.manager.node.api.request.threads=10
>>
>> nifi.cluster.manager.flow.retrieval.delay=5 sec
>>
>> nifi.cluster.manager.protocol.threads=10
>>
>> nifi.cluster.manager.safemode.duration=0 sec
>>
>>
>> # kerberos #
>>
>> nifi.kerberos.krb5.file=
>>
>>
>>
>>
>>
>>
>>
>>
>> Slave conf file :
>>
>>
>>
>>
>> # Core Properties #
>>
>> nifi.version=0.3.0
>>
>> nifi.flow.configuration.file=./conf/flow.xml.gz
>>
>> nifi.flow.configuration.archive.dir=./conf/archive/
>>
>> nifi.flowcontroller.autoResumeState=true
>>
>> nifi.flowcontroller.graceful.shutdown.period=10 sec
>>
>> nifi.flowservice.writedelay.interval=500 ms
>>
>> nifi.administrative.yield.duration=30 sec
>>
>> # If a component has no work to do (is "bored"), how long should we wait
>> # before checking again for work?
>>
>> nifi.bored.yield.duration=10 millis
>>
>>
>> nifi.authority.provider.configuration.file=./conf/authority-providers.xml
>>
>> nifi.templates.directory=./conf/templates
>>
>> nifi.ui.banner.text=
>>
>> nifi.ui.autorefresh.interval=30 sec
>>
>> nifi.nar.library.directory=./lib
>>
>> nifi.nar.working.directory=./work/nar/
>>
>> nifi.documentation.working.directory=./work/docs/components
>>
>>
>> # H2 Settings
>>
>> nifi.database.directory=./database_repository
>>
>> nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
>>
>>
>> # FlowFile Repository
>>
>>
>> nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
>>
>> nifi.flowfile.repository.directory=./flowfile_repository
>>
>> nifi.flowfile.repository.partitions=256
>>
>> nifi.flowfile.repository.checkpoint.interval=2 mins
>>
>> nifi.flowfile.repository.always.sync=false
>>
>>
>>
>> nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
>>
>> nifi.queue.swap.threshold=20000
>>
>> nifi.swap.in.period=5 sec
>>
>> nifi.swap.in.threads=1
>>
>> nifi.swap.out.period=5 sec
>>
>> nifi.swap.out.threads=4
>>
>>
>> # Content Repository
>>
>>
>> nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
>>
>> nifi.content.claim.max.appendable.size=10 MB
>>
>> nifi.content.claim.max.flow.files=100
>>
>> nifi.content.repository.directory.default=./content_repository
>>
>> nifi.content.repository.archive.max.retention.period=12 hours
>>
>> nifi.content.repository.archive.max.usage.percentage=50%
>>
>> nifi.content.repository.archive.enabled=true
>>
>> nifi.content.repository.always.sync=false
>>
>> nifi.content.viewer.url=/nifi-content-viewer/
>>
>>
>> # Provenance Repository Properties
>>
>>
>> nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
>>
>>
>> # Persistent Provenance Repository Properties
>>
>> nifi.provenance.repository.directory.default=./provenance_repository
>>
>> nifi.provenance.repository.max.storage.time=24 hours
>>
>> nifi.provenance.repository.max.storage.size=1 GB
>>
>> nifi.provenance.repository.rollover.time=30 secs
>>
>> nifi.provenance.repository.rollover.size=100 MB
>>
>> nifi.provenance.repository.query.threads=2
>>
>> nifi.provenance.repository.index.threads=1
>>
>> nifi.provenance.repository.compress.on.rollover=true
>>
>> nifi.provenance.repository.always.sync=false
>>
>> nifi.provenance.repository.journal.count=16
>>
>> # Comma-separated list of fields. Fields that are not indexed will not be
>> # searchable. Valid fields are:
>>
>> # EventType, FlowFileUUID, Filename, TransitURI, ProcessorID,
>> # AlternateIdentifierURI, ContentType, Relationship, Details
>>
>> nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
>>
>> # FlowFile Attributes that should be indexed and made searchable
>>
>> nifi.provenance.repository.indexed.attributes=
>>
>> # Large values for the shard size will result in more Java heap usage
>> # when searching the Provenance Repository,
>> # but should provide better performance
>>
>> nifi.provenance.repository.index.shard.size=500 MB
>>
>> # Indicates the maximum length that a FlowFile attribute can be when
>> # retrieving a Provenance Event from the repository. If the length of
>> # any attribute exceeds this value, it will be truncated when the event
>> # is retrieved.
>>
>> nifi.provenance.repository.max.attribute.length=65536
>>
>>
>> # Volatile Provenance Repository Properties
>>
>> nifi.provenance.repository.buffer.size=100000
>>
>>
>> # Component Status Repository
>>
>>
>> nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
>>
>> nifi.components.status.repository.buffer.size=1440
>>
>> nifi.components.status.snapshot.frequency=1 min
>>
>>
>> # Site to Site properties
>>
>> nifi.remote.input.socket.host=
>>
>> nifi.remote.input.socket.port=
>>
>> nifi.remote.input.secure=true
>>
>>
>> # web properties #
>>
>> nifi.web.war.directory=./lib
>>
>> nifi.web.http.host=10.233.2.42
>>
>> nifi.web.http.port=8080
>>
>> nifi.web.https.host=
>>
>> nifi.web.https.port=
>>
>> nifi.web.jetty.working.directory=./work/jetty
>>
>> nifi.web.jetty.threads=200
>>
>>
>> # security properties #
>>
>> nifi.sensitive.props.key=
>>
>> nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
>>
>> nifi.sensitive.props.provider=BC
>>
>>
>> nifi.security.keystore=
>>
>> nifi.security.keystoreType=
>>
>> nifi.security.keystorePasswd=
>>
>> nifi.security.keyPasswd=
>>
>> nifi.security.truststore=
>>
>> nifi.security.truststoreType=
>>
>> nifi.security.truststorePasswd=
>>
>> nifi.security.needClientAuth=
>>
>> nifi.security.user.credential.cache.duration=24 hours
>>
>> nifi.security.user.authority.provider=file-provider
>>
>> nifi.security.support.new.account.requests=
>>
>> nifi.security.ocsp.responder.url=
>>
>> nifi.security.ocsp.responder.certificate=
>>
>>
>> # cluster common properties (cluster manager and nodes must have same
>> # values) #
>>
>> nifi.cluster.protocol.heartbeat.interval=5 sec
>>
>> nifi.cluster.protocol.is.secure=false
>>
>> nifi.cluster.protocol.socket.timeout=30 sec
>>
>> nifi.cluster.protocol.connection.handshake.timeout=45 sec
>>
>> # if multicast is used, then nifi.cluster.protocol.multicast.xxx
>> # properties must be configured #
>>
>> nifi.cluster.protocol.use.multicast=false
>>
>> nifi.cluster.protocol.multicast.address=
>>
>> nifi.cluster.protocol.multicast.port=
>>
>> nifi.cluster.protocol.multicast.service.broadcast.delay=500 ms
>>
>> nifi.cluster.protocol.multicast.service.locator.attempts=3
>>
>> nifi.cluster.protocol.multicast.service.locator.attempts.delay=1 sec
>>
>>
>> # cluster node properties (only configure for cluster nodes) #
>>
>> nifi.cluster.is.node=true
>>
>> nifi.cluster.node.address=10.233.2.42
>>
>> nifi.cluster.node.protocol.port=3002
>>
>> nifi.cluster.node.protocol.threads=2
>>
>> # if multicast is not used, nifi.cluster.node.unicast.xxx must have same
>> # values as nifi.cluster.manager.xxx #
>>
>> nifi.cluster.node.unicast.manager.address=10.233.2.40
>>
>> nifi.cluster.node.unicast.manager.protocol.port=3001
>>
>>
>> # cluster manager properties (only configure for cluster manager) #
>>
>> nifi.cluster.is.manager=false
>>
>> nifi.cluster.manager.address=
>>
>> nifi.cluster.manager.protocol.port=
>>
>> nifi.cluster.manager.node.firewall.file=
>>
>> nifi.cluster.manager.node.event.history.size=10
>>
>> nifi.cluster.manager.node.api.connection.timeout=30 sec
>>
>> nifi.cluster.manager.node.api.read.timeout=30 sec
>>
>> nifi.cluster.manager.node.api.request.threads=10
>>
>> nifi.cluster.manager.flow.retrieval.delay=5 sec
>>
>> nifi.cluster.manager.protocol.threads=10
>>
>> nifi.cluster.manager.safemode.duration=0 sec
>>
>>
>> # kerberos #
>>
>> nifi.kerberos.krb5.file=
>>
>> From: Aldrin Piri <al...@gmail.com>
>> Reply-To: "users@nifi.apache.org" <us...@nifi.apache.org>
>> Date: Tuesday, September 29, 2015 at 4:26 PM
>> To: "users@nifi.apache.org" <us...@nifi.apache.org>
>> Subject: Re: nifi Cluster setup issue
>>
>> Chakrader,
>>
>> I suspect that the nifi.web.http.host property is not set to the address
>> you specified for clustering and that the node is transmitting
>> "localhost" (the system's response to a hostname lookup from Java).
>> While the clustering protocol communicates via the properties you list,
>> the actual command-and-control and replication of requests from master
>> to slave nodes is carried out via the REST API, which also runs on the
>> web tier. The system's hostname, determined as described above, is
>> transmitted as part of the clustering handshake.
>>
>> Either the system needs to report a valid hostname, or a host needs to
>> be specified via nifi.web.http.host. In either case, the hostname or
>> specified host must be network reachable from the master and bindable
>> locally on your server.
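The advice above can be checked quickly. The following sketch is illustrative, not part of NiFi: it uses plain Python's socket module to stand in for what a JVM's InetAddress.getLocalHost() lookup typically starts from, and shows whether this machine's hostname resolves to something peers could reach or only to a loopback address.

```python
import socket

# The hostname the operating system reports; a JVM's
# InetAddress.getLocalHost() typically begins from the same value.
hostname = socket.gethostname()

# Try to resolve that hostname. If it maps to 127.0.0.1 (or fails to
# resolve), cluster peers would be handed an address they cannot reach,
# which matches the "apiAddress=localhost ... Connection refused"
# symptom in this thread.
try:
    address = socket.gethostbyname(hostname)
except socket.gaierror:
    address = None

print("hostname:", hostname)
print("resolves to:", address)
```

If this prints a loopback address (or nothing resolvable), either correct the system's hostname mapping (e.g. /etc/hosts) or set nifi.web.http.host explicitly as suggested above.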
>>
>> Let us know if you need additional direction and we'd be happy to help
>> you through the process.
>>
>> Thanks!
>>
>> On Tue, Sep 29, 2015 at 6:56 PM, Chakrader Dewaragatla <
>> Chakrader.Dewaragatla@lifelock.com> wrote:
>>
>>> Hi – We are exploring nifi for our workflow management, I have a cluster
>>> setup with 3 nodes. One as master and rest as slaves.
>>>
>>> I see following error when I try to access the nifi workflow webpage.
>>>
>>> 2015-09-29 22:46:13,263 WARN [NiFi Web Server-23]
>>> o.a.n.c.m.impl.HttpRequestReplicatorImpl Node request for
>>> [id=7481fca5-930c-4d4b-84a3-66cc62b4e2d3, apiAddress=localhost,
>>> apiPort=8080, socketAddress=localhost, socketPort=3002] encountered
>>> exception: java.util.concurrent.ExecutionException:
>>> com.sun.jersey.api.client.ClientHandlerException:
>>> java.net.ConnectException: Connection refused
>>>
>>> 2015-09-29 22:46:13,263 WARN [NiFi Web Server-23]
>>> o.a.n.c.m.impl.HttpRequestReplicatorImpl Node request for
>>> [id=0abd8295-34a3-4bf7-ab06-1b6b94014740, apiAddress=localhost,
>>> apiPort=8080, socketAddress=10.233.2.42, socketPort=3002] encountered
>>> exception: java.util.concurrent.ExecutionException:
>>> com.sun.jersey.api.client.ClientHandlerException:
>>> java.net.ConnectException: Connection refused
>>>
>>> 2015-09-29 22:46:13,264 INFO [NiFi Web Server-23]
>>> o.a.n.c.m.e.NoConnectedNodesException
>>> org.apache.nifi.cluster.manager.exception.NoResponseFromNodesException: No
>>> nodes were able to process this request.. Returning Conflict response.
>>>
>>>
>>> The master is not a hybrid (manager + node), so I wonder why it is
>>> trying to connect to itself on port 3002.
>>>
>>>
>>> Master settings:
>>>
>>> # cluster manager properties (only configure for cluster manager) #
>>>
>>> nifi.cluster.is.manager=true
>>>
>>> nifi.cluster.manager.address=10.233.2.40
>>>
>>> nifi.cluster.manager.protocol.port=3001
>>>
>>> nifi.cluster.manager.node.firewall.file=
>>>
>>> nifi.cluster.manager.node.event.history.size=10
>>>
>>> nifi.cluster.manager.node.api.connection.timeout=30 sec
>>>
>>> nifi.cluster.manager.node.api.read.timeout=30 sec
>>>
>>> nifi.cluster.manager.node.api.request.threads=10
>>>
>>> nifi.cluster.manager.flow.retrieval.delay=5 sec
>>>
>>> nifi.cluster.manager.protocol.threads=10
>>>
>>> nifi.cluster.manager.safemode.duration=0 sec
>>>
>>>
>>> Slave settings:
>>>
>>> # cluster node properties (only configure for cluster nodes) #
>>>
>>> nifi.cluster.is.node=true
>>>
>>> nifi.cluster.node.address=10.233.2.42
>>>
>>> nifi.cluster.node.protocol.port=3002
>>>
>>> nifi.cluster.node.protocol.threads=2
>>>
>>> # if multicast is not used, nifi.cluster.node.unicast.xxx must have same
>>> values as nifi.cluster.manager.xxx #
>>>
>>> nifi.cluster.node.unicast.manager.address=10.233.2.40
>>>
>>> nifi.cluster.node.unicast.manager.protocol.port=3001
>>>
>>>
>>>
>>>
>>> ------------------------------
>>> The information contained in this transmission may contain privileged
>>> and confidential information. It is intended only for the use of the
>>> person(s) named above. If you are not the intended recipient, you are
>>> hereby notified that any review, dissemination, distribution or duplication
>>> of this communication is strictly prohibited. If you are not the intended
>>> recipient, please contact the sender by reply email and destroy all copies
>>> of the original message.
>>> ------------------------------
>>>
>>
>>
>
>
Re: nifi Cluster setup issue
Posted by Aldrin Piri <al...@gmail.com>.
Chakrader,
You would need to set nifi.web.http.host for the manager as well. Each
member of the cluster advertises how it can be accessed as part of the
protocol, which would explain what you are seeing on the node from the
master/manager. Please try setting it on the manager too, and let us know
if this gets your cluster up and running.
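Concretely, that would mean the manager's nifi.properties carrying its own reachable address in the web section. A sketch, using the manager address already shown in this thread (adjust to your environment):

```properties
# manager's nifi.properties (sketch; 10.233.2.40 is the
# nifi.cluster.manager.address from this thread)
nifi.web.http.host=10.233.2.40
nifi.web.http.port=8080
```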
On Tue, Sep 29, 2015 at 8:48 PM, Chakrader Dewaragatla <
Chakrader.Dewaragatla@lifelock.com> wrote:
> Aldrin - I redeployed with nifi with default settings and modified the
> required settings needed for cluster setup documented in
> https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html.
>
> I tried changing the nifi.web.http.host property on the node (slave) to
> its IP. On the slave I notice the following:
>
> 2015-09-30 00:44:21,855 INFO [main]
> o.a.nifi.controller.StandardFlowService Connecting Node:
> [id=75baec43-adf1-4e17-98fd-49111a5a0c76, apiAddress=10.233.2.42,
> apiPort=8080, socketAddress=10.233.2.42, socketPort=3002]
>
>
>
> On Master:
>
> As usual:
>
>
> 2015-09-30 00:44:51,036 INFO [Process Pending Heartbeats]
> org.apache.nifi.cluster.heartbeat Received heartbeat for node
> [id=644370b1-4d8f-4004-ac6c-8bd614a1890b, apiAddress=localhost,
> apiPort=8080, socketAddress=10.233.2.42, socketPort=3002].
>
>
>> java.net.ConnectException: Connection refused
>>
>> 2015-09-29 22:46:13,263 WARN [NiFi Web Server-23]
>> o.a.n.c.m.impl.HttpRequestReplicatorImpl Node request for
>> [id=0abd8295-34a3-4bf7-ab06-1b6b94014740, apiAddress=localhost,
>> apiPort=8080, socketAddress=10.233.2.42, socketPort=3002] encountered
>> exception: java.util.concurrent.ExecutionException:
>> com.sun.jersey.api.client.ClientHandlerException:
>> java.net.ConnectException: Connection refused
>>
>> 2015-09-29 22:46:13,264 INFO [NiFi Web Server-23]
>> o.a.n.c.m.e.NoConnectedNodesException
>> org.apache.nifi.cluster.manager.exception.NoResponseFromNodesException: No
>> nodes were able to process this request.. Returning Conflict response.
>>
>>
>> Master is not hybrid, I wonder why it is trying to self connect 3002.
>>
>>
>> Master settings:
>>
>> # cluster manager properties (only configure for cluster manager) #
>>
>> nifi.cluster.is.manager=true
>>
>> nifi.cluster.manager.address=10.233.2.40
>>
>> nifi.cluster.manager.protocol.port=3001
>>
>> nifi.cluster.manager.node.firewall.file=
>>
>> nifi.cluster.manager.node.event.history.size=10
>>
>> nifi.cluster.manager.node.api.connection.timeout=30 sec
>>
>> nifi.cluster.manager.node.api.read.timeout=30 sec
>>
>> nifi.cluster.manager.node.api.request.threads=10
>>
>> nifi.cluster.manager.flow.retrieval.delay=5 sec
>>
>> nifi.cluster.manager.protocol.threads=10
>>
>> nifi.cluster.manager.safemode.duration=0 sec
>>
>>
>> Slave settings:
>>
>> # cluster node properties (only configure for cluster nodes) #
>>
>> nifi.cluster.is.node=true
>>
>> nifi.cluster.node.address=10.233.2.42
>>
>> nifi.cluster.node.protocol.port=3002
>>
>> nifi.cluster.node.protocol.threads=2
>>
>> # if multicast is not used, nifi.cluster.node.unicast.xxx must have same
>> values as nifi.cluster.manager.xxx #
>>
>> nifi.cluster.node.unicast.manager.address=10.233.2.40
>>
>> nifi.cluster.node.unicast.manager.protocol.port=3001
>>
>>
>>
>>
>> ------------------------------
>> The information contained in this transmission may contain privileged and
>> confidential information. It is intended only for the use of the person(s)
>> named above. If you are not the intended recipient, you are hereby notified
>> that any review, dissemination, distribution or duplication of this
>> communication is strictly prohibited. If you are not the intended
>> recipient, please contact the sender by reply email and destroy all copies
>> of the original message.
>> ------------------------------
>>
>
> ------------------------------
> The information contained in this transmission may contain privileged and
> confidential information. It is intended only for the use of the person(s)
> named above. If you are not the intended recipient, you are hereby notified
> that any review, dissemination, distribution or duplication of this
> communication is strictly prohibited. If you are not the intended
> recipient, please contact the sender by reply email and destroy all copies
> of the original message.
> ------------------------------
>
Re: nifi Cluster setup issue
Posted by Chakrader Dewaragatla <Ch...@lifelock.com>.
Aldrin - I redeployed NiFi with the default settings and changed only the settings required for cluster setup, as documented in https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html.
I set the nifi.web.http.host property on the node (slave) to its IP. On the slave I now see the following log entry:
2015-09-30 00:44:21,855 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: [id=75baec43-adf1-4e17-98fd-49111a5a0c76, apiAddress=10.233.2.42, apiPort=8080, socketAddress=10.233.2.42, socketPort=3002]
On the master, the heartbeat is unchanged (apiAddress is still localhost):
2015-09-30 00:44:51,036 INFO [Process Pending Heartbeats] org.apache.nifi.cluster.heartbeat Received heartbeat for node [id=644370b1-4d8f-4004-ac6c-8bd614a1890b, apiAddress=localhost, apiPort=8080, socketAddress=10.233.2.42, socketPort=3002].
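Per the explanation later in this thread, the apiAddress=localhost in that heartbeat comes from whatever hostname Java resolves for the local machine when nifi.web.http.host is left blank. A quick standalone way to check what Java will report (a diagnostic sketch, not NiFi code):

```java
import java.net.InetAddress;

// Prints the hostname and address Java resolves for this machine; per this
// thread, this is what an unset nifi.web.http.host ends up advertising to
// the cluster manager during the clustering handshake.
public class HostnameCheck {
    public static void main(String[] args) throws Exception {
        InetAddress local = InetAddress.getLocalHost();
        System.out.println("hostname: " + local.getHostName());
        System.out.println("address:  " + local.getHostAddress());
    }
}
```

If this prints localhost / 127.0.0.1, either fix the OS hostname resolution (e.g. /etc/hosts) so the machine resolves to a routable name, or set nifi.web.http.host explicitly on every node.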
Here are my complete conf files:
Master conf file:
# Core Properties #
nifi.version=0.3.0
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis
nifi.authority.provider.configuration.file=./conf/authority-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.always.sync=false
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
nifi.swap.in.period=5 sec
nifi.swap.in.threads=1
nifi.swap.out.period=5 sec
nifi.swap.out.threads=4
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=/nifi-content-viewer/
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=1
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.journal.count=16
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, ContentType, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000
# Component Status Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# Site to Site properties
nifi.remote.input.socket.host=
nifi.remote.input.socket.port=
nifi.remote.input.secure=true
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.security.keystore=
nifi.security.keystoreType=
nifi.security.keystorePasswd=
nifi.security.keyPasswd=
nifi.security.truststore=
nifi.security.truststoreType=
nifi.security.truststorePasswd=
nifi.security.needClientAuth=
nifi.security.user.credential.cache.duration=24 hours
nifi.security.user.authority.provider=file-provider
nifi.security.support.new.account.requests=
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# cluster common properties (cluster manager and nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false
nifi.cluster.protocol.socket.timeout=30 sec
nifi.cluster.protocol.connection.handshake.timeout=45 sec
# if multicast is used, then nifi.cluster.protocol.multicast.xxx properties must be configured #
nifi.cluster.protocol.use.multicast=false
nifi.cluster.protocol.multicast.address=
nifi.cluster.protocol.multicast.port=
nifi.cluster.protocol.multicast.service.broadcast.delay=500 ms
nifi.cluster.protocol.multicast.service.locator.attempts=3
nifi.cluster.protocol.multicast.service.locator.attempts.delay=1 sec
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=false
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=
nifi.cluster.node.protocol.threads=2
# if multicast is not used, nifi.cluster.node.unicast.xxx must have same values as nifi.cluster.manager.xxx #
nifi.cluster.node.unicast.manager.address=
nifi.cluster.node.unicast.manager.protocol.port=
# cluster manager properties (only configure for cluster manager) #
nifi.cluster.is.manager=true
nifi.cluster.manager.address=
nifi.cluster.manager.protocol.port=3001
nifi.cluster.manager.node.firewall.file=
nifi.cluster.manager.node.event.history.size=10
nifi.cluster.manager.node.api.connection.timeout=30 sec
nifi.cluster.manager.node.api.read.timeout=30 sec
nifi.cluster.manager.node.api.request.threads=10
nifi.cluster.manager.flow.retrieval.delay=5 sec
nifi.cluster.manager.protocol.threads=10
nifi.cluster.manager.safemode.duration=0 sec
# kerberos #
nifi.kerberos.krb5.file=
Slave conf file :
# Core Properties #
nifi.version=0.3.0
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.dir=./conf/archive/
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowservice.writedelay.interval=500 ms
nifi.administrative.yield.duration=30 sec
# If a component has no work to do (is "bored"), how long should we wait before checking again for work?
nifi.bored.yield.duration=10 millis
nifi.authority.provider.configuration.file=./conf/authority-providers.xml
nifi.templates.directory=./conf/templates
nifi.ui.banner.text=
nifi.ui.autorefresh.interval=30 sec
nifi.nar.library.directory=./lib
nifi.nar.working.directory=./work/nar/
nifi.documentation.working.directory=./work/docs/components
# H2 Settings
nifi.database.directory=./database_repository
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.always.sync=false
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=20000
nifi.swap.in.period=5 sec
nifi.swap.in.threads=1
nifi.swap.out.period=5 sec
nifi.swap.out.threads=4
# Content Repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.directory.default=./content_repository
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.archive.enabled=true
nifi.content.repository.always.sync=false
nifi.content.viewer.url=/nifi-content-viewer/
# Provenance Repository Properties
nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
# Persistent Provenance Repository Properties
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.rollover.time=30 secs
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.threads=1
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.journal.count=16
# Comma-separated list of fields. Fields that are not indexed will not be searchable. Valid fields are:
# EventType, FlowFileUUID, Filename, TransitURI, ProcessorID, AlternateIdentifierURI, ContentType, Relationship, Details
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
# FlowFile Attributes that should be indexed and made searchable
nifi.provenance.repository.indexed.attributes=
# Large values for the shard size will result in more Java heap usage when searching the Provenance Repository
# but should provide better performance
nifi.provenance.repository.index.shard.size=500 MB
# Indicates the maximum length that a FlowFile attribute can be when retrieving a Provenance Event from
# the repository. If the length of any attribute exceeds this value, it will be truncated when the event is retrieved.
nifi.provenance.repository.max.attribute.length=65536
# Volatile Provenance Repository Properties
nifi.provenance.repository.buffer.size=100000
# Component Status Repository
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.repository.buffer.size=1440
nifi.components.status.snapshot.frequency=1 min
# Site to Site properties
nifi.remote.input.socket.host=
nifi.remote.input.socket.port=
nifi.remote.input.secure=true
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=10.233.2.42
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
# security properties #
nifi.sensitive.props.key=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.security.keystore=
nifi.security.keystoreType=
nifi.security.keystorePasswd=
nifi.security.keyPasswd=
nifi.security.truststore=
nifi.security.truststoreType=
nifi.security.truststorePasswd=
nifi.security.needClientAuth=
nifi.security.user.credential.cache.duration=24 hours
nifi.security.user.authority.provider=file-provider
nifi.security.support.new.account.requests=
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
# cluster common properties (cluster manager and nodes must have same values) #
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=false
nifi.cluster.protocol.socket.timeout=30 sec
nifi.cluster.protocol.connection.handshake.timeout=45 sec
# if multicast is used, then nifi.cluster.protocol.multicast.xxx properties must be configured #
nifi.cluster.protocol.use.multicast=false
nifi.cluster.protocol.multicast.address=
nifi.cluster.protocol.multicast.port=
nifi.cluster.protocol.multicast.service.broadcast.delay=500 ms
nifi.cluster.protocol.multicast.service.locator.attempts=3
nifi.cluster.protocol.multicast.service.locator.attempts.delay=1 sec
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=10.233.2.42
nifi.cluster.node.protocol.port=3002
nifi.cluster.node.protocol.threads=2
# if multicast is not used, nifi.cluster.node.unicast.xxx must have same values as nifi.cluster.manager.xxx #
nifi.cluster.node.unicast.manager.address=10.233.2.40
nifi.cluster.node.unicast.manager.protocol.port=3001
# cluster manager properties (only configure for cluster manager) #
nifi.cluster.is.manager=false
nifi.cluster.manager.address=
nifi.cluster.manager.protocol.port=
nifi.cluster.manager.node.firewall.file=
nifi.cluster.manager.node.event.history.size=10
nifi.cluster.manager.node.api.connection.timeout=30 sec
nifi.cluster.manager.node.api.read.timeout=30 sec
nifi.cluster.manager.node.api.request.threads=10
nifi.cluster.manager.flow.retrieval.delay=5 sec
nifi.cluster.manager.protocol.threads=10
nifi.cluster.manager.safemode.duration=0 sec
# kerberos #
nifi.kerberos.krb5.file=
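Condensed from the two files above, these are the only lines that actually differ between master and slave (section comments added here for readability; everything else is identical):

```properties
# master (10.233.2.40), from the file above
nifi.web.http.host=
nifi.cluster.is.node=false
nifi.cluster.is.manager=true
nifi.cluster.manager.address=
nifi.cluster.manager.protocol.port=3001

# slave (10.233.2.42), from the file above
nifi.web.http.host=10.233.2.42
nifi.cluster.is.node=true
nifi.cluster.node.address=10.233.2.42
nifi.cluster.node.protocol.port=3002
nifi.cluster.node.unicast.manager.address=10.233.2.40
nifi.cluster.node.unicast.manager.protocol.port=3001
nifi.cluster.is.manager=false
```

Note that nifi.cluster.manager.address is blank in this redeployed master file, whereas the first post had it set to 10.233.2.40.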
Re: nifi Cluster setup issue
Posted by Aldrin Piri <al...@gmail.com>.
Chakrader,
I suspect that the nifi.web.http.host property is not set to the address
you specified and that "localhost" (the system's response to a local
hostname lookup from Java) is being transmitted instead. While the
clustering protocol communicates via the properties you list, the actual
command-and-control and replication of requests from master to slave nodes
is carried out via the REST API, which also runs on the web tier. The
system's hostname, determined as above, is transmitted as part of the
clustering handshake.
Either the system needs to report a valid hostname, or a host needs to be
specified via nifi.web.http.host. In either case, that hostname or
specified host must be network reachable from the master and bindable
locally on your server.
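That reachability requirement can be sanity-checked from the master with a plain TCP connect to each node's web API port and to the manager's protocol port. A rough sketch (the addresses and ports below are the ones from this thread; adjust for your cluster):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if host:port accepts a TCP connection within timeoutMs.
    // A "false" here corresponds to the java.net.ConnectException
    // ("Connection refused") the manager logs when replicating requests.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("node web api:     " + reachable("10.233.2.42", 8080, 2000));
        System.out.println("manager protocol: " + reachable("10.233.2.40", 3001, 2000));
    }
}
```

Run it on the master; both probes must print true before the cluster can replicate requests to that node.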
Let us know if you need additional direction and we'd be happy to help you
through the process.
Thanks!