Posted to dev@nifi.apache.org by "Karthik Kothareddy (karthikk) [CONT - Type 2]" <ka...@micron.com> on 2017/11/03 06:56:53 UTC

NiFi Slowness after thousand processors

Hello All,

We are currently running NiFi 1.3.0 on a Linux (RHEL) box (standalone instance), and we are facing some strange issues with this instance. Whenever the total processor count exceeds 1500, the whole instance slows down (as far as I know there is no limit on the number of processors an instance can have). The UI becomes unresponsive and slows down to the point where navigating to a certain Process Group takes up to 10-15 seconds. I cross-verified this behavior by making REST calls to see if it's a UI-only issue and found the same behavior there. System diagnostics takes up to 15-30 seconds, and flow status also takes 20-30 seconds to return results. At this point all the feeds start to slow down as well.
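
As an illustration, calls like the following can reproduce those timings (a minimal sketch in Python against the standard /system-diagnostics and /flow/status endpoints; the host, port, and lack of authentication are assumptions for an unsecured standalone instance):

    # Time the two REST endpoints mentioned above.
    # Assumes an unsecured standalone instance on localhost:8080; adjust as needed.
    import time
    import requests

    BASE = "http://localhost:8080/nifi-api"  # assumed host/port

    for path in ("/system-diagnostics", "/flow/status"):
        start = time.monotonic()
        resp = requests.get(BASE + path, timeout=120)
        elapsed = time.monotonic() - start
        print("GET {}: HTTP {} in {:.1f}s".format(path, resp.status_code, elapsed))

On a healthy instance both calls return in well under a second; here they take tens of seconds.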

Strangely, another machine with the same configuration and the same processor count (a mirror instance) is performing well. I checked all the metrics like system diagnostics, maximum thread count, repository usage, etc., but everything is under normal usage. The hardware and the load on the underlying machine also check out; nothing suspicious there. Can anyone please suggest what the root cause might be? Am I missing anything basic in setting up an instance that can run tens of thousands of processors without any issue? Below are the hardware specs for the machine.

Cores - 48
Memory - 800 GB
Disk Space - 2.8 TB, one physical partition for the logs, content, and FlowFile repositories (RAID 5 SSDs)

Any help with this would be much appreciated. Thanks for your time.

-Karthik



Re: NiFi Slowness after thousand processors

Posted by Michael Moser <mo...@gmail.com>.
Greetings!

One thing that you could check is how many processors you have in the
STOPPED state.  The NiFi framework will perform periodic validation on all
STOPPED processors, including all controller services referenced by those
processors.  This can have the side effect of slowing UI responsiveness.
The solution is to place unused processors into the DISABLED state.  The
framework will skip validation of processors in the DISABLED and RUNNING
states.

This was the subject of NIFI-2996 [1].

Regards,
-- Mike

[1] - https://issues.apache.org/jira/browse/NIFI-2996


