Posted to issues@nifi.apache.org by "Nadeem (Jira)" <ji...@apache.org> on 2020/09/02 04:27:00 UTC

[jira] [Commented] (NIFI-6202) NiFi flow states one file stuck in process and not getting cleared

    [ https://issues.apache.org/jira/browse/NIFI-6202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17188966#comment-17188966 ] 

Nadeem commented on NIFI-6202:
------------------------------

Are you using a MergeContent or MergeRecord processor in your dataflow? This can happen when MergeX processors are holding flowfiles while waiting for more fragments under the Defragment merge strategy. The flowfile repository can also grow huge when data payloads are stored in flowfile attributes (an antipattern), because attributes are serialized into the flowfile repository; once the flowfile arrival rate increases, NiFi struggles to serialize them out of memory, and eventually the canvas becomes inaccessible as the system runs out of memory. I have personally seen a flowfile repository grow past 100 GB in a customer dataflow. Fixing the dataflow to keep payloads out of attributes should solve the problem.
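To illustrate the antipattern above, here is a minimal sketch (not NiFi code; the function and constant names are hypothetical) of why copying a payload into a flowfile attribute bloats the flowfile repository: attributes are written out in full on every repository update, while content stays in the content repository and is referenced by a small content claim.

```python
# Illustrative sketch only -- repo_record_size and CONTENT_CLAIM_OVERHEAD
# are made-up names, not part of the NiFi API.

def repo_record_size(attributes: dict) -> int:
    """Approximate serialized size of one flowfile-repository record:
    every attribute key and value is written out in full."""
    return sum(len(k) + len(v) for k, v in attributes.items())

payload = "x" * 1_000_000  # a 1 MB payload

# Antipattern: payload copied into an attribute.
bad = {"filename": "data.bin", "payload": payload}

# Typical pattern: payload stays in the content repository; the record
# holds only small attributes plus a fixed-size claim (~100 bytes assumed).
good = {"filename": "data.bin"}
CONTENT_CLAIM_OVERHEAD = 100

print(repo_record_size(bad))                            # ~1 MB per flowfile
print(repo_record_size(good) + CONTENT_CLAIM_OVERHEAD)  # ~116 bytes
```

With thousands of flowfiles queued (e.g. in front of a Defragment merge), the difference between ~100 bytes and ~1 MB per record is exactly how a flowfile repository balloons to 100+ GB.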

> NiFi flow states one file stuck in process and not getting cleared 
> --------------------------------------------------------------------
>
>                 Key: NIFI-6202
>                 URL: https://issues.apache.org/jira/browse/NIFI-6202
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core UI
>    Affects Versions: 1.8.0
>            Reporter: sayee
>            Priority: Blocker
>         Attachments: defect image.PNG
>
>
> Processing large files through NiFi:
> 1. Running a large number of data files through a NiFi processor causes one file to get stuck in the queue and never be cleared.
> 2. The only way to clean it up is to delete the flowfiles from the repository.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)