Posted to issues@nifi.apache.org by "Joe Witt (Jira)" <ji...@apache.org> on 2022/01/04 16:42:00 UTC

[jira] [Updated] (NIFI-8605) ExecuteSQLRecord processor consumes a large heap volume when used with the PostgreSQL JDBC driver

     [ https://issues.apache.org/jira/browse/NIFI-8605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joe Witt updated NIFI-8605:
---------------------------
    Fix Version/s: 1.15.3

> ExecuteSQLRecord processor consumes a large heap volume when used with the PostgreSQL JDBC driver
> --------------------------------------------------------------------------------------------------
>
>                 Key: NIFI-8605
>                 URL: https://issues.apache.org/jira/browse/NIFI-8605
>             Project: Apache NiFi
>          Issue Type: Bug
>            Reporter: Vibhath Arunapriya Ileperuma
>            Assignee: Vibhath Arunapriya Ileperuma
>            Priority: Major
>              Labels: Beginner, beginner
>             Fix For: 1.16.0, 1.15.3
>
>         Attachments: GC.LOG
>
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> I'm using the ExecuteSQLRecord processor to query PostgreSQL. A 'select' query I'm using can return more than 60 million rows, so I have configured the fetch size to 1000 to avoid fetching all the data into memory at once.
> But when the processor is started, the heap starts to grow very fast. I have configured NiFi with a 50 GB heap, and even that amount fills within minutes. Once the heap is full, the garbage collector tries to reclaim it, blocking other threads.
> It seems NiFi loads all the data into memory even though the fetch size is set to 1000. I have attached NiFi's GC log to this ticket for reference.
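The symptom described above matches a documented behavior of the PostgreSQL JDBC driver: `setFetchSize()` is honored only when autocommit is disabled on the connection (so the driver can use a server-side cursor); with autocommit on, the driver materializes the entire result set in memory regardless of the configured fetch size. A minimal sketch of cursor-based fetching in plain JDBC, with hypothetical connection details and table name (not taken from the ticket):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CursorFetchSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret")) {
            // The PostgreSQL driver honors setFetchSize() only when
            // autocommit is off; otherwise the whole result set is
            // buffered client-side, which produces exactly the heap
            // growth described in this ticket.
            conn.setAutoCommit(false);
            try (PreparedStatement stmt =
                     conn.prepareStatement("SELECT * FROM big_table")) {
                stmt.setFetchSize(1000); // stream rows in batches of ~1000
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        // Process one row at a time; only roughly one
                        // batch of rows is held by the driver at once.
                    }
                }
            }
            conn.commit(); // close the server-side cursor cleanly
        }
    }
}
```

The example needs a live PostgreSQL instance and the PostgreSQL JDBC driver on the classpath to run; it is a sketch of the driver's cursor-mode requirements, not of NiFi's internal fix for this issue.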



--
This message was sent by Atlassian Jira
(v8.20.1#820001)