Posted to issues@flink.apache.org by "Dylan Forciea (Jira)" <ji...@apache.org> on 2020/10/07 16:26:00 UTC

[jira] [Created] (FLINK-19522) Add ability to set auto commit on jdbc driver from Table/SQL API

Dylan Forciea created FLINK-19522:
-------------------------------------

             Summary: Add ability to set auto commit on jdbc driver from Table/SQL API
                 Key: FLINK-19522
                 URL: https://issues.apache.org/jira/browse/FLINK-19522
             Project: Flink
          Issue Type: Improvement
          Components: Connectors / JDBC
    Affects Versions: 1.11.2
            Reporter: Dylan Forciea
         Attachments: Screen Shot 2020-10-01 at 5.03.24 PM.png, Screen Shot 2020-10-01 at 5.03.31 PM.png

When I tried to stream data from Postgres via the JDBC source connector in the SQL API, it loaded the entire table into memory before any rows were emitted. This is because the Postgres JDBC driver only streams results with a cursor when the autoCommit flag is set to false; with auto-commit on, it materializes the full result set first.
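
For reference, this is roughly what the Postgres driver needs with plain JDBC before it will stream rows with a cursor (connection URL, credentials, and table/column names below are just placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PgStreamingRead {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            // The postgres driver only uses a cursor (and streams rows) when
            // auto-commit is disabled and a fetch size is set; otherwise it
            // pulls the whole result set into memory first.
            conn.setAutoCommit(false);
            try (PreparedStatement stmt =
                    conn.prepareStatement("SELECT id, val FROM big_table")) {
                stmt.setFetchSize(1000);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        // process one row at a time
                        System.out.println(rs.getLong("id"));
                    }
                }
            }
        }
    }
}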

FLINK-12198 added the ability to set this on the JDBCInputFormat, but the option is not exposed through the Table/SQL connector options. It should be added there as well.
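
For comparison, this is roughly how it can already be done at the input format level after FLINK-12198 (class and method names from memory, so treat this as a sketch rather than exact API; connection details and query are placeholders):

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

public class JdbcInputFormatAutoCommit {
    // Builds a JDBC input format that reads with a cursor by disabling
    // auto-commit, which is the knob FLINK-12198 exposed at this level.
    public static JDBCInputFormat buildInputFormat() {
        return JDBCInputFormat.buildJDBCInputFormat()
            .setDrivername("org.postgresql.Driver")
            .setDBUrl("jdbc:postgresql://localhost:5432/mydb")
            .setUsername("user")
            .setPassword("password")
            .setQuery("SELECT id, val FROM big_table")
            .setFetchSize(1000)
            .setAutoCommit(false)
            .setRowTypeInfo(new RowTypeInfo(
                BasicTypeInfo.LONG_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO))
            .finish();
    }
}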

To reproduce, create a very large table and try to read it with the SQL API. You will see a large spike in memory usage with no data streamed, and then everything arrives all at once. I will attach a couple of graphs showing the behavior before and after a local patch that sets auto-commit to false.
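
A minimal reproduction sketch with the Table/SQL API (schema, connection details, and table name are placeholders; 'scan.fetch-size' alone does not help because the driver ignores the fetch size while auto-commit is on):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ReproduceJdbcAutoCommit {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        // JDBC source over a very large postgres table. With 1.11.2 there is
        // no connector option to disable auto-commit, so the driver
        // materializes the whole result set before any rows are emitted.
        tEnv.executeSql(
            "CREATE TABLE big_table (" +
            "  id BIGINT," +
            "  val STRING" +
            ") WITH (" +
            "  'connector' = 'jdbc'," +
            "  'url' = 'jdbc:postgresql://localhost:5432/mydb'," +
            "  'table-name' = 'big_table'," +
            "  'username' = 'user'," +
            "  'password' = 'password'," +
            "  'scan.fetch-size' = '1000'" +
            ")");

        tEnv.executeSql("SELECT * FROM big_table").print();
    }
}

Presumably the fix would expose something like a 'scan.auto-commit' = 'false' entry in that WITH clause; the exact option name is open for discussion.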



--
This message was sent by Atlassian Jira
(v8.3.4#803005)