Posted to issues@nifi.apache.org by "Vadim (JIRA)" <ji...@apache.org> on 2018/11/05 10:53:00 UTC

[jira] [Created] (NIFI-5788) Introduce batch size limit in PutDatabaseRecord processor

Vadim created NIFI-5788:
---------------------------

             Summary: Introduce batch size limit in PutDatabaseRecord processor
                 Key: NIFI-5788
                 URL: https://issues.apache.org/jira/browse/NIFI-5788
             Project: Apache NiFi
          Issue Type: Bug
          Components: Core Framework
    Affects Versions: 1.8.0
         Environment: Teradata DB
            Reporter: Vadim
             Fix For: 1.8.0


Certain JDBC drivers do not support unlimited batch sizes in INSERT/UPDATE prepared SQL statements. Specifically, the Teradata JDBC driver (https://downloads.teradata.com/download/connectivity/jdbc-driver) fails the SQL statement when the batch exceeds the driver's internal limits.

Dividing the data into smaller chunks before PutDatabaseRecord is applied can work around the issue in certain scenarios, but in general this solution is not ideal: the SQL statements would be executed in different transaction contexts, so data integrity would not be preserved.

The suggested solution is the following:
 * introduce a new optional parameter on the *PutDatabaseRecord* processor, *batch_size*, which defines the maximum number of records per INSERT/UPDATE batch; its default value of -1 (unlimited) preserves the current behavior
 * divide the input into batches of the specified size and invoke PreparedStatement.executeBatch() for each batch
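The batching loop could be sketched roughly as below. This is a hypothetical illustration, not the actual PutDatabaseRecord code: the `writeInBatches` helper and its `Consumer` stand-in for `PreparedStatement.addBatch()`/`executeBatch()` are assumptions made so the chunking logic can be shown on its own.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of the proposed batching behavior.
// batchSize <= 0 means "unlimited", preserving the current behavior
// of a single executeBatch() call for the whole record set.
public class BatchingSketch {
    static <T> int writeInBatches(List<T> records, int batchSize,
                                  Consumer<List<T>> executeBatch) {
        int flushes = 0;
        List<T> pending = new ArrayList<>();
        for (T record : records) {
            pending.add(record);              // stand-in for ps.addBatch()
            if (batchSize > 0 && pending.size() >= batchSize) {
                executeBatch.accept(pending); // stand-in for ps.executeBatch()
                pending = new ArrayList<>();
                flushes++;
            }
        }
        if (!pending.isEmpty()) {             // flush the final partial batch
            executeBatch.accept(pending);
            flushes++;
        }
        return flushes;
    }
}
```

With batch_size = 3 and 10 records this performs four executeBatch() calls (3 + 3 + 3 + 1); with batch_size = -1 it performs a single call, matching the existing behavior. All calls stay within the same connection/transaction context, so integrity is preserved.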



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)