Posted to commits@spark.apache.org by sr...@apache.org on 2016/09/15 09:00:32 UTC

spark git commit: [SPARK-17536][SQL] Minor performance improvement to JDBC batch inserts

Repository: spark
Updated Branches:
  refs/heads/master ad79fc0a8 -> 71a65825c


[SPARK-17536][SQL] Minor performance improvement to JDBC batch inserts

## What changes were proposed in this pull request?

Hoist a loop-invariant computation (the schema's field count) out of the per-row while loop used for JDBC batch inserts

## How was this patch tested?

Existing unit tests; specifically, ran "mvn test" for the sql module

Author: John Muller <jm...@us.imshealth.com>

Closes #15098 from blue666man/SPARK-17536.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/71a65825
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/71a65825
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/71a65825

Branch: refs/heads/master
Commit: 71a65825c5d5d0886ac3e11f9945cfcb39573ac3
Parents: ad79fc0
Author: John Muller <jm...@us.imshealth.com>
Authored: Thu Sep 15 10:00:28 2016 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Thu Sep 15 10:00:28 2016 +0100

----------------------------------------------------------------------
 .../apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala    | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/71a65825/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
index 132472a..b09fd51 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
@@ -590,12 +590,12 @@ object JdbcUtils extends Logging {
       val stmt = insertStatement(conn, table, rddSchema, dialect)
       val setters: Array[JDBCValueSetter] = rddSchema.fields.map(_.dataType)
         .map(makeSetter(conn, dialect, _)).toArray
+      val numFields = rddSchema.fields.length
 
       try {
         var rowCount = 0
         while (iterator.hasNext) {
           val row = iterator.next()
-          val numFields = rddSchema.fields.length
           var i = 0
           while (i < numFields) {
             if (row.isNullAt(i)) {

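The change above is a classic loop-invariant hoist: `rddSchema.fields.length` does not change between rows, so computing it once before the iterator loop avoids a per-row field traversal. A minimal standalone sketch of the same pattern (not the actual Spark code; the types and names here are simplified stand-ins for the real `StructType` and row iterator):

```scala
// Sketch of the optimization in this commit: compute a loop-invariant
// value once, outside the per-row loop, instead of on every iteration.
object LoopHoistSketch {
  final case class Field(name: String)
  final case class Schema(fields: Array[Field])

  // Counts every cell across all rows. Before the patch, the equivalent of
  // `schema.fields.length` was evaluated inside the row loop; here it is
  // hoisted, matching the patched JdbcUtils.savePartition structure.
  def countCells(rows: Iterator[Array[Any]], schema: Schema): Int = {
    val numFields = schema.fields.length // hoisted invariant
    var cells = 0
    while (rows.hasNext) {
      val row = rows.next()
      var i = 0
      while (i < numFields) {
        cells += 1
        i += 1
      }
    }
    cells
  }
}
```

The win per row is small (an array length read plus, pre-Scala-optimization, the field access), but the loop runs once per inserted row, so in large batch inserts the savings accumulate.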

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org