Posted to reviews@spark.apache.org by sureshthalamati <gi...@git.apache.org> on 2017/02/11 04:37:09 UTC

[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

GitHub user sureshthalamati opened a pull request:

    https://github.com/apache/spark/pull/16891

    [SPARK-19318][SQL] Fix to treat JDBC connection properties specified by the user in case-sensitive manner.

    ## What changes were proposed in this pull request?
The reason for the test failure is that the property "oracle.jdbc.mapDateToTimestamp" set by the test was being converted to all lower case. The Oracle database expects this property name in a case-sensitive manner.
    
This test passed in previous releases because connection properties were sent exactly as the user specified them. A fix that handled all options uniformly in a case-insensitive manner also converted the JDBC connection properties to lower case.
    
This PR enhances CaseInsensitiveMap to keep track of the input's case-sensitive keys, and uses those when creating the connection properties that are passed to the JDBC connection.
    
An alternative approach, PR https://github.com/apache/spark/pull/16847, is to pass the original input keys to the JDBC data source by adding a check in the data source class and handling case-insensitivity in the JDBC source code.
    
    ## How was this patch tested?
Added new test cases to JdbcSuite and OracleIntegrationSuite. Ran the docker integration tests on my laptop; all tests passed successfully.
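The approach described above can be sketched as follows. This is a minimal illustration, not Spark's actual CaseInsensitiveMap; the class and method names here are hypothetical:

```scala
// Sketch (hypothetical names): option lookups ignore case, but the
// original, case-sensitive keys are retained so they can be passed
// verbatim to the JDBC driver.
class CasePreservingOptions(val originalMap: Map[String, String]) {
  // Lower-cased view used for Spark-side, case-insensitive lookups.
  private val lowerCased: Map[String, String] =
    originalMap.map { case (k, v) => (k.toLowerCase, v) }

  def get(key: String): Option[String] = lowerCased.get(key.toLowerCase)

  // Driver-side view: keys exactly as the user supplied them.
  def asConnectionProperties: Map[String, String] = originalMap
}

object CasePreservingOptionsDemo {
  def main(args: Array[String]): Unit = {
    val opts = new CasePreservingOptions(
      Map("oracle.jdbc.mapDateToTimestamp" -> "false", "dbTable" -> "datetime"))
    // Spark-side option lookup ignores case:
    assert(opts.get("DBTABLE").contains("datetime"))
    // The driver still receives the exact key the user typed:
    assert(opts.asConnectionProperties.contains("oracle.jdbc.mapDateToTimestamp"))
    println("ok")
  }
}
```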

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sureshthalamati/spark jdbc_case_senstivity_props_fix-SPARK-19318

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/16891.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #16891
    
----
commit b2eba09bff1015d9ecccffff15a1c2ed7c09b6a9
Author: sureshthalamati <su...@gmail.com>
Date:   2017-02-09T22:36:54Z

    [SPARK-19318][SQL] Fix to keep track of JDBC connection properties in the user-specified options in a case-sensitive manner.

commit 41d336251cb70d7be26171c6a1f484e742ba83bd
Author: sureshthalamati <su...@gmail.com>
Date:   2017-02-11T04:28:00Z

    removed unnecessary getOrElse, as the map is an internal one and the key should exist

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
     Merged build triggered.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    thanks, merging to master!




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72866/
    Test PASSed.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    **[Test build #72731 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72731/testReport)** for PR 16891 at commit [`41d3362`](https://github.com/apache/spark/commit/41d336251cb70d7be26171c6a1f484e742ba83bd).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    **[Test build #72892 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72892/testReport)** for PR 16891 at commit [`27338c9`](https://github.com/apache/spark/commit/27338c9451f78d89cade769759cf6c597f959b1a).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100910764
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
    @@ -75,7 +75,7 @@ class JDBCWriteSuite extends SharedSQLContext with BeforeAndAfter {
           s"""
             |CREATE OR REPLACE TEMPORARY VIEW PEOPLE1
             |USING org.apache.spark.sql.jdbc
    -        |OPTIONS (url '$url1', dbtable 'TEST.PEOPLE1', user 'testUser', password 'testPass')
    +        |OPTIONS (url '$url1', dbTable 'TEST.PEOPLE1', user 'testUser', password 'testPass')
    --- End diff --
    
    This is not obvious. We might remove it in the future. How about adding a dedicated test case in JDBCWriteSuite.scala? https://github.com/sureshthalamati/spark/blob/a1560742f2196ba04c14ad50e955bdcc839c4ad8/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala#L249-L259
    
    You can make the key name more obvious. Something like `DbTaBlE`




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    **[Test build #72731 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72731/testReport)** for PR 16891 at commit [`41d3362`](https://github.com/apache/spark/commit/41d336251cb70d7be26171c6a1f484e742ba83bd).




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    **[Test build #72892 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72892/testReport)** for PR 16891 at commit [`27338c9`](https://github.com/apache/spark/commit/27338c9451f78d89cade769759cf6c597f959b1a).




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    **[Test build #72866 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72866/testReport)** for PR 16891 at commit [`9e31ec3`](https://github.com/apache/spark/commit/9e31ec304974a4764df9173e724e397291ebf91d).




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100872945
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -18,21 +18,25 @@
     package org.apache.spark.sql.catalyst.util
     
     /**
    - * Builds a map in which keys are case insensitive
    + * Builds a map in which keys are case insensitive. Input map can be accessed for cases where
    + * case-sensitive information is also required.
      */
    -class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    +class CaseInsensitiveMap(val caseSensitiveMap: Map[String, String]) extends Map[String, String]
       with Serializable {
     
    -  val baseMap = map.map(kv => kv.copy(_1 = kv._1.toLowerCase))
    +  val baseMap = caseSensitiveMap.map(kv => kv.copy(_1 = kv._1.toLowerCase))
     
       override def get(k: String): Option[String] = baseMap.get(k.toLowerCase)
     
       override def contains(k: String): Boolean = baseMap.contains(k.toLowerCase)
     
    -  override def + [B1 >: String](kv: (String, B1)): Map[String, B1] =
    -    baseMap + kv.copy(_1 = kv._1.toLowerCase)
    +  override def +[B1 >: String](kv: (String, B1)): Map[String, B1] = {
    +    new CaseInsensitiveMap(caseSensitiveMap + kv.copy(_2 = kv._2.asInstanceOf[String]))
    --- End diff --
    
    how about
    ```
    class CaseInsensitiveMap[T](originalMap: Map[String, T]) extends Map[String, T] {
      ...
      override def +[B1 >: T](kv: (String, B1)): Map[String, B1] = {
        new CaseInsensitiveMap(originalMap + kv)
      }
    }
    ```




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    **[Test build #72820 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72820/testReport)** for PR 16891 at commit [`a156074`](https://github.com/apache/spark/commit/a1560742f2196ba04c14ad50e955bdcc839c4ad8).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds the following public classes _(experimental)_:
      * `class CaseInsensitiveMap(val caseSensitiveMap: Map[String, String]) extends Map[String, String]`




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100976184
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -18,21 +18,25 @@
     package org.apache.spark.sql.catalyst.util
     
     /**
    - * Builds a map in which keys are case insensitive
    + * Builds a map in which keys are case insensitive. Input map can be accessed for cases where
    + * case-sensitive information is also required.
      */
    -class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    +class CaseInsensitiveMap(val caseSensitiveMap: Map[String, String]) extends Map[String, String]
       with Serializable {
     
    -  val baseMap = map.map(kv => kv.copy(_1 = kv._1.toLowerCase))
    +  val baseMap = caseSensitiveMap.map(kv => kv.copy(_1 = kv._1.toLowerCase))
     
       override def get(k: String): Option[String] = baseMap.get(k.toLowerCase)
     
       override def contains(k: String): Boolean = baseMap.contains(k.toLowerCase)
     
    -  override def + [B1 >: String](kv: (String, B1)): Map[String, B1] =
    -    baseMap + kv.copy(_1 = kv._1.toLowerCase)
    +  override def +[B1 >: String](kv: (String, B1)): Map[String, B1] = {
    +    new CaseInsensitiveMap(caseSensitiveMap + kv.copy(_2 = kv._2.asInstanceOf[String]))
    +  }
     
       override def iterator: Iterator[(String, String)] = baseMap.iterator
     
    -  override def -(key: String): Map[String, String] = baseMap - key.toLowerCase
    +  override def -(key: String): Map[String, String] = {
    +    new CaseInsensitiveMap(caseSensitiveMap.filterKeys(k => k.toLowerCase != key.toLowerCase))
    --- End diff --
    
    done.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100976122
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
    @@ -75,7 +75,7 @@ class JDBCWriteSuite extends SharedSQLContext with BeforeAndAfter {
           s"""
             |CREATE OR REPLACE TEMPORARY VIEW PEOPLE1
             |USING org.apache.spark.sql.jdbc
    -        |OPTIONS (url '$url1', dbtable 'TEST.PEOPLE1', user 'testUser', password 'testPass')
    +        |OPTIONS (url '$url1', dbTable 'TEST.PEOPLE1', user 'testUser', password 'testPass')
    --- End diff --
    
    sure. Added a new test case.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100737505
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -23,16 +23,30 @@ package org.apache.spark.sql.catalyst.util
     class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    --- End diff --
    
    Good question. For some reason I was hung up on making  only the case-sensitive key available to the caller. Changed the  code to expose the original map , it made code simpler. Thank you very much  for the suggestion.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/16891




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100869567
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -18,21 +18,25 @@
     package org.apache.spark.sql.catalyst.util
     
     /**
    - * Builds a map in which keys are case insensitive
    + * Builds a map in which keys are case insensitive. Input map can be accessed for cases where
    + * case-sensitive information is also required.
      */
    -class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    +class CaseInsensitiveMap(val caseSensitiveMap: Map[String, String]) extends Map[String, String]
       with Serializable {
     
    -  val baseMap = map.map(kv => kv.copy(_1 = kv._1.toLowerCase))
    +  val baseMap = caseSensitiveMap.map(kv => kv.copy(_1 = kv._1.toLowerCase))
     
       override def get(k: String): Option[String] = baseMap.get(k.toLowerCase)
     
       override def contains(k: String): Boolean = baseMap.contains(k.toLowerCase)
     
    -  override def + [B1 >: String](kv: (String, B1)): Map[String, B1] =
    -    baseMap + kv.copy(_1 = kv._1.toLowerCase)
    +  override def +[B1 >: String](kv: (String, B1)): Map[String, B1] = {
    +    new CaseInsensitiveMap(caseSensitiveMap + kv.copy(_2 = kv._2.asInstanceOf[String]))
    --- End diff --
    
    The copy is unnecessary, but I do need the cast; otherwise I get a compiler error:
    ```
    Error:(34, 47) type mismatch;
     found   : (String, B1)
     required: (String, String)
        new CaseInsensitiveMap(caseSensitiveMap + kv)
                                                  ^
    ```
    
    I am thinking of changing it to the following:
    ```
    new CaseInsensitiveMap(caseSensitiveMap + kv.asInstanceOf[(String, String)])
    ```
    
    Thank you for the feedback.
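The variance issue behind this compiler error can be seen in a small standalone sketch (the keys and values here are illustrative, not Spark code). `Map[String, String].+` accepts any value type `B1 >: String`, so the pair is not statically a `(String, String)` even when it holds one at runtime:

```scala
object VarianceDemo {
  def main(args: Array[String]): Unit = {
    val base: Map[String, String] = Map("dbtable" -> "T1")
    // `+` widens the value type: adding an Int value yields Map[String, Any].
    val widened: Map[String, Any] = base + ("fetchsize" -> 10)
    assert(widened("fetchsize") == 10)
    // To rebuild a Map[String, String] from a statically-typed (String, Any)
    // pair, the pair must be downcast, which is what the cast above does.
    // This is safe only when the pair really carries a String value.
    val kv: (String, Any) = ("user", "scott")
    val rebuilt: Map[String, String] = base + kv.asInstanceOf[(String, String)]
    assert(rebuilt("user") == "scott")
    println("ok")
  }
}
```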




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100867361
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala ---
    @@ -925,4 +925,53 @@ class JDBCSuite extends SparkFunSuite
         assert(res.generatedRows.isEmpty)
         assert(res.outputRows === foobarCnt :: Nil)
       }
    +
    +  test("SPARK-19318: Connection properties keys should be case-sensitivie.") {
    --- End diff --
    
    can you simplify this test a bit? e.g.
    ```
    def testJdbcOption(options: JDBCOption): Unit = {
      // Spark JDBC data source options are case-insensitive
      assert(options.table == "t1")
      // When we convert it to properties, it should be case-sensitive.
      assert(options.asProperties.get("customkey") == null)
      assert(options.asProperties.get("customKey") == "a-value")
      assert(options.asConnectionProperties.get("customkey") == null)
      assert(options.asConnectionProperties.get("customKey") == "a-value")
    }
    
    val parameters = Map("dbTAblE" -> "t1", "customKey" -> "a-value")
    testJdbcOption(new JDBCOption(parameters))
    testJdbcOption(new JDBCOption(new CaseInsensitiveMap(parameters)))
    // then test add/remove of key-value pairs on the case-insensitive map ...
    ```





[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Merged build finished. Test PASSed.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    LGTM




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100737351
  
    --- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
    @@ -62,6 +62,12 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
       }
     
       override def dataPreparation(conn: Connection): Unit = {
    +    conn.prepareStatement("CREATE TABLE datetime (id NUMBER(10), d DATE, t TIMESTAMP)")
    +      .executeUpdate()
    +    conn.prepareStatement("INSERT INTO datetime VALUES ("
    +      + "1, {d '1991-11-09'}, {ts '1996-01-01 01:23:45'})").executeUpdate()
    +    conn.prepareStatement("CREATE TABLE datetime1 (id NUMBER(10), d DATE, t TIMESTAMP)")
    --- End diff --
    
    Thank you for reviewing the patch. I think cleanup is not required; these tables are not persistent across test runs. They are cleaned up when the docker container is removed at the end of the test. I did not notice any setup in afterAll() to do it after the test.
    
    I also moved the creation of the temporary views up to the same place, to keep them together; any future tests can use these tables as well.





[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72807/
    Test FAILed.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    retest this please




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100976151
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -18,21 +18,25 @@
     package org.apache.spark.sql.catalyst.util
     
     /**
    - * Builds a map in which keys are case insensitive
    + * Builds a map in which keys are case insensitive. Input map can be accessed for cases where
    + * case-sensitive information is also required.
      */
    -class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    +class CaseInsensitiveMap(val caseSensitiveMap: Map[String, String]) extends Map[String, String]
       with Serializable {
     
    -  val baseMap = map.map(kv => kv.copy(_1 = kv._1.toLowerCase))
    +  val baseMap = caseSensitiveMap.map(kv => kv.copy(_1 = kv._1.toLowerCase))
     
       override def get(k: String): Option[String] = baseMap.get(k.toLowerCase)
     
       override def contains(k: String): Boolean = baseMap.contains(k.toLowerCase)
     
    -  override def + [B1 >: String](kv: (String, B1)): Map[String, B1] =
    -    baseMap + kv.copy(_1 = kv._1.toLowerCase)
    +  override def +[B1 >: String](kv: (String, B1)): Map[String, B1] = {
    +    new CaseInsensitiveMap(caseSensitiveMap + kv.copy(_2 = kv._2.asInstanceOf[String]))
    --- End diff --
    
    I made this change and it worked. It does touch more files; I hope that is OK.
    Thanks a lot for the suggestion.
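
    The diff above keeps two maps side by side: one with lowercased keys for case-insensitive lookup, and the original user-supplied map for consumers that treat keys case-sensitively. As an illustrative sketch only (in Java rather than the Scala of the actual PR, with hypothetical names `CaseInsensitiveOptions`, `originalMap`, and `keyLowercasedMap`), the idea looks like this:

    ```java
    import java.util.HashMap;
    import java.util.Locale;
    import java.util.Map;

    // Illustrative sketch only, not the Spark code: lookups are case-insensitive,
    // while the originally-cased keys are kept so that consumers which treat keys
    // case-sensitively (e.g. a JDBC driver's connection properties) still see them.
    class CaseInsensitiveOptions {
        private final Map<String, String> originalMap;      // keys as the user wrote them
        private final Map<String, String> keyLowercasedMap; // lowercased keys for lookup

        CaseInsensitiveOptions(Map<String, String> original) {
            this.originalMap = new HashMap<>(original);
            this.keyLowercasedMap = new HashMap<>();
            for (Map.Entry<String, String> e : original.entrySet()) {
                this.keyLowercasedMap.put(e.getKey().toLowerCase(Locale.ROOT), e.getValue());
            }
        }

        // Case-insensitive lookup, mirroring `get(k.toLowerCase)` in the diff.
        String get(String key) {
            return keyLowercasedMap.get(key.toLowerCase(Locale.ROOT));
        }

        boolean contains(String key) {
            return keyLowercasedMap.containsKey(key.toLowerCase(Locale.ROOT));
        }

        // Original-case view, e.g. for building JDBC connection properties.
        Map<String, String> originalMap() {
            return new HashMap<>(originalMap);
        }
    }
    ```

    With this shape, `get("dbtable")` and `get("DbTaBlE")` resolve to the same value, while `originalMap()` still yields `oracle.jdbc.mapDateToTimestamp` in its original casing for the driver.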




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r101130810
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
    @@ -349,4 +349,17 @@ class JDBCWriteSuite extends SharedSQLContext with BeforeAndAfter {
         assert(e.contains("Invalid value `0` for parameter `numPartitions` in table writing " +
           "via JDBC. The minimum value is 1."))
       }
    +
    +  test("SPARK-19318 temporary view data source option keys should be case-insensitive") {
    +    withTempView("people_view") {
    +      sql(
    +        s"""
    +           |CREATE TEMPORARY VIEW people_view
    +           |USING org.apache.spark.sql.jdbc
    +           |OPTIONS (uRl '$url1', DbTaBlE 'TEST.PEOPLE1', User 'testUser', PassWord 'testPass')
    +      """.stripMargin.replaceAll("\n", " "))
    --- End diff --
    
    Fixed. Thanks




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Thank you for the feedback, @cloud-fan and @gatorsmile. I addressed the review comments; please let me know if any other changes are needed.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100976177
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala ---
    @@ -925,4 +925,53 @@ class JDBCSuite extends SparkFunSuite
         assert(res.generatedRows.isEmpty)
         assert(res.outputRows === foobarCnt :: Nil)
       }
    +
    +  test("SPARK-19318: Connection properties keys should be case-sensitivie.") {
    +    val parameters = Map(
    +      "url" -> urlWithUserAndPass,
    +      "dbTable" -> "t1",
    +      "numPartitions" -> "10",
    +      "oracle.jdbc.mapDateToTimestamp" -> "false"
    +    )
    +
    +    assert(new JDBCOptions(parameters).asConnectionProperties.keySet()
    +      .toArray()(0) == "oracle.jdbc.mapDateToTimestamp")
    --- End diff --
    
    removed as part of test case cleanup.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100864094
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala ---
    @@ -925,4 +925,53 @@ class JDBCSuite extends SparkFunSuite
         assert(res.generatedRows.isEmpty)
         assert(res.outputRows === foobarCnt :: Nil)
       }
    +
    +  test("SPARK-19318: Connection properties keys should be case-sensitivie.") {
    +    val parameters = Map(
    +      "url" -> urlWithUserAndPass,
    +      "dbTable" -> "t1",
    +      "numPartitions" -> "10",
    +      "oracle.jdbc.mapDateToTimestamp" -> "false"
    +    )
    +
    +    assert(new JDBCOptions(parameters).asConnectionProperties.keySet()
    +      .toArray()(0) == "oracle.jdbc.mapDateToTimestamp")
    +
    +    val caseInsensitiveMap = new CaseInsensitiveMap(parameters)
    +    assert(new JDBCOptions(caseInsensitiveMap).asConnectionProperties.keySet()
    +      .toArray()(0) == "oracle.jdbc.mapDateToTimestamp")
    +    assert(caseInsensitiveMap.get("dbtable").get == "t1")
    +
    +    // add new key-value to case-insensitive map
    +    val caseInsensitiveMap1 = caseInsensitiveMap + ("connTimeOut" -> "60")
    +    val connProps = new JDBCOptions(caseInsensitiveMap1).asConnectionProperties
    +    assert(connProps.get("oracle.jdbc.mapDateToTimestamp") == "false")
    +    assert(connProps.get("connTimeOut") == "60")
    +    assert(caseInsensitiveMap1.get("dbtable").get == "t1")
    +
    +    // remove key from case-insensitive map
    --- End diff --
    
    Key removal is case-insensitive, but this test doesn't reflect that.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72892/
    Test PASSed.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r101097919
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
    @@ -349,4 +349,17 @@ class JDBCWriteSuite extends SharedSQLContext with BeforeAndAfter {
         assert(e.contains("Invalid value `0` for parameter `numPartitions` in table writing " +
           "via JDBC. The minimum value is 1."))
       }
    +
    +  test("SPARK-19318 temporary view data source option keys should be case-insensitive") {
    +    withTempView("people_view") {
    +      sql(
    +        s"""
    +           |CREATE TEMPORARY VIEW people_view
    +           |USING org.apache.spark.sql.jdbc
    +           |OPTIONS (uRl '$url1', DbTaBlE 'TEST.PEOPLE1', User 'testUser', PassWord 'testPass')
    +      """.stripMargin.replaceAll("\n", " "))
    --- End diff --
    
    ```
          s"""
            |CREATE TEMPORARY VIEW people_view
            |USING org.apache.spark.sql.jdbc
            |OPTIONS (uRl '$url1', DbTaBlE 'TEST.PEOPLE1', User 'testUser', PassWord 'testPass')
           """.stripMargin.replaceAll("\n", " "))
    ```




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100863061
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala ---
    @@ -31,7 +31,12 @@ class JDBCOptions(
     
       import JDBCOptions._
     
    -  def this(parameters: Map[String, String]) = this(new CaseInsensitiveMap(parameters))
    +  def this(params: Map[String, String]) = {
    +    this(params match {
    +      case cMap: CaseInsensitiveMap => cMap
    +      case _ => new CaseInsensitiveMap(params)
    --- End diff --
    
    I think this is a general problem; let's create an `object CaseInsensitiveMap` and put this logic there.
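
    The suggested factory avoids wrapping a map that is already case-insensitive, which would otherwise lowercase the keys a second time and lose the original-case map. A minimal sketch of that pattern (in Java rather than Scala, with a hypothetical `InsensitiveMap.of` standing in for the suggested companion-object `apply`):

    ```java
    import java.util.HashMap;
    import java.util.Locale;
    import java.util.Map;

    // Hypothetical sketch of the suggested factory: wrap the input only when it
    // is not already case-insensitive, so keys are not lowercased twice and the
    // original-case map survives repeated construction.
    class InsensitiveMap extends HashMap<String, String> {
        final Map<String, String> original; // keys exactly as the user wrote them

        private InsensitiveMap(Map<String, String> m) {
            this.original = m;
            for (Map.Entry<String, String> e : m.entrySet()) {
                put(e.getKey().toLowerCase(Locale.ROOT), e.getValue());
            }
        }

        // The factory the review suggests hoisting into a companion object.
        static InsensitiveMap of(Map<String, String> params) {
            return params instanceof InsensitiveMap
                    ? (InsensitiveMap) params
                    : new InsensitiveMap(params);
        }
    }
    ```

    Calling `of` on an already-wrapped map returns the same instance, so every caller (not just `JDBCOptions`) gets the no-double-wrapping behavior for free.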




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100681778
  
    --- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
    @@ -149,4 +155,29 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
         assert(values.getDate(9).equals(dateVal))
         assert(values.getTimestamp(10).equals(timestampVal))
       }
    +
    +  test("SPARK-19318: connection property keys should be case-sensitive") {
    +    sql(
    +      s"""
    +         |CREATE TEMPORARY TABLE datetime
    +         |USING org.apache.spark.sql.jdbc
    +         |OPTIONS (url '$jdbcUrl', dbTable 'datetime', oracle.jdbc.mapDateToTimestamp 'false')
    +      """.stripMargin.replaceAll("\n", " "))
    +    val row = sql("SELECT * FROM datetime where id = 1").collect()(0)
    --- End diff --
    
    nit: use `.head` instead of `(0)`




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r101097830
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
    @@ -349,4 +349,17 @@ class JDBCWriteSuite extends SharedSQLContext with BeforeAndAfter {
         assert(e.contains("Invalid value `0` for parameter `numPartitions` in table writing " +
           "via JDBC. The minimum value is 1."))
       }
    +
    +  test("SPARK-19318 temporary view data source option keys should be case-insensitive") {
    +    withTempView("people_view") {
    +      sql(
    +        s"""
    +           |CREATE TEMPORARY VIEW people_view
    +           |USING org.apache.spark.sql.jdbc
    +           |OPTIONS (uRl '$url1', DbTaBlE 'TEST.PEOPLE1', User 'testUser', PassWord 'testPass')
    +      """.stripMargin.replaceAll("\n", " "))
    --- End diff --
    
    The indents are not right. Could you fix all of them?





[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Merged build finished. Test PASSed.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100737377
  
    --- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
    @@ -149,4 +155,29 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
         assert(values.getDate(9).equals(dateVal))
         assert(values.getTimestamp(10).equals(timestampVal))
       }
    +
    +  test("SPARK-19318: connection property keys should be case-sensitive") {
    +    sql(
    +      s"""
    +         |CREATE TEMPORARY TABLE datetime
    --- End diff --
    
    done.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Merged build started.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100862539
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -18,21 +18,25 @@
     package org.apache.spark.sql.catalyst.util
     
     /**
    - * Builds a map in which keys are case insensitive
    + * Builds a map in which keys are case insensitive. Input map can be accessed for cases where
    + * case-sensitive information is also required.
      */
    -class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    +class CaseInsensitiveMap(val caseSensitiveMap: Map[String, String]) extends Map[String, String]
    --- End diff --
    
    nit: let's name it `originalMap`, and rename `baseMap` to `keyLowercasedMap`




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100863267
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala ---
    @@ -925,4 +925,53 @@ class JDBCSuite extends SparkFunSuite
         assert(res.generatedRows.isEmpty)
         assert(res.outputRows === foobarCnt :: Nil)
       }
    +
    +  test("SPARK-19318: Connection properties keys should be case-sensitivie.") {
    +    val parameters = Map(
    +      "url" -> urlWithUserAndPass,
    +      "dbTable" -> "t1",
    +      "numPartitions" -> "10",
    +      "oracle.jdbc.mapDateToTimestamp" -> "false"
    +    )
    +
    +    assert(new JDBCOptions(parameters).asConnectionProperties.keySet()
    +      .toArray()(0) == "oracle.jdbc.mapDateToTimestamp")
    --- End diff --
    
    How about we test it with `xxx.keySet().contains("xxx")`?




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Thank you for reviewing the PR, @cloud-fan. I addressed the review comments; please let me know if any further changes are needed.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100681754
  
    --- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
    @@ -62,6 +62,12 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
       }
     
       override def dataPreparation(conn: Connection): Unit = {
    +    conn.prepareStatement("CREATE TABLE datetime (id NUMBER(10), d DATE, t TIMESTAMP)")
    +      .executeUpdate()
    +    conn.prepareStatement("INSERT INTO datetime VALUES ("
    +      + "1, {d '1991-11-09'}, {ts '1996-01-01 01:23:45'})").executeUpdate()
    +    conn.prepareStatement("CREATE TABLE datetime1 (id NUMBER(10), d DATE, t TIMESTAMP)")
    --- End diff --
    
    do we need to clean up these 2 tables?




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Merged build finished. Test PASSed.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100681839
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
    @@ -75,7 +75,7 @@ class JDBCWriteSuite extends SharedSQLContext with BeforeAndAfter {
           s"""
             |CREATE OR REPLACE TEMPORARY VIEW PEOPLE1
             |USING org.apache.spark.sql.jdbc
    -        |OPTIONS (url '$url1', dbtable 'TEST.PEOPLE1', user 'testUser', password 'testPass')
    +        |OPTIONS (url '$url1', dbTable 'TEST.PEOPLE1', user 'testUser', password 'testPass')
    --- End diff --
    
    Why change this? I think the Spark-specific properties should still be case-insensitive.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Merged build finished. Test FAILed.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    **[Test build #72820 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72820/testReport)** for PR 16891 at commit [`a156074`](https://github.com/apache/spark/commit/a1560742f2196ba04c14ad50e955bdcc839c4ad8).




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100681828
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -23,16 +23,30 @@ package org.apache.spark.sql.catalyst.util
     class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    --- End diff --
    
    Why don't we just expose the original map?




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100910994
  
    --- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
    @@ -149,4 +172,16 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
         assert(values.getDate(9).equals(dateVal))
         assert(values.getTimestamp(10).equals(timestampVal))
       }
    +
    +  test("SPARK-19318: connection property keys should be case-sensitive") {
    +    val row = sql("SELECT * FROM datetime where id = 1").head()
    +    assert(row.getDate(1).equals(Date.valueOf("1991-11-09")))
    +    assert(row.getTimestamp(2).equals(Timestamp.valueOf("1996-01-01 01:23:45")))
    +
    +    sql("INSERT INTO TABLE datetime1 SELECT * FROM datetime where id = 1")
    +    val row1 = sql("SELECT * FROM datetime1 where id = 1").head()
    +    assert(row1.getInt(0) == 1)
    +    assert(row1.getDate(1).equals(Date.valueOf("1991-11-09")))
    +    assert(row1.getTimestamp(2).equals(Timestamp.valueOf("1996-01-01 01:23:45")))
    +  }
    --- End diff --
    
    Once all the comments are addressed, I will run the Docker tests in my local environment. Thanks!




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100976210
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala ---
    @@ -925,4 +925,53 @@ class JDBCSuite extends SparkFunSuite
         assert(res.generatedRows.isEmpty)
         assert(res.outputRows === foobarCnt :: Nil)
       }
    +
    +  test("SPARK-19318: Connection properties keys should be case-sensitivie.") {
    --- End diff --
    
    Thank you, that looks much better. Updated the test case.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Merged build started.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72731/
    Test PASSed.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100737390
  
    --- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
    @@ -149,4 +155,29 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
         assert(values.getDate(9).equals(dateVal))
         assert(values.getTimestamp(10).equals(timestampVal))
       }
    +
    +  test("SPARK-19318: connection property keys should be case-sensitive") {
    +    sql(
    +      s"""
    +         |CREATE TEMPORARY TABLE datetime
    +         |USING org.apache.spark.sql.jdbc
    +         |OPTIONS (url '$jdbcUrl', dbTable 'datetime', oracle.jdbc.mapDateToTimestamp 'false')
    +      """.stripMargin.replaceAll("\n", " "))
    +    val row = sql("SELECT * FROM datetime where id = 1").collect()(0)
    --- End diff --
    
    done.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100681762
  
    --- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
    @@ -149,4 +155,29 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
         assert(values.getDate(9).equals(dateVal))
         assert(values.getTimestamp(10).equals(timestampVal))
       }
    +
    +  test("SPARK-19318: connection property keys should be case-sensitive") {
    +    sql(
    +      s"""
    +         |CREATE TEMPORARY TABLE datetime
    --- End diff --
    
    Please use `CREATE TEMPORARY VIEW`; `CREATE TEMPORARY TABLE` is deprecated.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    **[Test build #72866 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72866/testReport)** for PR 16891 at commit [`9e31ec3`](https://github.com/apache/spark/commit/9e31ec304974a4764df9173e724e397291ebf91d).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds the following public classes _(experimental)_:
      * `class FileStreamOptions(parameters: CaseInsensitiveMap[String]) extends Logging `




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    I ran the docker tests in my local computer. Now, finally, all the tests can pass! :)




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100863864
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -18,21 +18,25 @@
     package org.apache.spark.sql.catalyst.util
     
     /**
    - * Builds a map in which keys are case insensitive
    + * Builds a map in which keys are case insensitive. Input map can be accessed for cases where
    + * case-sensitive information is also required.
      */
    -class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    +class CaseInsensitiveMap(val caseSensitiveMap: Map[String, String]) extends Map[String, String]
       with Serializable {
     
    -  val baseMap = map.map(kv => kv.copy(_1 = kv._1.toLowerCase))
    +  val baseMap = caseSensitiveMap.map(kv => kv.copy(_1 = kv._1.toLowerCase))
     
       override def get(k: String): Option[String] = baseMap.get(k.toLowerCase)
     
       override def contains(k: String): Boolean = baseMap.contains(k.toLowerCase)
     
    -  override def + [B1 >: String](kv: (String, B1)): Map[String, B1] =
    -    baseMap + kv.copy(_1 = kv._1.toLowerCase)
    +  override def +[B1 >: String](kv: (String, B1)): Map[String, B1] = {
    +    new CaseInsensitiveMap(caseSensitiveMap + kv.copy(_2 = kv._2.asInstanceOf[String]))
    +  }
     
       override def iterator: Iterator[(String, String)] = baseMap.iterator
     
    -  override def -(key: String): Map[String, String] = baseMap - key.toLowerCase
    +  override def -(key: String): Map[String, String] = {
    +    new CaseInsensitiveMap(caseSensitiveMap.filterKeys(k => k.toLowerCase != key.toLowerCase))
    --- End diff --
    
    nit: `String.equalsIgnoreCase` can help here.
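    The nit above can be illustrated outside of Scala too. Below is a minimal Java sketch (a hypothetical `minusKey` helper, not Spark's code) of the `-` override's behavior, using `equalsIgnoreCase` so the comparison happens in place instead of allocating two lowercased strings per entry:

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class MinusKeyDemo {
        // Drop every entry whose key equals `key` ignoring case, mirroring
        // the `-` override under review. equalsIgnoreCase compares directly,
        // avoiding the two toLowerCase allocations per entry the nit mentions.
        static Map<String, String> minusKey(Map<String, String> original, String key) {
            Map<String, String> result = new LinkedHashMap<>();
            for (Map.Entry<String, String> e : original.entrySet()) {
                if (!e.getKey().equalsIgnoreCase(key)) {
                    result.put(e.getKey(), e.getValue());
                }
            }
            return result;
        }

        public static void main(String[] args) {
            Map<String, String> opts = new LinkedHashMap<>();
            opts.put("dbTable", "t1");
            opts.put("oracle.jdbc.mapDateToTimestamp", "false");

            // Removal is case-insensitive; surviving keys keep their original case.
            Map<String, String> pruned = minusKey(opts, "DBTABLE");
            System.out.println(pruned.keySet());
        }
    }
    ```

    The surviving entries keep their original, case-sensitive keys, which is exactly what the PR needs in order to pass properties like `oracle.jdbc.mapDateToTimestamp` through to the JDBC driver unchanged.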




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100862751
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -18,21 +18,25 @@
     package org.apache.spark.sql.catalyst.util
     
     /**
    - * Builds a map in which keys are case insensitive
    + * Builds a map in which keys are case insensitive. Input map can be accessed for cases where
    + * case-sensitive information is also required.
      */
    -class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    +class CaseInsensitiveMap(val caseSensitiveMap: Map[String, String]) extends Map[String, String]
       with Serializable {
     
    -  val baseMap = map.map(kv => kv.copy(_1 = kv._1.toLowerCase))
    +  val baseMap = caseSensitiveMap.map(kv => kv.copy(_1 = kv._1.toLowerCase))
     
       override def get(k: String): Option[String] = baseMap.get(k.toLowerCase)
     
       override def contains(k: String): Boolean = baseMap.contains(k.toLowerCase)
     
    -  override def + [B1 >: String](kv: (String, B1)): Map[String, B1] =
    -    baseMap + kv.copy(_1 = kv._1.toLowerCase)
    +  override def +[B1 >: String](kv: (String, B1)): Map[String, B1] = {
    +    new CaseInsensitiveMap(caseSensitiveMap + kv.copy(_2 = kv._2.asInstanceOf[String]))
    --- End diff --
    
    why `kv.copy(_2 = kv._2.asInstanceOf[String])`?




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100737587
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
    @@ -75,7 +75,7 @@ class JDBCWriteSuite extends SharedSQLContext with BeforeAndAfter {
           s"""
             |CREATE OR REPLACE TEMPORARY VIEW PEOPLE1
             |USING org.apache.spark.sql.jdbc
    -        |OPTIONS (url '$url1', dbtable 'TEST.PEOPLE1', user 'testUser', password 'testPass')
    +        |OPTIONS (url '$url1', dbTable 'TEST.PEOPLE1', user 'testUser', password 'testPass')
    --- End diff --
    
    Yes, they should be case-insensitive; this is just an additional case-sensitivity check.
    While testing my fix I noticed the write suite had no test that checks case-insensitivity of option keys for a data source table during insert, so I flipped it to "dbTable" to make sure case-insensitivity is not broken in this case.





[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    **[Test build #72807 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72807/testReport)** for PR 16891 at commit [`a156074`](https://github.com/apache/spark/commit/a1560742f2196ba04c14ad50e955bdcc839c4ad8).




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100867554
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala ---
    @@ -925,4 +925,53 @@ class JDBCSuite extends SparkFunSuite
         assert(res.generatedRows.isEmpty)
         assert(res.outputRows === foobarCnt :: Nil)
       }
    +
    +  test("SPARK-19318: Connection properties keys should be case-sensitive.") {
    --- End diff --
    
    then we can remove `https://github.com/apache/spark/pull/16891/files#diff-dc4b58851b084b274df6fe6b189db84dR960`




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100976136
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala ---
    @@ -18,21 +18,25 @@
     package org.apache.spark.sql.catalyst.util
     
     /**
    - * Builds a map in which keys are case insensitive
    + * Builds a map in which keys are case insensitive. Input map can be accessed for cases where
    + * case-sensitive information is also required.
      */
    -class CaseInsensitiveMap(map: Map[String, String]) extends Map[String, String]
    +class CaseInsensitiveMap(val caseSensitiveMap: Map[String, String]) extends Map[String, String]
    --- End diff --
    
    done.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100976164
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala ---
    @@ -31,7 +31,12 @@ class JDBCOptions(
     
       import JDBCOptions._
     
    -  def this(parameters: Map[String, String]) = this(new CaseInsensitiveMap(parameters))
    +  def this(params: Map[String, String]) = {
    +    this(params match {
    +      case cMap: CaseInsensitiveMap => cMap
    +      case _ => new CaseInsensitiveMap(params)
    --- End diff --
    
    Moved it to CaseInsensitiveMap and marked the constructor private to avoid any nested creation of case-insensitive maps.
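    The pattern described here (a private constructor plus a factory that short-circuits when the argument is already case-insensitive) can be sketched as follows. `CIMap` is a hypothetical stand-in for illustration, not Spark's `CaseInsensitiveMap`:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical stand-in for the pattern: keys are stored lowercased, the
    // constructor is private, and the factory returns its argument unchanged
    // when it is already a CIMap, so wrapping never nests.
    final class CIMap extends HashMap<String, String> {
        private CIMap(Map<String, String> original) {
            original.forEach((k, v) -> super.put(k.toLowerCase(), v));
        }

        static CIMap of(Map<String, String> m) {
            return (m instanceof CIMap) ? (CIMap) m : new CIMap(m);
        }

        @Override
        public String get(Object key) {
            return super.get(((String) key).toLowerCase());
        }
    }

    public class CIMapDemo {
        public static void main(String[] args) {
            Map<String, String> params = new HashMap<>();
            params.put("dbTable", "t1");

            CIMap ci = CIMap.of(params);
            System.out.println(ci.get("DBTABLE"));  // lookups ignore case
            System.out.println(CIMap.of(ci) == ci); // re-wrapping is a no-op
        }
    }
    ```

    With the constructor private, every caller goes through the factory, so `JDBCOptions` (and any other call site) no longer needs its own `match` to guard against double-wrapping.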




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Thank you @cloud-fan, @gatorsmile




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100976190
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala ---
    @@ -925,4 +925,53 @@ class JDBCSuite extends SparkFunSuite
         assert(res.generatedRows.isEmpty)
         assert(res.outputRows === foobarCnt :: Nil)
       }
    +
    +  test("SPARK-19318: Connection properties keys should be case-sensitive.") {
    +    val parameters = Map(
    +      "url" -> urlWithUserAndPass,
    +      "dbTable" -> "t1",
    +      "numPartitions" -> "10",
    +      "oracle.jdbc.mapDateToTimestamp" -> "false"
    +    )
    +
    +    assert(new JDBCOptions(parameters).asConnectionProperties.keySet()
    +      .toArray()(0) == "oracle.jdbc.mapDateToTimestamp")
    +
    +    val caseInsensitiveMap = new CaseInsensitiveMap(parameters)
    +    assert(new JDBCOptions(caseInsensitiveMap).asConnectionProperties.keySet()
    +      .toArray()(0) == "oracle.jdbc.mapDateToTimestamp")
    +    assert(caseInsensitiveMap.get("dbtable").get == "t1")
    +
    +    // add new key-value to case-insensitive map
    +    val caseInsensitiveMap1 = caseInsensitiveMap + ("connTimeOut" -> "60")
    +    val connProps = new JDBCOptions(caseInsensitiveMap1).asConnectionProperties
    +    assert(connProps.get("oracle.jdbc.mapDateToTimestamp") == "false")
    +    assert(connProps.get("connTimeOut") == "60")
    +    assert(caseInsensitiveMap1.get("dbtable").get == "t1")
    +
    +    // remove key from case-insensitive map
    --- End diff --
    
    done.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72820/
    Test PASSed.




[GitHub] spark pull request #16891: [SPARK-19318][SQL] Fix to treat JDBC connection p...

Posted by sureshthalamati <gi...@git.apache.org>.
Github user sureshthalamati commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16891#discussion_r100976219
  
    --- Diff: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala ---
    @@ -149,4 +172,16 @@ class OracleIntegrationSuite extends DockerJDBCIntegrationSuite with SharedSQLCo
         assert(values.getDate(9).equals(dateVal))
         assert(values.getTimestamp(10).equals(timestampVal))
       }
    +
    +  test("SPARK-19318: connection property keys should be case-sensitive") {
    +    val row = sql("SELECT * FROM datetime where id = 1").head()
    +    assert(row.getDate(1).equals(Date.valueOf("1991-11-09")))
    +    assert(row.getTimestamp(2).equals(Timestamp.valueOf("1996-01-01 01:23:45")))
    +
    +    sql("INSERT INTO TABLE datetime1 SELECT * FROM datetime where id = 1")
    +    val row1 = sql("SELECT * FROM datetime1 where id = 1").head()
    +    assert(row1.getInt(0) == 1)
    +    assert(row1.getDate(1).equals(Date.valueOf("1991-11-09")))
    +    assert(row1.getTimestamp(2).equals(Timestamp.valueOf("1996-01-01 01:23:45")))
    +  }
    --- End diff --
    
    Thank you.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
    Merged build finished. Test PASSed.




[GitHub] spark issue #16891: [SPARK-19318][SQL] Fix to treat JDBC connection properti...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16891
  
     Merged build triggered.

