Posted to reviews@spark.apache.org by gatorsmile <gi...@git.apache.org> on 2018/06/14 15:59:33 UTC
[GitHub] spark pull request #21379: [SPARK-24327][SQL] Verify and normalize a partiti...
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/21379#discussion_r195479735
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -309,7 +309,8 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
     val parts: Array[Partition] = predicates.zipWithIndex.map { case (part, i) =>
       JDBCPartition(part, i) : Partition
     }
-    val relation = JDBCRelation(parts, options)(sparkSession)
+    val schema = JDBCRelation.getSchema(sparkSession.sessionState.conf.resolver, options)
+    val relation = JDBCRelation(schema, parts, options)(sparkSession)
--- End diff --
We do not need to change this call site. Instead, add an apply function to the JDBCRelation object in JDBCRelation.scala:
```
def apply(parts: Array[Partition], jdbcOptions: JDBCOptions)(
    sparkSession: SparkSession): JDBCRelation = {
  // Resolve the schema here so existing callers can keep the old two-argument signature.
  val schema = getSchema(sparkSession.sessionState.conf.resolver, jdbcOptions)
  JDBCRelation(schema, parts, jdbcOptions)(sparkSession)
}
```
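With such an overload in place, the existing call site shown in the diff should keep compiling as-is, for example:
```
// Existing call site in DataFrameReader.scala stays unchanged:
val relation = JDBCRelation(parts, options)(sparkSession)
```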
---
---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org