Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/03/07 15:15:59 UTC

[GitHub] [spark] cloud-fan commented on a change in pull request #35726: [SPARK-37895][SQL] Filter push down column with quoted columns

cloud-fan commented on a change in pull request #35726:
URL: https://github.com/apache/spark/pull/35726#discussion_r820804485



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
##########
@@ -97,7 +97,15 @@ object JDBCRDD extends Logging {
    * Returns None for an unhandled filter.
    */
   def compileFilter(f: Filter, dialect: JdbcDialect): Option[String] = {
-    def quote(colName: String): String = dialect.quoteIdentifier(colName)
+    def isEnclosedInBackticks(colName: String): Boolean =
+      colName.startsWith("`") && colName.endsWith("`")

Review comment:
       I checked `V2ScanRelationPushDown`. For v2 sources, Spark always quotes the column name if it contains special characters for filter pushdown, so I think here we can just invoke the SQL parser to parse it:
   ```
   val nameParts = SparkSession.active.sessionState.sqlParser.parseMultipartIdentifier(colName)
   assert(nameParts.length == 1) // or throw a user-facing exception if a nested column can reach here
   dialect.quoteIdentifier(nameParts.head)
   ```
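
   For illustration, a minimal sketch of how that suggestion could replace the `isEnclosedInBackticks` check as a `quote` helper inside `compileFilter`, where `dialect` is already in scope per the diff above (this sketch assumes `import org.apache.spark.sql.SparkSession` at the top of JDBCRDD.scala, and the exception type is a placeholder, not necessarily what a real patch would throw):
   ```
   // Sketch only: parse the pushed-down column name with Spark's SQL parser, so a
   // backtick-quoted name like "`a b`" comes back as the single unescaped part "a b".
   def quote(colName: String): String = {
     val nameParts =
       SparkSession.active.sessionState.sqlParser.parseMultipartIdentifier(colName)
     if (nameParts.length != 1) {
       // Placeholder error: a real patch would raise a user-facing Spark exception here.
       throw new IllegalArgumentException(
         s"Filter push down does not support nested column: $colName")
     }
     dialect.quoteIdentifier(nameParts.head)
   }
   ```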




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.




---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org