Posted to issues@spark.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2018/12/18 10:03:00 UTC

[jira] [Commented] (SPARK-26384) CSV schema inferring does not respect spark.sql.legacy.timeParser.enabled

    [ https://issues.apache.org/jira/browse/SPARK-26384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723886#comment-16723886 ] 

ASF GitHub Bot commented on SPARK-26384:
----------------------------------------

MaxGekk opened a new pull request #23345: [SPARK-26384][SQL] Propagate SQL configs for CSV schema inferring
URL: https://github.com/apache/spark/pull/23345
 
 
   ## What changes were proposed in this pull request?
   
    Currently, SQL configs are not propagated to executors during schema inferring in the CSV datasource. For example, changing `spark.sql.legacy.timeParser.enabled` does not affect the inference of timestamp types. In this PR, I propose to fix the issue by wrapping the schema inferring action in `SQLExecution.withSQLConfPropagated`.
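    (Editor's note, not part of the patch.) The root cause is that `SQLConf.get` is backed by a thread-local value, so a config set on the driver thread is invisible on other threads unless it is explicitly captured and reinstalled there, which is what `SQLExecution.withSQLConfPropagated` arranges for executor-side code. A minimal, Spark-free sketch of that capture-and-reinstall pattern, with all names (`ConfPropagationSketch` etc.) hypothetical:

    ```scala
    // Sketch of why thread-local configs must be explicitly propagated:
    // a value set on the caller's thread is invisible to a worker thread
    // unless it is captured and re-set there.
    object ConfPropagationSketch {
      private val conf = new ThreadLocal[Map[String, String]] {
        override def initialValue(): Map[String, String] = Map.empty
      }

      def set(key: String, value: String): Unit =
        conf.set(conf.get + (key -> value))

      def get(key: String): Option[String] = conf.get.get(key)

      // Run `body` on a fresh thread, first copying the caller's config into it.
      def withConfPropagated[T](body: => T): T = {
        val captured = conf.get          // capture on the calling thread
        var result: Option[T] = None
        val worker = new Thread(() => {
          conf.set(captured)             // reinstall on the worker thread
          result = Some(body)
        })
        worker.start()
        worker.join()                    // join gives happens-before visibility
        result.get
      }

      def main(args: Array[String]): Unit = {
        set("spark.sql.legacy.timeParser.enabled", "true")
        // Without the reinstall step, the worker thread would see None here.
        println(withConfPropagated(get("spark.sql.legacy.timeParser.enabled")))
      }
    }
    ```

    Without the `conf.set(captured)` line, the worker thread falls back to `initialValue()` and the caller's setting is silently ignored, which mirrors the reported CSV inference behavior.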
   
   ## How was this patch tested?
   
   Added logging to `TimestampFormatter`:
   ```patch
   -object TimestampFormatter {
   +object TimestampFormatter extends Logging {
      def apply(format: String, timeZone: TimeZone, locale: Locale): TimestampFormatter = {
        if (SQLConf.get.legacyTimeParserEnabled) {
   +      logError("LegacyFallbackTimestampFormatter is being used")
          new LegacyFallbackTimestampFormatter(format, timeZone, locale)
        } else {
   +      logError("Iso8601TimestampFormatter is being used")
          new Iso8601TimestampFormatter(format, timeZone, locale)
        }
      }
   ```
    and started `spark-shell` with the legacy parser enabled:
   ```shell
   $ ./bin/spark-shell --conf spark.sql.legacy.timeParser.enabled=true
   ```
   ```scala
   scala> Seq("2010|10|10").toDF.repartition(1).write.mode("overwrite").text("/tmp/foo")
   scala> spark.read.option("inferSchema", "true").option("header", "false").option("timestampFormat", "yyyy|MM|dd").csv("/tmp/foo").printSchema()
   18/12/18 10:47:27 ERROR TimestampFormatter: LegacyFallbackTimestampFormatter is being used
   root
    |-- _c0: timestamp (nullable = true)
   ```
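    (Editor's note.) Why the chosen formatter matters: judging by the class names, `LegacyFallbackTimestampFormatter` is backed by the lenient `java.text.SimpleDateFormat`, while `Iso8601TimestampFormatter` is backed by the strict `java.time.format.DateTimeFormatter` — an assumption to verify against the Spark source. A standalone sketch of the behavioral gap between the two parsers:

    ```scala
    import java.text.SimpleDateFormat
    import java.time.LocalDate
    import java.time.format.DateTimeFormatter
    import scala.util.Try

    object ParserLeniency {
      def main(args: Array[String]): Unit = {
        val pattern = "yyyy|MM|dd"

        // Legacy-style parser: SimpleDateFormat is lenient by default, so an
        // out-of-range month such as 15 silently rolls over into the next year.
        val legacy = new SimpleDateFormat(pattern)
        println(legacy.parse("2010|15|10"))

        // java.time parser: the default SMART resolver rejects month 15,
        // so the same input fails to parse.
        val modern = DateTimeFormatter.ofPattern(pattern)
        println(Try(LocalDate.parse("2010|15|10", modern)).isSuccess) // false
      }
    }
    ```

    So depending on which formatter the executors actually pick up, the same CSV value can be accepted or rejected during type inference, which is why silently ignoring the config is a correctness issue rather than a cosmetic one.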
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> CSV schema inferring does not respect spark.sql.legacy.timeParser.enabled
> -------------------------------------------------------------------------
>
>                 Key: SPARK-26384
>                 URL: https://issues.apache.org/jira/browse/SPARK-26384
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: Maxim Gekk
>            Priority: Major
>
> Starting from commit [https://github.com/apache/spark/commit/f982ca07e80074bdc1e3b742c5e21cf368e4ede2], add logging as in the comment https://github.com/apache/spark/pull/23150#discussion_r242021998 and run:
> {code:shell}
> $ ./bin/spark-shell --conf spark.sql.legacy.timeParser.enabled=true
> {code}
> and in the shell:
> {code:scala}
> scala> spark.conf.get("spark.sql.legacy.timeParser.enabled")
> res0: String = true
> scala> Seq("2010|10|10").toDF.repartition(1).write.mode("overwrite").text("/tmp/foo")
> scala> spark.read.option("inferSchema", "true").option("header", "false").option("timestampFormat", "yyyy|MM|dd").csv("/tmp/foo").printSchema()
> 18/12/17 12:11:47 ERROR TimestampFormatter: Iso8601TimestampFormatter is being used
> root
>  |-- _c0: timestamp (nullable = true)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org