Posted to issues@spark.apache.org by "Christopher Hoshino-Fish (Jira)" <ji...@apache.org> on 2019/09/04 20:04:01 UTC

[jira] [Created] (SPARK-28977) JDBC Dataframe Reader Doc Doesn't Match JDBC Data Source Page

Christopher Hoshino-Fish created SPARK-28977:
------------------------------------------------

             Summary: JDBC Dataframe Reader Doc Doesn't Match JDBC Data Source Page
                 Key: SPARK-28977
                 URL: https://issues.apache.org/jira/browse/SPARK-28977
             Project: Spark
          Issue Type: Documentation
          Components: Documentation
    Affects Versions: 2.4.3
            Reporter: Christopher Hoshino-Fish
             Fix For: 2.4.3


[https://spark.apache.org/docs/2.4.3/sql-data-sources-jdbc.html]

Specifically, in the partitionColumn section, this page says:

"{{partitionColumn}} must be a numeric, date, or timestamp column from the table in question."


But in the DataFrameReader Scaladoc ([https://spark.apache.org/docs/2.4.3/api/scala/index.html#org.apache.spark.sql.DataFrameReader]), the overload

def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int, connectionProperties: Properties): DataFrame

describes the columnName parameter as:

"the name of a column of integral type that will be used for partitioning."


This discrepancy appears to go back at least as far as 1.6.3, and I'm not sure at what point, if ever, the Scaladoc description was accurate.
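
For illustration, here is a minimal sketch of the two APIs side by side, assuming a hypothetical "events" table with a timestamp column "event_time" and an integer "event_id" column, plus a made-up PostgreSQL URL and credentials:

    import java.util.Properties
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("jdbc-partitioning-example").getOrCreate()

    val url = "jdbc:postgresql://dbhost:5432/mydb"   // hypothetical URL
    val props = new Properties()
    props.setProperty("user", "spark")               // hypothetical credentials
    props.setProperty("password", "secret")

    // Option-based reader (sql-data-sources-jdbc.html): partitionColumn may be
    // a numeric, date, or timestamp column, so the bounds are passed as strings.
    val byTimestamp = spark.read
      .format("jdbc")
      .option("url", url)
      .option("dbtable", "events")
      .option("user", "spark")
      .option("password", "secret")
      .option("partitionColumn", "event_time")       // timestamp column
      .option("lowerBound", "2019-01-01 00:00:00")
      .option("upperBound", "2019-12-31 23:59:59")
      .option("numPartitions", "8")
      .load()

    // DataFrameReader.jdbc overload quoted above: lowerBound/upperBound are
    // Longs, which only fit a numeric/integral partition column.
    val byIntegralColumn = spark.read
      .jdbc(url, "events", "event_id", 0L, 1000000L, 8, props)

The two pages are describing different bound types: the overload takes Long bounds, while the option-based reader accepts the string bounds shown on the data source page.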



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
