Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/05/12 23:59:52 UTC

[GitHub] [spark] dtenedor commented on a diff in pull request #36501: [SPARK-39143][SQL] Support CSV scans with DEFAULT values

dtenedor commented on code in PR #36501:
URL: https://github.com/apache/spark/pull/36501#discussion_r871901106


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/types/StructType.scala:
##########
@@ -511,6 +511,30 @@ case class StructType(fields: Array[StructField]) extends DataType with Seq[Stru
   @transient
   private[sql] lazy val interpretedOrdering =
     InterpretedOrdering.forSchema(this.fields.map(_.dataType))
+
+  /**
+   * Parses the text representations of the constant-folded default column values.
+   * @return a sequence whose elements are either (1) NULL, if the column had no default value,
+   *         or (2) an object of Any type suitable for assignment into a row via the
+   *         InternalRow.update method.
+   */
+  lazy val defaultValues: Seq[Any] =

Review Comment:
   Yeah, that's not a bad idea, since we will be accessing this for every row during data source scans. Done.
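
   As a side note for readers of this archive: the point of the lazy val is that
   the defaults get parsed once per schema and are then reused on every scanned
   row, rather than re-computed per row. Below is a minimal standalone sketch of
   that pattern in Scala; the Field/Schema types and the parseDefault logic are
   illustrative assumptions, not the actual Spark implementation.

       // Stand-in for a column with an optional constant-folded default, as text.
       case class Field(name: String, defaultText: Option[String])

       case class Schema(fields: Array[Field]) {
         // Computed once on first access, then reused for every scanned row;
         // null marks columns with no default, mirroring the doc comment above.
         lazy val defaultValues: Seq[Any] =
           fields.map(_.defaultText.map(parseDefault).orNull).toSeq

         // Hypothetical stand-in for constant-folding a default expression
         // to a literal value.
         private def parseDefault(text: String): Any = text match {
           case t if t.nonEmpty && t.forall(_.isDigit) => t.toInt
           case t => t.stripPrefix("'").stripSuffix("'")
         }
       }

       object Demo extends App {
         val schema = Schema(Array(Field("a", Some("42")), Field("b", Some("'hi'"))))
         // Simulate a scanned row missing both columns: fill it from the cached
         // defaults, in the spirit of assigning via InternalRow.update.
         val row = new Array[Any](schema.fields.length)
         schema.defaultValues.zipWithIndex.foreach { case (v, i) => row(i) = v }
         println(row.toList) // List(42, hi)
       }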



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org