Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/05/18 15:15:23 UTC
[GitHub] [spark] gengliangwang commented on a diff in pull request #36501: [SPARK-39143][SQL] Support CSV scans with DEFAULT values
gengliangwang commented on code in PR #36501:
URL: https://github.com/apache/spark/pull/36501#discussion_r876028176
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/types/StructType.scala:
##########
@@ -511,6 +511,30 @@ case class StructType(fields: Array[StructField]) extends DataType with Seq[Stru
@transient
private[sql] lazy val interpretedOrdering =
InterpretedOrdering.forSchema(this.fields.map(_.dataType))
+
+ /**
+ * Parses the text representing constant-folded default column literal values.
+ * @return a sequence of either (1) NULL, if the column had no default value, or (2) an object of
+ * Any type suitable for assigning into a row using the InternalRow.update method.
+ */
+ private[sql] lazy val defaultValues: Array[Any] =
Review Comment:
Shall we rename this to existenceDefaultValues instead?
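(Editor's note: a minimal sketch, not the actual Spark implementation, illustrating the contract described in the doc comment above: one slot per column, holding null when the column has no default, else a constant value ready to be written into a row slot, analogous to InternalRow.update. The Field type and existenceDefaultValues name here are hypothetical stand-ins.)

```scala
// Hypothetical simplification: a "field" is just a name plus an optional
// constant-folded default literal. The real Spark code parses the default
// expression text stored in the field's metadata instead.
object DefaultValuesSketch {
  final case class Field(name: String, default: Option[Any])

  // One entry per field: null if no default, else the literal value,
  // suitable for row.update(i, value)-style assignment.
  def existenceDefaultValues(fields: Seq[Field]): Array[Any] =
    fields.map(_.default.orNull).toArray

  def main(args: Array[String]): Unit = {
    val fields = Seq(Field("id", None), Field("status", Some("active")))
    println(existenceDefaultValues(fields).mkString(","))  // null,active
  }
}
```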
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org