Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/05/03 02:23:15 UTC

[GitHub] [spark] ravwojdyla commented on a diff in pull request #36430: [WIP][SPARK-38904] Select by schema

ravwojdyla commented on code in PR #36430:
URL: https://github.com/apache/spark/pull/36430#discussion_r863329395


##########
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##########
@@ -1593,6 +1593,35 @@ class Dataset[T] private[sql](
   @scala.annotation.varargs
   def select(col: String, cols: String*): DataFrame = select((col +: cols).map(Column(_)) : _*)
 
+  /**
+   * Selects a set of columns via schema object.
+   */
+  def select(schema: StructType): DataFrame = {
+    val attrs = logicalPlan.output
+    val attrs_map = attrs.map { a => (a.name, a) }.toMap
+    val new_attrs = AttributeMap(schema.map { f =>

Review Comment:
   So the current code swaps the `dataType` without checking whether that is safe. Do you or @jiangxb1987 have any tips on how to safely handle nested data here?
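
   For context on the concern above, a minimal sketch (not part of the PR) of the kind of recursive compatibility check being asked about: it walks nested StructType / ArrayType fields instead of swapping the dataType wholesale. The names SchemaCompat, isSafeProjection, and compatible are hypothetical helpers, and exact-type equality at the leaves is an assumption; a real implementation might instead allow safe up-casts.

       import org.apache.spark.sql.types.{ArrayType, DataType, StructType}

       // Hypothetical sketch, not the PR's code: checks whether every field requested
       // by `target` exists in `source` with a compatible type, recursing into nested
       // structs (and arrays of structs) rather than replacing dataType blindly.
       object SchemaCompat {
         def isSafeProjection(source: StructType, target: StructType): Boolean =
           target.forall { tf =>
             source.find(_.name == tf.name).exists(sf => compatible(sf.dataType, tf.dataType))
           }

         private def compatible(from: DataType, to: DataType): Boolean = (from, to) match {
           case (f: StructType, t: StructType)     => isSafeProjection(f, t)
           case (ArrayType(f, _), ArrayType(t, _)) => compatible(f, t)
           case (f, t)                             => f == t  // leaf types must match exactly in this sketch
         }
       }

   A `select(schema)` implementation could then reject fields that fail such a check (or fall back to an explicit cast), and the name lookup would also need to respect spark.sql.caseSensitive.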



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

