Posted to reviews@spark.apache.org by "ted-jenks (via GitHub)" <gi...@apache.org> on 2023/02/16 14:16:11 UTC

[GitHub] [spark] ted-jenks commented on a diff in pull request #39907: [SPARK-42359][SQL] Support row skipping when reading CSV files

ted-jenks commented on code in PR #39907:
URL: https://github.com/apache/spark/pull/39907#discussion_r1108533646


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVUtils.scala:
##########
@@ -25,21 +25,28 @@ import org.apache.spark.sql.functions._
 
 object CSVUtils {
   /**
-   * Filter ignorable rows for CSV dataset (lines empty and starting with `comment`).
-   * This is currently being used in CSV schema inference.
+   * Filter out blank lines, comment lines (if a comment character is set), and rows configured
+   * to be skipped from a CSV dataset. This is currently being used in CSV schema
+   * inference.
    */
-  def filterCommentAndEmpty(lines: Dataset[String], options: CSVOptions): Dataset[String] = {
+  def filterUnwantedLines(lines: Dataset[String], options: CSVOptions): Dataset[String] = {
     // Note that this was separately made by SPARK-18362. Logically, this should be the same
-    // with the one below, `filterCommentAndEmpty` but execution path is different. One of them
+    // with the one below, `filterUnwantedLines` but execution path is different. One of them
     // might have to be removed in the near future if possible.
     import lines.sqlContext.implicits._
     val aliased = lines.toDF("value")
     val nonEmptyLines = aliased.filter(length(trim($"value")) > 0)
-    if (options.isCommentSet) {
-      nonEmptyLines.filter(!$"value".startsWith(options.comment.toString)).as[String]
-    } else {
-      nonEmptyLines.as[String]
+    val commentFilteredLines = {
+      if (options.isCommentSet) {
+        nonEmptyLines.filter(!$"value".startsWith(options.comment.toString)).as[String]
+      } else {
+        nonEmptyLines.as[String]
+      }
     }
+    commentFilteredLines.rdd.zipWithIndex().toDF("value", "order")

Review Comment:
   Understood, fixed.
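
   For context, a minimal sketch of the zipWithIndex-based ordering/skipping shown in the diff
   above, assuming a hypothetical `skipLines` count (the option actually added by this PR may be
   named and wired differently):

       import org.apache.spark.sql.Dataset

       object RowSkippingSketch {
         def skipLeadingRows(lines: Dataset[String], skipLines: Int): Dataset[String] = {
           val spark = lines.sparkSession
           import spark.implicits._
           // Pair each line with its position so the first `skipLines` entries can be
           // dropped while the remaining lines keep their original order.
           lines.rdd
             .zipWithIndex()
             .filter { case (_, idx) => idx >= skipLines }
             .map { case (line, _) => line }
             .toDS()
         }
       }

   With a helper like this, `skipLeadingRows(ds, 2)` would drop the first two physical lines
   before blank/comment filtering and schema inference run.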



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

