Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/06/07 04:57:05 UTC

[GitHub] [spark] mridulm commented on a diff in pull request #36775: [SPARK-39389] Filesystem closed should not be considered as corrupt files

mridulm commented on code in PR #36775:
URL: https://github.com/apache/spark/pull/36775#discussion_r890765675


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala:
##########
@@ -253,6 +253,9 @@ class FileScanRDD(
                   // Throw FileNotFoundException even if `ignoreCorruptFiles` is true
                   case e: FileNotFoundException if !ignoreMissingFiles => throw e
                   case e @ (_: RuntimeException | _: IOException) if ignoreCorruptFiles =>
+                    if (e.getMessage.contains("Filesystem closed")) {

Review Comment:
   +1 to @JoshRosen's proposal.
   Given that Hadoop throws a generic exception here, and given the lack of principled alternatives, walking the stack lets us reasonably detect whether the cause is the Hadoop filesystem having been closed.
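
   To illustrate the idea (this is a sketch of the proposal, not the code that was merged; the object name `ClosedFsDetector` and the exact frame-matching predicate are assumptions for illustration): instead of matching only on the message string added in the diff above, we can additionally walk the exception's stack trace and check whether the frames originate in the Hadoop HDFS client, which distinguishes a closed `DFSClient` from a genuinely corrupt file.

   ```scala
   import java.io.IOException

   // Hypothetical helper sketching the stack-walking approach discussed in
   // this review. Hadoop's DFSClient throws a plain IOException with the
   // message "Filesystem closed", so the message alone is a fragile signal;
   // requiring that the stack trace pass through the Hadoop HDFS client
   // makes the detection more robust.
   object ClosedFsDetector {
     def isClosedFileSystemException(e: Throwable): Boolean =
       e.isInstanceOf[IOException] &&
         e.getMessage != null &&
         e.getMessage.contains("Filesystem closed") &&
         e.getStackTrace.exists { frame =>
           // Frames from the Hadoop HDFS client suggest the exception came
           // from a closed DFSClient rather than from corrupt file data.
           frame.getClassName.startsWith("org.apache.hadoop.hdfs")
         }
   }
   ```

   With a predicate like this, the `ignoreCorruptFiles` branch in `FileScanRDD` could rethrow instead of swallowing the error when the detector fires, so a closed filesystem is never silently treated as a corrupt file.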



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org