Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/02/23 07:39:11 UTC

[GitHub] [spark] AngersZhuuuu opened a new pull request #35620: [SPARK-38294][SQL] DDLUtils.verifyNotReadPath should check target is subDir

AngersZhuuuu opened a new pull request #35620:
URL: https://github.com/apache/spark/pull/35620


   ### What changes were proposed in this pull request?
   When a user defines a table A whose location is a partition path of another table B, and then runs an INSERT OVERWRITE on A that reads from that same partition of B, Spark cleans the data under A's location before the read completes. Because that location is B's partition directory, the scan fails with a FileNotFoundException and B's partition data is lost:
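   A minimal reproduction sketch (run in spark-shell, where `spark` is the active SparkSession; the table names and paths here are illustrative, not taken from the original report). The final statement fails with the stack trace below:
   
   ```scala
   // Hypothetical repro: table `a` is defined directly on top of one of
   // table `b`'s partition directories.
   spark.sql("CREATE TABLE b (id INT, dt STRING) USING parquet PARTITIONED BY (dt)")
   spark.sql("INSERT INTO b PARTITION (dt = '2020-09-10') VALUES (1)")
   
   // Assume /warehouse/b is b's location; `a` points at b's partition path.
   spark.sql(
     """CREATE TABLE a (id INT) USING parquet
       |LOCATION '/warehouse/b/dt=2020-09-10'""".stripMargin)
   
   // INSERT OVERWRITE cleans a's location first. That directory is also the
   // partition being read, so the scan hits a FileNotFoundException and b's
   // partition data is already gone.
   spark.sql("INSERT OVERWRITE TABLE a SELECT id FROM b WHERE dt = '2020-09-10'")
   ```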
    
   ```
   [info]   Cause: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14.0 (TID 15) (10.12.190.176 executor driver): org.apache.spark.SparkException: Task failed while writing rows.
   [info] 	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:577)
   [info] 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:345)
   [info] 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$20(FileFormatWriter.scala:252)
   [info] 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
   [info] 	at org.apache.spark.scheduler.Task.run(Task.scala:136)
   [info] 	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:507)
   [info] 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1475)
   [info] 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:510)
   [info] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [info] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [info] 	at java.lang.Thread.run(Thread.java:748)
   [info] Caused by: java.io.FileNotFoundException:
   [info] File file:/Users/yi.zhu/Documents/project/Angerszhuuuu/spark/target/tmp/spark-f1c6b035-e585-4c0e-9b83-17ad54e85978/dt=2020-09-10/part-00000-855b7af4-fe2b-4933-807a-6bf40eab11ba.c000.snappy.parquet does not exist
   [info]
   [info] It is possible the underlying files have been updated. You can explicitly invalidate
   [info] the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by
   [info] recreating the Dataset/DataFrame involved.
   [info]
   [info] 	at org.apache.spark.sql.errors.QueryExecutionErrors$.readCurrentFileNotFoundError(QueryExecutionErrors.scala:583)
   [info] 	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:212)
   [info] 	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:270)
   [info] 	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116)
   [info] 	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:548)
   [info] 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
   [info] 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
   [info] 	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
   [info] 	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
   [info] 	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithIterator(FileFormatDataWriter.scala:91)
   [info] 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:328)
   [info] 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1509)
   [info] 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:335)
   [info] 	... 9 more
   [info]
   ```
   
   When verifying paths, we should also check whether the target path is a subdirectory of a read path, not only whether it equals a read path.
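   
   A minimal sketch of such a subdirectory check, assuming Hadoop `Path` semantics (illustrative only, not the exact code in this patch):
   
   ```scala
   import org.apache.hadoop.fs.Path
   
   // Returns true when `target` equals `readPath` or is nested anywhere
   // under it. Both paths should be fully qualified (same scheme and
   // authority) before comparing, since Path.equals compares URIs.
   def isSubDirectoryOf(target: Path, readPath: Path): Boolean = {
     var current: Path = target
     while (current != null) {
       if (current == readPath) return true
       current = current.getParent // becomes null once we pass the root
     }
     false
   }
   
   isSubDirectoryOf(new Path("/warehouse/b/dt=2020-09-10"), new Path("/warehouse/b")) // true
   isSubDirectoryOf(new Path("/warehouse/c"), new Path("/warehouse/b"))               // false
   ```
   
   With a check along these lines, `DDLUtils.verifyNotReadPath` can reject the write whenever the output location is nested under an input path, rather than only when the two paths are identical.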
   
   ### Why are the changes needed?
   Avoid potential data loss.
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   Yes: users can no longer INSERT OVERWRITE a table whose location is under a path being read by the same query.
   
   
   ### How was this patch tested?
   Added a unit test.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


