Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/06/10 09:26:20 UTC

[jira] [Commented] (SPARK-15849) FileNotFoundException on _temporary while doing saveAsTable to S3

    [ https://issues.apache.org/jira/browse/SPARK-15849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15324132#comment-15324132 ] 

Sean Owen commented on SPARK-15849:
-----------------------------------

This is a duplicate of SPARK-2984 and we should probably keep the conversation there.
I think this is pretty much a known issue with S3: the object store isn't consistent quickly enough to show the files that have just been written.
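
One workaround that comes up for this class of problem (not verified against this particular job) is to commit the job output to HDFS, where directory listings are consistent, and copy the finished files to S3 afterwards. A minimal Scala sketch for Spark 1.6, with placeholder paths and table names of my own:

    import org.apache.spark.sql.SQLContext

    def saveViaHdfsStaging(sqlContext: SQLContext, hdfsStaging: String): Unit = {
      // Placeholder input table; the real job's tables are not named in the report.
      val df = sqlContext.table("source_table")
      // FileOutputCommitter's rename-based commit is reliable on HDFS, so no
      // _temporary listing ever happens against S3.
      df.write.parquet(hdfsStaging)
      // Then copy the committed files to S3 out of band, e.g.
      //   hadoop distcp hdfs://.../staging s3n://bucket/path
    }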

> FileNotFoundException on _temporary while doing saveAsTable to S3
> -----------------------------------------------------------------
>
>                 Key: SPARK-15849
>                 URL: https://issues.apache.org/jira/browse/SPARK-15849
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.1
>         Environment: AWS EC2 with spark on yarn and s3 storage
>            Reporter: Sandeep
>
> When submitting Spark jobs to a YARN cluster, I occasionally see these error messages while doing saveAsTable. I have tried doing this with spark.speculation=false and get the same error. These errors are similar to SPARK-2984, but my jobs are writing to S3 (s3n):
> Caused by: java.io.FileNotFoundException: File s3n://xxxxxxx/_temporary/0/task_201606080516_0004_m_000079 does not exist.
> at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:506)
> at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:360)
> at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:310)
> at org.apache.parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:46)
> at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitJob(WriterContainer.scala:230)
> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)
> ... 42 more
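
For reference, the commitJob path in the trace above is hit by a Parquet saveAsTable whose destination is on s3n. A minimal sketch of that kind of write (bucket and table names are placeholders, not taken from the report):

    import org.apache.spark.sql.{SQLContext, SaveMode}

    def writeTableToS3(sqlContext: SQLContext): Unit = {
      // Placeholder input; any DataFrame written as a datasource table will do.
      val df = sqlContext.read.parquet("s3n://some-bucket/input/")
      df.write
        .mode(SaveMode.Append)
        .option("path", "s3n://some-bucket/output/") // table location on S3 (placeholder)
        .saveAsTable("my_table")
      // During commitJob, FileOutputCommitter lists _temporary/0/ under the output
      // path and merges each task_* directory into place; an eventually consistent
      // S3 listing can miss a just-written task directory and fail with the
      // FileNotFoundException shown above.
    }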


