Posted to user@spark.apache.org by Vijay Pawnarkar <vi...@gmail.com> on 2015/07/08 21:52:21 UTC

RDD saveAsTextFile() to local disk

Getting an exception when writing an RDD to local disk using the following call:

 saveAsTextFile("file:////home/someuser/dir2/testupload/20150708/")

The directory (/home/someuser/dir2/testupload/) was created before running
the job, which makes the error message misleading.
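
For context, a minimal self-contained job exercising the same call might
look like the sketch below (the object name and sample data are placeholder
assumptions; only the saveAsTextFile path is taken from the failing job).
One thing worth noting: with a file: URI, each executor writes its partition
to its own local filesystem, so the output's parent directory has to exist,
as a directory, on every worker node, not just on the driver.

    import org.apache.spark.{SparkConf, SparkContext}

    object SaveToLocalDisk {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("SaveToLocalDisk"))
        // Placeholder data; the real job's RDD contents don't matter here.
        val rdd = sc.parallelize(Seq("line1", "line2", "line3"))
        // file: paths are resolved on each executor's local disk, so
        // /home/someuser/dir2/testupload/ must be a directory there too.
        rdd.saveAsTextFile("file:////home/someuser/dir2/testupload/20150708/")
        sc.stop()
      }
    }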


org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, xxx.yyy.com): org.apache.hadoop.fs.ParentNotDirectoryException: Parent path is not a directory: file:/home/someuser/dir2
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:418)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:426)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:426)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:426)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:426)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:426)
        at org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:588)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:439)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:799)
        at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
        at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:90)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:1060)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:1051)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
        at org.apache.spark.scheduler.Task.run(Task.scala:56)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
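
For what it's worth, RawLocalFileSystem.mkdirs throws
ParentNotDirectoryException when some ancestor of the target path exists
but is not a directory, so it may help to check, on the worker that lost
the task (xxx.yyy.com above), what actually sits at each level of the
path. A quick sketch (paths copied from the error; run it on that worker,
e.g. from the Scala REPL):

    import java.io.File

    // If dir2 (or any ancestor) exists as a regular file on this node,
    // mkdirs cannot create the output path and fails with exactly the
    // exception shown in the trace above.
    for (p <- Seq("/home/someuser",
                  "/home/someuser/dir2",
                  "/home/someuser/dir2/testupload")) {
      val f = new File(p)
      println(s"$p exists=${f.exists} isDirectory=${f.isDirectory}")
    }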

-- 
-Vijay