Posted to user@spark.apache.org by Han JU <ju...@gmail.com> on 2014/05/06 18:05:43 UTC
No space left on device error when pulling data from s3
Hi,
I'm getting a `no space left on device` exception when pulling some 22GB of
data from S3 block storage into the ephemeral HDFS. The cluster is on EC2,
launched with the spark-ec2 script with 4 m1.large instances.
The code is basically:
val in = sc.textFile("s3://...")
in.saveAsTextFile("hdfs://...")
Spark creates 750 input partitions based on the input splits. When it
begins throwing this exception, there's no space left on the root file
system of some worker machines:
Filesystem   1K-blocks      Used  Available  Use%  Mounted on
/dev/xvda1     8256952   8256952          0  100%  /
tmpfs          3816808         0    3816808    0%  /dev/shm
/dev/xvdb    433455904  29840684  381596916    8%  /mnt
/dev/xvdf    433455904  29437000  382000600    8%  /mnt2
Before the job begins, only 35% is used.
Filesystem   1K-blocks      Used  Available  Use%  Mounted on
/dev/xvda1     8256952   2832256    5340840   35%  /
tmpfs          3816808         0    3816808    0%  /dev/shm
/dev/xvdb    433455904  29857768  381579832    8%  /mnt
/dev/xvdf    433455904  29470104  381967496    8%  /mnt2
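To see which directories are actually consuming the root filesystem (the df
output above only shows per-mount totals), a depth-limited `du` restricted to
one filesystem helps. A sketch, assuming GNU coreutils as shipped on the
spark-ec2 AMIs:

```shell
# List the largest directories on the root filesystem, two levels deep.
# -x stays on one filesystem, so /mnt and /mnt2 are not descended into.
du -x --max-depth=2 / 2>/dev/null | sort -n | tail -n 15
```

On the failing workers this should point straight at the offending directory.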
Any suggestions on this problem? Does Spark cache/store some data before
writing to HDFS?
Full stacktrace:
---------------------
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveBlock(Jets3tFileSystemStore.java:210)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at com.sun.proxy.$Proxy8.retrieveBlock(Unknown Source)
at org.apache.hadoop.fs.s3.S3InputStream.blockSeekTo(S3InputStream.java:160)
at org.apache.hadoop.fs.s3.S3InputStream.read(S3InputStream.java:119)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:92)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:51)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:156)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
at org.apache.spark.scheduler.Task.run(Task.scala:53)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
--
*JU Han*
Data Engineer @ Botify.com
+33 0619608888
Re: No space left on device error when pulling data from s3
Posted by Han JU <ju...@gmail.com>.
After some investigation, I found that there are lots of temp files under
/tmp/hadoop-root/s3/
But this is strange, since in both conf files,
~/ephemeral-hdfs/conf/core-site.xml and ~/spark/conf/core-site.xml, the
setting `hadoop.tmp.dir` is set to `/mnt/ephemeral-hdfs/`. Why do Spark jobs
still write temp files to /tmp/hadoop-root?
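One likely explanation (an assumption based on Hadoop 1.x defaults, not
something confirmed on this cluster): the s3:// block filesystem buffers each
block on local disk under `fs.s3.buffer.dir`, whose default is
`${hadoop.tmp.dir}/s3`, and `hadoop.tmp.dir` itself defaults to
`/tmp/hadoop-${user.name}`. So if the Hadoop Configuration built inside the
Spark executors never loads those core-site.xml files, the defaults apply and
the buffers land in /tmp/hadoop-root/s3. A sketch for checking what a given
core-site.xml resolves to (the sed-based parsing and the path are
simplifications):

```shell
# Report the value of hadoop.tmp.dir in a core-site.xml, falling back to
# the Hadoop default (/tmp/hadoop-<user>) when the property is absent or
# the file is missing. Real XML parsing would be more robust than sed.
effective_tmp_dir() {
  conf="$1"
  val=$(sed -n '/<name>hadoop.tmp.dir<\/name>/,/<\/property>/ s/.*<value>\(.*\)<\/value>.*/\1/p' "$conf" 2>/dev/null)
  echo "${val:-/tmp/hadoop-$(id -un)}"
}
effective_tmp_dir ~/spark/conf/core-site.xml
```

If this prints /tmp/hadoop-root for the config the executors actually read,
that would explain the files.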
--
*JU Han*
Data Engineer @ Botify.com
+33 0619608888
Re: No space left on device error when pulling data from s3
Posted by Han JU <ju...@gmail.com>.
Setting `hadoop.tmp.dir` in `spark-env.sh` solved the problem. Spark jobs no
longer write temp files to /tmp/hadoop-root/.
SPARK_JAVA_OPTS+=" -Dspark.local.dir=/mnt/spark,/mnt2/spark
-Dhadoop.tmp.dir=/mnt/ephemeral-hdfs"
export SPARK_JAVA_OPTS
I'm wondering if we should add this permanently to the spark-ec2 script.
Writing lots of temp files to the 8GB `/` is not a great idea.
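For anyone hitting the same thing, the change can be sanity-checked before
restarting the cluster by sourcing spark-env.sh and inspecting the resulting
options. A sketch (the temp file stands in for ~/spark/conf/spark-env.sh; the
appended lines mirror the fix above):

```shell
# Append the settings to a spark-env.sh (illustrative copy here), source
# it, and confirm the hadoop.tmp.dir override made it into SPARK_JAVA_OPTS.
env_file=$(mktemp)   # stand-in for ~/spark/conf/spark-env.sh
cat >> "$env_file" <<'EOF'
SPARK_JAVA_OPTS+=" -Dspark.local.dir=/mnt/spark,/mnt2/spark -Dhadoop.tmp.dir=/mnt/ephemeral-hdfs"
export SPARK_JAVA_OPTS
EOF
. "$env_file"
case "$SPARK_JAVA_OPTS" in
  *-Dhadoop.tmp.dir=/mnt/ephemeral-hdfs*) echo "hadoop.tmp.dir override present" ;;
  *) echo "override missing" ;;
esac
```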
2014-05-06 18:59 GMT+02:00 Akhil Das <ak...@sigmoidanalytics.com>:
> I wonder why your / is full. Try clearing out /tmp, and also make sure
> that in spark-env.sh you have put
> SPARK_JAVA_OPTS+=" -Dspark.local.dir=/mnt/spark"
>
> Thanks
> Best Regards
>
--
*JU Han*
Data Engineer @ Botify.com
+33 0619608888
Re: No space left on device error when pulling data from s3
Posted by Akhil Das <ak...@sigmoidanalytics.com>.
I wonder why your / is full. Try clearing out /tmp, and also make sure
that in spark-env.sh you have put
SPARK_JAVA_OPTS+=" -Dspark.local.dir=/mnt/spark"
Thanks
Best Regards