Posted to issues@spark.apache.org by "Imran Rashid (JIRA)" <ji...@apache.org> on 2018/10/29 16:11:00 UTC

[jira] [Created] (SPARK-25871) Streaming WAL should not use hdfs erasure coding, regardless of FS defaults

Imran Rashid created SPARK-25871:
------------------------------------

             Summary: Streaming WAL should not use hdfs erasure coding, regardless of FS defaults
                 Key: SPARK-25871
                 URL: https://issues.apache.org/jira/browse/SPARK-25871
             Project: Spark
          Issue Type: Improvement
          Components: DStreams
    Affects Versions: 2.4.0
            Reporter: Imran Rashid


The {{FileBasedWriteAheadLogWriter}} expects the output stream for the WAL to support {{hflush()}}, but HDFS erasure-coded files do not support it:

https://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html#Limitations
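
For context, the WAL writer flushes after every record so that executors can read a segment back before the file is closed. A condensed sketch of that write-then-flush pattern (simplified; the names and structure here are illustrative, not the actual implementation):

{code:scala}
import java.nio.ByteBuffer
import org.apache.hadoop.fs.FSDataOutputStream

// Condensed sketch: the WAL stores length-prefixed records and flushes
// each one so readers can fetch it before the file is closed.
def writeRecord(stream: FSDataOutputStream, data: ByteBuffer): Unit = {
  data.rewind()
  stream.writeInt(data.remaining())
  stream.write(data.array(), data.arrayOffset(), data.remaining())
  // hflush() makes the bytes visible to readers on a replicated file,
  // but it is a no-op on DFSStripedOutputStream (erasure-coded files).
  stream.hflush()
}
{code}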

On an erasure-coded file, {{hflush()}} and {{hsync()}} are implemented as no-ops on {{DFSStripedOutputStream}}, so the writer records segment offsets for data that is never made visible to readers. Reading such a segment then fails with exceptions like:

{noformat}
17/10/17 17:31:34 ERROR executor.Executor: Exception in task 0.2 in stage 6.0 (TID 85)
org.apache.spark.SparkException: Could not read data from write ahead log record FileBasedWriteAheadLogSegment(hdfs://quasar-yxckyb-1.vpc.cloudera.com:8020/tmp/__spark__a10be3a3-85ec-4d4f-8782-a4760df4cc4c/88657/checkpoints/receivedData/0/log-1508286672978-1508286732978,1321921,189000)
	at org.apache.spark.streaming.rdd.WriteAheadLogBackedBlockRDD.org$apache$spark$streaming$rdd$WriteAheadLogBackedBlockRDD$$getBlockFromWriteAheadLog$1(WriteAheadLogBackedBlockRDD.scala:145)
	at org.apache.spark.streaming.rdd.WriteAheadLogBackedBlockRDD$$anonfun$compute$1.apply(WriteAheadLogBackedBlockRDD.scala:173)
	at org.apache.spark.streaming.rdd.WriteAheadLogBackedBlockRDD$$anonfun$compute$1.apply(WriteAheadLogBackedBlockRDD.scala:173)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.streaming.rdd.WriteAheadLogBackedBlockRDD.compute(WriteAheadLogBackedBlockRDD.scala:173)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException: Cannot seek after EOF
	at org.apache.hadoop.hdfs.DFSStripedInputStream.seek(DFSStripedInputStream.java:331)
	at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:65)
	at org.apache.spark.streaming.util.FileBasedWriteAheadLogRandomReader.read(FileBasedWriteAheadLogRandomReader.scala:37)
	at org.apache.spark.streaming.util.FileBasedWriteAheadLog.read(FileBasedWriteAheadLog.scala:120)
	at org.apache.spark.streaming.rdd.WriteAheadLogBackedBlockRDD.org$apache$spark$streaming$rdd$WriteAheadLogBackedBlockRDD$$getBlockFromWriteAheadLog$1(WriteAheadLogBackedBlockRDD.scala:142)
	... 18 more
{noformat}

HDFS allows you to force a file to use the replicated layout, regardless of the filesystem defaults; we should do that for the WAL.
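
Since Hadoop 3.0, the {{FileSystem}} builder API can request the replicated layout explicitly when creating a file, overriding any erasure coding policy inherited from the parent directory. A minimal sketch of what the writer could do; {{createWalStream}} is a hypothetical helper, and the real fix would need proper error handling:

{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FSDataOutputStream, Path}
import org.apache.hadoop.hdfs.DistributedFileSystem

// Hypothetical helper: create the WAL segment with replication forced,
// assuming the Hadoop 3.x builder API is available.
def createWalStream(path: Path, conf: Configuration): FSDataOutputStream = {
  path.getFileSystem(conf) match {
    case dfs: DistributedFileSystem =>
      // replicate() overrides any erasure coding policy on the parent
      // directory, so hflush()/hsync() behave normally on this file.
      dfs.createFile(path)
        .replicate()
        .recursive() // create missing parent dirs, like create() does
        .build()
    case fs =>
      // Non-HDFS filesystems have no erasure coding to opt out of.
      fs.create(path)
  }
}
{code}

If this has to compile against Hadoop 2.x profiles, where the builder API does not exist, the {{replicate()}} call would likely need to go through reflection or a version check.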


