Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2019/10/28 12:37:00 UTC

[jira] [Commented] (SPARK-29534) Hanging tasks in DiskBlockObjectWriter.commitAndGet while calling native FileDispatcherImpl.size0

    [ https://issues.apache.org/jira/browse/SPARK-29534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961001#comment-16961001 ] 

Hyukjin Kwon commented on SPARK-29534:
--------------------------------------

Can you post the code you ran as well?

>  Hanging tasks in DiskBlockObjectWriter.commitAndGet while calling native FileDispatcherImpl.size0
> --------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-29534
>                 URL: https://issues.apache.org/jira/browse/SPARK-29534
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.2
>            Reporter: Bogdan
>            Priority: Major
>
> Tasks are hanging in the native _FileDispatcherImpl.size0_ call, invoked from _DiskBlockObjectWriter.commitAndGet_. Could _DiskBlockObjectWriter_ be enhanced to handle this type of behaviour?
>  
> The behaviour is consistent and happens on a daily basis. It has been temporarily addressed by using speculative execution. However, for longer tasks (1h+) the impact on runtime is significant.
>  
> sun.nio.ch.FileDispatcherImpl.size0(Native Method)
>  sun.nio.ch.FileDispatcherImpl.size(FileDispatcherImpl.java:88)
>  sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:264) => holding Monitor(java.lang.Object@853892634)
>  org.apache.spark.storage.DiskBlockObjectWriter.commitAndGet(DiskBlockObjectWriter.scala:183)
>  org.apache.spark.shuffle.sort.ShuffleExternalSorter.writeSortedFile(ShuffleExternalSorter.java:204)
>  org.apache.spark.shuffle.sort.ShuffleExternalSorter.spill(ShuffleExternalSorter.java:272)
>  org.apache.spark.memory.MemoryConsumer.spill(MemoryConsumer.java:65)
>  org.apache.spark.shuffle.sort.ShuffleExternalSorter.insertRecord(ShuffleExternalSorter.java:403)
>  org.apache.spark.shuffle.sort.UnsafeShuffleWriter.insertRecordIntoSorter(UnsafeShuffleWriter.java:267)
>  org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:188)
>  org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>  org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
>  org.apache.spark.scheduler.Task.run(Task.scala:121)
>  org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>  org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>  org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  java.lang.Thread.run(Thread.java:748)
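For context on the stack trace above: _DiskBlockObjectWriter.commitAndGet_ records the committed offset by calling _FileChannel.position()_ on a channel opened in append mode, and for an append-mode channel the JDK resolves the position through the native _FileDispatcherImpl.size0_ call (a stat-like syscall on the spill file) while holding the channel's position lock. A minimal, self-contained Java sketch of that same call path (the file name and helper are illustrative, not Spark code):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;

public class PositionDemo {
    // Append `data` to `f` and return the channel position afterwards,
    // mirroring the write-then-position() step in commitAndGet.
    static long appendAndGetPosition(File f, byte[] data) throws Exception {
        try (FileOutputStream out = new FileOutputStream(f, /* append = */ true)) {
            FileChannel ch = out.getChannel();
            out.write(data);
            out.flush();
            // For an append-mode channel, position() synchronizes on the
            // channel's position lock and determines the offset via the
            // native size call -- the frame the reported tasks are stuck in.
            return ch.position();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("spill", ".data");
        f.deleteOnExit();
        // Appending 3 bytes to an empty file yields position 3.
        System.out.println(appendAndGetPosition(f, new byte[]{1, 2, 3}));
    }
}
```

If that native size call blocks (for example on a slow or contended filesystem), the task hangs exactly as shown in the trace, with no Java-level timeout to interrupt it.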
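The speculative-execution workaround mentioned in the report corresponds to Spark settings along these lines (the threshold values here are illustrative, not the reporter's actual configuration):

```
spark.speculation            true
spark.speculation.quantile   0.75
spark.speculation.multiplier 1.5
```

With these settings, once 75% of a stage's tasks finish, Spark relaunches copies of tasks running more than 1.5x slower than the median, which lets a healthy copy finish even if the original is stuck in the native call. As noted above, this helps less for tasks that already run for an hour or more.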



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org