Posted to reviews@spark.apache.org by jinxing64 <gi...@git.apache.org> on 2017/02/27 13:04:06 UTC

[GitHub] spark issue #16867: [WIP][SPARK-16929] Improve performance when check specul...

Github user jinxing64 commented on the issue:

    https://github.com/apache/spark/pull/16867
  
    @squito
    Thanks a lot for your comments : )
    >When check speculatable tasks in TaskSetManager, current code scan all task infos and sort durations of successful tasks in O(NlogN) time complexity.
    
    `checkSpeculatableTasks` is scheduled every 100ms by `scheduleAtFixedRate` (not `scheduleWithFixedDelay`), so when one check overruns the period, the idle gap before the next check can shrink below 100ms. In my cluster (yarn-cluster mode), if the task set holds over 300000 tasks and the driver runs on a machine with poor CPU performance, the `Arrays.sort` can easily take more than 100ms. Since `checkSpeculatableTasks` synchronizes on `TaskSchedulerImpl`, I suspect that is why my driver hangs.
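A small standalone sketch of that scheduling difference (the 150 ms task body and 100 ms period are made-up numbers for illustration, not Spark's actual check cost):

```scala
import java.util.concurrent.{CountDownLatch, Executors, TimeUnit}

object FixedRateSketch {
  def main(args: Array[String]): Unit = {
    val runs = 3
    val starts = new Array[Long](runs)
    var i = 0
    val latch = new CountDownLatch(runs)
    val ses = Executors.newSingleThreadScheduledExecutor()
    ses.scheduleAtFixedRate(new Runnable {
      def run(): Unit = {
        if (i < runs) { starts(i) = System.nanoTime(); i += 1 }
        Thread.sleep(150L) // a check body that overruns the 100 ms period
        latch.countDown()
      }
    }, 0L, 100L, TimeUnit.MILLISECONDS)
    latch.await()
    ses.shutdownNow()
    // scheduleAtFixedRate aims run n+1 at start + (n+1) * 100 ms; when a run
    // takes 150 ms the executor is already late, so the next run starts
    // immediately and the idle gap between runs drops to ~0 ms.
    val gapMs = (starts(1) - starts(0)) / 1000000L
    println(s"gap between consecutive starts: ~$gapMs ms")
  }
}
```

With `scheduleWithFixedDelay` the same overrun would instead push the next start to roughly 150 + 100 ms after the previous one, guaranteeing idle time between checks.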
    
    I get the median duration via `TreeSet.slice`, which comes from `IterableLike` and unfortunately cannot jump straight to the middle position; the lookup in this PR is therefore O(n).
    I could reach the middle node by reflection, but I would rather not: I think it harms code clarity.
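To make the O(n) point concrete, here is a minimal sketch (the durations are made-up values, not data from the PR):

```scala
import scala.collection.immutable.TreeSet

object MedianSliceSketch {
  def main(args: Array[String]): Unit = {
    // Made-up durations (ms) of successful tasks; a TreeSet keeps them
    // sorted as they are inserted, so no O(N log N) Arrays.sort is needed
    // on every speculation check.
    val durations = TreeSet(120L, 45L, 300L, 80L, 150L)
    // slice is inherited from the generic collection trait: it walks the
    // iterator from the beginning, so extracting the middle element costs
    // O(n) rather than O(log n).
    val mid = durations.size / 2
    val median = durations.slice(mid, mid + 1).head
    println(median) // sorted order is 45, 80, 120, 150, 300 -> median 120
  }
}
```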


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org