Posted to reviews@spark.apache.org by zhuoliu <gi...@git.apache.org> on 2016/02/03 22:37:30 UTC

[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

GitHub user zhuoliu opened a pull request:

    https://github.com/apache/spark/pull/11060

    [SPARK-11316] setupGroups in coalescedRDD causes super long delay.

In CoalescedRDD, setupGroups can cause a very long delay because of the O(n^2) behavior of its second while loop. That loop ensures that each PartitionGroup contains at least one partition, but in some cases it takes O(n^2) iterations to finish. When the number of partitions is very large, this can take tens of minutes or even hours to complete.
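
For context, a minimal sketch of the shape of the problem, assuming a heavily simplified setupGroups; the names `fillGroups` and `tryAssign` are hypothetical stand-ins (the latter for addPartToPGroup) and only illustrate how the "ensure at least one partition" pass can degenerate to O(n^2):

    // Hypothetical, simplified shape of the second while loop in setupGroups.
    // For each of roughly targetLen groups, the inner loop may probe up to
    // targetLen candidate partitions, so the pass can cost O(targetLen^2) overall.
    def fillGroups(targetLen: Int, tryAssign: Int => Boolean): Unit = {
      var groupIdx = 0
      while (groupIdx < targetLen) {              // one PartitionGroup per slot
        var tries = 0
        var assigned = false
        while (!assigned && tries < targetLen) {  // probe candidate partitions
          assigned = tryAssign(tries)             // may consult partition locations each time
          tries += 1
        }
        groupIdx += 1
      }
    }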

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zhuoliu/spark 11316

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/11060.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #11060
    
----
commit e202f579c994be49b255edd4014c0e7cfebaad78
Author: zhuol <zh...@yahoo-inc.com>
Date:   2016-02-03T21:21:50Z

    [SPARK-11316] setupGroups in coalesceRDD causes super long delay due to the O(n^2) loop.

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11060#issuecomment-179674747
  
    **[Test build #2512 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2512/consoleFull)** for PR 11060 at commit [`0113217`](https://github.com/apache/spark/commit/0113217ce85dbee59873cea5651b2f924ef64720).




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11060#issuecomment-179485322
  
    Can one of the admins verify this patch?




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by zhuoliu <gi...@git.apache.org>.
Github user zhuoliu commented on the pull request:

    https://github.com/apache/spark/pull/11060#issuecomment-188343621
  
    @tgravescs Closing this one, since we have another pull request up for this issue:
    https://github.com/apache/spark/pull/11327




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11060#discussion_r52003141
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
    @@ -273,7 +273,8 @@ private class PartitionCoalescer(maxPartitions: Int, prev: RDD[_], balanceSlack:
           groupArr += pgroup
           groupHash.getOrElseUpdate(nxt_replica, ArrayBuffer()) += pgroup
           var tries = 0
    -      while (!addPartToPGroup(nxt_part, pgroup) && tries < targetLen) { // ensure at least one part
    +      // ensure at least one part
    +      while (pgroup.arr.isEmpty && !addPartToPGroup(nxt_part, pgroup) && tries < targetLen) {
    --- End diff --
    
    Oops, thanks; ignore this then.




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11060#discussion_r51861250
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
    @@ -273,7 +273,8 @@ private class PartitionCoalescer(maxPartitions: Int, prev: RDD[_], balanceSlack:
           groupArr += pgroup
           groupHash.getOrElseUpdate(nxt_replica, ArrayBuffer()) += pgroup
           var tries = 0
    -      while (!addPartToPGroup(nxt_part, pgroup) && tries < targetLen) { // ensure at least one part
    +      // ensure at least one part
    +      while (pgroup.arr.isEmpty && !addPartToPGroup(nxt_part, pgroup) && tries < targetLen) {
    --- End diff --
    
    Is pgroup.arr an array? (Sorry, on a phone.) Use .length == 0 to avoid an implicit conversion if speed is essential.




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by holdenk <gi...@git.apache.org>.
Github user holdenk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11060#discussion_r51912966
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
    @@ -273,7 +273,8 @@ private class PartitionCoalescer(maxPartitions: Int, prev: RDD[_], balanceSlack:
           groupArr += pgroup
           groupHash.getOrElseUpdate(nxt_replica, ArrayBuffer()) += pgroup
           var tries = 0
    -      while (!addPartToPGroup(nxt_part, pgroup) && tries < targetLen) { // ensure at least one part
    +      // ensure at least one part
    +      while (pgroup.arr.isEmpty && !addPartToPGroup(nxt_part, pgroup) && tries < targetLen) {
    --- End diff --
    
    @srowen It is a mutable ArrayBuffer; looking at http://www.scala-lang.org/api/2.11.0/index.html#scala.collection.mutable.ArrayBuffer , I think isEmpty should be "safe", since ArrayBuffer explicitly inherits from IndexedSeqOptimized, which provides a concrete isEmpty without an implicit conversion.
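
    For illustration, a minimal sketch of the two equivalent checks on an ArrayBuffer, assuming Scala 2.11; the object and value names below are invented for the example:

        object IsEmptyCheck extends App {
          import scala.collection.mutable.ArrayBuffer

          val parts = ArrayBuffer.empty[Int]

          // Both checks are equivalent here; on ArrayBuffer, isEmpty is a concrete method
          // (inherited via IndexedSeqOptimized in 2.11), so no implicit conversion is involved.
          val emptyViaIsEmpty = parts.isEmpty
          val emptyViaLength  = parts.length == 0
          assert(emptyViaIsEmpty == emptyViaLength)
        }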




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by tgravescs <gi...@git.apache.org>.
Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11060#discussion_r52244984
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
    @@ -273,7 +273,8 @@ private class PartitionCoalescer(maxPartitions: Int, prev: RDD[_], balanceSlack:
           groupArr += pgroup
           groupHash.getOrElseUpdate(nxt_replica, ArrayBuffer()) += pgroup
           var tries = 0
    -      while (!addPartToPGroup(nxt_part, pgroup) && tries < targetLen) { // ensure at least one part
    +      // ensure at least one part
    +      while (pgroup.arr.isEmpty && !addPartToPGroup(nxt_part, pgroup) && tries < targetLen) {
    --- End diff --
    
    I was looking at this some more, and this isn't the right solution. The issue we are seeing is that it runs out of partitions to hand out but still loops ~n^2 times (300 * 2000 in our case), and each of those iterations fetches partition information from HDFS. The inner 2000 iterations take about 3 seconds, and doing that 300 times adds up to roughly 15 minutes.
    
    We shouldn't run the inner while loop at all once all partitions have been given out, but I want to understand a few things better to make sure that is all.
    
    Working with @zhuoliu
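
    As a rough illustration of that early-exit idea, a hypothetical sketch only (not the change that ultimately landed in the follow-up pull request); the `partsRemaining` counter and `tryAssign` helper are invented for the example:

        // Hypothetical sketch: track how many partitions remain unassigned and skip the
        // inner probing loop once everything has been handed out, so the cost is bounded
        // by the number of partitions instead of targetLen^2.
        def fillGroups(targetLen: Int, numParts: Int, tryAssign: Int => Boolean): Unit = {
          var partsRemaining = numParts
          var groupIdx = 0
          while (groupIdx < targetLen && partsRemaining > 0) {
            var tries = 0
            var assigned = false
            while (!assigned && tries < targetLen) {
              assigned = tryAssign(tries)
              tries += 1
            }
            if (assigned) partsRemaining -= 1
            groupIdx += 1
          }
        }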




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11060#discussion_r51835677
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
    @@ -273,7 +273,8 @@ private class PartitionCoalescer(maxPartitions: Int, prev: RDD[_], balanceSlack:
           groupArr += pgroup
           groupHash.getOrElseUpdate(nxt_replica, ArrayBuffer()) += pgroup
           var tries = 0
    -      while (!addPartToPGroup(nxt_part, pgroup) && tries < targetLen) { // ensure at least one part
    +      // ensure at least one part
    +      while (pgroup.arr.isEmpty && !addPartToPGroup(nxt_part, pgroup) && tries < targetLen) {
    --- End diff --
    
    Thanks for doing this. Can you explain why this is a correct fix and doesn't change behavior, so reviewers don't need to read and understand the whole block?





[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11060#issuecomment-179714901
  
    **[Test build #2512 has finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2512/consoleFull)** for PR 11060 at commit [`0113217`](https://github.com/apache/spark/commit/0113217ce85dbee59873cea5651b2f924ef64720).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by holdenk <gi...@git.apache.org>.
Github user holdenk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11060#discussion_r51815781
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
    @@ -273,7 +273,8 @@ private class PartitionCoalescer(maxPartitions: Int, prev: RDD[_], balanceSlack:
           groupArr += pgroup
           groupHash.getOrElseUpdate(nxt_replica, ArrayBuffer()) += pgroup
           var tries = 0
    -      while (!addPartToPGroup(nxt_part, pgroup) && tries < targetLen) { // ensure at least one part
    +      // ensure at least one part
    +      while (pgroup.arr.size <= 0 && !addPartToPGroup(nxt_part, pgroup) && tries < targetLen) {
    --- End diff --
    
    Maybe we could simplify `.size <= 0` to just `isEmpty`?




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by zhuoliu <gi...@git.apache.org>.
Github user zhuoliu closed the pull request at:

    https://github.com/apache/spark/pull/11060




[GitHub] spark pull request: [SPARK-11316] setupGroups in coalescedRDD caus...

Posted by zhuoliu <gi...@git.apache.org>.
Github user zhuoliu commented on the pull request:

    https://github.com/apache/spark/pull/11060#issuecomment-179589375
  
    @alig Hi Ali, would you mind having a look?

