Posted to reviews@spark.apache.org by CrazyJvm <gi...@git.apache.org> on 2014/05/14 09:47:44 UTC

[GitHub] spark pull request: default task number misleading in several plac...

GitHub user CrazyJvm opened a pull request:

    https://github.com/apache/spark/pull/766

    default task number misleading in several places

      private[streaming] def defaultPartitioner(numPartitions: Int = self.ssc.sc.defaultParallelism) = {
        new HashPartitioner(numPartitions)
      }
    
    This shows that the default task number in Spark Streaming relies on the defaultParallelism variable in SparkContext, which is determined by the config property spark.default.parallelism.
    
    For background on the property "spark.default.parallelism", see https://github.com/apache/spark/pull/389
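    To illustrate the mechanism without a Spark dependency, here is a minimal, self-contained Scala sketch. The names StreamingContextSketch and HashPartitionerSketch are hypothetical stand-ins, not Spark APIs; the point is only how a default parameter can pick up the task number from a context-level config property:

    ```scala
    // Hypothetical stand-in for HashPartitioner: records only the partition count.
    class HashPartitionerSketch(val numPartitions: Int)

    // Hypothetical stand-in for a context that exposes config properties.
    class StreamingContextSketch(conf: Map[String, String]) {
      // Mimics SparkContext.defaultParallelism: read spark.default.parallelism,
      // falling back to a fixed value when the property is unset.
      val defaultParallelism: Int =
        conf.get("spark.default.parallelism").map(_.toInt).getOrElse(2)

      // Callers that omit numPartitions pick up whatever
      // defaultParallelism resolved to from the config.
      def defaultPartitioner(numPartitions: Int = defaultParallelism): HashPartitionerSketch =
        new HashPartitionerSketch(numPartitions)
    }
    ```

    With spark.default.parallelism set to 8, defaultPartitioner() yields a partitioner with 8 partitions; an explicitly passed argument still takes precedence over the default.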

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/CrazyJvm/spark patch-7

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/766.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #766
    
----
commit cc5b66c1883eca8862b8f37ef50d64cc0408c54c
Author: Chen Chao <cr...@gmail.com>
Date:   2014-05-14T07:45:10Z

    default task number misleading in several places 
    
    <code>
      private[streaming] def defaultPartitioner(numPartitions: Int = self.ssc.sc.defaultParallelism) = {
        new HashPartitioner(numPartitions)
      }
    </code>
    
    This shows that the default task number in Spark Streaming relies on the defaultParallelism variable in SparkContext, which is determined by the config property spark.default.parallelism.
    
    For background on the property "spark.default.parallelism", see https://github.com/apache/spark/pull/389

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] spark pull request: default task number misleading in several plac...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/766#discussion_r12655504
  
    --- Diff: docs/streaming-programming-guide.md ---
    @@ -956,9 +957,10 @@ before further processing.
     ### Level of Parallelism in Data Processing
     Cluster resources maybe under-utilized if the number of parallel tasks used in any stage of the
     computation is not high enough. For example, for distributed reduce operations like `reduceByKey`
    -and `reduceByKeyAndWindow`, the default number of parallel tasks is 8. You can pass the level of
    -parallelism as an argument (see the
    -[`PairDStreamFunctions`](api/scala/index.html#org.apache.spark.streaming.dstream.PairDStreamFunctions)
    +and `reduceByKeyAndWindow`, the default number of parallel tasks is decided by the [config property]
    +(configuration.html#spark-properties) `spark.default.parallelism`. You can pass the level of
    +parallelism as an argument (see the [`PairDStreamFunctions`]
    --- End diff --
    
    remove the "the" before [`PairDStreamFunctions`]



[GitHub] spark pull request: default task number misleading in several plac...

Posted by CrazyJvm <gi...@git.apache.org>.
Github user CrazyJvm commented on a diff in the pull request:

    https://github.com/apache/spark/pull/766#discussion_r12670482
  
    --- Diff: docs/streaming-programming-guide.md ---
    @@ -522,9 +522,9 @@ common ones are as follows.
       <td> <b>reduceByKey</b>(<i>func</i>, [<i>numTasks</i>]) </td>
       <td> When called on a DStream of (K, V) pairs, return a new DStream of (K, V) pairs where the
       values for each key are aggregated using the given reduce function. <b>Note:</b> By default,
    -  this uses Spark's default number of parallel tasks (2 for local machine, 8 for a cluster) to
    -  do the grouping. You can pass an optional <code>numTasks</code> argument to set a different
    -  number of tasks.</td>
    +  this uses Spark's default number of parallel tasks (local mode is 2, while cluster mode is
    --- End diff --
    
    it's good i think : )



[GitHub] spark pull request: default task number misleading in several plac...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/766#issuecomment-43054810
  
    All automated tests passed.
    Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14971/



[GitHub] spark pull request: default task number misleading in several plac...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/766#issuecomment-43051614
  
    Merged build started. 



[GitHub] spark pull request: default task number misleading in several plac...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/766#issuecomment-43054809
  
    Merged build finished. All automated tests passed.



[GitHub] spark pull request: default task number misleading in several plac...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/766#discussion_r12655483
  
    --- Diff: docs/streaming-programming-guide.md ---
    @@ -522,9 +522,9 @@ common ones are as follows.
       <td> <b>reduceByKey</b>(<i>func</i>, [<i>numTasks</i>]) </td>
       <td> When called on a DStream of (K, V) pairs, return a new DStream of (K, V) pairs where the
       values for each key are aggregated using the given reduce function. <b>Note:</b> By default,
    -  this uses Spark's default number of parallel tasks (2 for local machine, 8 for a cluster) to
    -  do the grouping. You can pass an optional <code>numTasks</code> argument to set a different
    -  number of tasks.</td>
    +  this uses Spark's default number of parallel tasks (local mode is 2, while cluster mode is
    --- End diff --
    
    how about "2 for local mode, and in cluster mode the number is determined by ..."



[GitHub] spark pull request: default task number misleading in several plac...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/766#issuecomment-43159077
  
    Merged build started. 



[GitHub] spark pull request: default task number misleading in several plac...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/766#issuecomment-43161045
  
    All automated tests passed.
    Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15001/



[GitHub] spark pull request: default task number misleading in several plac...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/766#issuecomment-43159068
  
     Merged build triggered. 



[GitHub] spark pull request: default task number misleading in several plac...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/766#issuecomment-43159422
  
    Thanks. I've merged this.



[GitHub] spark pull request: default task number misleading in several plac...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/766#issuecomment-43161044
  
    Merged build finished. All automated tests passed.



[GitHub] spark pull request: default task number misleading in several plac...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/766#issuecomment-43051596
  
     Merged build triggered. 



[GitHub] spark pull request: default task number misleading in several plac...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/766

