Posted to reviews@spark.apache.org by chenghao-intel <gi...@git.apache.org> on 2015/02/03 15:02:19 UTC

[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

GitHub user chenghao-intel opened a pull request:

    https://github.com/apache/spark/pull/4336

    [SQL] Minor changes for dataframe implementation

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chenghao-intel/spark dataframe_minor

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/4336.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4336
    
----
commit 3293408b812944735e03f2a41221851faffb3669
Author: Cheng Hao <ha...@intel.com>
Date:   2015-02-03T13:53:38Z

    minor changes

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4336#discussion_r24053977
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala ---
    @@ -260,11 +260,11 @@ private[sql] class DataFrameImpl protected[sql](
     
       override def take(n: Int): Array[Row] = head(n)
     
    -  override def collect(): Array[Row] = queryExecution.executedPlan.executeCollect()
    +  override def collect(): Array[Row] = rdd.collect()
     
       override def collectAsList(): java.util.List[Row] = java.util.Arrays.asList(rdd.collect() :_*)
     
    -  override def count(): Long = groupBy().count().rdd.collect().head.getLong(0)
    +  override def count(): Long = rdd.count()
    --- End diff --
    
    @marmbrus is correct. rdd.count() doesn't go through the optimizer. The original solution goes through the optimizer.
    
    Maybe a better change would be to add an inline comment explaining that the original implementation is there to make sure the query goes through the optimizer.
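
    A minimal sketch of such a comment, keeping the original method bodies from the diff above (only the comments themselves are new):

        // Run the count as an aggregate so that the query goes through the
        // Catalyst optimizer; a columnar source can then answer it without
        // reading any actual column data.
        override def count(): Long = groupBy().count().rdd.collect().head.getLong(0)

        // Collect rows directly from the executed physical plan rather than
        // going through the generic rdd.
        override def collect(): Array[Row] = queryExecution.executedPlan.executeCollect()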



[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/4336#issuecomment-72667971
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/26653/
    Test PASSed.



[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by marmbrus <gi...@git.apache.org>.
Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4336#discussion_r24055512
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala ---
    @@ -260,11 +260,11 @@ private[sql] class DataFrameImpl protected[sql](
     
       override def take(n: Int): Array[Row] = head(n)
     
    -  override def collect(): Array[Row] = queryExecution.executedPlan.executeCollect()
    +  override def collect(): Array[Row] = rdd.collect()
     
       override def collectAsList(): java.util.List[Row] = java.util.Arrays.asList(rdd.collect() :_*)
     
    -  override def count(): Long = groupBy().count().rdd.collect().head.getLong(0)
    +  override def count(): Long = rdd.count()
    --- End diff --
    
    You should always go through the optimizer :)



[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by chenghao-intel <gi...@git.apache.org>.
Github user chenghao-intel commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4336#discussion_r24055684
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala ---
    @@ -260,11 +260,11 @@ private[sql] class DataFrameImpl protected[sql](
     
       override def take(n: Int): Array[Row] = head(n)
     
    -  override def collect(): Array[Row] = queryExecution.executedPlan.executeCollect()
    +  override def collect(): Array[Row] = rdd.collect()
     
       override def collectAsList(): java.util.List[Row] = java.util.Arrays.asList(rdd.collect() :_*)
     
    -  override def count(): Long = groupBy().count().rdd.collect().head.getLong(0)
    +  override def count(): Long = rdd.count()
    --- End diff --
    
    Ok, that makes sense, thanks for the explanation. :)



[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by chenghao-intel <gi...@git.apache.org>.
Github user chenghao-intel closed the pull request at:

    https://github.com/apache/spark/pull/4336



[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4336#discussion_r24055445
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala ---
    @@ -260,11 +260,11 @@ private[sql] class DataFrameImpl protected[sql](
     
       override def take(n: Int): Array[Row] = head(n)
     
    -  override def collect(): Array[Row] = queryExecution.executedPlan.executeCollect()
    +  override def collect(): Array[Row] = rdd.collect()
     
       override def collectAsList(): java.util.List[Row] = java.util.Arrays.asList(rdd.collect() :_*)
     
    -  override def count(): Long = groupBy().count().rdd.collect().head.getLong(0)
    +  override def count(): Long = rdd.count()
    --- End diff --
    
    As an example of a query that can take advantage of the optimizer:
    
    df.count()
    
    If you run the count through the rdd, then all columns are extracted. If you run count as is, no actual columns are read.
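
    A rough illustration (the data source and path here are hypothetical, just for the sake of example):

        // A DataFrame backed by a columnar source such as Parquet.
        val df = sqlContext.parquetFile("/path/to/events")

        df.count()      // planned by Catalyst: the scan does not need to read any columns
        df.rdd.count()  // every row is materialized first, so all columns are extracted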




[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/4336#issuecomment-72655586
  
      [Test build #26653 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26653/consoleFull) for PR 4336 at commit [`3293408`](https://github.com/apache/spark/commit/3293408b812944735e03f2a41221851faffb3669).
     * This patch merges cleanly.



[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by chenghao-intel <gi...@git.apache.org>.
Github user chenghao-intel commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4336#discussion_r24053791
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala ---
    @@ -260,11 +260,11 @@ private[sql] class DataFrameImpl protected[sql](
     
       override def take(n: Int): Array[Row] = head(n)
     
    -  override def collect(): Array[Row] = queryExecution.executedPlan.executeCollect()
    +  override def collect(): Array[Row] = rdd.collect()
     
       override def collectAsList(): java.util.List[Row] = java.util.Arrays.asList(rdd.collect() :_*)
     
    -  override def count(): Long = groupBy().count().rdd.collect().head.getLong(0)
    +  override def count(): Long = rdd.count()
    --- End diff --
    
    Oh? If I understand correctly, I think rdd.count() is already the most optimized path (the partial aggregation is done before shuffling). @rxin, can you confirm that? Sorry if I am wrong.
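
    Roughly, what I have in mind is that rdd.count() already counts each partition locally and just adds the results up, with no shuffle involved; something along these lines (a simplified sketch, not the actual implementation):

        // Count each partition locally, then sum the per-partition counts.
        val total = rdd.mapPartitions(iter => Iterator(iter.size.toLong)).reduce(_ + _)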




[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/4336#issuecomment-72667963
  
      [Test build #26653 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26653/consoleFull) for PR 4336 at commit [`3293408`](https://github.com/apache/spark/commit/3293408b812944735e03f2a41221851faffb3669).
     * This patch **passes all tests**.
     * This patch merges cleanly.
     * This patch adds no public classes.



[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by marmbrus <gi...@git.apache.org>.
Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4336#discussion_r24034304
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala ---
    @@ -260,11 +260,11 @@ private[sql] class DataFrameImpl protected[sql](
     
       override def take(n: Int): Array[Row] = head(n)
     
    -  override def collect(): Array[Row] = queryExecution.executedPlan.executeCollect()
    +  override def collect(): Array[Row] = rdd.collect()
     
       override def collectAsList(): java.util.List[Row] = java.util.Arrays.asList(rdd.collect() :_*)
     
    -  override def count(): Long = groupBy().count().rdd.collect().head.getLong(0)
    +  override def count(): Long = rdd.count()
    --- End diff --
    
    Are these changes correct? Or are you removing the optimizations that we have in place for count and collect?



[GitHub] spark pull request: [SQL] Minor changes for dataframe implementati...

Posted by chenghao-intel <gi...@git.apache.org>.
Github user chenghao-intel commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4336#discussion_r24055322
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala ---
    @@ -260,11 +260,11 @@ private[sql] class DataFrameImpl protected[sql](
     
       override def take(n: Int): Array[Row] = head(n)
     
    -  override def collect(): Array[Row] = queryExecution.executedPlan.executeCollect()
    +  override def collect(): Array[Row] = rdd.collect()
     
       override def collectAsList(): java.util.List[Row] = java.util.Arrays.asList(rdd.collect() :_*)
     
    -  override def count(): Long = groupBy().count().rdd.collect().head.getLong(0)
    +  override def count(): Long = rdd.count()
    --- End diff --
    
    Hmm, but `rdd.count()` doesn't need to go through the Catalyst optimizer, does it? It is already executed in parallel.

