Posted to reviews@spark.apache.org by rxin <gi...@git.apache.org> on 2016/03/15 20:12:29 UTC

[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

GitHub user rxin opened a pull request:

    https://github.com/apache/spark/pull/11737

    [SPARK-13898][SQL] Merge DatasetHolder and DataFrameHolder

    ## What changes were proposed in this pull request?
    This patch merges DatasetHolder and DataFrameHolder. This makes more sense now that DataFrame and Dataset are a single class.
    
    ## How was this patch tested?
    Updated existing unit tests that test these implicits.
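
    For reference, a minimal sketch of the holder/implicit pattern behind `toDS()`/`toDF()` that this change collapses into a single holder. All names below (e.g. `ToyDatasetHolder`, `ToyImplicits`) are illustrative stand-ins, not the actual Spark API:
    ```
    import scala.language.implicitConversions

    // Toy stand-in for Dataset; in Spark, DataFrame is just Dataset[Row].
    case class ToyDataset[T](data: Seq[T]) {
      def toDF(colNames: String*): ToyDataset[T] = this // pretend to rename columns
    }

    // The holder exists so the implicit conversion only exposes toDS/toDF,
    // rather than leaking every Dataset method onto arbitrary Seqs.
    case class ToyDatasetHolder[T](private val ds: ToyDataset[T]) {
      def toDS(): ToyDataset[T] = ds
      def toDF(): ToyDataset[T] = ds
      def toDF(colNames: String*): ToyDataset[T] = ds.toDF(colNames: _*)
    }

    object ToyImplicits {
      implicit def localSeqToDatasetHolder[T](s: Seq[T]): ToyDatasetHolder[T] =
        ToyDatasetHolder(ToyDataset(s))
    }

    object HolderDemo extends App {
      import ToyImplicits._
      val ds = Seq(1, 2, 3).toDS()                         // Dataset-style entry point
      val df = Seq((1, "a"), (2, "b")).toDF("id", "name")  // DataFrame-style entry point
      println(ds.data)
      println(df.data)
    }
    ```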
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/rxin/spark SPARK-13898

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/11737.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #11737
    




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199386215
  
    **[Test build #53687 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53687/consoleFull)** for PR 11737 at commit [`776e1a1`](https://github.com/apache/spark/commit/776e1a1b14ee2270133505874555f7ba5c49e25e).




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11737#discussion_r56879102
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
    @@ -729,7 +729,7 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
       }
     
       test("SPARK-5203 union with different decimal precision") {
    -    Seq.empty[(Decimal, Decimal)]
    +    Seq.empty[(java.math.BigDecimal, java.math.BigDecimal)]
    --- End diff --
    
    Decimal is an internal type (although I think we should expose it going forward).
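
    As a quick illustrative sketch of that distinction (the `Amount` class and `DecimalTypesDemo` object are hypothetical, and the commented-out `toDS()` line assumes `spark.implicits._` is in scope, as in the test suites):
    ```
    // java.math.BigDecimal (or scala.math.BigDecimal) is the public type to use in
    // user-facing case classes and tuples; org.apache.spark.sql.types.Decimal is
    // Spark's internal representation.
    import java.math.BigDecimal

    case class Amount(value: BigDecimal)

    object DecimalTypesDemo extends App {
      val rows = Seq(Amount(new BigDecimal("1.23")), Amount(new BigDecimal("4.56")))
      println(rows)
      // With a SparkSession available:
      //   import spark.implicits._
      //   val ds = rows.toDS()   // the encoder maps BigDecimal to Spark's DecimalType
    }
    ```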




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197179628
  
    Merged build finished. Test FAILed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197029327
  
    **[Test build #53215 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53215/consoleFull)** for PR 11737 at commit [`371f4e8`](https://github.com/apache/spark/commit/371f4e8184fbd2e7b96067b65a051d1b78f3ecf3).
     * This patch **fails Spark unit tests**.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-196990391
  
    **[Test build #53215 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53215/consoleFull)** for PR 11737 at commit [`371f4e8`](https://github.com/apache/spark/commit/371f4e8184fbd2e7b96067b65a051d1b78f3ecf3).




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197029783
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53215/
    Test FAILed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197702816
  
    **[Test build #53394 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53394/consoleFull)** for PR 11737 at commit [`59cae95`](https://github.com/apache/spark/commit/59cae95a34fb8bd8cfee0da5b34fc5d27b0f85d2).




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-196983491
  
    **[Test build #53211 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53211/consoleFull)** for PR 11737 at commit [`b7c88cd`](https://github.com/apache/spark/commit/b7c88cd253b952fb7dc6035c1e7f6f26d096f212).




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199528337
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53705/
    Test PASSed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197151199
  
    **[Test build #53272 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53272/consoleFull)** for PR 11737 at commit [`e422f52`](https://github.com/apache/spark/commit/e422f529e94506905563d0b487851a891b146605).




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-196981467
  
    **[Test build #53210 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53210/consoleFull)** for PR 11737 at commit [`837a2ba`](https://github.com/apache/spark/commit/837a2ba281f9fdbdb4ea218b6479f2ac9a5db81b).




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197734599
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53394/
    Test FAILed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197029780
  
    Merged build finished. Test FAILed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199527952
  
    **[Test build #53705 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53705/consoleFull)** for PR 11737 at commit [`501c9b9`](https://github.com/apache/spark/commit/501c9b95223b939522516d4edd8960cd7fbcfd03).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds the following public classes _(experimental)_:
      * `        // `INSTANCE()` method to get the single instance of class `$read`. Then call `$iw()``




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-196986237
  
    Merged build finished. Test FAILed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199514653
  
    **[Test build #2655 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2655/consoleFull)** for PR 11737 at commit [`501c9b9`](https://github.com/apache/spark/commit/501c9b95223b939522516d4edd8960cd7fbcfd03).




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197735040
  
    @jodersky there are still two failures here. I probably won't have time to look at them until next week. If you have some time, please go for it! I think some of them are legitimate bugs in the Dataset encoders exposed by this change.





[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199563620
  
    **[Test build #2655 has finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2655/consoleFull)** for PR 11737 at commit [`501c9b9`](https://github.com/apache/spark/commit/501c9b95223b939522516d4edd8960cd7fbcfd03).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds the following public classes _(experimental)_:
      * `        // `INSTANCE()` method to get the single instance of class `$read`. Then call `$iw()``




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199442385
  
    Merged build finished. Test FAILed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by liancheng <gi...@git.apache.org>.
Github user liancheng commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199352833
  
    PR #11816 has been merged.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197734353
  
    **[Test build #53394 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53394/consoleFull)** for PR 11737 at commit [`59cae95`](https://github.com/apache/spark/commit/59cae95a34fb8bd8cfee0da5b34fc5d27b0f85d2).
     * This patch **fails Spark unit tests**.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197734597
  
    Merged build finished. Test FAILed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199442387
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53687/
    Test FAILed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by cloud-fan <gi...@git.apache.org>.
Github user cloud-fan commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199552664
  
    LGTM




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199553500
  
    Merging in master.





[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-196986007
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53211/
    Test FAILed.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199442065
  
    **[Test build #53687 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53687/consoleFull)** for PR 11737 at commit [`776e1a1`](https://github.com/apache/spark/commit/776e1a1b14ee2270133505874555f7ba5c49e25e).
     * This patch **fails Spark unit tests**.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-198059695
  
    cc @cloud-fan can you take a look at the serialization failure?
    
    The exception is:
    ```
    [info] - Read/write all types with non-primitive type *** FAILED *** (442 milliseconds)
    [info]   org.apache.spark.SparkException: Task not serializable
    [info]   at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
    [info]   at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    [info]   at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    [info]   at org.apache.spark.SparkContext.clean(SparkContext.scala:1930)
    [info]   at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:364)
    [info]   at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:363)
    [info]   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    [info]   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    [info]   at org.apache.spark.rdd.RDD.withScope(RDD.scala:356)
    [info]   at org.apache.spark.rdd.RDD.map(RDD.scala:363)
    [info]   at org.apache.spark.sql.SQLContext.createDataset(SQLContext.scala:465)
    [info]   at org.apache.spark.sql.SQLImplicits.rddToDatasetHolder(SQLImplicits.scala:133)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$$anonfun$withOrcFile$1.apply(OrcTest.scala:40)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$$anonfun$withOrcFile$1.apply(OrcTest.scala:39)
    [info]   at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:126)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite.withTempPath(OrcQuerySuite.scala:54)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$class.withOrcFile(OrcTest.scala:39)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite.withOrcFile(OrcQuerySuite.scala:54)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply$mcV$sp(OrcQuerySuite.scala:92)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply(OrcQuerySuite.scala:81)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply(OrcQuerySuite.scala:81)
    [info]   at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
    [info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
    [info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
    [info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
    [info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
    [info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
    [info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:54)
    [info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
    [info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
    [info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
    [info]   at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
    [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
    [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
    [info]   at scala.collection.immutable.List.foreach(List.scala:381)
    [info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
    [info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
    [info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
    [info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
    [info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
    [info]   at org.scalatest.Suite$class.run(Suite.scala:1424)
    [info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
    [info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
    [info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
    [info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:26)
    [info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
    [info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
    [info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:26)
    [info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:357)
    [info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:502)
    [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
    [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
    [info]   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    [info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    [info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    [info]   at java.lang.Thread.run(Thread.java:745)
    [info]   Cause: java.io.IOException: unexpected exception type
    [info]   at java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1538)
    [info]   at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:994)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1495)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    [info]   at scala.collection.immutable.List$SerializationProxy.writeObject(List.scala:468)
    [info]   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
    [info]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [info]   at java.lang.reflect.Method.invoke(Method.java:606)
    [info]   at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:988)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1495)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    [info]   at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43)
    [info]   at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    [info]   at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
    [info]   at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    [info]   at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    [info]   at org.apache.spark.SparkContext.clean(SparkContext.scala:1930)
    [info]   at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:364)
    [info]   at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:363)
    [info]   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    [info]   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    [info]   at org.apache.spark.rdd.RDD.withScope(RDD.scala:356)
    [info]   at org.apache.spark.rdd.RDD.map(RDD.scala:363)
    [info]   at org.apache.spark.sql.SQLContext.createDataset(SQLContext.scala:465)
    [info]   at org.apache.spark.sql.SQLImplicits.rddToDatasetHolder(SQLImplicits.scala:133)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$$anonfun$withOrcFile$1.apply(OrcTest.scala:40)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$$anonfun$withOrcFile$1.apply(OrcTest.scala:39)
    [info]   at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:126)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite.withTempPath(OrcQuerySuite.scala:54)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$class.withOrcFile(OrcTest.scala:39)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite.withOrcFile(OrcQuerySuite.scala:54)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply$mcV$sp(OrcQuerySuite.scala:92)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply(OrcQuerySuite.scala:81)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply(OrcQuerySuite.scala:81)
    [info]   at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
    [info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
    [info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
    [info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
    [info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
    [info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
    [info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:54)
    [info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
    [info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
    [info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
    [info]   at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
    [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
    [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
    [info]   at scala.collection.immutable.List.foreach(List.scala:381)
    [info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
    [info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
    [info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
    [info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
    [info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
    [info]   at org.scalatest.Suite$class.run(Suite.scala:1424)
    [info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
    [info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
    [info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
    [info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:26)
    [info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
    [info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
    [info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:26)
    [info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:357)
    [info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:502)
    [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
    [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
    [info]   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    [info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    [info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    [info]   at java.lang.Thread.run(Thread.java:745)
    [info]   Cause: org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to dataType on unresolved object, tree: 'data
    [info]   at org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute.dataType(unresolved.scala:60)
    [info]   at org.apache.spark.sql.catalyst.expressions.GetStructField.childSchema$lzycompute(complexTypeExtractors.scala:109)
    [info]   at org.apache.spark.sql.catalyst.expressions.GetStructField.childSchema(complexTypeExtractors.scala:109)
    [info]   at org.apache.spark.sql.catalyst.expressions.GetStructField$$anonfun$toString$1.apply(complexTypeExtractors.scala:113)
    [info]   at org.apache.spark.sql.catalyst.expressions.GetStructField$$anonfun$toString$1.apply(complexTypeExtractors.scala:113)
    [info]   at scala.Option.getOrElse(Option.scala:121)
    [info]   at org.apache.spark.sql.catalyst.expressions.GetStructField.toString(complexTypeExtractors.scala:113)
    [info]   at java.lang.String.valueOf(String.java:2849)
    [info]   at scala.collection.mutable.StringBuilder.append(StringBuilder.scala:200)
    [info]   at scala.collection.TraversableOnce$$anonfun$addString$1.apply(TraversableOnce.scala:357)
    [info]   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    [info]   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    [info]   at scala.collection.TraversableOnce$class.addString(TraversableOnce.scala:355)
    [info]   at scala.collection.AbstractIterator.addString(Iterator.scala:1194)
    [info]   at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:321)
    [info]   at scala.collection.AbstractIterator.mkString(Iterator.scala:1194)
    [info]   at org.apache.spark.sql.catalyst.expressions.Expression.toString(Expression.scala:197)
    [info]   at java.lang.String.valueOf(String.java:2849)
    [info]   at scala.collection.mutable.StringBuilder.append(StringBuilder.scala:200)
    [info]   at scala.collection.TraversableOnce$$anonfun$addString$1.apply(TraversableOnce.scala:362)
    [info]   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    [info]   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    [info]   at scala.collection.TraversableOnce$class.addString(TraversableOnce.scala:355)
    [info]   at scala.collection.AbstractIterator.addString(Iterator.scala:1194)
    [info]   at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:321)
    [info]   at scala.collection.AbstractIterator.mkString(Iterator.scala:1194)
    [info]   at org.apache.spark.sql.catalyst.expressions.Expression.toString(Expression.scala:197)
    [info]   at java.lang.String.valueOf(String.java:2849)
    [info]   at java.lang.StringBuilder.append(StringBuilder.java:128)
    [info]   at scala.StringContext.standardInterpolator(StringContext.scala:125)
    [info]   at scala.StringContext.s(StringContext.scala:95)
    [info]   at org.apache.spark.sql.catalyst.expressions.Invoke.toString(objects.scala:177)
    [info]   at java.lang.String.valueOf(String.java:2849)
    [info]   at scala.collection.mutable.StringBuilder.append(StringBuilder.scala:200)
    [info]   at scala.collection.TraversableOnce$$anonfun$addString$1.apply(TraversableOnce.scala:362)
    [info]   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    [info]   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    [info]   at scala.collection.TraversableOnce$class.addString(TraversableOnce.scala:355)
    [info]   at scala.collection.AbstractIterator.addString(Iterator.scala:1194)
    [info]   at scala.collection.TraversableOnce$class.mkString(TraversableOnce.scala:321)
    [info]   at scala.collection.AbstractIterator.mkString(Iterator.scala:1194)
    [info]   at org.apache.spark.sql.catalyst.expressions.Expression.toString(Expression.scala:197)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1418)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    [info]   at scala.collection.immutable.List$SerializationProxy.writeObject(List.scala:468)
    [info]   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
    [info]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [info]   at java.lang.reflect.Method.invoke(Method.java:606)
    [info]   at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:988)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1495)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    [info]   at scala.collection.immutable.List$SerializationProxy.writeObject(List.scala:468)
    [info]   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
    [info]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    [info]   at java.lang.reflect.Method.invoke(Method.java:606)
    [info]   at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:988)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1495)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
    [info]   at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
    [info]   at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
    [info]   at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
    [info]   at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    [info]   at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43)
    [info]   at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    [info]   at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
    [info]   at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    [info]   at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    [info]   at org.apache.spark.SparkContext.clean(SparkContext.scala:1930)
    [info]   at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:364)
    [info]   at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:363)
    [info]   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    [info]   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    [info]   at org.apache.spark.rdd.RDD.withScope(RDD.scala:356)
    [info]   at org.apache.spark.rdd.RDD.map(RDD.scala:363)
    [info]   at org.apache.spark.sql.SQLContext.createDataset(SQLContext.scala:465)
    [info]   at org.apache.spark.sql.SQLImplicits.rddToDatasetHolder(SQLImplicits.scala:133)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$$anonfun$withOrcFile$1.apply(OrcTest.scala:40)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$$anonfun$withOrcFile$1.apply(OrcTest.scala:39)
    [info]   at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:126)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite.withTempPath(OrcQuerySuite.scala:54)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$class.withOrcFile(OrcTest.scala:39)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite.withOrcFile(OrcQuerySuite.scala:54)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply$mcV$sp(OrcQuerySuite.scala:92)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply(OrcQuerySuite.scala:81)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply(OrcQuerySuite.scala:81)
    [info]   at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
    [info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
    [info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
    [info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
    [info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
    [info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
    [info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:54)
    [info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
    [info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
    [info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
    [info]   at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
    [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
    [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
    [info]   at scala.collection.immutable.List.foreach(List.scala:381)
    [info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
    [info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
    [info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
    [info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
    [info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
    [info]   at org.scalatest.Suite$class.run(Suite.scala:1424)
    [info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
    [info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
    [info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
    [info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:26)
    [info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
    [info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
    [info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:26)
    [info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:357)
    [info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:502)
    [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
    [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
    [info]   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    [info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    [info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    [info]   at java.lang.Thread.run(Thread.java:745)
    ```
    
    After changing the toString of GetStructField to the following:
    ```
      override def toString: String = {
        if (resolved) {
          s"$child.${name.getOrElse(childSchema(ordinal).name)}"
        } else {
          // Don't touch childSchema here: it calls child.dataType, which throws
          // UnresolvedException for an unresolved child (see the stack trace above).
          s"$child.unknownName"
        }
      }
    ```
    
    A different exception appears:
    ```
    [info] - Read/write all types with non-primitive type *** FAILED *** (412 milliseconds)
    [info]   org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to dataType on unresolved object, tree: 'data
    [info]   at org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute.dataType(unresolved.scala:60)
    [info]   at org.apache.spark.sql.catalyst.expressions.GetStructField.childSchema$lzycompute(complexTypeExtractors.scala:111)
    [info]   at org.apache.spark.sql.catalyst.expressions.GetStructField.childSchema(complexTypeExtractors.scala:111)
    [info]   at org.apache.spark.sql.catalyst.expressions.GetStructField.dataType(complexTypeExtractors.scala:113)
    [info]   at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$validate$3.apply(ExpressionEncoder.scala:301)
    [info]   at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$validate$3.apply(ExpressionEncoder.scala:299)
    [info]   at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
    [info]   at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
    [info]   at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
    [info]   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    [info]   at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
    [info]   at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.validate(ExpressionEncoder.scala:299)
    [info]   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:163)
    [info]   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:134)
    [info]   at org.apache.spark.sql.Dataset$.apply(Dataset.scala:53)
    [info]   at org.apache.spark.sql.SQLContext.createDataset(SQLContext.scala:468)
    [info]   at org.apache.spark.sql.SQLImplicits.rddToDatasetHolder(SQLImplicits.scala:133)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$$anonfun$withOrcFile$1.apply(OrcTest.scala:40)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$$anonfun$withOrcFile$1.apply(OrcTest.scala:39)
    [info]   at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:126)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite.withTempPath(OrcQuerySuite.scala:54)
    [info]   at org.apache.spark.sql.hive.orc.OrcTest$class.withOrcFile(OrcTest.scala:39)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite.withOrcFile(OrcQuerySuite.scala:54)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply$mcV$sp(OrcQuerySuite.scala:92)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply(OrcQuerySuite.scala:81)
    [info]   at org.apache.spark.sql.hive.orc.OrcQuerySuite$$anonfun$3.apply(OrcQuerySuite.scala:81)
    [info]   at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
    [info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
    [info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
    [info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
    [info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
    [info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
    [info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:54)
    [info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
    [info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
    [info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
    [info]   at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
    [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
    [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
    [info]   at scala.collection.immutable.List.foreach(List.scala:381)
    [info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
    [info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
    [info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
    [info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
    [info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
    [info]   at org.scalatest.Suite$class.run(Suite.scala:1424)
    [info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
    [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
    [info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
    [info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
    [info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:26)
    [info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
    [info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
    [info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:26)
    [info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:357)
    [info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:502)
    [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
    [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
    [info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    [info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    [info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    [info]   at java.lang.Thread.run(Thread.java:745)
    ```
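
    To make the failure mode concrete, here is a minimal standalone sketch (plain Scala, not the actual Spark classes; `Unresolved` and `Field` are made-up stand-ins for `UnresolvedAttribute` and `GetStructField`). It shows why guarding only `toString` moves the error rather than removing it: any member that still forces the unresolved child, such as `dataType` via the lazily computed child schema, fails in the same way.
    ```
    object UnresolvedSketch {
      class UnresolvedException(msg: String) extends RuntimeException(msg)

      trait Expr {
        def resolved: Boolean
        def dataType: String
      }

      // Stand-in for UnresolvedAttribute: dataType is not available before analysis.
      case class Unresolved(attrName: String) extends Expr {
        val resolved = false
        def dataType: String =
          throw new UnresolvedException(s"Invalid call to dataType on unresolved object: '$attrName")
      }

      // Stand-in for GetStructField: childSchema forces child.dataType.
      case class Field(child: Expr, fieldName: String) extends Expr {
        val resolved: Boolean = child.resolved
        private lazy val childSchema: String = child.dataType
        def dataType: String = childSchema

        // The guarded toString from the snippet above: safe for unresolved children.
        override def toString: String =
          if (resolved) s"$child.$fieldName" else s"$child.unknownName"
      }

      def main(args: Array[String]): Unit = {
        val f = Field(Unresolved("data"), "f1")
        println(f)          // prints Unresolved(data).unknownName without throwing
        println(f.dataType) // still throws, matching the second stack trace
      }
    }
    ```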


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-196985985
  
    **[Test build #53211 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53211/consoleFull)** for PR 11737 at commit [`b7c88cd`](https://github.com/apache/spark/commit/b7c88cd253b952fb7dc6035c1e7f6f26d096f212).
     * This patch **fails to build**.
     * This patch merges cleanly.
     * This patch adds no public classes.


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199528331
  
    Merged build finished. Test PASSed.


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-199441399
  
    **[Test build #53705 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53705/consoleFull)** for PR 11737 at commit [`501c9b9`](https://github.com/apache/spark/commit/501c9b95223b939522516d4edd8960cd7fbcfd03).


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-196986238
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53210/
    Test FAILed.


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197178335
  
    **[Test build #53272 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53272/consoleFull)** for PR 11737 at commit [`e422f52`](https://github.com/apache/spark/commit/e422f529e94506905563d0b487851a891b146605).
     * This patch **fails Spark unit tests**.
     * This patch merges cleanly.
     * This patch adds the following public classes _(experimental)_:
      * `case class LogEntry(filename: String, message: String)`
      * `case class LogFile(name: String)`


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/11737


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by liancheng <gi...@git.apache.org>.
Github user liancheng commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11737#discussion_r56846172
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
    @@ -729,7 +729,7 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
       }
     
       test("SPARK-5203 union with different decimal precision") {
    -    Seq.empty[(Decimal, Decimal)]
    +    Seq.empty[(java.math.BigDecimal, java.math.BigDecimal)]
    --- End diff --
    
    Are all these decimal-related changes because merging `DataFrameHolder` and `DatasetHolder` breaks compilation, or just because `Decimal` is an internal type and shouldn't be exposed?
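
    For reference, a minimal sketch of the `java.math.BigDecimal` path (hedged: it uses a plain local `SparkSession` with the standard implicits rather than this suite's `TestHiveSingleton` fixture, the session-based setup is the later 2.x API, and the object/column names are made up). `java.math.BigDecimal` is part of the documented external type mapping for `DecimalType`, so the `toDF` implicit works with it directly:
    ```
    import org.apache.spark.sql.SparkSession

    object BigDecimalToDFSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local[*]").appName("decimal-todf").getOrCreate()
        import spark.implicits._

        // Tuples of java.math.BigDecimal have an implicit encoder, so toDF resolves.
        val df = Seq.empty[(java.math.BigDecimal, java.math.BigDecimal)].toDF("d1", "d2")
        df.printSchema() // both columns map to DecimalType.SYSTEM_DEFAULT, i.e. decimal(38,18)

        spark.stop()
      }
    }
    ```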


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-196986005
  
    Merged build finished. Test FAILed.


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-196986207
  
    **[Test build #53210 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53210/consoleFull)** for PR 11737 at commit [`837a2ba`](https://github.com/apache/spark/commit/837a2ba281f9fdbdb4ea218b6479f2ac9a5db81b).
     * This patch **fails MiMa tests**.
     * This patch merges cleanly.
     * This patch adds no public classes.


[GitHub] spark pull request: [SPARK-13898][SQL] Merge DatasetHolder and Dat...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11737#issuecomment-197179633
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53272/
    Test FAILed.

