Posted to reviews@spark.apache.org by RD1991 <gi...@git.apache.org> on 2016/02/02 09:47:52 UTC

[GitHub] spark pull request: Branch 1.6

GitHub user RD1991 opened a pull request:

    https://github.com/apache/spark/pull/11024

    Branch 1.6

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/spark branch-1.6

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/11024.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #11024
    
----
commit d79dd971d01b69f8065b802fb5a78023ca905c7c
Author: Jeroen Schot <je...@surfsara.nl>
Date:   2015-12-02T09:40:07Z

    [SPARK-3580][CORE] Add Consistent Method To Get Number of RDD Partitions Across Different Languages
    
    I have tried to address all the comments in pull request https://github.com/apache/spark/pull/2447.
    
    Note that the second commit (using the new method in all internal code of all components) is quite intrusive and could be omitted.
    
    Author: Jeroen Schot <je...@surfsara.nl>
    
    Closes #9767 from schot/master.
    
    (cherry picked from commit 128c29035b4e7383cc3a9a6c7a9ab6136205ac6c)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit f449a407f6f152c676524d4348bbe34d4d3fbfca
Author: Cheng Lian <li...@databricks.com>
Date:   2015-12-02T17:36:12Z

    [SPARK-12094][SQL] Prettier tree string for TreeNode
    
    When examining plans of complex queries with multiple joins, a pain point of mine is that it's hard to immediately see the sibling of a specific query plan node. This PR adds tree lines to the tree string of a `TreeNode`, so that the result is visually more intuitive.
    
    Author: Cheng Lian <li...@databricks.com>
    
    Closes #10099 from liancheng/prettier-tree-string.
    
    (cherry picked from commit a1542ce2f33ad365ff437d2d3014b9de2f6670e5)
    Signed-off-by: Yin Huai <yh...@databricks.com>
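
The tree-line rendering the commit describes can be sketched as follows (a minimal Java illustration, not Spark's actual `TreeNode.generateTreeString` code; `PlanNode` is a hypothetical stand-in):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical miniature of a query-plan node that renders its subtree
// with connector lines: ":- " marks a child that has siblings below it,
// "+- " marks the last child -- the style this PR introduces.
class PlanNode {
    final String name;
    final List<PlanNode> children;

    PlanNode(String name, PlanNode... children) {
        this.name = name;
        this.children = Arrays.asList(children);
    }

    String treeString() {
        StringBuilder sb = new StringBuilder(name).append('\n');
        renderChildren(sb, "");
        return sb.toString();
    }

    private void renderChildren(StringBuilder sb, String prefix) {
        for (int i = 0; i < children.size(); i++) {
            PlanNode child = children.get(i);
            boolean last = (i == children.size() - 1);
            sb.append(prefix).append(last ? "+- " : ":- ")
              .append(child.name).append('\n');
            // Continue the vertical line only while siblings remain below.
            child.renderChildren(sb, prefix + (last ? "   " : ":  "));
        }
    }
}
```

A join over two inputs then renders with the sibling relationship visible at a glance:

    Join
    :- Scan A
    +- Filter
       +- Scan B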

commit bf525845cef159d2d4c9f4d64e158f037179b5c4
Author: Patrick Wendell <pw...@gmail.com>
Date:   2015-12-02T17:54:10Z

    Preparing Spark release v1.6.0-rc1

commit 5d915fed300b47a51b7614d28bd8ea7795b4e841
Author: Patrick Wendell <pw...@gmail.com>
Date:   2015-12-02T17:54:15Z

    Preparing development version 1.6.0-SNAPSHOT

commit 911259e9af6f9a81e775b1aa6d82fa44956bf993
Author: Yu ISHIKAWA <yu...@gmail.com>
Date:   2015-12-02T22:15:54Z

    [SPARK-10266][DOCUMENTATION, ML] Fixed @Since annotation for ml.tunning
    
    cc mengxr noel-smith
    
    I worked on this issues based on https://github.com/apache/spark/pull/8729.
    ehsanmok thank you for your contribution!
    
    Author: Yu ISHIKAWA <yu...@gmail.com>
    Author: Ehsan M.Kermani <eh...@gmail.com>
    
    Closes #9338 from yu-iskw/JIRA-10266.
    
    (cherry picked from commit de07d06abecf3516c95d099b6c01a86e0c8cfd8c)
    Signed-off-by: Xiangrui Meng <me...@databricks.com>

commit cb142fd1e6d98b140de3813775c5a58ea624b1d4
Author: Yadong Qi <qi...@gmail.com>
Date:   2015-12-03T00:48:49Z

    [SPARK-12093][SQL] Fix the error of comment in DDLParser
    
    Author: Yadong Qi <qi...@gmail.com>
    
    Closes #10096 from watermen/patch-1.
    
    (cherry picked from commit d0d7ec533062151269b300ed455cf150a69098c0)
    Signed-off-by: Reynold Xin <rx...@databricks.com>

commit 656d44e2021d2f637d724c1d71ecdca1f447a4be
Author: Xiangrui Meng <me...@databricks.com>
Date:   2015-12-03T01:19:31Z

    [SPARK-12000] do not specify arg types when reference a method in ScalaDoc
    
    This fixes SPARK-12000, verified locally with JDK 7. It seems that `scaladoc` tries to match method names and gets confused by the annotations.
    
    cc: JoshRosen jkbradley
    
    Author: Xiangrui Meng <me...@databricks.com>
    
    Closes #10114 from mengxr/SPARK-12000.2.
    
    (cherry picked from commit 9bb695b7a82d837e2c7a724514ea6b203efb5364)
    Signed-off-by: Xiangrui Meng <me...@databricks.com>

commit 6914ee9f0a063b880a0329365f465dcbe96e1adb
Author: Josh Rosen <jo...@databricks.com>
Date:   2015-12-03T03:12:02Z

    [SPARK-12082][FLAKY-TEST] Increase timeouts in NettyBlockTransferSecuritySuite
    
    We should try increasing a timeout in NettyBlockTransferSecuritySuite in order to reduce that suite's flakiness in Jenkins.
    
    Author: Josh Rosen <jo...@databricks.com>
    
    Closes #10113 from JoshRosen/SPARK-12082.
    
    (cherry picked from commit ae402533738be06ac802914ed3e48f0d5fa54cbe)
    Signed-off-by: Reynold Xin <rx...@databricks.com>

commit 6674fd8aa9b04966bd7d19650754805cd241e399
Author: Yin Huai <yh...@databricks.com>
Date:   2015-12-03T03:21:24Z

    [SPARK-12109][SQL] Expressions's simpleString should delegate to its toString.
    
    https://issues.apache.org/jira/browse/SPARK-12109
    
    The change of https://issues.apache.org/jira/browse/SPARK-11596 exposed the problem.
    In the sql plan viz, the filter shows
    
    ![image](https://cloud.githubusercontent.com/assets/2072857/11547075/1a285230-9906-11e5-8481-2bb451e35ef1.png)
    
    After changes in this PR, the viz is back to normal.
    ![image](https://cloud.githubusercontent.com/assets/2072857/11547080/2bc570f4-9906-11e5-8897-3b3bff173276.png)
    
    Author: Yin Huai <yh...@databricks.com>
    
    Closes #10111 from yhuai/SPARK-12109.
    
    (cherry picked from commit ec2b6c26c9b6bd59d29b5d7af2742aca7e6e0b07)
    Signed-off-by: Reynold Xin <rx...@databricks.com>

commit 5826096ac1377c8fad4c2cabefee2f340008e828
Author: Huaxin Gao <hu...@oc0558782468.ibm.com>
Date:   2015-12-03T08:42:21Z

    [SPARK-12088][SQL] check connection.isClosed before calling connection…
    
    In the Java spec, java.sql.Connection declares:
    
        boolean getAutoCommit() throws SQLException
    
    Throws:
    SQLException - if a database access error occurs or this method is called on a closed connection
    
    So if conn.getAutoCommit is called on a closed connection, a SQLException will be thrown. Even though the code catches the SQLException and the program can continue, we should check conn.isClosed before calling conn.getAutoCommit to avoid the unnecessary SQLException.
    
    Author: Huaxin Gao <hu...@oc0558782468.ibm.com>
    
    Closes #10095 from huaxingao/spark-12088.
    
    (cherry picked from commit 5349851f368a1b5dab8a99c0d51c9638ce7aec56)
    Signed-off-by: Sean Owen <so...@cloudera.com>
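
The guard this commit describes can be sketched like so (Java; `Conn` is a hypothetical two-method stand-in for `java.sql.Connection`, which is far larger):

```java
// Hypothetical trimmed stand-in for java.sql.Connection; per the JDBC
// spec, getAutoCommit() throws on a closed connection, which is why the
// helper below checks isClosed() first.
interface Conn {
    boolean isClosed() throws Exception;
    boolean getAutoCommit() throws Exception;
}

class CommitHelper {
    // Returns the auto-commit flag, or `fallback` when the connection is
    // closed, instead of provoking an avoidable SQLException.
    static boolean autoCommitOrDefault(Conn conn, boolean fallback) {
        try {
            if (!conn.isClosed()) {
                return conn.getAutoCommit();
            }
        } catch (Exception e) {
            // The connection may still die between the two calls;
            // treat that the same as "already closed".
        }
        return fallback;
    }
}
```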

commit 93b69ec45124611107a930af4c9b6413e7b3da62
Author: Jeff Zhang <zj...@apache.org>
Date:   2015-12-03T15:36:28Z

    [DOCUMENTATION][MLLIB] typo in mllib doc
    
    \cc mengxr
    
    Author: Jeff Zhang <zj...@apache.org>
    
    Closes #10093 from zjffdu/mllib_typo.
    
    (cherry picked from commit 7470d9edbb0a45e714c96b5d55eff30724c0653a)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit 84cbed17e752af2e8d6e68f74f4912a52ba2da0f
Author: microwishing <we...@kaiyuandao.com>
Date:   2015-12-03T16:09:05Z

    [DOCUMENTATION][KAFKA] fix typo in kafka/OffsetRange.scala
    
    this is to fix some typo in external/kafka/src/main/scala/org/apache/spark/streaming/kafka/OffsetRange.scala
    
    Author: microwishing <we...@kaiyuandao.com>
    
    Closes #10121 from microwishing/master.
    
    (cherry picked from commit 95b3cf125b905b4ef8705c46f2ef255377b0a9dc)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit f7ae62c45ebbe1370e7ead3d8a31c42e3a2d1468
Author: felixcheung <fe...@hotmail.com>
Date:   2015-12-03T17:22:21Z

    [SPARK-12116][SPARKR][DOCS] document how to workaround function name conflicts with dplyr
    
    shivaram
    
    Author: felixcheung <fe...@hotmail.com>
    
    Closes #10119 from felixcheung/rdocdplyrmasked.
    
    (cherry picked from commit 43c575cb1766b32c74db17216194a8a74119b759)
    Signed-off-by: Shivaram Venkataraman <sh...@cs.berkeley.edu>

commit 8865d87f736e3c97d2193e609afb4e9b8f772aa2
Author: jerryshao <ss...@hortonworks.com>
Date:   2015-12-03T19:05:12Z

    [SPARK-12059][CORE] Avoid assertion error when unexpected state transition met in Master
    
    Downgrade to a warning log for unexpected state transitions.
    
    andrewor14 please review, thanks a lot.
    
    Author: jerryshao <ss...@hortonworks.com>
    
    Closes #10091 from jerryshao/SPARK-12059.
    
    (cherry picked from commit 7bc9e1db2c47387ee693bcbeb4a8a2cbe11909cf)
    Signed-off-by: Andrew Or <an...@databricks.com>

commit 4c84f6e91d61a358c179b04bf6d1bc8b9559b6d0
Author: Shixiong Zhu <sh...@databricks.com>
Date:   2015-12-03T19:06:25Z

    [SPARK-12101][CORE] Fix thread pools that cannot cache tasks in Worker and AppClient
    
    `SynchronousQueue` cannot cache any task. This issue is similar to #9978. It's an easy fix. Just use the fixed `ThreadUtils.newDaemonCachedThreadPool`.
    
    Author: Shixiong Zhu <sh...@databricks.com>
    
    Closes #10108 from zsxwing/fix-threadpool.
    
    (cherry picked from commit 649be4fa4532dcd3001df8345f9f7e970a3fbc65)
    Signed-off-by: Shixiong Zhu <sh...@databricks.com>
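
Why a `SynchronousQueue` "cannot cache any task" is easy to demonstrate (a standalone Java sketch, not Spark's `ThreadUtils` code): with a bounded pool over a `SynchronousQueue`, any submission beyond the thread limit is rejected outright rather than queued.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class PoolDemo {
    // A bounded pool over a SynchronousQueue: the queue itself holds no
    // tasks, so once `maxThreads` workers are busy, execute() rejects.
    static ThreadPoolExecutor boundedSyncPool(int maxThreads) {
        return new ThreadPoolExecutor(maxThreads, maxThreads,
            60, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    }

    // Submits `n` tasks that all block until `release` fires, and reports
    // how many the pool accepted before rejecting the rest.
    static int accepted(ThreadPoolExecutor pool, int n, CountDownLatch release) {
        int ok = 0;
        for (int i = 0; i < n; i++) {
            try {
                pool.execute(() -> {
                    try { release.await(); } catch (InterruptedException e) { }
                });
                ok++;
            } catch (RejectedExecutionException e) {
                // No idle worker and nowhere to buffer the task.
            }
        }
        return ok;
    }

    // With 2 threads and 5 blocking tasks, only 2 are accepted; an
    // unbounded cached pool (the fix) would have absorbed all 5.
    static int demo() {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = boundedSyncPool(2);
        int ok = accepted(pool, 5, release);
        release.countDown();
        pool.shutdown();
        try { pool.awaitTermination(10, TimeUnit.SECONDS); }
        catch (InterruptedException e) { }
        return ok;
    }
}
```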

commit bf8b95fa45f52a621d13333080516d62b690a022
Author: Andrew Or <an...@databricks.com>
Date:   2015-12-03T19:09:29Z

    [SPARK-12108] Make event logs smaller
    
    **Problem.** Event logs in 1.6 were much bigger than in 1.5. I ran PageRank and the event log size in 1.6 was almost 5x that in 1.5. I did a bisect and found that the RDD callsite added in #9398 is largely responsible.
    
    **Solution.** This patch removes the long form of the callsite (which is not used!) from the event log. This reduces the size of the event log significantly.
    
    *Note on compatibility*: if this patch is to be merged into 1.6.0, then it won't break any compatibility. Otherwise, if it is merged into 1.6.1, then we might need to add more backward compatibility handling logic (currently does not exist yet).
    
    Author: Andrew Or <an...@databricks.com>
    
    Closes #10115 from andrewor14/smaller-event-logs.
    
    (cherry picked from commit 688e521c2833a00069272a6749153d721a0996f6)
    Signed-off-by: Andrew Or <an...@databricks.com>

commit e0577f542878d582651aad7c65dc33c47014b4fb
Author: Yanbo Liang <yb...@gmail.com>
Date:   2015-12-03T19:37:34Z

    [MINOR][ML] Use coefficients replace weights
    
    Use ```coefficients``` in place of ```weights```; I hope these are the last two occurrences.
    mengxr
    
    Author: Yanbo Liang <yb...@gmail.com>
    
    Closes #10065 from yanboliang/coefficients.
    
    (cherry picked from commit d576e76bbaa818480d31d2b8fbbe4b15718307d9)
    Signed-off-by: Xiangrui Meng <me...@databricks.com>

commit 9d698fc57888c38779a3aba73e9fda42f0933bc7
Author: Nicholas Chammas <ni...@gmail.com>
Date:   2015-12-03T19:59:10Z

    [SPARK-12107][EC2] Update spark-ec2 versions
    
    I haven't created a JIRA. If we absolutely need one I'll do it, but I'm fine with not getting mentioned in the release notes if that's the only purpose it'll serve.
    
    cc marmbrus - We should include this in 1.6-RC2 if there is one. I can open a second PR against branch-1.6 if necessary.
    
    Author: Nicholas Chammas <ni...@gmail.com>
    
    Closes #10109 from nchammas/spark-ec2-versions.
    
    (cherry picked from commit ad7cea6f776a39801d6bb5bb829d1800b175b2ab)
    Signed-off-by: Shivaram Venkataraman <sh...@cs.berkeley.edu>

commit b1a27d61666680b44ea2411a97ff648a6a6856a7
Author: Tathagata Das <ta...@gmail.com>
Date:   2015-12-03T20:00:09Z

    [FLAKY-TEST-FIX][STREAMING][TEST] Make sure StreamingContexts are shutdown after test
    
    Author: Tathagata Das <ta...@gmail.com>
    
    Closes #10124 from tdas/InputStreamSuite-flaky-test.
    
    (cherry picked from commit a02d47277379e1e82d0ee41b2205434f9ffbc3e5)
    Signed-off-by: Tathagata Das <ta...@gmail.com>

commit 355bd72e03c9dd2642323cfdce70e329763934ed
Author: felixcheung <fe...@hotmail.com>
Date:   2015-12-03T21:25:20Z

    [SPARK-12019][SPARKR] Support character vector for sparkR.init(), check param and fix doc
    
    and add tests.
    Spark submit expects a comma-separated list.
    
    Author: felixcheung <fe...@hotmail.com>
    
    Closes #10034 from felixcheung/sparkrinitdoc.
    
    (cherry picked from commit 2213441e5e0fba01e05826257604aa427cdf2598)
    Signed-off-by: Shivaram Venkataraman <sh...@cs.berkeley.edu>

commit 9e8a8f71fb62f55d8fcd8f319c8c25407a8d0010
Author: Anderson de Andrade <ad...@verticalscope.com>
Date:   2015-12-04T00:37:00Z

    [SPARK-12056][CORE] Create a TaskAttemptContext only after calling setConf.
    
    TaskAttemptContext's constructor clones the configuration instead of referencing it. Calling setConf after creating a TaskAttemptContext therefore makes any changes made inside setConf invisible to RecordReader instances.
    
    As an example, Titan's InputFormat will change conf when calling setConf. They wrap their InputFormat around Cassandra's ColumnFamilyInputFormat, and append Cassandra's configuration. This change fixes the following error when using Titan's CassandraInputFormat with Spark:
    
    *java.lang.RuntimeException: org.apache.thrift.protocol.TProtocolException: Required field 'keyspace' was not present! Struct: set_key space_args(keyspace:null)*
    
    There's a discussion of this error here: https://groups.google.com/forum/#!topic/aureliusgraphs/4zpwyrYbGAE
    
    Author: Anderson de Andrade <ad...@verticalscope.com>
    
    Closes #10046 from adeandrade/newhadooprdd-fix.
    
    (cherry picked from commit f434f36d508eb4dcade70871611fc022ae0feb56)
    Signed-off-by: Marcelo Vanzin <va...@cloudera.com>
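
The clone-versus-reference pitfall is easy to reproduce outside Hadoop (Java; `Ctx` is a hypothetical stand-in for `TaskAttemptContext`, reduced to the one behavior that matters here):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for TaskAttemptContext: it copies the
// configuration in its constructor, so later mutations of the original
// map are invisible to it -- which is why setConf must run first.
class Ctx {
    private final Map<String, String> conf;

    Ctx(Map<String, String> conf) {
        this.conf = new HashMap<>(conf); // clone, not a reference
    }

    String get(String key) { return conf.get(key); }

    // Returns {value seen by a context built too early,
    //          value seen by one built after the mutation}.
    static String[] demo() {
        Map<String, String> conf = new HashMap<>();
        Ctx early = new Ctx(conf);      // context created before setConf
        conf.put("keyspace", "titan");  // setConf-style change lands too late
        Ctx late = new Ctx(conf);       // fixed ordering: mutate, then build
        return new String[] { early.get("keyspace"), late.get("keyspace") };
    }
}
```

The early context never sees the `keyspace` setting, matching the missing-field error described above; the late context (the fixed ordering) does.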

commit 2d7c4f6af459c29948476d26a8b2ac514a51c59c
Author: Sun Rui <ru...@intel.com>
Date:   2015-12-04T05:11:10Z

    [SPARK-12104][SPARKR] collect() does not handle multiple columns with same name.
    
    Author: Sun Rui <ru...@intel.com>
    
    Closes #10118 from sun-rui/SPARK-12104.
    
    (cherry picked from commit 5011f264fb53705c528250bd055acbc2eca2baaa)
    Signed-off-by: Shivaram Venkataraman <sh...@cs.berkeley.edu>

commit 8f784b8642441d00f12835736109b2560eab0de6
Author: Tathagata Das <ta...@gmail.com>
Date:   2015-12-04T09:42:29Z

    [SPARK-12122][STREAMING] Prevent batches from being submitted twice after recovering StreamingContext from checkpoint
    
    Author: Tathagata Das <ta...@gmail.com>
    
    Closes #10127 from tdas/SPARK-12122.
    
    (cherry picked from commit 4106d80fb6a16713a6cd2f15ab9d60f2527d9be5)
    Signed-off-by: Tathagata Das <ta...@gmail.com>

commit 3fd757c8896df8cc3b184522c8d11da0be5ebbc3
Author: Nong <no...@cloudera.com>
Date:   2015-12-04T18:01:20Z

    [SPARK-12089] [SQL] Fix memory corrupt due to freeing a page being referenced
    
    When the spillable sort iterator was spilled, it mistakenly kept the
    last page in memory rather than the current page. This caused the
    current record to become corrupted.
    
    Author: Nong <no...@cloudera.com>
    
    Closes #10142 from nongli/spark-12089.
    
    (cherry picked from commit 95296d9b1ad1d9e9396d7dfd0015ef27ce1cf341)
    Signed-off-by: Davies Liu <da...@gmail.com>

commit 39d5cc8adbb09e2d76fe85ccd51c3ffcf3d5b9f5
Author: Burak Yavuz <br...@gmail.com>
Date:   2015-12-04T20:08:42Z

    [SPARK-12058][STREAMING][KINESIS][TESTS] fix Kinesis python tests
    
    Python tests require access to the `KinesisTestUtils` file. When this file lives under src/test, Python can't access it, since it is not included in the assembly jar.
    
    However, if we simply move KinesisTestUtils to src/main, we need to add the KinesisProducerLibrary (KPL) as a dependency. To avoid this, I moved a KPL-free KinesisTestUtils to src/main and extended it with ExtendedKinesisTestUtils under src/test, which adds the KPL support.
    
    cc zsxwing tdas
    
    Author: Burak Yavuz <br...@gmail.com>
    
    Closes #10050 from brkyvz/kinesis-py.

commit 57d16403edcb4f770174404f8ed7f5697e4fdc26
Author: Sun Rui <ru...@intel.com>
Date:   2015-12-05T23:49:51Z

    [SPARK-11774][SPARKR] Implement struct(), encode(), decode() functions in SparkR.
    
    Author: Sun Rui <ru...@intel.com>
    
    Closes #9804 from sun-rui/SPARK-11774.
    
    (cherry picked from commit c8d0e160dadf3b23c5caa379ba9ad5547794eaa0)
    Signed-off-by: Shivaram Venkataraman <sh...@cs.berkeley.edu>

commit 664694b289a7847807a2be022985c9ed39dbe142
Author: felixcheung <fe...@hotmail.com>
Date:   2015-12-06T00:00:12Z

    [SPARK-11715][SPARKR] Add R support corr for Column Aggregation
    
    Need to match existing method signature
    
    Author: felixcheung <fe...@hotmail.com>
    
    Closes #9680 from felixcheung/rcorr.
    
    (cherry picked from commit 895b6c474735d7e0a38283f92292daa5c35875ee)
    Signed-off-by: Shivaram Venkataraman <sh...@cs.berkeley.edu>

commit 04dfaa6d58bd9ce18a141a976a4a96218e5ee9e0
Author: Yanbo Liang <yb...@gmail.com>
Date:   2015-12-06T00:39:01Z

    [SPARK-12115][SPARKR] Change numPartitions() to getNumPartitions() to be consistent with Scala/Python
    
    Change ```numPartitions()``` to ```getNumPartitions()``` to be consistent with Scala/Python.
    <del>Note: If we cannot catch up with the 1.6 release, it will be a breaking change for 1.7 that we also need to explain in the release notes.</del>
    
    cc sun-rui felixcheung shivaram
    
    Author: Yanbo Liang <yb...@gmail.com>
    
    Closes #10123 from yanboliang/spark-12115.
    
    (cherry picked from commit 6979edf4e1a93caafa8d286692097dd377d7616d)
    Signed-off-by: Shivaram Venkataraman <sh...@cs.berkeley.edu>

commit 2feac49fbca2e2f309c857f10511be2b2c1948cc
Author: Yanbo Liang <yb...@gmail.com>
Date:   2015-12-06T06:51:05Z

    [SPARK-12044][SPARKR] Fix usage of isnan, isNaN
    
    1. Add ```isNaN``` to ```Column``` for SparkR. ```Column``` should have three related predicate functions: ```isNaN, isNull, isNotNull```.
    2. Replace ```DataFrame.isNaN``` with ```DataFrame.isnan``` on the SparkR side, because ```DataFrame.isNaN``` has been deprecated and will be removed in Spark 2.0.
    <del>3. Add ```isnull``` to ```DataFrame``` for SparkR. ```DataFrame``` should have two related functions: ```isnan, isnull```.</del>
    
    cc shivaram sun-rui felixcheung
    
    Author: Yanbo Liang <yb...@gmail.com>
    
    Closes #10037 from yanboliang/spark-12044.
    
    (cherry picked from commit b6e8e63a0dbe471187a146c96fdaddc6b8a8e55e)
    Signed-off-by: Shivaram Venkataraman <sh...@cs.berkeley.edu>

commit c8747a9db718deefa5f61cc4dc692c439d4d5ab6
Author: gcc <sp...@condor.rhaag.ip>
Date:   2015-12-06T16:27:40Z

    [SPARK-12048][SQL] Prevent to close JDBC resources twice
    
    Author: gcc <sp...@condor.rhaag.ip>
    
    Closes #10101 from rh99/master.
    
    (cherry picked from commit 04b6799932707f0a4aa4da0f2fc838bdb29794ce)
    Signed-off-by: Sean Owen <so...@cloudera.com>

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request: Branch 1.6

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11024#issuecomment-178453419
  
    Can one of the admins verify this patch?




[GitHub] spark pull request: Branch 1.6

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/11024




[GitHub] spark pull request: Branch 1.6

Posted by hvanhovell <gi...@git.apache.org>.
Github user hvanhovell commented on the pull request:

    https://github.com/apache/spark/pull/11024#issuecomment-178474395
  
    Could you close this?




[GitHub] spark pull request: Branch 1.6

Posted by andrewor14 <gi...@git.apache.org>.
Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/11024#issuecomment-178923431
  
    @RD1991 please close this PR

