Posted to reviews@spark.apache.org by gentlewangyu <gi...@git.apache.org> on 2018/05/24 06:59:12 UTC

[GitHub] spark pull request #21419: Branch 2.2

GitHub user gentlewangyu opened a pull request:

    https://github.com/apache/spark/pull/21419

    Branch 2.2

    ## What changes were proposed in this pull request?
    
    Compiling Spark with Scala 2.10 should use the `-P` flag (a Maven profile) rather than `-D`.
    
    ## How was this patch tested?
    
     ./build/mvn -Pyarn -Pscala-2.10 -DskipTests clean package


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/spark branch-2.2

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/21419.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #21419
    
----
commit 9949fed1c45865b6e5e8ebe610789c5fb9546052
Author: Corey Woodfield <co...@...>
Date:   2017-07-19T22:21:38Z

    [SPARK-21333][DOCS] Removed invalid joinTypes from javadoc of Dataset#joinWith
    
    ## What changes were proposed in this pull request?
    
    Two invalid join types were mistakenly listed in the javadoc for joinWith in the Dataset class. I presume these were copied from the javadoc of join, but since joinWith returns a Dataset\<Tuple2\>, left_semi and left_anti are invalid, as they only return values from one of the datasets rather than from both.
    
    ## How was this patch tested?
    
    I ran the following code:
    ```
    // Needed imports; Bean is a simple Java bean with a single int field, x (described below).
    import java.util.Arrays;
    
    import org.apache.spark.SparkContext;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    
    public static void main(String[] args) {
    	SparkSession spark = new SparkSession(new SparkContext("local[*]", "Test"));
    	Dataset<Row> one = spark.createDataFrame(Arrays.asList(new Bean(1), new Bean(2), new Bean(3), new Bean(4), new Bean(5)), Bean.class);
    	Dataset<Row> two = spark.createDataFrame(Arrays.asList(new Bean(4), new Bean(5), new Bean(6), new Bean(7), new Bean(8), new Bean(9)), Bean.class);
    
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "inner").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "cross").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "outer").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "full").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "full_outer").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "left").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "left_outer").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "right").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "right_outer").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "left_semi").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "left_anti").show();} catch (Exception e) {e.printStackTrace();}
    }
    ```
    which tests all the different join types, and the last two (left_semi and left_anti) threw exceptions. The same code using join instead of joinWith did fine. The Bean class was just a java bean with a single int field, x.
    
    Author: Corey Woodfield <co...@gmail.com>
    
    Closes #18462 from coreywoodfield/master.
    
    (cherry picked from commit 8cd9cdf17a7a4ad6f2eecd7c4b388ca363c20982)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 88dccda393bc79dc6032f71b6acf8eb2b4b152be
Author: Dhruve Ashar <dh...@...>
Date:   2017-07-21T19:03:46Z

    [SPARK-21243][CORE] Limit no. of map outputs in a shuffle fetch
    
    For configurations with external shuffle enabled, we have observed that if a very large number of blocks are fetched from a remote host, it puts the NodeManager under extra pressure and can crash it. This change introduces a configuration, `spark.reducer.maxBlocksInFlightPerAddress`, to limit the number of map outputs fetched from a given remote address. The change applies both when external shuffle is enabled and when it is disabled.
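
    As a rough usage sketch (the config key is the one introduced here; the value of 20 is just an illustrative choice), the limit can be set through SparkConf like any other Spark setting:
    
    ```
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession
    
    // Cap the number of map outputs fetched concurrently from a single remote address.
    val conf = new SparkConf()
      .setAppName("shuffle-fetch-limit")
      .set("spark.reducer.maxBlocksInFlightPerAddress", "20")  // default is effectively unlimited
    
    val spark = SparkSession.builder().config(conf).getOrCreate()
    ```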
    
    Ran the job with the default configuration, which does not change the existing behavior, and with a few lower values (10, 20, 50, 100). The job ran fine and there is no change in the output. (I will update the NodeManager-related metrics in some time.)
    
    Author: Dhruve Ashar <dhruveashargmail.com>
    
    Closes #18487 from dhruve/impr/SPARK-21243.
    
    Author: Dhruve Ashar <dh...@gmail.com>
    
    Closes #18691 from dhruve/branch-2.2.

commit da403b95353f064c24da25236fa7f905fa8ddca1
Author: Holden Karau <ho...@...>
Date:   2017-07-21T23:50:47Z

    [SPARK-21434][PYTHON][DOCS] Add pyspark pip documentation.
    
    Update the Quickstart and RDD programming guides to mention pip.
    
    Built docs locally.
    
    Author: Holden Karau <ho...@us.ibm.com>
    
    Closes #18698 from holdenk/SPARK-21434-add-pyspark-pip-documentation.
    
    (cherry picked from commit cc00e99d5396893b2d3d50960161080837cf950a)
    Signed-off-by: Holden Karau <ho...@us.ibm.com>

commit 62ca13dcaf79b85fca02de5628b607196534c605
Author: Marcelo Vanzin <va...@...>
Date:   2017-07-23T15:23:13Z

    [SPARK-20904][CORE] Don't report task failures to driver during shutdown.
    
    Executors run a thread pool with daemon threads to run tasks. Those threads
    remain active while the JVM is shutting down, so running tasks are affected by
    code that runs in shutdown hooks.
    
    So if a shutdown hook messes with something that the task is using (e.g.
    an HDFS connection), the task will fail and will report that failure to
    the driver. That will make the driver mark the task as failed regardless
    of what caused the executor to shut down. So, for example, if YARN pre-empted
    that executor, the driver would consider that task failed when it should
    instead ignore the failure.
    
    This change avoids reporting failures to the driver when shutdown hooks
    are executing; this fixes the YARN preemption accounting, and doesn't really
    change things much for other scenarios, other than reporting a more generic
    error ("Executor lost") when the executor shuts down unexpectedly - which
    is arguably more correct.
    
    Tested with a hacky app running on spark-shell that tried to cause failures
    only when shutdown hooks were running, verified that preemption didn't cause
    the app to fail because of task failures exceeding the threshold.
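
    For intuition only (this is not the executor's actual code), the change amounts to guarding the failure report behind an "are we shutting down?" check, roughly like:
    
    ```
    import java.util.concurrent.atomic.AtomicBoolean
    
    object ShutdownAwareReporting {
      private val inShutdown = new AtomicBoolean(false)
    
      // Flip the flag as soon as JVM shutdown hooks start running.
      Runtime.getRuntime.addShutdownHook(new Thread(new Runnable {
        override def run(): Unit = inShutdown.set(true)
      }))
    
      // Only forward task failures to the driver while we are not shutting down;
      // otherwise the driver just sees a generic "executor lost" later on.
      def maybeReport(report: Throwable => Unit, error: Throwable): Unit = {
        if (!inShutdown.get()) report(error)
      }
    }
    ```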
    
    Author: Marcelo Vanzin <va...@cloudera.com>
    
    Closes #18594 from vanzin/SPARK-20904.
    
    (cherry picked from commit cecd285a2aabad4e7db5a3d18944b87fbc4eee6c)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit e5ec3390cbbef87fca8a27bea701a225e18b98ea
Author: DjvuLee <li...@...>
Date:   2017-07-25T17:21:18Z

    [SPARK-21383][YARN] Fix the YarnAllocator allocates more Resource
    
    When NodeManagers are slow to launch executors, the `missing` value can
    exceed the real value, which can lead YARN to allocate more resources than needed.
    
    We add `numExecutorsRunning` when calculating `missing` to avoid this.
    
    Tested by experiment.
    
    Author: DjvuLee <li...@bytedance.com>
    
    Closes #18651 from djvulee/YarnAllocate.
    
    (cherry picked from commit 8de080d9f9d3deac7745f9b3428d97595975701d)
    Signed-off-by: Marcelo Vanzin <va...@cloudera.com>

commit c91191bed186f816b760af98218392f9a178942b
Author: Eric Vandenberg <er...@...>
Date:   2017-07-25T18:45:35Z

    [SPARK-21447][WEB UI] Spark history server fails to render compressed
    
    inprogress history file in some cases.
    
    Add failure handling for an EOFException that can be thrown during
    decompression of an in-progress Spark history file; treat it the same as the
    case where the last line can't be parsed.
    
    ## What changes were proposed in this pull request?
    
    Add failure handling for an EOFException thrown within the ReplayListenerBus.replay method, analogous to the existing JSON-parse-failure case. This path can arise with compressed in-progress history files, since an incomplete compression block may be read (not yet flushed by the writer on a block boundary). See the stack trace of this occurrence in the JIRA ticket (https://issues.apache.org/jira/browse/SPARK-21447)
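
    A sketch of the shape of the handling (the `maybeTruncated` name follows the description here; this is not the actual ReplayListenerBus code):
    
    ```
    import java.io.{BufferedReader, EOFException}
    
    def replay(reader: BufferedReader, maybeTruncated: Boolean)(handleLine: String => Unit): Unit = {
      try {
        var line = reader.readLine()
        while (line != null) {
          handleLine(line)
          line = reader.readLine()
        }
      } catch {
        // An in-progress, compressed file can end mid compression block; if the
        // caller told us the input may be truncated, stop quietly instead of failing.
        case _: EOFException if maybeTruncated => ()
      }
    }
    ```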
    
    ## How was this patch tested?
    
    Added a unit test that specifically validates the failure handling path when maybeTruncated is true and when it is false.
    
    Author: Eric Vandenberg <er...@fb.com>
    
    Closes #18673 from ericvandenbergfb/fix_inprogress_compr_history_file.
    
    (cherry picked from commit 06a9793793ca41dcef2f10ca06af091a57c721c4)
    Signed-off-by: Marcelo Vanzin <va...@cloudera.com>

commit 1bfd1a83b5e18f42bf76c1d72cd0347ff578e9cd
Author: Marcelo Vanzin <va...@...>
Date:   2017-07-26T00:57:26Z

    [SPARK-21494][NETWORK] Use correct app id when authenticating to external service.
    
    There was some code based on the old SASL handler in the new auth client that
    was incorrectly using the SASL user as the user to authenticate against the
    external shuffle service. This caused the external service to not be able to
    find the correct secret to authenticate the connection, failing the connection.
    
    In the course of debugging, I found that some log messages from the YARN shuffle
    service were a little noisy, so I silenced some of them, and also added a couple
    of new ones that helped find this issue. On top of that, I found that a check
    in the code that records app secrets was wrong, causing more log spam and also
    using an O(n) operation instead of an O(1) call.
    
    Also added a new integration suite for the YARN shuffle service with auth on,
    and verified it failed before, and passes now.
    
    Author: Marcelo Vanzin <va...@cloudera.com>
    
    Closes #18706 from vanzin/SPARK-21494.
    
    (cherry picked from commit 300807c6e3011e4d78c6cf750201d0ab8e5bdaf5)
    Signed-off-by: Marcelo Vanzin <va...@cloudera.com>

commit 06b2ef01ed87add681144fe1d801718caba271af
Author: aokolnychyi <an...@...>
Date:   2017-07-27T23:49:42Z

    [SPARK-21538][SQL] Attribute resolution inconsistency in the Dataset API
    
    ## What changes were proposed in this pull request?
    
    This PR contains a tiny update that removes an attribute resolution inconsistency in the Dataset API. The following example is taken from the ticket description:
    
    ```
    spark.range(1).withColumnRenamed("id", "x").sort(col("id"))  // works
    spark.range(1).withColumnRenamed("id", "x").sort($"id")  // works
    spark.range(1).withColumnRenamed("id", "x").sort('id) // works
    spark.range(1).withColumnRenamed("id", "x").sort("id") // fails with:
    org.apache.spark.sql.AnalysisException: Cannot resolve column name "id" among (x);
    ```
    The above `AnalysisException` happens because the last case calls `Dataset.apply()` to convert strings into columns, which triggers attribute resolution. To make the API consistent between overloaded methods, this PR defers the resolution and constructs columns directly.
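
    To make the mechanism concrete, here is a small, hedged illustration of the two resolution paths (not the patch itself): `Dataset.col`/`Dataset.apply` resolve the name eagerly against the current output schema, while `functions.col` builds an unresolved column that is only resolved when the final plan is analyzed.
    
    ```
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions
    
    val spark = SparkSession.builder().master("local[*]").appName("SPARK-21538").getOrCreate()
    val ds = spark.range(1).withColumnRenamed("id", "x")
    
    ds.sort(functions.col("id")).show()  // deferred resolution: works, as in the examples above
    // ds.sort(ds.col("id")).show()      // eager resolution: Cannot resolve column name "id" among (x)
    ```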
    
    Author: aokolnychyi <an...@sap.com>
    
    Closes #18740 from aokolnychyi/spark-21538.
    
    (cherry picked from commit f44ead89f48f040b7eb9dfc88df0ec995b47bfe9)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 93790313b2e36e5e5ac4dfe13b285f03c42da111
Author: Yan Facai (颜发才) <fa...@...>
Date:   2017-07-28T02:10:35Z

    [SPARK-21306][ML] OneVsRest should support setWeightCol
    
    ## What changes were proposed in this pull request?
    
    Add a `setWeightCol` method for OneVsRest.
    
    `weightCol` is ignored if the classifier doesn't inherit the HasWeightCol trait.
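
    A minimal, hypothetical usage sketch (the column names `features`, `label` and `weight` are assumptions about the input DataFrame):
    
    ```
    import org.apache.spark.ml.classification.{LogisticRegression, OneVsRest}
    
    val ovr = new OneVsRest()
      .setClassifier(new LogisticRegression().setMaxIter(10))  // LogisticRegression has a weightCol param
      .setFeaturesCol("features")
      .setLabelCol("label")
      .setWeightCol("weight")  // silently ignored if the classifier lacks a weightCol param
    
    // val model = ovr.fit(trainingDf)  // trainingDf assumed to contain the columns above
    ```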
    
    ## How was this patch tested?
    
    + [x] added a unit test.
    
    Author: Yan Facai (颜发才) <fa...@gmail.com>
    
    Closes #18554 from facaiy/BUG/oneVsRest_missing_weightCol.
    
    (cherry picked from commit a5a3189974ea4628e9489eb50099a5432174e80c)
    Signed-off-by: Yanbo Liang <yb...@gmail.com>

commit df6cd35ecb710b99911f39b9d7d16cac08468b4d
Author: Remis Haroon <re...@...>
Date:   2017-07-29T12:26:10Z

    [SPARK-21508][DOC] Fix example code provided in Spark Streaming Documentation
    
    ## What changes were proposed in this pull request?
    
    JIRA ticket : [SPARK-21508](https://issues.apache.org/jira/projects/SPARK/issues/SPARK-21508)
    
    This corrects a mistake in the example code provided in the Spark Streaming Custom Receivers documentation.
    doc link : https://spark.apache.org/docs/latest/streaming-custom-receivers.html
    
    ```
    
    // Assuming ssc is the StreamingContext
    val customReceiverStream = ssc.receiverStream(new CustomReceiver(host, port))
    val words = lines.flatMap(_.split(" "))
    ...
    ```
    
    instead of `lines.flatMap(_.split(" "))`
    it should be `customReceiverStream.flatMap(_.split(" "))`
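
    For reference, the corrected snippet would read:
    
    ```
    // Assuming ssc is the StreamingContext
    val customReceiverStream = ssc.receiverStream(new CustomReceiver(host, port))
    val words = customReceiverStream.flatMap(_.split(" "))
    ...
    ```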
    
    ## How was this patch tested?
    This documentation change was tested manually with a jekyll build, by running the commands below:
    ```
    jekyll build
    jekyll serve --watch
    ```
    screen-shots provided below
    ![screenshot1](https://user-images.githubusercontent.com/8828470/28744636-a6de1ac6-7482-11e7-843b-ff84b5855ec0.png)
    ![screenshot2](https://user-images.githubusercontent.com/8828470/28744637-a6def496-7482-11e7-9512-7f4bbe027c6a.png)
    
    Author: Remis Haroon <Re...@insdc01.pwc.com>
    
    Closes #18770 from remisharoon/master.
    
    (cherry picked from commit c14382030b373177cf6aa3c045e27d754368a927)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit 24a9bace131465bf6a177f304cf8f05b0e4fe6ed
Author: Liang-Chi Hsieh <vi...@...>
Date:   2017-07-29T17:02:56Z

    [SPARK-21555][SQL] RuntimeReplaceable should be compared semantically by its canonicalized child
    
    ## What changes were proposed in this pull request?
    
    When there are aliases (added for nested fields) as parameters in `RuntimeReplaceable`, they are not among the children expressions, so those aliases can't be cleaned up by the analyzer rule `CleanupAliases`.
    
    An expression `nvl(foo.foo1, "value")` can be resolved to two semantically different expressions in a group by query because they contain different aliases.
    
    Because those aliases are not children of `RuntimeReplaceable`, which is a `UnaryExpression`, we can't trim the aliases out by simply transforming the expressions in `CleanupAliases`.
    
    If we want to replace the non-children aliases in `RuntimeReplaceable`, we need to add more codes to `RuntimeReplaceable` and modify all expressions of `RuntimeReplaceable`. It makes the interface ugly IMO.
    
    Considering that those aliases will be replaced later during optimization and so do no harm, this patch chooses to simply override `canonicalized` in `RuntimeReplaceable`.
    
    One concern is about `CleanupAliases`: it actually cannot clean up ALL aliases inside a plan. To make callers of this rule aware of that, this patch adds a comment to `CleanupAliases`.
    
    ## How was this patch tested?
    
    Added test.
    
    Author: Liang-Chi Hsieh <vi...@gmail.com>
    
    Closes #18761 from viirya/SPARK-21555.
    
    (cherry picked from commit 9c8109ef414c92553335bb1e90e9681e142128a4)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 66fa6bd6d48b08625ecedfcb5a976678141300bd
Author: Xingbo Jiang <xi...@...>
Date:   2017-07-29T17:11:31Z

    [SPARK-19451][SQL] rangeBetween method should accept Long value as boundary
    
    ## What changes were proposed in this pull request?
    
    Long values can be passed to `rangeBetween` as range frame boundaries, but we silently convert them to Int values; this can cause wrong results and we should fix it.
    
    Furthermore, we should accept any legal literal value as a range frame boundary. In this PR, we make it possible for Long values, and make accepting other DataTypes easy to add.
    
    This PR is mostly based on Herman's previous amazing work: https://github.com/hvanhovell/spark/commit/596f53c339b1b4629f5651070e56a8836a397768
    
    Once this is merged, we can close #16818.
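
    For illustration only (this is not code from the patch, and note the change is reverted two commits below), passing Long boundaries to a range frame looks like:
    
    ```
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, sum}
    
    // A range frame covering the previous hour of event time (boundaries in milliseconds, as Longs).
    val w = Window.orderBy(col("ts")).rangeBetween(-3600000L, 0L)
    // df.withColumn("rolling_sum", sum(col("value")).over(w))
    ```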
    
    ## How was this patch tested?
    
    Add new tests in `DataFrameWindowFunctionsSuite` and `TypeCoercionSuite`.
    
    Author: Xingbo Jiang <xi...@databricks.com>
    
    Closes #18540 from jiangxb1987/rangeFrame.
    
    (cherry picked from commit 92d85637e7f382aae61c0f26eb1524d2b4c93516)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit e2062b9c1106433799d2874dfe17e181fe1ecb5e
Author: gatorsmile <ga...@...>
Date:   2017-07-30T03:35:22Z

    Revert "[SPARK-19451][SQL] rangeBetween method should accept Long value as boundary"
    
    This reverts commit 66fa6bd6d48b08625ecedfcb5a976678141300bd.

commit 174543466934c6ced5812e2dfc7e1a18793cf0b1
Author: Marcelo Vanzin <va...@...>
Date:   2017-08-01T17:06:03Z

    [SPARK-21522][CORE] Fix flakiness in LauncherServerSuite.
    
    Handle the case where the server closes the socket before the full message
    has been written by the client.
    
    Author: Marcelo Vanzin <va...@cloudera.com>
    
    Closes #18727 from vanzin/SPARK-21522.
    
    (cherry picked from commit b133501800b43fa5c538a4e5ad597c9dc7d8378e)
    Signed-off-by: Marcelo Vanzin <va...@cloudera.com>

commit 79e5805f9284c53b0c329f086190298b70f012c1
Author: Sean Owen <so...@...>
Date:   2017-08-01T18:05:55Z

    [SPARK-21593][DOCS] Fix 2 rendering errors on configuration page
    
    ## What changes were proposed in this pull request?
    
    Fix 2 rendering errors on configuration doc page, due to SPARK-21243 and SPARK-15355.
    
    ## How was this patch tested?
    
    Manually built and viewed docs with jekyll
    
    Author: Sean Owen <so...@cloudera.com>
    
    Closes #18793 from srowen/SPARK-21593.
    
    (cherry picked from commit b1d59e60dee2a41f8eff8ef29b3bcac69111e2f0)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit 67c60d78e4c4562fbf86b46d14b7d635aaf67e5b
Author: Devaraj K <de...@...>
Date:   2017-08-01T20:38:55Z

    [SPARK-21339][CORE] spark-shell --packages option does not add jars to classpath on windows
    
    The --packages option jars are added to the classpath with the "file:///" scheme. On Unix this is not a problem, since the scheme contains the Unix path separator, which separates the jar name from its location in the classpath. On Windows, the jar file does not get resolved from the classpath because of the scheme.
    
    Windows : file:///C:/Users/<user>/.ivy2/jars/<jar-name>.jar
    Unix : file:///home/<user>/.ivy2/jars/<jar-name>.jar
    
    With this PR, we avoid adding the 'file://' scheme to the packages jar files.
    
    I have verified manually in Windows and Unix environments; with the change, the jar is added to the classpath like below:
    
    Windows : C:\Users\<user>\.ivy2\jars\<jar-name>.jar
    Unix : /home/<user>/.ivy2/jars/<jar-name>.jar
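
    As a generic illustration of the idea (not the launcher's actual code), stripping the scheme amounts to converting the file URI back into a local filesystem path:
    
    ```
    import java.io.File
    import java.net.URI
    
    // Turns e.g. "file:///home/user/.ivy2/jars/foo.jar" into "/home/user/.ivy2/jars/foo.jar"
    // (or a C:\... path on Windows), which the classpath handling can then resolve.
    def toLocalPath(jarUri: String): String = new File(new URI(jarUri)).getPath
    ```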
    
    Author: Devaraj K <de...@apache.org>
    
    Closes #18708 from devaraj-kavali/SPARK-21339.
    
    (cherry picked from commit 58da1a2455258156fe8ba57241611eac1a7928ef)
    Signed-off-by: Marcelo Vanzin <va...@cloudera.com>

commit 397f904219e7617386144aba87998a057bde02e3
Author: Shixiong Zhu <sh...@...>
Date:   2017-08-02T17:59:59Z

    [SPARK-21597][SS] Fix a potential overflow issue in EventTimeStats
    
    ## What changes were proposed in this pull request?
    
    This PR fixed a potential overflow issue in EventTimeStats.
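
    The class is not spelled out here, but the usual way to avoid this kind of overflow, and presumably the shape of the fix, is to keep a running average instead of summing all event times into a single Long. A hedged sketch:
    
    ```
    // Toy version, not the real EventTimeStats: an incremental mean never materialises
    // sum(eventTimes), which is the quantity that could overflow a Long.
    case class TimeStats(max: Long, min: Long, avg: Double, count: Long) {
      def add(eventTime: Long): TimeStats = {
        val newCount = count + 1
        TimeStats(math.max(max, eventTime), math.min(min, eventTime),
          avg + (eventTime - avg) / newCount, newCount)
      }
    }
    
    val stats = Seq(1000L, 2000L, 3000L).foldLeft(TimeStats(Long.MinValue, Long.MaxValue, 0.0, 0L))(_ add _)
    ```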
    
    ## How was this patch tested?
    
    The new unit tests
    
    Author: Shixiong Zhu <sh...@databricks.com>
    
    Closes #18803 from zsxwing/avg.
    
    (cherry picked from commit 7f63e85b47a93434030482160e88fe63bf9cff4e)
    Signed-off-by: Shixiong Zhu <sh...@databricks.com>

commit 467ee8dff8494a730ef8c00aafc02266a794a1fe
Author: Shixiong Zhu <sh...@...>
Date:   2017-08-02T21:02:13Z

    [SPARK-21546][SS] dropDuplicates should ignore watermark when it's not a key
    
    ## What changes were proposed in this pull request?
    
    When the watermark is not a column of `dropDuplicates`, right now it will crash. This PR fixed this issue.
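
    A hedged example of the scenario (using the built-in `rate` source, whose schema is `timestamp`, `value`): the watermark column is not one of the `dropDuplicates` keys, which is exactly the case that used to crash.
    
    ```
    import org.apache.spark.sql.SparkSession
    
    val spark = SparkSession.builder().master("local[*]").appName("SPARK-21546").getOrCreate()
    
    val deduped = spark.readStream
      .format("rate")
      .load()
      .withWatermark("timestamp", "10 seconds")  // watermark column...
      .dropDuplicates("value")                   // ...is not a deduplication key
    ```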
    
    ## How was this patch tested?
    
    The new unit test.
    
    Author: Shixiong Zhu <sh...@databricks.com>
    
    Closes #18822 from zsxwing/SPARK-21546.
    
    (cherry picked from commit 0d26b3aa55f9cc75096b0e2b309f64fe3270b9a5)
    Signed-off-by: Shixiong Zhu <sh...@databricks.com>

commit 690f491f6e979bc960baa05de1a66306b06dc85a
Author: Bryan Cutler <cu...@...>
Date:   2017-08-03T01:28:19Z

    [SPARK-12717][PYTHON][BRANCH-2.2] Adding thread-safe broadcast pickle registry
    
    ## What changes were proposed in this pull request?
    
    When using PySpark broadcast variables in a multi-threaded environment, `SparkContext._pickled_broadcast_vars` becomes a shared resource. A race condition can occur when broadcast variables pickled from one thread get added to the shared `_pickled_broadcast_vars` and become part of the python command from another thread. This PR introduces a thread-safe pickled registry using thread-local storage, so that when the python command is pickled (causing the broadcast variable to be pickled and added to the registry) each thread has its own view of the pickle registry from which to retrieve and clear the broadcast variables used.
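
    The fix itself is in PySpark, but the underlying pattern (give each thread its own view of a mutable registry via thread-local storage) looks roughly like this Scala sketch:
    
    ```
    import scala.collection.mutable
    
    object PickledBroadcastRegistry {
      // Each thread sees (and clears) only the broadcast ids it registered itself.
      private val registry = new ThreadLocal[mutable.Set[Long]] {
        override def initialValue(): mutable.Set[Long] = mutable.Set.empty[Long]
      }
    
      def add(broadcastId: Long): Unit = registry.get() += broadcastId
    
      def drain(): Set[Long] = {
        val ids = registry.get().toSet
        registry.get().clear()
        ids
      }
    }
    ```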
    
    ## How was this patch tested?
    
    Added a unit test that causes this race condition using another thread.
    
    Author: Bryan Cutler <cu...@gmail.com>
    
    Closes #18823 from BryanCutler/branch-2.2.

commit 1bcfa2a0ccdc1d3c3c5075bc6e2838c69f5b2f7f
Author: Christiam Camacho <ca...@...>
Date:   2017-08-03T22:40:25Z

    Fix Java SimpleApp spark application
    
    ## What changes were proposed in this pull request?
    
    Add missing import and missing parentheses to invoke `SparkSession::text()`.
    
    ## How was this patch tested?
    
    Built and ran the code for this application, and ran jekyll locally per docs/README.md.
    
    Author: Christiam Camacho <ca...@ncbi.nlm.nih.gov>
    
    Closes #18795 from christiam/master.
    
    (cherry picked from commit dd72b10aba9997977f82605c5c1778f02dd1f91e)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit f9aae8ecde62fc6d92a4807c68d812bac6b207e2
Author: Andrew Ray <ra...@...>
Date:   2017-08-04T07:58:01Z

    [SPARK-21330][SQL] Bad partitioning does not allow to read a JDBC table with extreme values on the partition column
    
    ## What changes were proposed in this pull request?
    
    An overflow of the difference of bounds on the partitioning column leads to no data being read. This
    patch checks for this overflow.
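
    A sketch of the kind of guard involved (assumed, not the exact JDBCRelation code): computing the partition stride in arbitrary precision so that `upperBound - lowerBound` cannot overflow a Long for extreme bounds.
    
    ```
    // With bounds like Long.MinValue and Long.MaxValue, the plain Long subtraction
    // overflows and the stride ends up nonsensical; BigInt arithmetic avoids that.
    def partitionStride(lowerBound: Long, upperBound: Long, numPartitions: Int): BigInt = {
      require(upperBound > lowerBound && numPartitions > 0)
      (BigInt(upperBound) - BigInt(lowerBound)) / numPartitions
    }
    
    partitionStride(Long.MinValue, Long.MaxValue, 4)  // well-defined despite the extreme bounds
    ```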
    
    ## How was this patch tested?
    
    New unit test.
    
    Author: Andrew Ray <ra...@gmail.com>
    
    Closes #18800 from aray/SPARK-21330.
    
    (cherry picked from commit 25826c77ddf0d5753d2501d0e764111da2caa8b6)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit 841bc2f86d61769057fca08cebbb72a98bde00dc
Author: liuxian <li...@...>
Date:   2017-08-05T05:55:06Z

    [SPARK-21580][SQL] Integers in aggregation expressions are wrongly taken as group-by ordinal
    
    ## What changes were proposed in this pull request?
    
    create temporary view data as select * from values
    (1, 1),
    (1, 2),
    (2, 1),
    (2, 2),
    (3, 1),
    (3, 2)
    as data(a, b);
    
    `select 3, 4, sum(b) from data group by 1, 2;`
    `select 3 as c, 4 as d, sum(b) from data group by c, d;`
    When running these two cases, the following exception occurred:
    `Error in query: GROUP BY position 4 is not in select list (valid range is [1, 3]); line 1 pos 10`
    
    The cause of this failure: if an aggregateExpression is an integer, then after the ordinal is replaced with this aggregateExpression, the groupExpression is still considered an ordinal.
    
    The solution:
    This bug is due to re-entrance of an analyzed plan. We can solve it by using `resolveOperators` in `SubstituteUnresolvedOrdinals`.
    
    ## How was this patch tested?
    Added unit test case
    
    Author: liuxian <li...@zte.com.cn>
    
    Closes #18779 from 10110346/groupby.
    
    (cherry picked from commit 894d5a453a3f47525408ee8c91b3b594daa43ccb)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 098aaec304a6b4c94a364f08c2d8ef18009689d8
Author: vinodkc <vi...@...>
Date:   2017-08-06T06:04:39Z

    [SPARK-21588][SQL] SQLContext.getConf(key, null) should return null
    
    ## What changes were proposed in this pull request?
    
    SQLContext.getConf(key, null), for a key that is not defined in the conf and doesn't have a default value defined, throws an NPE. It happens only when the conf entry has a value converter.
    
    Added a null check on defaultValue inside SQLConf.getConfString to avoid calling entry.valueConverter(defaultValue).
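
    A minimal sketch of that guard (the signature is an assumption, not SQLConf's real internals):
    
    ```
    def getConfString(
        settings: Map[String, String],
        key: String,
        defaultValue: String,
        valueConverter: String => Any): Any = {
      settings.get(key) match {
        case Some(value) => valueConverter(value)
        case None if defaultValue == null => null  // previously: NPE inside the converter
        case None => valueConverter(defaultValue)
      }
    }
    ```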
    
    ## How was this patch tested?
    Added unit test
    
    Author: vinodkc <vi...@gmail.com>
    
    Closes #18852 from vinodkc/br_Fix_SPARK-21588.
    
    (cherry picked from commit 1ba967b25e6d88be2db7a4e100ac3ead03a2ade9)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 7a04def920438ef0e08b66a95befeec981e5571e
Author: Xianyang Liu <xi...@...>
Date:   2017-08-07T09:04:53Z

    [SPARK-21621][CORE] Reset numRecordsWritten after DiskBlockObjectWriter.commitAndGet called
    
    ## What changes were proposed in this pull request?
    
    We should reset numRecordsWritten to zero after DiskBlockObjectWriter.commitAndGet is called, because when `revertPartialWritesAndClose` is called we decrease the written records in `ShuffleWriteMetrics`. Currently we decrease the written records all the way to zero, which is wrong; we should only decrease by the number of records written after the last `commitAndGet` call.
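
    A toy model of the bookkeeping being fixed (not the real DiskBlockObjectWriter): after `commitAndGet`, the per-commit counter must start from zero so a later revert only rolls back the records written since the last commit.
    
    ```
    class RecordCountingWriter {
      private var numRecordsWritten = 0  // records since the last commitAndGet
      private var recordsReported = 0    // what was reported to the write metrics
    
      def write(record: Any): Unit = { numRecordsWritten += 1; recordsReported += 1 }
    
      def commitAndGet(): Int = {
        val committed = numRecordsWritten
        numRecordsWritten = 0            // the fix: reset once the batch is committed
        committed
      }
    
      def revertPartialWritesAndClose(): Unit = {
        recordsReported -= numRecordsWritten  // only the uncommitted tail is rolled back
        numRecordsWritten = 0
      }
    }
    ```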
    
    ## How was this patch tested?
    Modified existing test.
    
    Author: Xianyang Liu <xi...@intel.com>
    
    Closes #18830 from ConeyLiu/DiskBlockObjectWriter.
    
    (cherry picked from commit 534a063f7c693158437d13224f50d4ae789ff6fb)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit 4f0eb0c862c0362b14fc5db468f4fc08fb8a08c6
Author: Xiao Li <ga...@...>
Date:   2017-08-07T16:00:01Z

    [SPARK-21647][SQL] Fix SortMergeJoin when using CROSS
    
    ### What changes were proposed in this pull request?
    author: BoleynSu
    closes https://github.com/apache/spark/pull/18836
    
    ```Scala
    val df = Seq((1, 1)).toDF("i", "j")
    df.createOrReplaceTempView("T")
    withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> "-1") {
      sql("select * from (select a.i from T a cross join T t where t.i = a.i) as t1 " +
        "cross join T t2 where t2.i = t1.i").explain(true)
    }
    ```
    The above code could cause the following exception:
    ```
    SortMergeJoinExec should not take Cross as the JoinType
    java.lang.IllegalArgumentException: SortMergeJoinExec should not take Cross as the JoinType
    	at org.apache.spark.sql.execution.joins.SortMergeJoinExec.outputOrdering(SortMergeJoinExec.scala:100)
    ```
    
    Our SortMergeJoinExec supports CROSS. We should not hit such an exception. This PR is to fix the issue.
    
    ### How was this patch tested?
    Modified the two existing test cases.
    
    Author: Xiao Li <ga...@gmail.com>
    Author: Boleyn Su <bo...@gmail.com>
    
    Closes #18863 from gatorsmile/pr-18836.
    
    (cherry picked from commit bbfd6b5d24be5919a3ab1ac3eaec46e33201df39)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit 43f9c84b6749b2ebf802e1f062238167b2b1f3bb
Author: Andrey Taptunov <ta...@...>
Date:   2017-08-05T05:40:04Z

    [SPARK-21374][CORE] Fix reading globbed paths from S3 into DF with disabled FS cache
    
    This PR replaces #18623 to do some clean up.
    
    Closes #18623
    
    Tested by Jenkins.
    
    Author: Shixiong Zhu <sh...@databricks.com>
    Author: Andrey Taptunov <ta...@amazon.com>
    
    Closes #18848 from zsxwing/review-pr18623.

commit fa92a7be709e78db8e8f50dca8e13855c1034fde
Author: Jose Torres <jo...@...>
Date:   2017-08-07T19:27:16Z

    [SPARK-21565][SS] Propagate metadata in attribute replacement.
    
    ## What changes were proposed in this pull request?
    
    Propagate metadata in attribute replacement during streaming execution. This is necessary for EventTimeWatermarks consuming replaced attributes.
    
    ## How was this patch tested?
    new unit test, which was verified to fail before the fix
    
    Author: Jose Torres <jo...@databricks.com>
    
    Closes #18840 from joseph-torres/SPARK-21565.
    
    (cherry picked from commit cce25b360ee9e39d9510134c73a1761475eaf4ac)
    Signed-off-by: Shixiong Zhu <sh...@databricks.com>

commit a1c1199e122889ed34415be5e4da67168107a595
Author: gatorsmile <ga...@...>
Date:   2017-08-07T20:04:04Z

    [SPARK-21648][SQL] Fix confusing assert failure in JDBC source when parallel fetching parameters are not properly provided.
    
    ### What changes were proposed in this pull request?
    ```SQL
    CREATE TABLE mytesttable1
    USING org.apache.spark.sql.jdbc
      OPTIONS (
      url 'jdbc:mysql://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword}',
      dbtable 'mytesttable1',
      paritionColumn 'state_id',
      lowerBound '0',
      upperBound '52',
      numPartitions '53',
      fetchSize '10000'
    )
    ```
    
    The above option name `paritionColumn` is wrong. That means users did not provide a value for `partitionColumn`. In such a case, users hit a confusing error.
    
    ```
    AssertionError: assertion failed
    java.lang.AssertionError: assertion failed
    	at scala.Predef$.assert(Predef.scala:156)
    	at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:39)
    	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:312)
    ```
    
    ### How was this patch tested?
    Added a test case
    
    Author: gatorsmile <ga...@gmail.com>
    
    Closes #18864 from gatorsmile/jdbcPartCol.
    
    (cherry picked from commit baf5cac0f8c35925c366464d7e0eb5f6023fce57)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 86609a95af4b700e83638b7416c7e3706c2d64c6
Author: Liang-Chi Hsieh <vi...@...>
Date:   2017-08-08T08:12:41Z

    [SPARK-21567][SQL] Dataset should work with type alias
    
    If we create a type alias for a type workable with Dataset, the type alias doesn't work with Dataset.
    
    A reproducible case looks like:
    
        object C {
          type TwoInt = (Int, Int)
          def tupleTypeAlias: TwoInt = (1, 1)
        }
    
        Seq(1).toDS().map(_ => ("", C.tupleTypeAlias))
    
    It throws an exception like:
    
        type T1 is not a class
        scala.ScalaReflectionException: type T1 is not a class
          at scala.reflect.api.Symbols$SymbolApi$class.asClass(Symbols.scala:275)
          ...
    
    This patch accesses the dealiased type in many places in `ScalaReflection` to fix it.
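
    As a small, standalone illustration of the reflection behaviour involved (not Spark code):
    
    ```
    import scala.reflect.runtime.universe._
    
    object C { type TwoInt = (Int, Int) }
    
    val aliased = typeOf[C.TwoInt]
    // dealias resolves the alias to its underlying type, Tuple2[Int, Int],
    // which is the form the encoder machinery can actually inspect.
    println(aliased.dealias =:= typeOf[(Int, Int)])  // true
    ```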
    
    Added test case.
    
    Author: Liang-Chi Hsieh <vi...@gmail.com>
    
    Closes #18813 from viirya/SPARK-21567.
    
    (cherry picked from commit ee1304199bcd9c1d5fc94f5b06fdd5f6fe7336a1)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit e87ffcaa3e5b75f8d313dc995e4801063b60cd5c
Author: Wenchen Fan <we...@...>
Date:   2017-08-08T08:32:49Z

    Revert "[SPARK-21567][SQL] Dataset should work with type alias"
    
    This reverts commit 86609a95af4b700e83638b7416c7e3706c2d64c6.

----


---



[GitHub] spark issue #21419: Branch 2.2

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/21419
  
    @gentlewangyu, please close this and read https://spark.apache.org/contributing.html. Questions should go to the mailing list and issues should be filed in JIRA.


---



[GitHub] spark pull request #21419: Branch 2.2

Posted by gentlewangyu <gi...@git.apache.org>.
Github user gentlewangyu closed the pull request at:

    https://github.com/apache/spark/pull/21419


---



[GitHub] spark issue #21419: Branch 2.2

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/21419
  
    Can one of the admins verify this patch?


---




[GitHub] spark issue #21419: Branch 2.2

Posted by kiszk <gi...@git.apache.org>.
Github user kiszk commented on the issue:

    https://github.com/apache/spark/pull/21419
  
    could you please close this PR?


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org