Posted to reviews@spark.apache.org by MohammedLayeeq <gi...@git.apache.org> on 2018/02/10 15:07:25 UTC

[GitHub] spark pull request #20569: Branch 2.2

GitHub user MohammedLayeeq opened a pull request:

    https://github.com/apache/spark/pull/20569

    Branch 2.2

    ## What changes were proposed in this pull request?
    
    (Please fill in changes proposed in this fix)
    
    ## How was this patch tested?
    
    (Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
    (If this patch involves UI changes, please attach a screenshot; otherwise, remove this)
    
    Please review http://spark.apache.org/contributing.html before opening a pull request.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/spark branch-2.2

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/20569.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #20569
    
----
commit 6e1081cbeac58826526b6ff7f2938a556b31ca9e
Author: Sumedh Wale <sw...@...>
Date:   2017-07-06T06:47:22Z

    [SPARK-21312][SQL] correct offsetInBytes in UnsafeRow.writeToStream
    
    ## What changes were proposed in this pull request?
    
    Corrects the offsetInBytes calculation in UnsafeRow.writeToStream. Known failures include writes to some DataSources that have their own SparkPlan implementations and cause an EXCHANGE in writes.
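
    For illustration, a minimal sketch of the offset arithmetic involved (the helper name is hypothetical, not code from this patch): for an on-heap row, the position inside the backing byte array is `baseOffset - Platform.BYTE_ARRAY_OFFSET`, which is non-zero when the row does not start at the beginning of the array.

    ```
    import java.io.OutputStream
    import org.apache.spark.unsafe.Platform

    // hypothetical helper showing the corrected computation for an on-heap row
    def writeRowBytes(baseObject: Array[Byte], baseOffset: Long,
                      sizeInBytes: Int, out: OutputStream): Unit = {
      // offset of the row inside its backing byte array; non-zero when the row
      // does not start at position 0 of the array
      val offsetInByteArray = (baseOffset - Platform.BYTE_ARRAY_OFFSET).toInt
      out.write(baseObject, offsetInByteArray, sizeInBytes)
    }
    ```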
    
    ## How was this patch tested?
    
    Extended UnsafeRowSuite.writeToStream to include an UnsafeRow over a byte array having a non-zero offset.
    
    Author: Sumedh Wale <sw...@snappydata.io>
    
    Closes #18535 from sumwale/SPARK-21312.
    
    (cherry picked from commit 14a3bb3a008c302aac908d7deaf0942a98c63be7)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit 4e53a4edd72e372583f243c660bbcc0572205716
Author: Tathagata Das <ta...@...>
Date:   2017-07-06T07:20:26Z

    [SS][MINOR] Fix flaky test in DatastreamReaderWriterSuite. temp checkpoint dir should be deleted
    
    ## What changes were proposed in this pull request?
    
    Stopping a query while it is being initialized can throw an interrupt exception, in which case temporary checkpoint directories will not be deleted, and the test will fail.
    
    Author: Tathagata Das <ta...@gmail.com>
    
    Closes #18442 from tdas/DatastreamReaderWriterSuite-fix.
    
    (cherry picked from commit 60043f22458668ac7ecba94fa78953f23a6bdcec)
    Signed-off-by: Tathagata Das <ta...@gmail.com>

commit 576fd4c3a67b4affc5ac50979e27ae929472f0d9
Author: Tathagata Das <ta...@...>
Date:   2017-07-07T00:28:20Z

    [SPARK-21267][SS][DOCS] Update Structured Streaming Documentation
    
    ## What changes were proposed in this pull request?
    
    A few changes to the Structured Streaming documentation:
    - Clarify that the entire stream input table is not materialized
    - Add information for Ganglia
    - Add Kafka Sink to the main docs
    - Removed a couple of leftover experimental tags
    - Added more associated reading material and talk videos.
    
    In addition, https://github.com/apache/spark/pull/16856 broke the link to the RDD programming guide in several places while renaming the page. This PR fixes those links (cc sameeragarwal, cloud-fan).
    - Added a redirection to avoid breaking internal and possible external links.
    - Removed unnecessary redirection pages that were there since the separate scala, java, and python programming guides were merged together in 2013 or 2014.
    
    ## How was this patch tested?
    
    (Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
    (If this patch involves UI changes, please attach a screenshot; otherwise, remove this)
    
    Please review http://spark.apache.org/contributing.html before opening a pull request.
    
    Author: Tathagata Das <ta...@gmail.com>
    
    Closes #18485 from tdas/SPARK-21267.
    
    (cherry picked from commit 0217dfd26f89133f146197359b556c9bf5aca172)
    Signed-off-by: Shixiong Zhu <sh...@databricks.com>

commit ab12848d624f6b74d401e924255c0b4fcc535231
Author: Prashant Sharma <pr...@...>
Date:   2017-07-08T06:33:12Z

    [SPARK-21069][SS][DOCS] Add rate source to programming guide.
    
    ## What changes were proposed in this pull request?
    
    SPARK-20979 added a new structured streaming source: the rate source. This patch adds the corresponding documentation to the programming guide.
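
    For reference, a minimal use of the rate source (the option value is illustrative):

    ```
    // assumes an existing SparkSession named `spark`; the rate source produces
    // rows with two columns: `timestamp` and `value`
    val rateDF = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "10")
      .load()
    ```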
    
    ## How was this patch tested?
    
    Tested by running jekyll locally.
    
    Author: Prashant Sharma <pr...@apache.org>
    Author: Prashant Sharma <pr...@in.ibm.com>
    
    Closes #18562 from ScrapCodes/spark-21069/rate-source-docs.
    
    (cherry picked from commit d0bfc6733521709e453d643582df2bdd68f28de7)
    Signed-off-by: Shixiong Zhu <sh...@databricks.com>

commit 7d0b1c927d92cc2a4932262514ffd12c47593b80
Author: Bogdan Raducanu <bo...@...>
Date:   2017-07-08T12:14:59Z

    [SPARK-21228][SQL][BRANCH-2.2] InSet incorrect handling of structs
    
    ## What changes were proposed in this pull request?
    
    This is a backport of https://github.com/apache/spark/pull/18455
    When the data type is a struct, InSet now uses TypeUtils.getInterpretedOrdering (similar to EqualTo) to build a TreeSet. In other cases it will use a HashSet as before (which should be faster). Similarly, In.eval uses Ordering.equiv instead of equals.
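
    A hedged sketch of the kind of query affected (table name and values are illustrative):

    ```
    // assumes an existing SparkSession named `spark`
    import spark.implicits._
    Seq((1, "x"), (2, "y"), (3, "z")).toDF("a", "b").createOrReplaceTempView("t")
    spark.sql("""
      SELECT * FROM t
      WHERE named_struct('a', a, 'b', b) IN (named_struct('a', 1, 'b', 'x'),
                                             named_struct('a', 3, 'b', 'z'))
    """).show()
    ```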
    
    ## How was this patch tested?
    New test in SQLQuerySuite.
    
    Author: Bogdan Raducanu <bo...@databricks.com>
    
    Closes #18563 from bogdanrdc/SPARK-21228-BRANCH2.2.

commit a64f10800244a8057f7f32c3d2f4a719c5080d05
Author: Dongjoon Hyun <do...@...>
Date:   2017-07-08T12:16:47Z

    [SPARK-21345][SQL][TEST][TEST-MAVEN] SparkSessionBuilderSuite should clean up stopped sessions.
    
    `SparkSessionBuilderSuite` should clean up stopped sessions. Otherwise, it leaves behind some stopped `SparkContext`s interfering with other test suites using `SharedSQLContext`.
    
    Recently, the master branch has been failing consecutively.
    - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
    
    Pass the Jenkins with an updated suite.
    
    Author: Dongjoon Hyun <do...@apache.org>
    
    Closes #18567 from dongjoon-hyun/SPARK-SESSION.
    
    (cherry picked from commit 0b8dd2d08460f3e6eb578727d2c336b6f11959e7)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit c8d7855b905742033b7588ce7ee28bc23de13709
Author: Marcelo Vanzin <va...@...>
Date:   2017-07-08T16:24:54Z

    [SPARK-20342][CORE] Update task accumulators before sending task end event.
    
    This makes sure that listeners get updated task information; otherwise it's
    possible to write incomplete task information into event logs, for example,
    making the information in a replayed UI inconsistent with the original
    application.
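
    For illustration, a sketch of a listener that relies on this ordering (the class name is illustrative):

    ```
    import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

    // reads accumulator-backed task metrics when the task-end event arrives;
    // those metrics must already be up to date at that point
    class TaskMetricsLogger extends SparkListener {
      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
        Option(taskEnd.taskMetrics).foreach { m =>
          println(s"stage ${taskEnd.stageId}: ${m.inputMetrics.recordsRead} records read")
        }
      }
    }

    // sc.addSparkListener(new TaskMetricsLogger())  // assumes an existing SparkContext `sc`
    ```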
    
    Added a new unit test to try to detect the problem; it's not guaranteed
    to fail since it's a race, but it fails pretty reliably for me without the
    scheduler changes.
    
    Author: Marcelo Vanzin <va...@cloudera.com>
    
    Closes #18393 from vanzin/SPARK-20342.try2.
    
    (cherry picked from commit 9131bdb7e12bcfb2cb699b3438f554604e28aaa8)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit 964332b2879af048a95606dfcb4f2cb2e356135b
Author: jinxing <ji...@...>
Date:   2017-07-08T16:27:58Z

    [SPARK-21343] Refine the document for spark.reducer.maxReqSizeShuffleToMem.
    
    ## What changes were proposed in this pull request?
    
    In the current code, the reducer can break the old shuffle service when `spark.reducer.maxReqSizeShuffleToMem` is enabled. Let's refine the document.
    
    Author: jinxing <ji...@126.com>
    
    Closes #18566 from jinxing64/SPARK-21343.
    
    (cherry picked from commit 062c336d06a0bd4e740a18d2349e03e311509243)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit 3bfad9d4210f96dcd2270599257c3a5272cad77b
Author: Zhenhua Wang <wa...@...>
Date:   2017-07-09T10:51:06Z

    [SPARK-21083][SQL][BRANCH-2.2] Store zero size and row count when analyzing empty table
    
    ## What changes were proposed in this pull request?
    
    We should be able to store zero size and row count after analyzing an empty table.
    This is a backport for https://github.com/apache/spark/commit/9fccc3627fa41d32fbae6dbbb9bd1521e43eb4f0.
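
    A hedged sketch of the scenario (the table name is illustrative):

    ```
    // assumes an existing SparkSession named `spark`
    spark.sql("CREATE TABLE empty_t (i INT) USING parquet")
    spark.sql("ANALYZE TABLE empty_t COMPUTE STATISTICS")
    // the statistics should now record 0 bytes and 0 rows for the empty table
    spark.sql("DESCRIBE EXTENDED empty_t").show(100, truncate = false)
    ```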
    
    ## How was this patch tested?
    
    Added new test.
    
    Author: Zhenhua Wang <wa...@huawei.com>
    
    Closes #18575 from wzhfy/analyzeEmptyTable-2.2.

commit 40fd0ce7f2c2facb96fc5d613bc7b6e4b573d9f7
Author: jinxing <ji...@...>
Date:   2017-07-10T13:06:58Z

    [SPARK-21342] Fix DownloadCallback to work well with RetryingBlockFetcher.
    
    When `RetryingBlockFetcher` retries fetching blocks, there could be two `DownloadCallback`s downloading the same content to the same target file, which could cause `ShuffleBlockFetcherIterator` to read a partial result.

    This PR proposes to create and delete the tmp files in `OneForOneBlockFetcher`.
    
    Author: jinxing <ji...@126.com>
    Author: Shixiong Zhu <zs...@gmail.com>
    
    Closes #18565 from jinxing64/SPARK-21342.
    
    (cherry picked from commit 6a06c4b03c4dd86241fb9d11b4360371488f0e53)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit a05edf454a67261c89f0f2ecd1fe46bb8cebc257
Author: Juliusz Sompolski <ju...@...>
Date:   2017-07-10T16:26:42Z

    [SPARK-21272] SortMergeJoin LeftAnti does not update numOutputRows
    
    ## What changes were proposed in this pull request?
    
    Updating the numOutputRows metric was missing from one return path of LeftAnti SortMergeJoin.
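
    For illustration, a query of the affected shape (sizes are illustrative):

    ```
    // assumes an existing SparkSession named `spark`; with broadcast disabled the
    // left_anti equi-join plans as a SortMergeJoin, whose "number of output rows"
    // metric should be non-zero in the SQL UI
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")
    val left = spark.range(0, 1000).toDF("id")
    val right = spark.range(0, 500).toDF("id")
    left.join(right, Seq("id"), "left_anti").count()  // 500 rows survive the anti join
    ```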
    
    ## How was this patch tested?
    
    Non-zero output rows manually seen in metrics.
    
    Author: Juliusz Sompolski <ju...@databricks.com>
    
    Closes #18494 from juliuszsompolski/SPARK-21272.

commit edcd9fbc92683753d55ed0c69f391bf3bed59da4
Author: Shixiong Zhu <sh...@...>
Date:   2017-07-11T03:26:17Z

    [SPARK-21369][CORE] Don't use Scala Tuple2 in common/network-*
    
    ## What changes were proposed in this pull request?
    
    Remove all usages of Scala Tuple2 from common/network-* projects. Otherwise, Yarn users cannot use `spark.reducer.maxReqSizeShuffleToMem`.
    
    ## How was this patch tested?
    
    Jenkins.
    
    Author: Shixiong Zhu <sh...@databricks.com>
    
    Closes #18593 from zsxwing/SPARK-21369.
    
    (cherry picked from commit 833eab2c9bd273ee9577fbf9e480d3e3a4b7d203)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit 399aa016e8f44fea4e5ef4b71a9a80484dd755f8
Author: Xingbo Jiang <xi...@...>
Date:   2017-07-11T13:52:54Z

    [SPARK-21366][SQL][TEST] Add sql test for window functions
    
    ## What changes were proposed in this pull request?
    
    Add a SQL test for window functions, and also remove unnecessary test cases in `WindowQuerySuite`.
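
    An example of the kind of query such a golden file covers (the schema is illustrative):

    ```
    // assumes an existing SparkSession named `spark`
    spark.range(10).selectExpr("id % 3 AS key", "id AS value").createOrReplaceTempView("t")
    spark.sql("""
      SELECT key, value,
             sum(value) OVER (PARTITION BY key ORDER BY value
                              ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_sum
      FROM t
    """).show()
    ```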
    
    ## How was this patch tested?
    
    Added `window.sql` and the corresponding output file.
    
    Author: Xingbo Jiang <xi...@databricks.com>
    
    Closes #18591 from jiangxb1987/window.
    
    (cherry picked from commit 66d21686556681457aab6e44e19f5614c5635f0c)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit cb6fc89ba20a427fa7d66fa5036b17c1a5d5d87f
Author: Eric Vandenberg <er...@...>
Date:   2017-07-12T06:49:15Z

    [SPARK-21219][CORE] Task retry occurs on same executor due to race condition with blacklisting
    
    There's a race condition in the current TaskSetManager where a failed task is added for retry (addPendingTask) and can asynchronously be assigned to an executor *prior* to the blacklist state being updated (updateBlacklistForFailedTask); as a result, the task might re-execute on the same executor. This is particularly problematic if the executor is shutting down, since the retry task immediately becomes a lost task (ExecutorLostFailure). Another side effect is that the actual failure reason gets obscured by the retry task, which never actually executed. There are sample logs showing the issue in https://issues.apache.org/jira/browse/SPARK-21219
    
    The fix is to change the ordering of the addPendingTask and updateBlacklistForFailedTask calls in TaskSetManager.handleFailedTask.
    
    Implemented a unit test that verifies the task is blacklisted before it is added to the pending tasks. Ran the unit test without the fix and it fails; ran the unit test with the fix and it passes.
    
    Please review http://spark.apache.org/contributing.html before opening a pull request.
    
    Author: Eric Vandenberg <ericvandenbergfb.com>
    
    Closes #18427 from ericvandenbergfb/blacklistFix.
    
    ## What changes were proposed in this pull request?
    
    This is a backport of the fix to SPARK-21219, already checked in as 96d58f2.
    
    ## How was this patch tested?
    
    Ran TaskSetManagerSuite tests locally.
    
    Author: Eric Vandenberg <er...@fb.com>
    
    Closes #18604 from jsoltren/branch-2.2.

commit 39eba3053ac99f03d9df56471bae5fc5cc9f4462
Author: Kohki Nishio <ta...@...>
Date:   2017-07-13T00:22:40Z

    [SPARK-18646][REPL] Set parent classloader as null for ExecutorClassLoader
    
    ## What changes were proposed in this pull request?
    
    `ClassLoader` will preferentially load a class from its `parent`. Only when `parent` is null or the load fails will it call the overridden `findClass` function. To avoid potential issues caused by loading classes with an inappropriate class loader, we should set the `parent` of the `ClassLoader` to null, so that we can fully control which class loader is used.
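
    A minimal, generic sketch of the delegation behaviour described above (this is not Spark's actual `ExecutorClassLoader`):

    ```
    // with a null parent, loadClass falls through to the overridden findClass once
    // the bootstrap loader cannot resolve the name, so this loader stays in control
    class ChildFirstLoader(classBytes: Map[String, Array[Byte]])
        extends ClassLoader(null) {
      override def findClass(name: String): Class[_] =
        classBytes.get(name) match {
          case Some(bytes) => defineClass(name, bytes, 0, bytes.length)
          case None        => throw new ClassNotFoundException(name)
        }
    }
    ```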
    
    This is a takeover of #17074; the primary author of this PR is taroplus.
    
    Should close #17074 after this PR gets merged.
    
    ## How was this patch tested?
    
    Add test case in `ExecutorClassLoaderSuite`.
    
    Author: Kohki Nishio <ta...@me.com>
    Author: Xingbo Jiang <xi...@databricks.com>
    
    Closes #18614 from jiangxb1987/executor_classloader.
    
    (cherry picked from commit e08d06b37bc96cc48fec1c5e40f73e0bca09c616)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit cf0719b5e99333b28bb4066b304dbcf8400c80ea
Author: Wenchen Fan <we...@...>
Date:   2017-07-13T00:34:42Z

    Revert "[SPARK-18646][REPL] Set parent classloader as null for ExecutorClassLoader"
    
    This reverts commit 39eba3053ac99f03d9df56471bae5fc5cc9f4462.

commit bfe3ba86936ffaabff9f89d03018eb368d246b4d
Author: jerryshao <ss...@...>
Date:   2017-07-13T22:25:38Z

    [SPARK-21376][YARN] Fix yarn client token expire issue when cleaning the staging files in long running scenario
    
    ## What changes were proposed in this pull request?
    
    This issue happens in long-running applications with yarn cluster mode. Because yarn#client doesn't sync tokens with the AM, it always keeps the initial token; this token may expire in the long-running scenario, so when yarn#client tries to clean up the staging directory after the application finishes, it uses this expired token and hits the token expiration issue.
    
    ## How was this patch tested?
    
    Manual verification in a secure cluster.
    
    Author: jerryshao <ss...@hortonworks.com>
    
    Closes #18617 from jerryshao/SPARK-21376.
    
    (cherry picked from commit cb8d5cc90ff8d3c991ff33da41b136ab7634f71b)

commit 1cb4369a5b894619582e0d5ccc8c1f4ecb8ae36a
Author: Kazuaki Ishizaki <is...@...>
Date:   2017-07-15T03:16:04Z

    [SPARK-21344][SQL] BinaryType comparison does signed byte array comparison
    
    ## What changes were proposed in this pull request?
    
    This PR fixes a wrong comparison for `BinaryType`. It enables unsigned comparison and unsigned prefix generation for byte arrays of `BinaryType`. The previous implementation used signed operations.
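
    For illustration, a small sketch of the difference (helper names are illustrative):

    ```
    // 0x80 is -128 as a signed byte but 128 unsigned; signed comparison orders it
    // before 0x01, while unsigned comparison (correct for BinaryType) orders it after
    def signedCompare(x: Byte, y: Byte): Int = java.lang.Byte.compare(x, y)
    def unsignedCompare(x: Byte, y: Byte): Int = (x & 0xff) - (y & 0xff)

    val high = 0x80.toByte  // -128 signed, 128 unsigned
    val low = 0x01.toByte
    assert(signedCompare(high, low) < 0)    // wrong ordering for binary data
    assert(unsignedCompare(high, low) > 0)  // expected ordering
    ```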
    
    ## How was this patch tested?
    
    Added a test suite in `OrderingSuite`.
    
    Author: Kazuaki Ishizaki <is...@jp.ibm.com>
    
    Closes #18571 from kiszk/SPARK-21344.
    
    (cherry picked from commit ac5d5d795909061a17e056696cf0ef87d9e65dd1)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 8e85ce625127f62b7e2abdfab81c7bcbebcc8448
Author: Sean Owen <so...@...>
Date:   2017-07-15T08:21:29Z

    [SPARK-21267][DOCS][MINOR] Follow up to avoid referencing programming-guide redirector
    
    ## What changes were proposed in this pull request?
    
    Update internal references from programming-guide to rdd-programming-guide
    
    See https://github.com/apache/spark-website/commit/5ddf243fd84a0f0f98a5193a207737cea9cdc083 and https://github.com/apache/spark/pull/18485#issuecomment-314789751
    
    Let's keep the redirector even if it's problematic to build, but not rely on it internally.
    
    ## How was this patch tested?
    
    (Doc build)
    
    Author: Sean Owen <so...@cloudera.com>
    
    Closes #18625 from srowen/SPARK-21267.2.
    
    (cherry picked from commit 74ac1fb081e9532d77278a4edca9f3f129fd62eb)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit 0ef98fd435ff77196780c2cad6e1bda377b2642f
Author: John Lee <jl...@...>
Date:   2017-07-17T18:13:35Z

    [SPARK-21321][SPARK CORE] Spark very verbose on shutdown
    
    ## What changes were proposed in this pull request?
    
    The current code is very verbose on shutdown.
    
    The change I propose is to adjust the log level when the driver is shutting down and the RPC connections are closed (RpcEnvStoppedException).
    
    ## How was this patch tested?
    
    Tested with word count (deploy-mode = cluster, master = yarn, num-executors = 4) with 300GB of data.
    
    Author: John Lee <jl...@yahoo-inc.com>
    
    Closes #18547 from yoonlee95/SPARK-21321.
    
    (cherry picked from commit 0e07a29cf4a5587f939585e6885ed0f7e68c31b5)
    Signed-off-by: Tom Graves <tg...@yahoo-inc.com>

commit 83bdb04871248357ddbb665198c538f2df449006
Author: aokolnychyi <an...@...>
Date:   2017-07-18T04:07:50Z

    [SPARK-21332][SQL] Incorrect result type inferred for some decimal expressions
    
    ## What changes were proposed in this pull request?
    
    This PR changes the direction of expression transformation in the DecimalPrecision rule. Previously, the expressions were transformed down, which led to incorrect result types when decimal expressions had other decimal expressions as their operands. The root cause of this issue was in visiting outer nodes before their children. Consider the example below:
    
    ```
        val inputSchema = StructType(StructField("col", DecimalType(26, 6)) :: Nil)
        val sc = spark.sparkContext
        val rdd = sc.parallelize(1 to 2).map(_ => Row(BigDecimal(12)))
        val df = spark.createDataFrame(rdd, inputSchema)
    
        // Works correctly since no nested decimal expression is involved
        // Expected result type: (26, 6) * (26, 6) = (38, 12)
        df.select($"col" * $"col").explain(true)
        df.select($"col" * $"col").printSchema()
    
        // Gives a wrong result since there is a nested decimal expression that should be visited first
        // Expected result type: ((26, 6) * (26, 6)) * (26, 6) = (38, 12) * (26, 6) = (38, 18)
        df.select($"col" * $"col" * $"col").explain(true)
        df.select($"col" * $"col" * $"col").printSchema()
    ```
    
    The example above gives the following output:
    
    ```
    // Correct result without sub-expressions
    == Parsed Logical Plan ==
    'Project [('col * 'col) AS (col * col)#4]
    +- LogicalRDD [col#1]
    
    == Analyzed Logical Plan ==
    (col * col): decimal(38,12)
    Project [CheckOverflow((promote_precision(cast(col#1 as decimal(26,6))) * promote_precision(cast(col#1 as decimal(26,6)))), DecimalType(38,12)) AS (col * col)#4]
    +- LogicalRDD [col#1]
    
    == Optimized Logical Plan ==
    Project [CheckOverflow((col#1 * col#1), DecimalType(38,12)) AS (col * col)#4]
    +- LogicalRDD [col#1]
    
    == Physical Plan ==
    *Project [CheckOverflow((col#1 * col#1), DecimalType(38,12)) AS (col * col)#4]
    +- Scan ExistingRDD[col#1]
    
    // Schema
    root
     |-- (col * col): decimal(38,12) (nullable = true)
    
    // Incorrect result with sub-expressions
    == Parsed Logical Plan ==
    'Project [(('col * 'col) * 'col) AS ((col * col) * col)#11]
    +- LogicalRDD [col#1]
    
    == Analyzed Logical Plan ==
    ((col * col) * col): decimal(38,12)
    Project [CheckOverflow((promote_precision(cast(CheckOverflow((promote_precision(cast(col#1 as decimal(26,6))) * promote_precision(cast(col#1 as decimal(26,6)))), DecimalType(38,12)) as decimal(26,6))) * promote_precision(cast(col#1 as decimal(26,6)))), DecimalType(38,12)) AS ((col * col) * col)#11]
    +- LogicalRDD [col#1]
    
    == Optimized Logical Plan ==
    Project [CheckOverflow((cast(CheckOverflow((col#1 * col#1), DecimalType(38,12)) as decimal(26,6)) * col#1), DecimalType(38,12)) AS ((col * col) * col)#11]
    +- LogicalRDD [col#1]
    
    == Physical Plan ==
    *Project [CheckOverflow((cast(CheckOverflow((col#1 * col#1), DecimalType(38,12)) as decimal(26,6)) * col#1), DecimalType(38,12)) AS ((col * col) * col)#11]
    +- Scan ExistingRDD[col#1]
    
    // Schema
    root
     |-- ((col * col) * col): decimal(38,12) (nullable = true)
    ```
    
    ## How was this patch tested?
    
    This PR was tested with available unit tests. Moreover, there are tests to cover previously failing scenarios.
    
    Author: aokolnychyi <an...@sap.com>
    
    Closes #18583 from aokolnychyi/spark-21332.
    
    (cherry picked from commit 0be5fb41a6b7ef4da9ba36f3604ac646cb6d4ae3)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 99ce551a13f0918b440ddc094c3a32167d7ab3dd
Author: Burak Yavuz <br...@...>
Date:   2017-07-18T04:09:07Z

    [SPARK-21445] Make IntWrapper and LongWrapper in UTF8String Serializable
    
    ## What changes were proposed in this pull request?
    
    Making those two classes serializable will avoid serialization issues like the one below:
    ```
    Caused by: java.io.NotSerializableException: org.apache.spark.unsafe.types.UTF8String$IntWrapper
    Serialization stack:
        - object not serializable (class: org.apache.spark.unsafe.types.UTF8String$IntWrapper, value: org.apache.spark.unsafe.types.UTF8String$IntWrapper@326450e)
        - field (class: org.apache.spark.sql.catalyst.expressions.Cast$$anonfun$castToInt$1, name: result$2, type: class org.apache.spark.unsafe.types.UTF8String$IntWrapper)
        - object (class org.apache.spark.sql.catalyst.expressions.Cast$$anonfun$castToInt$1, <function1>)
    ```
    
    ## How was this patch tested?
    
    - [x] Manual testing
    - [ ] Unit test
    
    Author: Burak Yavuz <br...@gmail.com>
    
    Closes #18660 from brkyvz/serializableutf8.
    
    (cherry picked from commit 26cd2ca0402d7d49780116d45a5622a45c79f661)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit df061fd5f93c8110107198a94e68a4e29248e345
Author: Wenchen Fan <we...@...>
Date:   2017-07-18T22:56:16Z

    [SPARK-21457][SQL] ExternalCatalog.listPartitions should correctly handle partition values with dot
    
    ## What changes were proposed in this pull request?
    
    When we list partitions from the hive metastore with a partial partition spec, we expect exact matching of the partition values. However, hive treats the dot specially and matches any single character for it. We should do an extra filter to drop unexpected partitions.
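
    A hedged repro sketch of the behaviour (requires Hive support; table name and values are illustrative):

    ```
    // assumes a Hive-enabled SparkSession named `spark`
    spark.sql("CREATE TABLE part_t (v INT) PARTITIONED BY (p STRING) STORED AS PARQUET")
    spark.sql("INSERT INTO part_t PARTITION (p = 'a.b') SELECT 1")
    spark.sql("INSERT INTO part_t PARTITION (p = 'axb') SELECT 2")
    // should return only p=a.b, not p=axb (the dot must not act as a wildcard)
    spark.sql("SHOW PARTITIONS part_t PARTITION (p = 'a.b')").show(truncate = false)
    ```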
    
    ## How was this patch tested?
    
    new regression test.
    
    Author: Wenchen Fan <we...@databricks.com>
    
    Closes #18671 from cloud-fan/hive.
    
    (cherry picked from commit f18b905f6cace7686ef169fda7de474079d0af23)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 5a0a76f1648729dfa7ed0522dd2cb41ba805a2cd
Author: jinxing <ji...@...>
Date:   2017-07-19T13:35:26Z

    [SPARK-21414] Refine SlidingWindowFunctionFrame to avoid OOM.
    
    ## What changes were proposed in this pull request?
    
    In `SlidingWindowFunctionFrame`, we currently add to the buffer all rows for which the input row value is equal to or less than the output row upper bound, and then drop all rows from the buffer for which the input row value is smaller than the output row lower bound.
    This could result in a very large buffer even though the window is small.
    For example:
    ```
    select a, b, sum(a)
    over (partition by b order by a range between 1000000 following and 1000001 following)
    from table
    ```
    We can refine the logic and just add the qualified rows into the buffer.
    
    ## How was this patch tested?
    Manual test:
    Run sql
    `select shop, shopInfo, district, sum(revenue) over(partition by district order by revenue range between 100 following and 200 following) from revenueList limit 10`
    against a table with 4 columns (shop: String, shopInfo: String, district: String, revenue: Int). The biggest partition is around 2 GB, containing 200k lines.
    Configure the executor with 2G bytes memory.
    With the change in this PR, it works fine. Without this change, the exception below will be thrown.
    ```
    MemoryError: Java heap space
    	at org.apache.spark.sql.catalyst.expressions.UnsafeRow.copy(UnsafeRow.java:504)
    	at org.apache.spark.sql.catalyst.expressions.UnsafeRow.copy(UnsafeRow.java:62)
    	at org.apache.spark.sql.execution.window.SlidingWindowFunctionFrame.write(WindowFunctionFrame.scala:201)
    	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1.next(WindowExec.scala:365)
    	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1.next(WindowExec.scala:289)
    	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
    	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
    	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
    	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    	at org.apache.spark.scheduler.Task.run(Task.scala:108)
    	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:341)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ```
    
    Author: jinxing <ji...@126.com>
    
    Closes #18634 from jinxing64/SPARK-21414.
    
    (cherry picked from commit 4eb081cc870a9d1c42aae90418535f7d782553e9)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit 4c212eed1a4a75756216b13aab211d945e14d89b
Author: donnyzone <we...@...>
Date:   2017-07-19T13:48:54Z

    [SPARK-21441][SQL] Incorrect Codegen in SortMergeJoinExec results failures in some cases
    
    ## What changes were proposed in this pull request?
    
    https://issues.apache.org/jira/projects/SPARK/issues/SPARK-21441
    
    This issue can be reproduced by the following example:
    
    ```
    val spark = SparkSession
       .builder()
       .appName("smj-codegen")
       .master("local")
       .config("spark.sql.autoBroadcastJoinThreshold", "1")
       .getOrCreate()
    val df1 = spark.createDataFrame(Seq((1, 1), (2, 2), (3, 3))).toDF("key", "int")
    val df2 = spark.createDataFrame(Seq((1, "1"), (2, "2"), (3, "3"))).toDF("key", "str")
    val df = df1.join(df2, df1("key") === df2("key"))
       .filter("int = 2 or reflect('java.lang.Integer', 'valueOf', str) = 1")
       .select("int")
       df.show()
    ```
    
    To conclude, the issue happens when:
    (1) the SortMergeJoin condition contains CodegenFallback expressions, and
    (2) in the PhysicalPlan tree, the SortMergeJoin node is the child of the root node, e.g., the Project in the above example.
    
    This patch fixes the logic in `CollapseCodegenStages` rule.
    
    ## How was this patch tested?
    Unit test and manual verification in our cluster.
    
    Author: donnyzone <we...@gmail.com>
    
    Closes #18656 from DonnyZone/Fix_SortMergeJoinExec.
    
    (cherry picked from commit 6b6dd682e84d3b03d0b15fbd81a0d16729e521d2)
    Signed-off-by: Wenchen Fan <we...@databricks.com>

commit 86cd3c08871618441c0c297da0f48ac284595697
Author: Tathagata Das <ta...@...>
Date:   2017-07-19T18:02:07Z

    [SPARK-21464][SS] Minimize deprecation warnings caused by ProcessingTime class
    
    ## What changes were proposed in this pull request?
    
    Use of the `ProcessingTime` class was deprecated in favor of `Trigger.ProcessingTime` in Spark 2.2. However, existing uses of `ProcessingTime` cause deprecation warnings during compilation. This cannot be avoided entirely: even though it is deprecated as a public API, `ProcessingTime` instances are used internally in `TriggerExecutor`. This PR minimizes the warnings by removing its uses from tests as much as possible.
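
    For reference, the non-deprecated form uses `Trigger.ProcessingTime` (the stream below is illustrative):

    ```
    import org.apache.spark.sql.streaming.Trigger

    // assumes an existing streaming DataFrame `streamingDF`
    val query = streamingDF.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start()
    ```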
    
    ## How was this patch tested?
    Existing tests.
    
    Author: Tathagata Das <ta...@gmail.com>
    
    Closes #18678 from tdas/SPARK-21464.
    
    (cherry picked from commit 70fe99dc62ef636a99bcb8a580ad4de4dca95181)
    Signed-off-by: Tathagata Das <ta...@gmail.com>

commit 308bce0eb60649b15836614567532460ea73bd12
Author: DFFuture <al...@...>
Date:   2017-07-19T21:45:11Z

    [SPARK-21446][SQL] Fix setAutoCommit never executed
    
    ## What changes were proposed in this pull request?
    JIRA Issue: https://issues.apache.org/jira/browse/SPARK-21446
    options.asConnectionProperties cannot contain fetchsize, because fetchsize belongs to the Spark-only options, and Spark-only options are excluded from the connection properties.
    So change the properties passed to beforeFetch from options.asConnectionProperties.asScala.toMap to options.asProperties.asScala.toMap.
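
    For context, `fetchsize` is one of the Spark-only JDBC options (the URL and table below are hypothetical):

    ```
    // assumes an existing SparkSession named `spark` and a reachable database
    val df = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/mydb")  // hypothetical URL
      .option("dbtable", "public.t")
      .option("fetchsize", "1000")  // Spark-only option, not a raw connection property
      .load()
    ```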
    
    ## How was this patch tested?
    
    Author: DFFuture <al...@gmail.com>
    
    Closes #18665 from DFFuture/sparksql_pg.
    
    (cherry picked from commit c9729187bcef78299390e53cd9af38c3e084060e)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 9949fed1c45865b6e5e8ebe610789c5fb9546052
Author: Corey Woodfield <co...@...>
Date:   2017-07-19T22:21:38Z

    [SPARK-21333][DOCS] Removed invalid joinTypes from javadoc of Dataset#joinWith
    
    ## What changes were proposed in this pull request?
    
    Two invalid join types were mistakenly listed in the javadoc for joinWith, in the Dataset class. I presume these were copied from the javadoc of join, but since joinWith returns a Dataset\<Tuple2\>, left_semi and left_anti are invalid, as they only return values from one of the datasets instead of from both.
    
    ## How was this patch tested?
    
    I ran the following code :
    ```
    public static void main(String[] args) {
    	SparkSession spark = new SparkSession(new SparkContext("local[*]", "Test"));
    	Dataset<Row> one = spark.createDataFrame(Arrays.asList(new Bean(1), new Bean(2), new Bean(3), new Bean(4), new Bean(5)), Bean.class);
    	Dataset<Row> two = spark.createDataFrame(Arrays.asList(new Bean(4), new Bean(5), new Bean(6), new Bean(7), new Bean(8), new Bean(9)), Bean.class);
    
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "inner").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "cross").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "outer").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "full").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "full_outer").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "left").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "left_outer").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "right").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "right_outer").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "left_semi").show();} catch (Exception e) {e.printStackTrace();}
    	try {two.joinWith(one, one.col("x").equalTo(two.col("x")), "left_anti").show();} catch (Exception e) {e.printStackTrace();}
    }
    ```
    which tests all the different join types, and the last two (left_semi and left_anti) threw exceptions. The same code using join instead of joinWith did fine. The Bean class was just a java bean with a single int field, x.
    
    Author: Corey Woodfield <co...@gmail.com>
    
    Closes #18462 from coreywoodfield/master.
    
    (cherry picked from commit 8cd9cdf17a7a4ad6f2eecd7c4b388ca363c20982)
    Signed-off-by: gatorsmile <ga...@gmail.com>

commit 88dccda393bc79dc6032f71b6acf8eb2b4b152be
Author: Dhruve Ashar <dh...@...>
Date:   2017-07-21T19:03:46Z

    [SPARK-21243][CORE] Limit no. of map outputs in a shuffle fetch
    
    For configurations with external shuffle enabled, we have observed that if a very large number of blocks are being fetched from a remote host, it puts the NM under extra pressure and can crash it. This change introduces a configuration, `spark.reducer.maxBlocksInFlightPerAddress`, to limit the number of map outputs being fetched from a given remote address. The changes applied here are applicable for both scenarios - when external shuffle is enabled as well as disabled.
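
    For illustration, setting the new limit (the value is illustrative, not a recommendation):

    ```
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.reducer.maxBlocksInFlightPerAddress", "100")
    // pass `conf` to SparkSession.builder().config(conf), or use
    // spark-submit --conf spark.reducer.maxBlocksInFlightPerAddress=100
    ```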
    
    Ran the job with the default configuration, which does not change the existing behavior, and ran it with a few lower values - 10, 20, 50, 100. The job ran fine and there is no change in the output. (I will update the metrics related to the NM in some time.)
    
    Author: Dhruve Ashar <dhruveashargmail.com>
    
    Closes #18487 from dhruve/impr/SPARK-21243.
    
    Author: Dhruve Ashar <dh...@gmail.com>
    
    Closes #18691 from dhruve/branch-2.2.

commit da403b95353f064c24da25236fa7f905fa8ddca1
Author: Holden Karau <ho...@...>
Date:   2017-07-21T23:50:47Z

    [SPARK-21434][PYTHON][DOCS] Add pyspark pip documentation.
    
    Update the Quickstart and RDD programming guides to mention pip.
    
    Built docs locally.
    
    Author: Holden Karau <ho...@us.ibm.com>
    
    Closes #18698 from holdenk/SPARK-21434-add-pyspark-pip-documentation.
    
    (cherry picked from commit cc00e99d5396893b2d3d50960161080837cf950a)
    Signed-off-by: Holden Karau <ho...@us.ibm.com>

----


---



[GitHub] spark pull request #20569: Branch 2.2

Posted by MohammedLayeeq <gi...@git.apache.org>.
Github user MohammedLayeeq closed the pull request at:

    https://github.com/apache/spark/pull/20569


---
