Posted to reviews@spark.apache.org by deepaksonu <gi...@git.apache.org> on 2018/06/07 18:29:29 UTC

[GitHub] spark pull request #21507: Branch 1.6

GitHub user deepaksonu opened a pull request:

    https://github.com/apache/spark/pull/21507

    Branch 1.6

    ## What changes were proposed in this pull request?
    
    (Please fill in changes proposed in this fix)
    
    ## How was this patch tested?
    
    (Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
    (If this patch involves UI changes, please attach a screenshot; otherwise, remove this)
    
    Please review http://spark.apache.org/contributing.html before opening a pull request.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/spark branch-1.6

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/21507.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #21507
    
----
commit cdfb2a1410aa799596c8b751187dbac28b2cc678
Author: Wenchen Fan <we...@...>
Date:   2016-02-04T00:13:23Z

    [SPARK-13101][SQL][BRANCH-1.6] nullability of array type element should not fail analysis of encoder
    
    nullability should only be considered an optimization rather than part of the type system, so instead of failing analysis on mismatched nullability, we should pass analysis and add a runtime null check.
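
    A minimal sketch of the idea (the helper below is illustrative, not the actual encoder internals): move the nullability check from analysis time to extraction time, so it only fails when a null actually appears.
    
    ```scala
    // Illustrative sketch: instead of rejecting a mismatched non-nullable
    // element type during analysis, assert non-nullness at runtime.
    def assertNotNull[T](value: T, walkedTypePath: Seq[String]): T = {
      if (value == null) {
        throw new NullPointerException(
          s"Null value appeared in non-nullable field: ${walkedTypePath.mkString(" -> ")}")
      }
      value
    }
    ```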
    
    backport https://github.com/apache/spark/pull/11035 to 1.6
    
    Author: Wenchen Fan <we...@databricks.com>
    
    Closes #11042 from cloud-fan/branch-1.6.

commit 2f390d3066297466d98e17a78c5433f37f70cc95
Author: Yuhao Yang <hh...@...>
Date:   2016-02-04T05:19:44Z

    [ML][DOC] fix wrong api link in ml onevsrest
    
    Minor fix for the API link in the ML OneVsRest doc.
    
    Author: Yuhao Yang <hh...@gmail.com>
    
    Closes #11068 from hhbyyh/onevsrestDoc.
    
    (cherry picked from commit c2c956bcd1a75fd01868ee9ad2939a6d3de52bc2)
    Signed-off-by: Xiangrui Meng <me...@databricks.com>

commit a907c7c64887833770cd593eecccf53620de59b7
Author: Shixiong Zhu <sh...@...>
Date:   2016-02-04T20:43:16Z

    [SPARK-13195][STREAMING] Fix NoSuchElementException when a state is not set but timeoutThreshold is defined
    
    Check that the state exists before calling `get`.
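
    A minimal sketch of the guard, with hypothetical names standing in for the streaming state wrapper:
    
    ```scala
    // Guard the Option before .get so an unset state with a defined
    // timeoutThreshold no longer throws NoSuchElementException.
    val stateOption: Option[Int] = None
    val timeoutThreshold: Option[Long] = Some(1000L)
    
    if (timeoutThreshold.isDefined && stateOption.isDefined) {
      println(s"state = ${stateOption.get}") // reached only when the state exists
    }
    ```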
    
    Author: Shixiong Zhu <sh...@databricks.com>
    
    Closes #11081 from zsxwing/SPARK-13195.
    
    (cherry picked from commit 8e2f296306131e6c7c2f06d6672995d3ff8ab021)
    Signed-off-by: Shixiong Zhu <sh...@databricks.com>

commit 3ca5dc3072d0d96ba07d102e9104cbbb177c352b
Author: Bill Chambers <bi...@...>
Date:   2016-02-05T22:35:39Z

    [SPARK-13214][DOCS] update dynamicAllocation documentation
    
    Author: Bill Chambers <bi...@databricks.com>
    
    Closes #11094 from anabranch/dynamic-docs.
    
    (cherry picked from commit 66e1383de2650a0f06929db8109a02e32c5eaf6b)
    Signed-off-by: Andrew Or <an...@databricks.com>

commit 9b30096227263f77fc67ed8f12fb2911c3256774
Author: Davies Liu <da...@...>
Date:   2016-02-08T20:08:58Z

    [SPARK-13210][SQL] catch OOM when allocate memory and expand array
    
    There is a bug when we try to grow the buffer: the OOM is wrongly ignored (the assert is also skipped by the JVM), and when we then try to grow the array again, that attempt triggers spilling, which frees the current page and invalidates the record we just inserted.
    
    The root cause is that the JVM has less free memory than the MemoryManager thought, so allocating a page can OOM without triggering spilling. We should catch the OOM and acquire memory again to trigger spilling.
    
    Also, we should not grow the array in `insertRecord` of `InMemorySorter` (it was there just for easy testing).
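
    An illustrative sketch of the retry pattern (not the actual Spark memory-manager code; `spill` is an assumed callback):
    
    ```scala
    // Catch the OOM from a page allocation, spill to free memory, then retry.
    def allocatePage(size: Int, spill: () => Unit): Array[Byte] = {
      try {
        new Array[Byte](size)
      } catch {
        case _: OutOfMemoryError =>
          spill()               // release memory held by the current consumer
          new Array[Byte](size) // retry; a second OOM here is a real OOM
      }
    }
    ```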
    
    Author: Davies Liu <da...@databricks.com>
    
    Closes #11095 from davies/fix_expand.

commit 82fa86470682cb4fcd4b3d5351167e4a936b8494
Author: Steve Loughran <st...@...>
Date:   2016-02-09T19:01:47Z

    [SPARK-12807][YARN] Spark External Shuffle not working in Hadoop clusters with Jackson 2.2.3
    
    Patch to
    
    1. Shade jackson 2.x in spark-yarn-shuffle JAR: core, databind, annotation
    2. Use maven antrun to verify the JAR has the renamed classes
    
    Since the verification is Maven-based, I don't know whether that phase kicks in on an SBT/Jenkins build; it will on a `mvn install`.
    
    Author: Steve Loughran <st...@hortonworks.com>
    
    Closes #10780 from steveloughran/stevel/patches/SPARK-12807-master-shuffle.
    
    (cherry picked from commit 34d0b70b309f16af263eb4e6d7c36e2ea170bc67)
    Signed-off-by: Marcelo Vanzin <va...@cloudera.com>

commit 89818cbf808137201d2558eaab312264d852cf00
Author: Liang-Chi Hsieh <vi...@...>
Date:   2016-02-10T01:10:55Z

    [SPARK-10524][ML] Use the soft prediction to order categories' bins
    
    JIRA: https://issues.apache.org/jira/browse/SPARK-10524
    
    Currently we use the hard prediction (`ImpurityCalculator.predict`) to order categories' bins. But we should use the soft prediction.
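
    A small sketch of the difference, assuming per-class label counts in a binary-classification bin (the class and field names are hypothetical):
    
    ```scala
    // Order bins by the class-1 probability (soft prediction) rather than the
    // argmax label (hard prediction), which collapses distinct distributions.
    case class CategoryStats(category: Int, counts: Array[Double]) {
      def hardPrediction: Double = if (counts(1) > counts(0)) 1.0 else 0.0
      def softPrediction: Double = counts(1) / counts.sum // P(label = 1)
    }
    
    val bins = Seq(
      CategoryStats(0, Array(8.0, 2.0)),
      CategoryStats(1, Array(4.0, 6.0)),
      CategoryStats(2, Array(9.0, 1.0)))
    
    // Soft ordering distinguishes categories 0 and 2; hard ordering cannot.
    val ordered = bins.sortBy(_.softPrediction).map(_.category) // List(2, 0, 1)
    ```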
    
    Author: Liang-Chi Hsieh <vi...@gmail.com>
    Author: Liang-Chi Hsieh <vi...@appier.com>
    Author: Joseph K. Bradley <jo...@databricks.com>
    
    Closes #8734 from viirya/dt-soft-centroids.
    
    (cherry picked from commit 9267bc68fab65c6a798e065a1dbe0f5171df3077)
    Signed-off-by: Joseph K. Bradley <jo...@databricks.com>

commit 93f1d91755475a242456fe06e57bfca10f4d722f
Author: Josh Rosen <jo...@...>
Date:   2016-02-10T19:02:41Z

    [SPARK-12921] Fix another non-reflective TaskAttemptContext access in SpecificParquetRecordReaderBase
    
    This is a minor followup to #10843 to fix one remaining place where we forgot to use reflective access of TaskAttemptContext methods.
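
    A sketch of the reflective pattern (the motivation: `TaskAttemptContext` is a class in Hadoop 1.x but an interface in Hadoop 2.x, so a direct call compiled against one can break at runtime against the other):
    
    ```scala
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.mapreduce.TaskAttemptContext
    
    // Resolve getConfiguration at runtime instead of binding at compile time.
    def getConfiguration(context: TaskAttemptContext): Configuration =
      context.getClass.getMethod("getConfiguration")
        .invoke(context).asInstanceOf[Configuration]
    ```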
    
    Author: Josh Rosen <jo...@databricks.com>
    
    Closes #11131 from JoshRosen/SPARK-12921-take-2.

commit b57fac576f0033e8b43a89b4ada29901199aa29b
Author: raela <ra...@...>
Date:   2016-02-11T01:00:54Z

    [SPARK-13274] Fix Aggregator Links on GroupedDataset Scala API
    
    Update Aggregator links to point to #org.apache.spark.sql.expressions.Aggregator
    
    Author: raela <ra...@databricks.com>
    
    Closes #11158 from raelawang/master.
    
    (cherry picked from commit 719973b05ef6d6b9fbb83d76aebac6454ae84fad)
    Signed-off-by: Reynold Xin <rx...@databricks.com>

commit 91a5ca5e84497c37de98c194566a568117332710
Author: Yu ISHIKAWA <yu...@...>
Date:   2016-02-11T23:00:23Z

    [SPARK-13265][ML] Refactoring of basic ML import/export for other file system besides HDFS
    
    jkbradley I tried to improve the model-export function. When I tried to export a model to S3 under Spark 1.6, it didn't work, so the export should support S3 in addition to HDFS. Can you review it when you have time? Thanks!
    
    Author: Yu ISHIKAWA <yu...@gmail.com>
    
    Closes #11151 from yu-iskw/SPARK-13265.
    
    (cherry picked from commit efb65e09bcfa4542348f5cd37fe5c14047b862e5)
    Signed-off-by: Xiangrui Meng <me...@databricks.com>

commit 9d45ec466a4067bb2d0b59ff1174bec630daa7b1
Author: sethah <se...@...>
Date:   2016-02-12T00:42:44Z

    [SPARK-13047][PYSPARK][ML] Pyspark Params.hasParam should not throw an error
    
    The PySpark Params class has a method `hasParam(paramName)` which returns `True` if the class has a parameter by that name, but throws an `AttributeError` otherwise. There is currently no way to get a Boolean indicating whether a class has a given parameter. With Spark 2.0 we could modify the existing behavior of `hasParam` or add an additional method with this functionality.
    
    In Python:
    ```python
    from pyspark.ml.classification import NaiveBayes
    nb = NaiveBayes()
    print nb.hasParam("smoothing")
    print nb.hasParam("notAParam")
    ```
    produces:
    > True
    > AttributeError: 'NaiveBayes' object has no attribute 'notAParam'
    
    However, in Scala:
    ```scala
    import org.apache.spark.ml.classification.NaiveBayes
    val nb  = new NaiveBayes()
    nb.hasParam("smoothing")
    nb.hasParam("notAParam")
    ```
    produces:
    > true
    > false
    
    cc holdenk
    
    Author: sethah <se...@gmail.com>
    
    Closes #10962 from sethah/SPARK-13047.
    
    (cherry picked from commit b35467388612167f0bc3d17142c21a406f6c620d)
    Signed-off-by: Xiangrui Meng <me...@databricks.com>

commit 18661a2bb527adbd01e98158696a16f6d8162411
Author: Tommy YU <tu...@...>
Date:   2016-02-12T02:38:49Z

    [SPARK-13153][PYSPARK] ML persistence failed when handling a parameter with no default value
    
    Fix this defect by checking whether a default value exists.
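
    A minimal sketch of the check, with a plain map standing in for the real param metadata:
    
    ```scala
    // Look the default up with get, which yields None instead of throwing
    // when a param has no default value.
    val defaults: Map[String, Any] = Map("smoothing" -> 1.0)
    
    def defaultFor(paramName: String): Option[Any] = defaults.get(paramName)
    
    println(defaultFor("smoothing"))  // Some(1.0)
    println(defaultFor("thresholds")) // None -> skip this param when persisting
    ```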
    
    yanboliang Please help to review.
    
    Author: Tommy YU <tu...@163.com>
    
    Closes #11043 from Wenpei/spark-13153-handle-param-withnodefaultvalue.
    
    (cherry picked from commit d3e2e202994e063856c192e9fdd0541777b88e0e)
    Signed-off-by: Xiangrui Meng <me...@databricks.com>

commit 93a55f3df3c9527ecf4143cb40ac7212bc3a975a
Author: markpavey <ma...@...>
Date:   2016-02-13T08:39:43Z

    [SPARK-13142][WEB UI] Problem accessing Web UI /logPage/ on Microsoft Windows
    
    Due to being on a Windows platform I have been unable to run the tests as described in the "Contributing to Spark" instructions. As the change is only to two lines of code in the Web UI, which I have manually built and tested, I am submitting this pull request anyway. I hope this is OK.
    
    Is it worth considering also including this fix in any future 1.5.x releases (if any)?
    
    I confirm this is my own original work and license it to the Spark project under its open source license.
    
    Author: markpavey <ma...@thefilter.com>
    
    Closes #11135 from markpavey/JIRA_SPARK-13142_WindowsWebUILogFix.
    
    (cherry picked from commit 374c4b2869fc50570a68819cf0ece9b43ddeb34b)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit 107290c94312524bfc4560ebe0de268be4ca56af
Author: Liang-Chi Hsieh <vi...@...>
Date:   2016-02-13T23:56:20Z

    [SPARK-12363][MLLIB] Remove setRun and fix PowerIterationClustering failed test
    
    JIRA: https://issues.apache.org/jira/browse/SPARK-12363
    
    This issue was pointed out by yanboliang. When `setRuns` is removed from PowerIterationClustering, one of the tests fails. I found that some `dstAttr`s of the normalized graph are not the correct values but 0.0. Setting `TripletFields.All` in `mapTriplets` makes it work.
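
    A sketch of the change against the GraphX public API (the input graph is assumed):
    
    ```scala
    import org.apache.spark.graphx.{Graph, TripletFields}
    
    // Request all triplet fields so dstAttr is actually shipped to the edge
    // computation instead of showing up as a stale 0.0.
    def normalize(graph: Graph[Double, Double]): Graph[Double, Double] =
      graph.mapTriplets(t => t.attr / t.dstAttr, TripletFields.All)
    ```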
    
    Author: Liang-Chi Hsieh <vi...@gmail.com>
    Author: Xiangrui Meng <me...@databricks.com>
    
    Closes #10539 from viirya/fix-poweriter.
    
    (cherry picked from commit e3441e3f68923224d5b576e6112917cf1fe1f89a)
    Signed-off-by: Xiangrui Meng <me...@databricks.com>

commit ec40c5a59fe45e49496db6e0082ddc65c937a857
Author: Amit Dev <am...@...>
Date:   2016-02-14T11:41:27Z

    [SPARK-13300][DOCUMENTATION] Added pygments.rb dependency
    
    It looks like the pygments.rb gem is also required for the jekyll build to work. At least on Ubuntu/RHEL I could not build without this dependency, so I added it to the steps.
    
    Author: Amit Dev <am...@gmail.com>
    
    Closes #11180 from amitdev/master.
    
    (cherry picked from commit 331293c30242dc43e54a25171ca51a1c9330ae44)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit 71f53edc0e39bc907755153b9603be8c6fcc1d93
Author: JeremyNixon <jn...@...>
Date:   2016-02-15T09:25:13Z

    [SPARK-13312][MLLIB] Update java train-validation-split example in ml-guide
    
    Response to JIRA https://issues.apache.org/jira/browse/SPARK-13312.
    
    This contribution is my original work and I license the work to this project.
    
    Author: JeremyNixon <jn...@gmail.com>
    
    Closes #11199 from JeremyNixon/update_train_val_split_example.
    
    (cherry picked from commit adb548365012552e991d51740bfd3c25abf0adec)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit d95089190d714e3e95579ada84ac42d463f824b5
Author: Miles Yucht <mi...@...>
Date:   2016-02-16T13:01:21Z

    Correct SparseVector.parse documentation
    
    There's a small typo in the SparseVector.parse docstring, which incorrectly says that the method returns a DenseVector rather than a SparseVector.
    
    Author: Miles Yucht <mi...@databricks.com>
    
    Closes #11213 from mgyucht/fix-sparsevector-docs.
    
    (cherry picked from commit 827ed1c06785692d14857bd41f1fd94a0853874a)
    Signed-off-by: Sean Owen <so...@cloudera.com>

commit 98354cae984e3719a49050e7a6aa75dae78b12bb
Author: Sital Kedia <sk...@...>
Date:   2016-02-17T06:27:34Z

    [SPARK-13279] Remove O(n^2) operation from scheduler.
    
    This commit removes an unnecessary duplicate check in addPendingTask that meant
    that scheduling a task set took time proportional to (# tasks)^2.
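
    An illustrative before/after with assumed names (not the real TaskSetManager code):
    
    ```scala
    import scala.collection.mutable.ArrayBuffer
    
    val pendingTasks = ArrayBuffer[Int]()
    
    def addPendingTask(taskId: Int): Unit = {
      // removed: if (!pendingTasks.contains(taskId)) ... an O(n) scan per
      // insert, making n inserts O(n^2). Duplicates are tolerated because
      // tasks are re-validated when they are dequeued.
      pendingTasks += taskId // O(1) append
    }
    ```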
    
    Author: Sital Kedia <sk...@fb.com>
    
    Closes #11175 from sitalkedia/fix_stuck_driver.
    
    (cherry picked from commit 1e1e31e03df14f2e7a9654e640fb2796cf059fe0)
    Signed-off-by: Kay Ousterhout <ka...@gmail.com>

commit 66106a660149607348b8e51994eb2ce29d67abc0
Author: Christopher C. Aycock <ch...@...>
Date:   2016-02-17T19:24:18Z

    [SPARK-13350][DOCS] Config doc updated to state that PYSPARK_PYTHON's default is "python2.7"
    
    Author: Christopher C. Aycock <ch...@chrisaycock.com>
    
    Closes #11239 from chrisaycock/master.
    
    (cherry picked from commit a7c74d7563926573c01baf613708a0f105a03e57)
    Signed-off-by: Josh Rosen <jo...@databricks.com>

commit 16f35c4c6e7e56bdb1402eab0877da6e8497cb3f
Author: Sean Owen <so...@...>
Date:   2016-02-18T20:14:30Z

    [SPARK-13371][CORE][STRING] TaskSetManager.dequeueSpeculativeTask compares Option and String directly.
    
    ## What changes were proposed in this pull request?
    
    Fix some comparisons between unequal types that cause IntelliJ warnings and, in at least one case, a likely bug (TaskSetManager).
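
    A sketch of the bug class (variable names hypothetical):
    
    ```scala
    // An Option[String] is never == a raw String, so the comparison is
    // always false and the intended branch silently never fires.
    val preferredHost: Option[String] = Some("host1")
    
    val broken = preferredHost == "host1"           // always false
    val fixed  = preferredHost.exists(_ == "host1") // true: compares the value
    ```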
    
    ## How was this patch tested?
    
    Running Jenkins tests
    
    Author: Sean Owen <so...@cloudera.com>
    
    Closes #11253 from srowen/SPARK-13371.
    
    (cherry picked from commit 78562535feb6e214520b29e0bbdd4b1302f01e93)
    Signed-off-by: Andrew Or <an...@databricks.com>

commit 699644c692472e5b78baa56a1a6c44d8d174e70e
Author: Michael Armbrust <mi...@...>
Date:   2016-02-22T23:27:29Z

    [SPARK-12546][SQL] Change default number of open parquet files
    
    A common problem that users encounter with Spark 1.6.0 is that writing to a partitioned Parquet table OOMs. The root cause is that Parquet allocates a significant amount of memory that is not accounted for by our own mechanisms. As a workaround, we can ensure that only a single file is open per task unless the user explicitly asks for more.
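
    A hedged illustration of the knob in question, in the 1.6 shell where `sqlContext` is predefined (the config key below is quoted from memory and may differ by version; treat it as an assumption and check your release's docs):
    
    ```scala
    // Explicitly ask for more concurrently open files per task if the job
    // genuinely benefits and has the memory headroom (key name assumed).
    sqlContext.setConf("spark.sql.sources.maxConcurrentWrites", "5")
    ```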
    
    Author: Michael Armbrust <mi...@databricks.com>
    
    Closes #11308 from marmbrus/parquetWriteOOM.
    
    (cherry picked from commit 173aa949c309ff7a7a03e9d762b9108542219a95)
    Signed-off-by: Michael Armbrust <mi...@databricks.com>

commit 85e6a2205d4549c81edbc2238fd15659120cee78
Author: Shixiong Zhu <sh...@...>
Date:   2016-02-23T01:42:30Z

    [SPARK-13298][CORE][UI] Escape "label" to avoid DAG being broken by some special character
    
    ## What changes were proposed in this pull request?
    
    When there are some special characters (e.g., `"`, `\`) in `label`, the DAG will be broken. This patch just escapes `label` to avoid the DAG being broken by such characters.
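
    A minimal sketch of the escaping idea (the function name is hypothetical):
    
    ```scala
    // Escape backslashes first, then quotes, before embedding the label in
    // the generated DAG string.
    def escapeLabel(label: String): String =
      label.replace("\\", "\\\\").replace("\"", "\\\"")
    
    println(escapeLabel("""stage "A" \ map""")) // stage \"A\" \\ map
    ```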
    
    ## How was this patch tested?
    
    Jenkins tests
    
    Author: Shixiong Zhu <sh...@databricks.com>
    
    Closes #11309 from zsxwing/SPARK-13298.
    
    (cherry picked from commit a11b3995190cb4a983adcc8667f7b316cce18d24)
    Signed-off-by: Andrew Or <an...@databricks.com>

commit f7898f9e2df131fa78200f6034508e74a78c2a44
Author: Daoyuan Wang <da...@...>
Date:   2016-02-23T02:13:32Z

    [SPARK-11624][SPARK-11972][SQL] fix commands that need hive to exec
    
    In SparkSQLCLI, we have created a `CliSessionState`, but then we call `SparkSQLEnv.init()`, which starts another `SessionState`. This leads to an exception because `processCmd` needs to get the `CliSessionState` instance by calling `SessionState.get()`, but the return value is an instance of `SessionState`. See the exception below.
    
    spark-sql> !echo "test";
    Exception in thread "main" java.lang.ClassCastException: org.apache.hadoop.hive.ql.session.SessionState cannot be cast to org.apache.hadoop.hive.cli.CliSessionState
    	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:112)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:301)
    	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:242)
    	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:606)
    	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:691)
    	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
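
    A minimal sketch of the failure mode with stand-in classes (not the real Hive types): the most recently started state wins, so the blind cast throws.
    
    ```scala
    class SessionState
    class CliSessionState extends SessionState
    
    var current: SessionState = new CliSessionState // created by the CLI
    current = new SessionState // SparkSQLEnv.init() starts another one
    
    current match {
      case cli: CliSessionState => println("CLI-specific state available")
      case _ => println("plain SessionState: the cast in processCmd would throw")
    }
    ```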
    
    Author: Daoyuan Wang <da...@intel.com>
    
    Closes #9589 from adrian-wang/clicommand.
    
    (cherry picked from commit 5d80fac58f837933b5359a8057676f45539e53af)
    Signed-off-by: Michael Armbrust <mi...@databricks.com>
    
    Conflicts:
    	sql/hive/src/main/scala/org/apache/spark/sql/hive/client/ClientWrapper.scala

commit 40d11d0492bcdf4aa442e527e69804e53b4135e9
Author: Michael Armbrust <mi...@...>
Date:   2016-02-23T02:25:48Z

    Update branch-1.6 for 1.6.1 release

commit 152252f15b7ee2a9b0d53212474e344acd8a55a9
Author: Patrick Wendell <pw...@...>
Date:   2016-02-23T02:30:24Z

    Preparing Spark release v1.6.1-rc1

commit 290279808e5e9e91d7c349ccec12ff12b99a4556
Author: Patrick Wendell <pw...@...>
Date:   2016-02-23T02:30:30Z

    Preparing development version 1.6.1-SNAPSHOT

commit d31854da5155550f4e9c5e717c92dfec87d0ff6a
Author: Earthson Lu <ea...@...>
Date:   2016-02-23T07:40:36Z

    [SPARK-12746][ML] ArrayType(_, true) should also accept ArrayType(_, false) fix for branch-1.6
    
    https://issues.apache.org/jira/browse/SPARK-13359
    
    Author: Earthson Lu <Ea...@gmail.com>
    
    Closes #11237 from Earthson/SPARK-13359.

commit 0784e02fd438e5fa2e6639d6bba114fa647dad23
Author: Xiangrui Meng <me...@...>
Date:   2016-02-23T07:54:21Z

    [SPARK-13355][MLLIB] replace GraphImpl.fromExistingRDDs by Graph.apply
    
    `GraphImpl.fromExistingRDDs` expects a preprocessed vertex RDD as input. We call it in LDA without validating this requirement, so it might introduce errors. Replacing it with `Graph.apply` would be safer and more proper because it is a public API. The tests still pass, so maybe it is safe to use `fromExistingRDDs` here (though it doesn't seem so based on the implementation), or the test cases are special. jkbradley ankurdave
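
    A sketch of the swap against the GraphX public API (the input RDDs are assumed):
    
    ```scala
    import org.apache.spark.SparkContext
    import org.apache.spark.graphx.{Edge, Graph}
    
    // Graph.apply prepares and validates the vertex RDD itself, whereas
    // GraphImpl.fromExistingRDDs trusts the caller to have preprocessed it.
    def buildGraph(sc: SparkContext): Graph[Double, Double] = {
      val vertices = sc.parallelize(Seq((1L, 1.0), (2L, 1.0)))
      val edges = sc.parallelize(Seq(Edge(1L, 2L, 0.5)))
      Graph(vertices, edges)
    }
    ```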
    
    Author: Xiangrui Meng <me...@databricks.com>
    
    Closes #11226 from mengxr/SPARK-13355.
    
    (cherry picked from commit 764ca18037b6b1884fbc4be9a011714a81495020)
    Signed-off-by: Xiangrui Meng <me...@databricks.com>

commit 573a2c97e9a9b8feae22f8af173fb158d59e5332
Author: Franklyn D'souza <fr...@...>
Date:   2016-02-23T23:34:04Z

    [SPARK-13410][SQL] Support unionAll for DataFrames with UDT columns.
    
    ## What changes were proposed in this pull request?
    
    This PR adds equality operators to UDT classes so that they can be correctly tested for dataType equality during union operations.
    
    This was previously causing `"AnalysisException: u"unresolved operator 'Union;""` when trying to unionAll two dataframes with UDT columns as below.
    
    ```
    from pyspark.sql.tests import PythonOnlyPoint, PythonOnlyUDT
    from pyspark.sql import types
    
    schema = types.StructType([types.StructField("point", PythonOnlyUDT(), True)])
    
    a = sqlCtx.createDataFrame([[PythonOnlyPoint(1.0, 2.0)]], schema)
    b = sqlCtx.createDataFrame([[PythonOnlyPoint(3.0, 4.0)]], schema)
    
    c = a.unionAll(b)
    ```
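
    A sketch of the underlying idea with a stand-in class (not the real UDT API): give the type class-based equality so two instances of the same UDT compare equal during union's schema check.
    
    ```scala
    class MyVectorUDT {
      // Any two instances of the same UDT describe the same dataType.
      override def equals(other: Any): Boolean = other.isInstanceOf[MyVectorUDT]
      override def hashCode(): Int = classOf[MyVectorUDT].getName.hashCode
    }
    ```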
    
    ## How was this patch tested?
    
    Tested using two unit tests in sql/test.py and the DataFrameSuite.
    
    Additional information here : https://issues.apache.org/jira/browse/SPARK-13410
    
    rxin
    
    Author: Franklyn D'souza <fr...@gmail.com>
    
    Closes #11333 from damnMeddlingKid/udt-union-patch.

commit 06f4fce29227f9763d9f9abff6e7459542dce261
Author: Shixiong Zhu <sh...@...>
Date:   2016-02-24T13:35:36Z

    [SPARK-13390][SQL][BRANCH-1.6] Fix the issue that Iterator.map().toSeq is not Serializable
    
    ## What changes were proposed in this pull request?
    
    `scala.collection.Iterator`'s methods (e.g., map, filter) will return an `AbstractIterator` which is not Serializable. E.g.,
    ```Scala
    scala> val iter = Array(1, 2, 3).iterator.map(_ + 1)
    iter: Iterator[Int] = non-empty iterator
    
    scala> println(iter.isInstanceOf[Serializable])
    false
    ```
    If we call something like `Iterator.map(...).toSeq`, it will create a `Stream` that contains a non-serializable `AbstractIterator` field, making the `Stream` itself non-serializable.
    
    This PR uses `toArray` instead of `toSeq` to fix this issue in `def createDataFrame(data: java.util.List[_], beanClass: Class[_]): DataFrame`.
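
    A sketch of the fix in isolation:
    
    ```scala
    // Materialize the iterator eagerly with toArray so no lazily evaluated
    // Stream (whose tail captures the non-serializable iterator) survives.
    val rows: Array[Int] = Array(1, 2, 3).iterator.map(_ + 1).toArray
    ```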
    
    ## How was this patch tested?
    
    Jenkins tests.
    
    Author: Shixiong Zhu <sh...@databricks.com>
    
    Closes #11334 from zsxwing/SPARK-13390.

----


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark issue #21507: Branch 1.6

Posted by HyukjinKwon <gi...@git.apache.org>.
Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/21507
  
    ping @deepaksonu, please close this PR.


---


[GitHub] spark issue #21507: Branch 1.6

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/21507
  
    Can one of the admins verify this patch?


---



[GitHub] spark issue #21507: Branch 1.6

Posted by kiszk <gi...@git.apache.org>.
Github user kiszk commented on the issue:

    https://github.com/apache/spark/pull/21507
  
    @deepaksonu  Would it be possible to close this PR?


---



[GitHub] spark pull request #21507: Branch 1.6

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/21507


---
