Posted to reviews@spark.apache.org by ueshin <gi...@git.apache.org> on 2018/08/08 16:59:20 UTC

[GitHub] spark pull request #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not...

GitHub user ueshin opened a pull request:

    https://github.com/apache/spark/pull/22039

    [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exhaustive in Scala-2.12.

    ## What changes were proposed in this pull request?
    
    This is a follow-up PR to #22014.
    
    We still have some compilation errors when building with sbt against Scala 2.12:
    
    ```
    [error] [warn] /.../sql/core/src/main/scala/org/apache/spark/sql/DataFrameNaFunctions.scala:493: match may not be exhaustive.
    [error] It would fail on the following input: (_, _)
    [error] [warn]       val typeMatches = (targetType, f.dataType) match {
    [error] [warn] 
    [error] [warn] /.../sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala:393: match may not be exhaustive.
    [error] It would fail on the following input: (_, _)
    [error] [warn]             prevBatchOff.get.toStreamProgress(sources).foreach {
    [error] [warn] 
    [error] [warn] /.../sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/AggUtils.scala:173: match may not be exhaustive.
    [error] It would fail on the following input: AggregateExpression(_, _, false, _)
    [error] [warn]     val rewrittenDistinctFunctions = functionsWithDistinct.map {
    [error] [warn] 
    [error] [warn] /.../sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/SymmetricHashJoinStateManager.scala:271: match may not be exhaustive.
    [error] It would fail on the following input: (_, _)
    [error] [warn]       keyWithIndexToValueMetrics.customMetrics.map {
    [error] [warn] 
    [error] [warn] /.../sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala:959: match may not be exhaustive.
    [error] It would fail on the following input: CatalogTableType(_)
    [error] [warn]     val tableTypeString = metadata.tableType match {
    [error] [warn] 
    [error] [warn] /.../sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala:923: match may not be exhaustive.
    [error] It would fail on the following input: CatalogTableType(_)
    [error] [warn]     hiveTable.setTableType(table.tableType match {
    [error] [warn]
    ```
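    
    In each case, the fix adds a default case that throws an explicit exception (per the commit below, "Add default case to match."). A minimal, hypothetical sketch of the pattern, not the exact Spark code:
    
    ```scala
    // Hypothetical example of the warning and of the fix applied in this PR.
    sealed trait MetricKind
    case object Timing extends MetricKind
    case object Size extends MetricKind
    
    // Covering only the "diagonal" combinations leaves e.g. (Timing, Size)
    // unhandled, so scalac 2.12 warns that the match may not be exhaustive.
    def combine(a: MetricKind, b: MetricKind): String = (a, b) match {
      case (Timing, Timing) => "timing"
      case (Size, Size)     => "size"
      // Added default case: makes the match exhaustive and turns the implicit
      // scala.MatchError into a descriptive exception.
      case (x, y) =>
        throw new IllegalArgumentException(s"Unexpected combination: ($x, $y)")
    }
    ```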
    
    ## How was this patch tested?
    
    Manually built with Scala 2.12.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ueshin/apache-spark issues/SPARK-25036/fix_match

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/22039.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #22039
    
----
commit e9220b4e68178a6cba4e02bf34f0623a80dbc31f
Author: Takuya UESHIN <ue...@...>
Date:   2018-08-08T15:59:25Z

    Add default case to match.

----


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    **[Test build #94435 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/94435/testReport)** for PR 22039 at commit [`e9220b4`](https://github.com/apache/spark/commit/e9220b4e68178a6cba4e02bf34f0623a80dbc31f).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/1952/
    Test PASSed.


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Merged build finished. Test PASSed.


---



[GitHub] spark pull request #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not...

Posted by kiszk <gi...@git.apache.org>.
Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22039#discussion_r208666588
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/SymmetricHashJoinStateManager.scala ---
    @@ -273,6 +273,9 @@ class SymmetricHashJoinStateManager(
               s.copy(desc = newDesc(desc)) -> value
             case (s @ StateStoreCustomTimingMetric(_, desc), value) =>
               s.copy(desc = newDesc(desc)) -> value
    +        case (s, _) =>
    +          throw new IllegalArgumentException(
    +          s"Unknown state store custom metric is found at metrics: $s")
    --- End diff --
    
    nit: 2 more spaces?
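    
    i.e., presumably the intent is to indent the message two more spaces so the continuation line reads roughly like:
    
    ```scala
            case (s, _) =>
              throw new IllegalArgumentException(
                s"Unknown state store custom metric is found at metrics: $s")
    ```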


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/94435/
    Test PASSed.


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    **[Test build #94440 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/94440/testReport)** for PR 22039 at commit [`fc5d7a1`](https://github.com/apache/spark/commit/fc5d7a1c59f8769dffac145a7c7e8dc533a7fc65).
     * This patch **fails to generate documentation**.
     * This patch merges cleanly.
     * This patch adds no public classes.


---



[GitHub] spark pull request #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not...

Posted by kiszk <gi...@git.apache.org>.
Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22039#discussion_r208666245
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala ---
    @@ -394,6 +394,9 @@ class MicroBatchExecution(
                   case (src: Source, off) => src.commit(off)
                   case (reader: MicroBatchReader, off) =>
                     reader.commit(reader.deserializeOffset(off.json))
    +              case (src, _) =>
    +                throw new IllegalArgumentException(
    +                  s"Unknows source is found at constructNextBatch: $src")
    --- End diff --
    
    nit: `Unknows` -> `Unknown`


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Merged build finished. Test PASSed.


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Also merged to master


---



[GitHub] spark pull request #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/22039


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by ueshin <gi...@git.apache.org>.
Github user ueshin commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    cc @kiszk @srowen @HyukjinKwon 


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Merged build finished. Test PASSed.


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Merged build finished. Test FAILed.


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by markhamstra <gi...@git.apache.org>.
Github user markhamstra commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    > Yes, "quick hack", but, as opposed to what in these specific cases?
    
    Yes, that is the key question. I'll admit, I haven't looked at all deeply to try to figure out whether something better is possible, so this is just my knee-jerk reaction: If you don't have to, then don't use a catch-all just to silence the compiler warning. If you're satisfied that there isn't a better option, then I'll accept that second-best is the best we can do here. 


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    **[Test build #94435 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/94435/testReport)** for PR 22039 at commit [`e9220b4`](https://github.com/apache/spark/commit/e9220b4e68178a6cba4e02bf34f0623a80dbc31f).


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by markhamstra <gi...@git.apache.org>.
Github user markhamstra commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Hmmm... sorry to be late to this, but making pattern matches exhaustive by adding a catch-all case that then throws an exception, while easy, should be considered a less-than-optimal fix. Ideally, the type handling should be tightened up so that the match can be exhaustive without a catch-all. The reason is that if a type is later added that the pattern match should handle, the catch-all gives the false impression that the new type is being handled: no compiler warning is raised, etc. If the pattern match is made exhaustive without a catch-all and the compiler option to promote warnings to errors is used, then it becomes pretty much impossible for future type additions or additional necessary cases to be silently mishandled.
    
    Now I realize that it is not always feasible to achieve that level of type safety in Scala code, but has adequate consideration been given to making the effort here, or was this just a quick hack to make the compiler shut up?
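    
    For illustration, a minimal hypothetical sketch of the trade-off described here (names are made up, not Spark code):
    
    ```scala
    sealed trait Metric
    case class TimingMetric(desc: String) extends Metric
    case class SizeMetric(desc: String) extends Metric
    // Suppose a new subtype is added later:
    // case class SumMetric(desc: String) extends Metric
    
    def describe(m: Metric): String = m match {
      case TimingMetric(d) => s"timing: $d"
      case SizeMetric(d)   => s"size: $d"
      // No catch-all: if SumMetric is added, this match becomes non-exhaustive,
      // and with warnings promoted to errors the build fails until the new case
      // is handled. A `case _ => throw ...` catch-all would instead let the new
      // subtype slip through to a runtime exception with no compile-time signal.
    }
    ```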


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    **[Test build #94440 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/94440/testReport)** for PR 22039 at commit [`fc5d7a1`](https://github.com/apache/spark/commit/fc5d7a1c59f8769dffac145a7c7e8dc533a7fc65).


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/1957/
    Test PASSed.


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by srowen <gi...@git.apache.org>.
Github user srowen commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Yes, I agree about the future-proofing problem. However, at the moment, new cases added in the future also won't cause any compile-time problem; they will just trigger a MatchError. That is always the underlying problem: unmatched future cases aren't handled and trigger an exception at runtime rather than at compile time. These changes at least make the error more explicit.
    
    Of course, if the default case in any of these situations really had a valid outcome, then throwing an exception is wrong. But I don't see any reason to believe that, currently, a match is incorrectly handled as MatchError. I think it's OK to remove the compile-time warning by supplying a good default exception case. I think tests will help figure out where those default cases are no longer appropriate.
    
    Yes, "quick hack", but, as opposed to what in these specific cases?
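    
    For example (hypothetical types, just to illustrate the runtime behaviour in both variants):
    
    ```scala
    object MatchErrorDemo {
      // Without a default case, an unmatched input already fails at runtime,
      // just with the less informative scala.MatchError.
      def describe(x: Any): String = x match {
        case i: Int    => s"int: $i"
        case s: String => s"string: $s"
      }
    
      // The approach taken here: an explicit default case with a clearer message.
      def describeWithDefault(x: Any): String = x match {
        case i: Int    => s"int: $i"
        case s: String => s"string: $s"
        case other =>
          throw new IllegalArgumentException(s"Unexpected input: $other")
      }
    
      def main(args: Array[String]): Unit = {
        println(describeWithDefault(42)) // prints "int: 42"
        // describe(1.5) would throw scala.MatchError: 1.5 (of class java.lang.Double)
      }
    }
    ```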


---



[GitHub] spark issue #22039: [SPARK-25036][SQL][FOLLOW-UP] Avoid match may not be exh...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22039
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/94440/
    Test FAILed.


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org