Posted to reviews@spark.apache.org by JoshRosen <gi...@git.apache.org> on 2016/03/17 01:12:12 UTC

[GitHub] spark pull request: [SPARK-13948] MiMa check should catch if the v...

GitHub user JoshRosen opened a pull request:

    https://github.com/apache/spark/pull/11774

    [SPARK-13948] MiMa check should catch if the visibility changes to private

    MiMa excludes are currently generated using both the current Spark version's classes and Spark 1.2.0's classes, but this doesn't make sense: we should only be ignoring classes which were `private` in the previous Spark version, not classes which became private in the current version.
    
    This patch updates `dev/mima` to only generate excludes with respect to the previous artifacts that MiMa checks against. It also updates `MimaBuild` so that `excludeClass` only applies directly to the class being excluded and not to its companion object (since a class and its companion object can have different accessibility).
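
    To make the companion-object point concrete, here is a minimal, hedged sketch of the kind of helper involved. This is illustrative only and not the actual `MimaBuild.scala` code: the `ExcludeSketch` object and the `excludeClassAndCompanion` helper are hypothetical names, and only the `ProblemFilters` calls mirror real MiMa API usage.

    ```scala
    import com.typesafe.tools.mima.core._

    // Illustrative sketch only; the names below are hypothetical, not Spark's build code.
    object ExcludeSketch {

      // Exclude MiMa problems reported against exactly this class.
      def excludeClass(className: String): Seq[ProblemFilter] = Seq(
        ProblemFilters.exclude[MissingClassProblem](className),
        ProblemFilters.exclude[MissingTypesProblem](className)
      )

      // The previous behaviour effectively also swept in the companion object
      // (compiled as "ClassName$"), even though a class and its companion
      // object can have different accessibility:
      def excludeClassAndCompanion(className: String): Seq[ProblemFilter] =
        excludeClass(className) ++ excludeClass(className + "$")
    }
    ```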

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/JoshRosen/spark SPARK-13948

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/11774.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #11774
    
----
commit bf161da15e56b53933aede4653eab4258efdb864
Author: Josh Rosen <jo...@databricks.com>
Date:   2016-03-16T22:40:23Z

    Don't automatically exclude companion objects of private[spark] classes.

commit 77613e633bfa9fcf9f6117a5b6307fe3611ed00c
Author: Josh Rosen <jo...@databricks.com>
Date:   2016-03-16T23:40:37Z

    Use previous artifact JARs to generate excludes

commit 4605c79b7ea65735f0c1356e95f5f348b70d40d7
Author: Josh Rosen <jo...@databricks.com>
Date:   2016-03-16T23:49:41Z

    Add MiMa excludes so checks pass for now.

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and would like it, or if the feature is enabled but not working,
please contact infrastructure at infrastructure@apache.org or file a JIRA
ticket with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request: [SPARK-13948] MiMa check should catch if the v...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11774#issuecomment-197658459
  
    **[Test build #53378 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53378/consoleFull)** for PR 11774 at commit [`fbb93ca`](https://github.com/apache/spark/commit/fbb93ca0bdd979851e88270cb1b055c7276d0d42).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds no public classes.


[GitHub] spark pull request: [SPARK-13948] MiMa check should catch if the v...

Posted by JoshRosen <gi...@git.apache.org>.
Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/11774#issuecomment-197619692
  
    Here were the errors that I ignored (we should follow up on these later):
    
    ```
    [error]  * method initializeLogIfNecessary(Boolean)Unit in trait org.apache.spark.Logging is present only in current version
    [error]    filter with: ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.Logging.initializeLogIfNecessary")
    [error]  * deprecated method lookupTimeout(org.apache.spark.SparkConf)scala.concurrent.duration.FiniteDuration in object org.apache.spark.util.RpcUtils does not have a correspondent in current version
    [error]    filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.util.RpcUtils.lookupTimeout")
    [error]  * deprecated method askTimeout(org.apache.spark.SparkConf)scala.concurrent.duration.FiniteDuration in object org.apache.spark.util.RpcUtils does not have a correspondent in current version
    [error]    filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.util.RpcUtils.askTimeout")
    [error]  * method logEvent()Boolean in trait org.apache.spark.scheduler.SparkListenerEvent is present only in current version
    [error]    filter with: ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.scheduler.SparkListenerEvent.logEvent")
    [info] spark-mllib: found 4 potential binary incompatibilities while checking against org.apache.spark:spark-mllib_2.11:1.6.0  (filtered 151)
    [error]  * method transform(org.apache.spark.sql.DataFrame)org.apache.spark.sql.DataFrame in class org.apache.spark.ml.UnaryTransformer's type is different in current version, where it is (org.apache.spark.sql.Dataset)org.apache.spark.sql.Dataset instead of (org.apache.spark.sql.DataFrame)org.apache.spark.sql.DataFrame
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.UnaryTransformer.transform")
    [error]  * method train(org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.DecisionTreeClassificationModel in class org.apache.spark.ml.classification.DecisionTreeClassifier's type is different in current version, where it is (org.apache.spark.sql.Dataset)org.apache.spark.ml.PredictionModel instead of (org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.DecisionTreeClassificationModel
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.classification.DecisionTreeClassifier.train")
    [error]  * method train(org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.LogisticRegressionModel in class org.apache.spark.ml.classification.LogisticRegression's type is different in current version, where it is (org.apache.spark.sql.Dataset)org.apache.spark.ml.PredictionModel instead of (org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.LogisticRegressionModel
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.classification.LogisticRegression.train")
    [error]  * method train(org.apache.spark.sql.DataFrame)org.apache.spark.ml.regression.DecisionTreeRegressionModel in class org.apache.spark.ml.regression.DecisionTreeRegressor's type is different in current version, where it is (org.apache.spark.sql.Dataset)org.apache.spark.ml.PredictionModel instead of (org.apache.spark.sql.DataFrame)org.apache.spark.ml.regression.DecisionTreeRegressionModel
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.regression.DecisionTreeRegressor.train")
    [info] spark-sql: found 10 potential binary incompatibilities while checking against org.apache.spark:spark-sql_2.11:1.6.0  (filtered 658)
    [error]  * method toDF()org.apache.spark.sql.DataFrame in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.Dataset rather than org.apache.spark.sql.DataFrame
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.toDF")
    [error]  * method groupBy(org.apache.spark.api.java.function.MapFunction,org.apache.spark.sql.Encoder)org.apache.spark.sql.GroupedDataset in class org.apache.spark.sql.Dataset in current version does not have a correspondent with same parameter signature among (java.lang.String,scala.collection.Seq)org.apache.spark.sql.GroupedData, (java.lang.String,Array[java.lang.String])org.apache.spark.sql.GroupedData
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.Dataset.groupBy")
    [error]  * method groupBy(scala.collection.Seq)org.apache.spark.sql.GroupedDataset in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.GroupedData rather than org.apache.spark.sql.GroupedDataset
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.groupBy")
    [error]  * method groupBy(scala.Function1,org.apache.spark.sql.Encoder)org.apache.spark.sql.GroupedDataset in class org.apache.spark.sql.Dataset in current version does not have a correspondent with same parameter signature among (java.lang.String,scala.collection.Seq)org.apache.spark.sql.GroupedData, (java.lang.String,Array[java.lang.String])org.apache.spark.sql.GroupedData
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.Dataset.groupBy")
    [error]  * method groupBy(Array[org.apache.spark.sql.Column])org.apache.spark.sql.GroupedDataset in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.GroupedData rather than org.apache.spark.sql.GroupedDataset
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.groupBy")
    [error]  * method select(scala.collection.Seq)org.apache.spark.sql.DataFrame in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.Dataset rather than org.apache.spark.sql.DataFrame
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.select")
    [error]  * method select(Array[org.apache.spark.sql.Column])org.apache.spark.sql.DataFrame in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.Dataset rather than org.apache.spark.sql.DataFrame
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.select")
    [error]  * method toDS()org.apache.spark.sql.Dataset in class org.apache.spark.sql.Dataset does not have a correspondent in current version
    [error]    filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.Dataset.toDS")
    [error]  * abstract method newInstance(java.lang.String,org.apache.spark.sql.types.StructType,org.apache.hadoop.mapreduce.TaskAttemptContext)org.apache.spark.sql.sources.OutputWriter in class org.apache.spark.sql.sources.OutputWriterFactory does not have a correspondent in current version
    [error]    filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.OutputWriterFactory.newInstance")
    [error]  * abstract method newInstance(java.lang.String,scala.Option,org.apache.spark.sql.types.StructType,org.apache.hadoop.mapreduce.TaskAttemptContext)org.apache.spark.sql.sources.OutputWriter in class org.apache.spark.sql.sources.OutputWriterFactory is present only in current version
    [error]    filter with: ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.sources.OutputWriterFactory.newInstance")
    
    ```
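
    For reference, the "filter with:" suggestions above are the kind of entries that get recorded in a MiMa exclude list so the build passes while the changes are audited. A minimal, hedged sketch of such a list follows; the `ExampleMimaExcludes` object and `excludesForAudit` name are hypothetical and this is not Spark's actual `project/MimaExcludes.scala`, but the filter expressions are copied verbatim from the report above.

    ```scala
    import com.typesafe.tools.mima.core._
    import com.typesafe.tools.mima.core.ProblemFilters._

    // Hypothetical stand-in for an exclude list such as project/MimaExcludes.scala.
    object ExampleMimaExcludes {

      // A few filters copied verbatim from the MiMa report above.
      val excludesForAudit: Seq[ProblemFilter] = Seq(
        exclude[ReversedMissingMethodProblem]("org.apache.spark.Logging.initializeLogIfNecessary"),
        exclude[DirectMissingMethodProblem]("org.apache.spark.util.RpcUtils.lookupTimeout"),
        exclude[DirectMissingMethodProblem]("org.apache.spark.util.RpcUtils.askTimeout"),
        exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.toDF")
      )
    }
    ```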


[GitHub] spark pull request: [SPARK-13948] MiMa check should catch if the v...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11774#issuecomment-197619820
  
    **[Test build #53378 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53378/consoleFull)** for PR 11774 at commit [`fbb93ca`](https://github.com/apache/spark/commit/fbb93ca0bdd979851e88270cb1b055c7276d0d42).


[GitHub] spark pull request: [SPARK-13948] MiMa check should catch if the v...

Posted by rxin <gi...@git.apache.org>.
Github user rxin commented on the pull request:

    https://github.com/apache/spark/pull/11774#issuecomment-197720690
  
    Merging in master. This is a great change!



[GitHub] spark pull request: [SPARK-13948] MiMa check should catch if the v...

Posted by JoshRosen <gi...@git.apache.org>.
Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/11774#issuecomment-197722260
  
    I've filed https://issues.apache.org/jira/browse/SPARK-13959 as a follow-up to make sure we audit the new excludes before we ship 2.0.


[GitHub] spark pull request: [SPARK-13948] MiMa check should catch if the v...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11774#issuecomment-197659095
  
    Merged build finished. Test PASSed.


[GitHub] spark pull request: [SPARK-13948] MiMa check should catch if the v...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/11774


[GitHub] spark pull request: [SPARK-13948] MiMa check should catch if the v...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the pull request:

    https://github.com/apache/spark/pull/11774#issuecomment-197659099
  
    Test PASSed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53378/

