Posted to issues@spark.apache.org by "Dongjoon Hyun (Jira)" <ji...@apache.org> on 2020/05/05 19:19:00 UTC

[jira] [Resolved] (SPARK-31644) Make Spark's guava version configurable from the maven command line.

     [ https://issues.apache.org/jira/browse/SPARK-31644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun resolved SPARK-31644.
-----------------------------------
    Fix Version/s: 3.0.0
       Resolution: Fixed

Issue resolved by pull request 28455
[https://github.com/apache/spark/pull/28455]

> Make Spark's guava version configurable from the maven command line.
> --------------------------------------------------------------------
>
>                 Key: SPARK-31644
>                 URL: https://issues.apache.org/jira/browse/SPARK-31644
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>             Fix For: 3.0.0
>
>
> All future releases of Hadoop are going to ship with Guava 27.0 or later, including point releases of the 3.1 branch, which is a mixed blessing.
> Pro: 
> * it's up to date
> * no active CVEs
> Con:
>  * code that uses APIs the Guava team has removed won't compile
>  * code built against the later release won't link against older versions, because overload resolution binds to method signatures that don't exist there; Preconditions.checkArgument is a key example
> Making the guava.version that Spark pulls in an overridable value makes it possible to choose the Guava version to build against, alongside the -Dhadoop.version selection of the Hadoop version.
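>
> A minimal sketch of the intended usage, assuming the change simply exposes guava.version as an overridable Maven property (the version numbers below are illustrative, not the exact invocation from the pull request):
>
>     # pick the Hadoop line and a matching Guava release at build time
>     ./build/mvn -Dhadoop.version=3.1.4 -Dguava.version=27.0-jre -DskipTests clean package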



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
