Posted to dev@submarine.apache.org by "Wei-Chiu Chuang (Jira)" <ji...@apache.org> on 2021/01/04 18:49:00 UTC

[jira] [Commented] (SUBMARINE-705) Unable to run tests in spark-security with spark-3.0 + hadoop-3.2 profiles

    [ https://issues.apache.org/jira/browse/SUBMARINE-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258417#comment-17258417 ] 

Wei-Chiu Chuang commented on SUBMARINE-705:
-------------------------------------------

This is not a Submarine problem per se. Yes, it's caused by the Guava update in Hadoop. The Hadoop and Spark communities are working separately on different efforts to resolve this, so we'll have to wait for the Hadoop/Spark releases that include those fixes.
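For anyone who wants to confirm this is the same conflict locally: Hadoop 3.2's Configuration.set is compiled against a newer Guava that has a non-varargs checkArgument(boolean, String, Object) overload, and the error means an older Guava won the classpath race. A small reflection probe (just a sketch; the class and message strings below are my own, not from Submarine) can tell you which case you're in without re-running the whole test suite:

{code:java}
import java.lang.reflect.Method;

// Diagnostic sketch: check whether the Guava visible on the classpath has the
// three-argument Preconditions.checkArgument(boolean, String, Object) overload
// that org.apache.hadoop.conf.Configuration.set calls. An older Guava that
// lacks it is what produces the NoSuchMethodError in the stack trace above.
public class GuavaOverloadCheck {

    static String probe() {
        try {
            Class<?> preconditions =
                Class.forName("com.google.common.base.Preconditions");
            Method overload = preconditions.getMethod(
                "checkArgument", boolean.class, String.class, Object.class);
            return "present: " + overload;
        } catch (ClassNotFoundException e) {
            return "guava not on classpath";
        } catch (NoSuchMethodException e) {
            return "overload missing: older Guava is shadowing Hadoop's";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
{code}

Running this with the same classpath as the failing tests should print "overload missing: ..."; with a Hadoop-3.2-compatible Guava it prints the resolved Method instead.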

> Unable to run tests in spark-security with spark-3.0 + hadoop-3.2 profiles 
> ---------------------------------------------------------------------------
>
>                 Key: SUBMARINE-705
>                 URL: https://issues.apache.org/jira/browse/SUBMARINE-705
>             Project: Apache Submarine
>          Issue Type: Bug
>          Components: Security
>            Reporter: Igor Calabria
>            Priority: Minor
>
> {code:java}
> mvn -pl :submarine-spark-security clean test -Pranger-2.0 -Pspark-3.0  -Phadoop-3.2
> {code}
> Fails with:
> {code:java}
> *** RUN ABORTED ***
>   java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
>   at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
>   at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
>   at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopHiveConfigurations(SparkHadoopUtil.scala:456)
>   at org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:427)
>   at org.apache.spark.deploy.SparkHadoopUtil.newConfiguration(SparkHadoopUtil.scala:122)
>   at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:49)
>   at org.apache.spark.deploy.SparkHadoopUtil$.instance$lzycompute(SparkHadoopUtil.scala:397)
>   at org.apache.spark.deploy.SparkHadoopUtil$.instance(SparkHadoopUtil.scala:397)
>   at org.apache.spark.deploy.SparkHadoopUtil$.get(SparkHadoopUtil.scala:418)
>   at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:95){code}
> This is most likely caused by a Guava version conflict.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@submarine.apache.org
For additional commands, e-mail: dev-help@submarine.apache.org