Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2018/11/21 06:19:00 UTC

[jira] [Assigned] (SPARK-26134) Upgrading Hadoop to 2.7.4 to fix java.version problem

     [ https://issues.apache.org/jira/browse/SPARK-26134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-26134:
------------------------------------

    Assignee:     (was: Apache Spark)

> Upgrading Hadoop to 2.7.4 to fix java.version problem
> -----------------------------------------------------
>
>                 Key: SPARK-26134
>                 URL: https://issues.apache.org/jira/browse/SPARK-26134
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Takanobu Asanuma
>            Priority: Major
>
> When I ran spark-shell on JDK 11+28 (2018-09-25), it failed with the error below.
> {noformat}
> Exception in thread "main" java.lang.ExceptionInInitializerError
> 	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
> 	at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
> 	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
> 	at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
> 	at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
> 	at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
> 	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
> 	at org.apache.spark.util.Utils$.$anonfun$getCurrentUserName$1(Utils.scala:2427)
> 	at scala.Option.getOrElse(Option.scala:121)
> 	at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2427)
> 	at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:79)
> 	at org.apache.spark.deploy.SparkSubmit.secMgr$lzycompute$1(SparkSubmit.scala:359)
> 	at org.apache.spark.deploy.SparkSubmit.secMgr$1(SparkSubmit.scala:359)
> 	at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$9(SparkSubmit.scala:367)
> 	at scala.Option.map(Option.scala:146)
> 	at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:367)
> 	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:143)
> 	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> 	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:927)
> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:936)
> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
> 	at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3319)
> 	at java.base/java.lang.String.substring(String.java:1874)
> 	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:52)
> {noformat}
> This is a Hadoop issue: it fails to parse some {{java.version}} strings. It has been fixed since Hadoop 2.7.4 (see HADOOP-14586).
> Note that Hadoop 2.7.5 and later have another problem with Spark (SPARK-25330), so upgrading to 2.7.4 would be fine for now.
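> For illustration, the {{StringIndexOutOfBoundsException: begin 0, end 3, length 2}} is consistent with a version check of the kind sketched below (a minimal sketch, not the literal Hadoop source; the actual code is in {{Shell.<clinit>}} and was changed by HADOOP-14586). On JDK 8 {{java.version}} is something like {{1.8.0_181}}, so {{substring(0, 3)}} succeeds; on JDK 11 it is just {{11}} (length 2), so the call throws during class initialization.
> {noformat}
> // Minimal sketch of the java.version parsing that breaks on JDK 11.
> // Running this on JDK 8 prints the flag; on JDK 11 it throws
> // StringIndexOutOfBoundsException, matching the stack trace above.
> public class JavaVersionCheck {
>     public static void main(String[] args) {
>         String version = System.getProperty("java.version"); // e.g. "1.8.0_181" on JDK 8, "11" on JDK 11+28
>         // Pre-2.7.4 Hadoop assumed the version string always had at least 3 characters:
>         boolean isJava7OrAbove = version.substring(0, 3).compareTo("1.7") >= 0;
>         System.out.println("isJava7OrAbove = " + isJava7OrAbove);
>     }
> }
> {noformat}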



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org