Posted to issues@spark.apache.org by "Matei Zaharia (JIRA)" <ji...@apache.org> on 2014/05/19 02:49:37 UTC

[jira] [Commented] (SPARK-1875) NoClassDefFoundError: StringUtils when building against Hadoop 1

    [ https://issues.apache.org/jira/browse/SPARK-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14001297#comment-14001297 ] 

Matei Zaharia commented on SPARK-1875:
--------------------------------------

This may have been broken by https://issues.apache.org/jira/browse/SPARK-1629 / https://github.com/apache/spark/pull/569, which added an explicit dependency on commons-lang, though it's not yet clear whether that change is actually the cause.
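
If that change is indeed the cause, one possible workaround (a sketch only, not a confirmed fix; the commons-lang version shown is an assumption, not something stated in this issue) is to declare commons-lang 2.x explicitly in the application's sbt build so that the class Hadoop 1 expects is on the classpath:

{code}
// Hypothetical sbt snippet: pin commons-lang 2.x explicitly so that
// org.apache.commons.lang.StringUtils is resolvable by Hadoop 1's metrics code.
// The 2.6 version number is an assumption, not taken from this issue.
libraryDependencies += "commons-lang" % "commons-lang" % "2.6"
{code}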

> NoClassDefFoundError: StringUtils when building against Hadoop 1
> ----------------------------------------------------------------
>
>                 Key: SPARK-1875
>                 URL: https://issues.apache.org/jira/browse/SPARK-1875
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Matei Zaharia
>            Priority: Blocker
>             Fix For: 1.0.0
>
>
> Maybe I missed something, but after building an assembly with Hadoop 1.2.1 and Hive enabled, if I go into the built distribution and run spark-shell, I get this:
> {code}
> java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
> 	at org.apache.hadoop.metrics2.lib.MetricMutableStat.<init>(MetricMutableStat.java:59)
> 	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.<init>(MetricsSystemImpl.java:75)
> 	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.<init>(MetricsSystemImpl.java:120)
> 	at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
> 	at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
> 	at org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
> 	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:216)
> 	at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
> 	at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
> 	at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:79)
> 	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:209)
> 	at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:226)
> 	at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:36)
> 	at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:109)
> 	at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
> 	at org.apache.spark.SparkContext.<init>(SparkContext.scala:228)
> {code}
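
For reference, a quick way to confirm whether the assembly actually bundles the missing class (a diagnostic sketch, not part of the original report) is to try loading it from the spark-shell REPL:

{code}
// Diagnostic sketch (not from the original report): attempt to load the
// commons-lang 2.x class that Hadoop 1's metrics code requires.
// A ClassNotFoundException here means the assembly does not include commons-lang.
Class.forName("org.apache.commons.lang.StringUtils")
{code}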



--
This message was sent by Atlassian JIRA
(v6.2#6252)