Posted to issues@spark.apache.org by "Matei Zaharia (JIRA)" <ji...@apache.org> on 2014/05/19 02:17:38 UTC
[jira] [Created] (SPARK-1875) NoClassDefFoundError: StringUtils when building against Hadoop 1
Matei Zaharia created SPARK-1875:
------------------------------------
Summary: NoClassDefFoundError: StringUtils when building against Hadoop 1
Key: SPARK-1875
URL: https://issues.apache.org/jira/browse/SPARK-1875
Project: Spark
Issue Type: Bug
Reporter: Matei Zaharia
Priority: Critical
Maybe I missed something, but after building an assembly against Hadoop 1.2.1 with Hive enabled, if I go into the resulting directory and run spark-shell, I get this:
{code}
java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
at org.apache.hadoop.metrics2.lib.MetricMutableStat.<init>(MetricMutableStat.java:59)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.<init>(MetricsSystemImpl.java:75)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.<init>(MetricsSystemImpl.java:120)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
at org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:216)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:79)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:209)
at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:226)
at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:36)
at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:109)
at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:228)
{code}
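A `NoClassDefFoundError` like the one above means the class was resolvable at compile time but is missing from the runtime classpath, here presumably because commons-lang (which Hadoop 1's `MetricMutableStat` uses) did not make it into the assembly. As a quick sanity check, one can probe the classpath from a small driver or REPL; `CheckDep` below is a hypothetical helper sketch, not part of Spark:

```java
public class CheckDep {
    // Returns true if the named class can be loaded from the current classpath.
    static boolean isOnClasspath(String fqcn) {
        try {
            Class.forName(fqcn);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The class Hadoop 1's MetricMutableStat needs at runtime:
        String cls = "org.apache.commons.lang.StringUtils";
        System.out.println(cls + (isOnClasspath(cls) ? " is present" : " is MISSING"));
    }
}
```

If the probe reports the class missing inside spark-shell, the assembly likely needs commons-lang 2.x bundled explicitly when building against the Hadoop 1 / Hive profile.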
--
This message was sent by Atlassian JIRA
(v6.2#6252)