Posted to user@spark.apache.org by David Capwell <dc...@gmail.com> on 2017/10/11 16:58:06 UTC

Add jars to Spark's runtime

We want to emit metrics out of Spark into our own custom store.  To do
this we built our own sink and tried to add it to Spark by passing --jars
path/to/jar and defining the sink class in the metrics.properties file
supplied with the job.  We noticed that Spark kept crashing with the
exception shown after the sketch below.
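
For context, this is roughly how we wire it up.  The sink name
"external" and the paths are illustrative stand-ins, not our real values:

    # metrics.properties: register the sink for all instances
    # (*.sink.<name>.class is Spark's metrics config syntax)
    *.sink.external.class=com.example.ExternalSink

    # submit with the jar and ship the metrics config alongside it
    spark-submit \
      --jars /path/to/external-sink.jar \
      --files /path/to/metrics.properties \
      --conf spark.metrics.conf=metrics.properties \
      ...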

17/10/11 09:42:37 ERROR metrics.MetricsSystem: Sink class com.example.ExternalSink cannot be instantiated
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1707)
        at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:284)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.lang.ClassNotFoundException: com.example.ExternalSink
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:230)
        at org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:198)
        at org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:194)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
        at org.apache.spark.metrics.MetricsSystem.registerSinks(MetricsSystem.scala:194)
        at org.apache.spark.metrics.MetricsSystem.start(MetricsSystem.scala:102)
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:366)
        at org.apache.spark.SparkEnv$.createExecutorEnv(SparkEnv.scala:201)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:223)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:67)
        at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:66)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
        ... 4 more

We then added this jar directly into the Spark tarball that we use for
testing, and saw that it loaded just fine and published.  That is
consistent with the trace: the sink is instantiated inside
SparkEnv.create during executor startup, before anything passed via
--jars has been added to the classloader.
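
For completeness, here is roughly the shape of the sink; treat it as a
sketch, since everything beyond the constructor signature is ours.
MetricsSystem constructs sinks reflectively through a (Properties,
MetricRegistry, SecurityManager) constructor, so that signature is the
part that has to match:

    // NOTE: the Sink trait is private[spark] in 2.2, so a Scala-compiled
    // sink generally has to be declared under an org.apache.spark package
    // (this sketch is; our actual class name differs, per the log above).
    package org.apache.spark.metrics.sink

    import java.util.Properties
    import com.codahale.metrics.MetricRegistry
    import org.apache.spark.SecurityManager

    // MetricsSystem instantiates sinks reflectively through exactly this
    // three-argument constructor, so the signature must match.
    class ExternalSink(
        val property: Properties,
        val registry: MetricRegistry,
        securityMgr: SecurityManager)
      extends Sink {

      override def start(): Unit = { /* connect to the metrics store */ }
      override def stop(): Unit = { /* flush and close */ }
      override def report(): Unit = { /* push a snapshot of registry */ }
    }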

My question is: how can I add this jar to the Spark runtime classpath
rather than the user classpath?  In production we don't have permission
to write to Spark's jars directory, so the trick that works in testing
(shown below) won't work for us.
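
For concreteness, the testing trick is nothing more than this (path
illustrative):

    # drop the sink jar into the distribution itself, so it is on the
    # JVM classpath before SparkEnv starts the MetricsSystem
    cp external-sink.jar "$SPARK_HOME"/jars/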

Thanks for taking the time to read this email.

Tested on Spark 2.2.0.