Posted to user@spark.apache.org by Eric Tanner <er...@justenough.com> on 2014/12/10 00:58:43 UTC
Cluster getting a null pointer error
I have set up a cluster on AWS and am trying a really simple Hello World
program as a test. The cluster was built using the ec2 scripts that come
with Spark. I have pasted the error output (run with --verbose) below;
the source code follows it.
Any help would be greatly appreciated.
Thanks,
Eric
*Error output:*
[root@ip-xx.xx.xx.xx ~]$ ./spark/bin/spark-submit --verbose --class com.je.test.Hello --master spark://xx.xx.xx.xx:7077 Hello-assembly-1.0.jar
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using properties file: /root/spark/conf/spark-defaults.conf
Adding default property: spark.executor.memory=5929m
Adding default property: spark.executor.extraClassPath=/root/ephemeral-hdfs/conf
Adding default property: spark.executor.extraLibraryPath=/root/ephemeral-hdfs/lib/native/
Using properties file: /root/spark/conf/spark-defaults.conf
Adding default property: spark.executor.memory=5929m
Adding default property: spark.executor.extraClassPath=/root/ephemeral-hdfs/conf
Adding default property: spark.executor.extraLibraryPath=/root/ephemeral-hdfs/lib/native/
Parsed arguments:
  master                  spark://xx.xx.xx.xx:7077
  deployMode              null
  executorMemory          5929m
  executorCores           null
  totalExecutorCores      null
  propertiesFile          /root/spark/conf/spark-defaults.conf
  extraSparkProperties    Map()
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            null
  files                   null
  pyFiles                 null
  archives                null
  mainClass               com.je.test.Hello
  primaryResource         file:/root/Hello-assembly-1.0.jar
  name                    com.je.test.Hello
  childArgs               []
  jars                    null
  verbose                 true
Default properties from /root/spark/conf/spark-defaults.conf:
spark.executor.extraLibraryPath -> /root/ephemeral-hdfs/lib/native/
spark.executor.memory -> 5929m
spark.executor.extraClassPath -> /root/ephemeral-hdfs/conf
Using properties file: /root/spark/conf/spark-defaults.conf
Adding default property: spark.executor.memory=5929m
Adding default property: spark.executor.extraClassPath=/root/ephemeral-hdfs/conf
Adding default property: spark.executor.extraLibraryPath=/root/ephemeral-hdfs/lib/native/
Main class:
com.je.test.Hello
Arguments:
System properties:
spark.executor.extraLibraryPath -> /root/ephemeral-hdfs/lib/native/
spark.executor.memory -> 5929m
SPARK_SUBMIT -> true
spark.app.name -> com.je.test.Hello
spark.jars -> file:/root/Hello-assembly-1.0.jar
spark.executor.extraClassPath -> /root/ephemeral-hdfs/conf
spark.master -> spark://xxx.xx.xx.xxx:7077
Classpath elements:
file:/root/Hello-assembly-1.0.jar
*Actual Error:*
Exception in thread "main" java.lang.NullPointerException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
*Source Code:*
package com.je.test

import org.apache.spark.{SparkConf, SparkContext}

class Hello {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf(true)//.set("spark.cassandra.connection.host", "xxx.xx.xx.xxx")
    val sc = new SparkContext("spark://xxx.xx.xx.xxx:7077", "Season", conf)

    println("Hello World")
  }
}
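
A likely cause, reading the trace (not confirmed in the thread itself):
spark-submit looks up the application's main method by reflection and
invokes it as a static method, which is the Method.invoke call in the
stack trace above. In Scala only an object compiles to a static main;
`class Hello` produces an instance method, so the reflective call with a
null receiver throws exactly this NullPointerException. A minimal sketch
of a working entry point, which also takes the master from the --master
flag instead of hard-coding it:

    package com.je.test

    import org.apache.spark.{SparkConf, SparkContext}

    // An `object`, not a `class`: spark-submit reflectively invokes a
    // static main method, and only a Scala object provides one.
    object Hello {
      def main(args: Array[String]): Unit = {
        // Load defaults and set the app name; the master URL is supplied
        // by spark-submit via --master rather than hard-coded here.
        val conf = new SparkConf(true).setAppName("Season")
        val sc = new SparkContext(conf)

        println("Hello World")

        sc.stop()
      }
    }

With that change the same spark-submit command line should reach the
println instead of failing inside SparkSubmit.launch.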
Re: Cluster getting a null pointer error
Posted by Yana Kadiyska <ya...@gmail.com>.
Does spark-submit work with SparkPi and spark-examples.jar?
e.g.
./spark/bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://xx.xx.xx.xx:7077 /path/to/examples.jar
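
If SparkPi runs cleanly against the same master, the cluster itself is
healthy and the problem is isolated to the application jar; in that case
the entry-point fix sketched above is the first thing to try.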