Posted to user@spark.apache.org by purav aggarwal <pu...@gmail.com> on 2013/10/01 10:46:20 UTC

Help required to deploy code to Standalone Cluster

Hi,

I am trying to deploy my code (i.e., a jar) to a standalone cluster and
nothing is working for me.
- LocalMachine = build machine (Mac)
- Cluster      = 1 master and 1 slave with over 90 GB of memory (CentOS)
Observations:
1. I can run the code on my local machine by passing "local" as the master
argument to the SparkContext.
2. I can execute the test application from my LocalMachine using "./run-example
org.apache.spark.examples.SparkPi spark://master:7077", and I can then see
the jar (spark-examples-assembly-0.8.0-SNAPSHOT.jar) deployed to the
slave's work folder and the job being done.
3. Step 2 behaves the same when executed on the master machine using
"./run-example org.apache.spark.examples.SparkPi spark://master:7077".

4. Now I have written my own Spark code in Scala. How do I deploy my jar
to the cluster so it runs and computes?
    a. Running "/libs/spark/sbt/sbt run" from the project directory results in
an incessant "cluster.ClusterScheduler: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered".

5. I want to keep the build machine separate from the cluster master and
slave.
6. The SparkContext in my code looks like this (a fuller sketch of the
program is below) -
val sc = new SparkContext("spark://master:7077", "Simple Job",
  "$SPARK_HOME", List("target/scala-2.9.3/simple-project_2.9.3-1.0.jar"))

Any ideas on how to solve this one?

Regards
Purav