Posted to user@spark.apache.org by ved_kpl <ve...@gmail.com> on 2017/05/22 07:39:47 UTC
Spark on Mesos failure when launching a simple job
I have been trying to learn Spark on Mesos, but the spark-shell just keeps
declining the resource offers. Here is my setup:
All the components are in the same subnet.
- 1 Mesos master on an EC2 instance (t2.micro)
  command: `mesos-master --work_dir=/tmp/abc --hostname=<public IP>`
- 2 Mesos agents (each with 4 cores, 16 GB RAM, and 30 GB disk)
  command: `mesos-slave --master="<private IP of master>:5050" --hostname="<public IP>" --work_dir=/tmp/abc`
- 1 spark-shell client on an EC2 instance (t2.micro)
I have set the following environment variables on this instance before
launching the spark-shell:

export MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so
export SPARK_EXECUTOR_URI=https://d3kbcqa49mib13.cloudfront.net/spark-2.1.1-bin-hadoop2.7.tgz
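For completeness, the same client-side environment as one script. SPARK_LOCAL_IP appears only as a commented-out placeholder, since (as noted below) it was not set; on a multi-homed EC2 instance the executors must be able to reach the driver at whatever address it advertises.

```shell
# Client-side environment before launching spark-shell (from the post).
export MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so
export SPARK_EXECUTOR_URI=https://d3kbcqa49mib13.cloudfront.net/spark-2.1.1-bin-hadoop2.7.tgz

# Not set in the post. If the client has more than one address, the
# executors must be able to connect back to the driver here (placeholder):
# export SPARK_LOCAL_IP=<private IP of this instance>
```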
and then I launch the spark-shell as follows (172.31.1.93 is the private IP
of the master):

./bin/spark-shell --master mesos://172.31.1.93:5050
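One common cause of a shell that sits idle like this is the Spark scheduler itself declining the Mesos offers because they cannot cover the requested executor resources. Making the requests explicit makes any mismatch easier to spot in the driver log; the values below are illustrative assumptions, not recommendations:

```
# conf/spark-defaults.conf on the client (illustrative values)
spark.executor.memory     4g
spark.cores.max           4
# Only needed if SPARK_EXECUTOR_URI is not set: where Spark is installed
# on each agent.
# spark.mesos.executor.home  /opt/spark
```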
Once the spark-shell is up, I run the simplest program possible:

val f = sc.textFile("/tmp/ok.txt")
f.count()
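One detail worth checking once the offer problem is solved: a path like /tmp/ok.txt with no URI scheme is read by the executors, so the file has to exist at that same path on every agent, not just on the client. A quick sanity check to run on each agent (the sample file contents here are made up):

```shell
# On each agent: create/verify the input at the exact path the job uses.
printf 'line one\nline two\n' > /tmp/ok.txt
wc -l < /tmp/ok.txt   # prints 2, the value f.count() should then return
```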
and it fails. Here are the logs:
https://pastebin.ca/3815427
Note: I have not set SPARK_LOCAL_IP on the spark-shell instance.
I am using Mesos 1.2.0 and Spark 2.1.1 on Ubuntu 16.04. I have verified,
using a small Node.js-based HTTP client, that the offers coming from the
master look fine. What could be going wrong here?
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-Mesos-failure-when-launching-a-simple-job-tp28701.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.