Posted to user@spark.apache.org by Jorge Machado <jo...@me.com.INVALID> on 2019/03/18 06:49:32 UTC
Spark on Mesos broken on 2.4?
Hello Everyone,
I’m trying out the spark-shell on Mesos and I don’t get any executors. To debug it I started the Vagrant box from Aurora and tried it out there, and I see the same issue as I’m getting on my cluster.
On Mesos the only active framework is the spark-shell; the cluster is running Mesos 1.6.1 and has 4 cores. Does someone else have the same issue?
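For what it’s worth, my understanding is that in coarse-grained mode the Mesos agents need a way to fetch the Spark distribution before they can launch executors. A minimal sketch of what I would try (the tarball URL is just a placeholder, it must be reachable from every agent):

```shell
# Tell Mesos executors where to download the Spark distribution from.
# Placeholder URL: any HTTP/HDFS location the agents can reach works.
export SPARK_EXECUTOR_URI=http://192.168.33.7:8000/spark-2.4.0-bin-hadoop2.7.tgz

./bin/spark-shell \
  --master mesos://192.168.33.7:5050 \
  --total-executor-cores 3 \
  --conf spark.executor.uri="$SPARK_EXECUTOR_URI"
```

Alternatively, if Spark is already installed at the same path on every agent, setting spark.mesos.executor.home to that path should also work.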
vagrant@aurora:~/spark-2.4.0-bin-hadoop2.7$ ./bin/spark-shell --master mesos://192.168.33.7:5050 --total-executor-cores 3
2019-03-18 06:43:30 WARN Utils:66 - Your hostname, aurora resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface eth0)
2019-03-18 06:43:30 WARN Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
2019-03-18 06:43:31 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
I0318 06:43:40.738236 5192 sched.cpp:232] Version: 1.6.1
I0318 06:43:40.743121 5188 sched.cpp:336] New master detected at master@192.168.33.7:5050
I0318 06:43:40.744252 5188 sched.cpp:351] No credentials provided. Attempting to register without authentication
I0318 06:43:40.748190 5188 sched.cpp:749] Framework registered with 2c00bfa7-df7b-430b-8c92-c6452c447249-0004
Spark context Web UI available at http://10.0.2.15:4040
Spark context available as 'sc' (master = mesos://192.168.33.7:5050, app id = 2c00bfa7-df7b-430b-8c92-c6452c447249-0004).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.0
      /_/
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_181)
Type in expressions to have them evaluated.
Type :help for more information.
scala> val textFile = spark.read.textFile("README.md")
textFile: org.apache.spark.sql.Dataset[String] = [value: string]
scala> textFile.count()
[Stage 0:> (0 + 0) / 1]2019-03-18 06:44:48 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-03-18 06:45:03 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-03-18 06:45:18 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-03-18 06:45:33 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
[Stage 0:> (0 + 0) / 1]2019-03-18 06:45:46 WARN Signaling:66 - Cancelling all active jobs, this can take a while. Press Ctrl+C again to exit now.
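To check whether the master is actually making offers to the framework (and whether the agents have unreserved CPUs/memory), one can query the master’s HTTP state endpoint; the address below is taken from the log above:

```shell
# Dump registered frameworks, agents, and outstanding offers
# from the Mesos master's state endpoint.
curl -s http://192.168.33.7:5050/master/state | python -m json.tool | less
```

If the spark-shell framework shows up there but receives no offers, another framework or a role reservation may be holding the resources.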
---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org