Posted to user@spark.apache.org by cherryii <ch...@adobe.com> on 2017/02/16 19:00:09 UTC

Spark on Mesos with Docker in bridge networking mode

I'm getting errors when I try to run my Docker container in bridge networking
mode on Mesos.
Here is my spark-submit script:

/spark/bin/spark-submit \
 --class com.package.MySparkJob \
 --name My-Spark-Job \
 --files /path/config.cfg,${JAR} \
 --master ${SPARK_MASTER_HOST} \
 --deploy-mode client \
 --supervise \
 --total-executor-cores ${SPARK_EXECUTOR_TOTAL_CORES} \
 --driver-cores ${SPARK_DRIVER_CORES} \
 --driver-memory ${SPARK_DRIVER_MEMORY} \
 --num-executors ${SPARK_NUM_EXECUTORS} \
 --executor-cores ${SPARK_EXECUTOR_CORES} \
 --executor-memory ${SPARK_EXECUTOR_MEMORY} \
 --driver-class-path ${JAR} \
 --conf "spark.mesos.executor.docker.image=${SPARK_MESOS_EXECUTOR_DOCKER_IMAGE}" \
 --conf "spark.mesos.executor.docker.volumes=${SPARK_MESOS_EXECUTOR_DOCKER_VOLUMES}" \
 --conf "spark.mesos.uris=${SPARK_MESOS_URIS}" \
 --conf "spark.executorEnv.OBERON_DB_PASS=${OBERON_DB_PASS}" \
 --conf "spark.executorEnv.S3_SECRET_ACCESS_KEY=${S3_SECRET_ACCESS_KEY}" \
 --conf "spark.executorEnv.S3_ACCESS_KEY=${S3_ACCESS_KEY}" \
 --conf "spark.mesos.executor.home=${SPARK_HOME}" \
 --conf "spark.executorEnv.MESOS_NATIVE_JAVA_LIBRARY=${SPARK_MESOS_LIB}" \
 --conf "spark.files.overwrite=true" \
 --conf "spark.shuffle.service.enabled=false" \
 --conf "spark.dynamicAllocation.enabled=false" \
 --conf "spark.ui.port=${PORT_SPARKUI}" \
 --conf "spark.driver.host=${SPARK_PUBLIC_DNS}" \
 --conf "spark.driver.port=${PORT_SPARKDRIVER}" \
 --conf "spark.driver.blockManager.port=${PORT_SPARKBLOCKMANAGER}" \
 --conf "spark.jars=${JAR}" \
 --conf "spark.executor.extraClassPath=${JAR}" \
 ${JAR} 

Here is the error I'm seeing: 
java.net.BindException: Cannot assign requested address: Service
'sparkDriver' failed after 16 retries! Consider explicitly setting the
appropriate port for the service 'sparkDriver' (for example spark.ui.port
for SparkUI) to an available port or increasing spark.port.maxRetries.
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)

I was trying to follow the instructions here:
https://github.com/apache/spark/pull/15120
So in my Marathon JSON I'm defining the host ports to use for the Spark
driver, Spark UI, and block manager; a trimmed sketch of that app definition
is below.
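
The image name, app id, and hostname in the sketch are placeholders, and I'm
assuming Marathon exposes each named port mapping to the container as a
PORT_<NAME> environment variable. With containerPort set to 0, Marathon
should give the container port the same value as the randomly assigned host
port, so the port the driver binds to matches the one it advertises.

{
  "id": "/my-spark-job",
  "env": { "SPARK_PUBLIC_DNS": "agent-host.example.com" },
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my-org/my-spark-image:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 0, "hostPort": 0, "protocol": "tcp", "name": "sparkui" },
        { "containerPort": 0, "hostPort": 0, "protocol": "tcp", "name": "sparkdriver" },
        { "containerPort": 0, "hostPort": 0, "protocol": "tcp", "name": "sparkblockmanager" }
      ]
    }
  }
}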

Can anyone help me get this running in bridge networking mode?





Re: Spark on Mesos with Docker in bridge networking mode

Posted by Michael Gummelt <mg...@mesosphere.io>.
There's a JIRA here: https://issues.apache.org/jira/browse/SPARK-11638

I haven't had time to look at it.
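
The "Cannot assign requested address" in your trace usually means the driver
is trying to bind to an address that doesn't exist inside the container,
which is what happens in bridge mode when spark.driver.host points at the
host machine. The PR you linked (merged for Spark 2.1 as SPARK-4563) adds
spark.driver.bindAddress so the bind address and the advertised address can
differ. A rough, untested sketch of the extra flags, reusing the variable
names from your script:

 --conf "spark.driver.bindAddress=0.0.0.0" \
 --conf "spark.driver.host=${SPARK_PUBLIC_DNS}" \
 --conf "spark.driver.port=${PORT_SPARKDRIVER}" \
 --conf "spark.driver.blockManager.port=${PORT_SPARKBLOCKMANAGER}" \
 --conf "spark.ui.port=${PORT_SPARKUI}" \

Here spark.driver.bindAddress is the local interface the driver binds to
inside the container, while spark.driver.host is what executors use to reach
it, so Marathon needs to map the same port numbers on both sides of the
bridge.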



-- 
Michael Gummelt
Software Engineer
Mesosphere