Posted to user@spark.apache.org by Kai Wang <de...@gmail.com> on 2016/11/16 14:39:33 UTC

HttpFileServer behavior in 1.6.3

Hi

I am running Spark 1.6.3 along with Spark Jobserver, and I have noticed
some interesting behavior of HttpFileServer.

When I destroy and recreate a SparkContext, the HttpFileServer does not
release its port. If I don't specify spark.fileserver.port, the next
HttpFileServer binds to a new random port (as expected). However, if I want
the HttpFileServer on a well-known port for firewall purposes, the next
HttpFileServer tries to bind to that port, fails because it is still held,
and then tries port+1, port+2, and so on until it either finds an open port
or exceeds the maximum number of retries.

I feel the HttpFileServer should be shut down when its SparkContext is
destroyed. Is this a bug in Spark or in SJS?