Posted to issues@spark.apache.org by "Erik O'Shaughnessy (JIRA)" <ji...@apache.org> on 2016/03/09 20:08:40 UTC

[jira] [Commented] (SPARK-13776) Web UI is not available after ./sbin/start-master.sh

    [ https://issues.apache.org/jira/browse/SPARK-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187693#comment-15187693 ] 

Erik O'Shaughnessy commented on SPARK-13776:
--------------------------------------------

Quite agree, this is a Minor rather than a Major problem. This is the first JIRA issue I've filed and I missed updating that field. That's my story and I'm sticking with it.

> Web UI is not available after ./sbin/start-master.sh
> ----------------------------------------------------
>
>                 Key: SPARK-13776
>                 URL: https://issues.apache.org/jira/browse/SPARK-13776
>             Project: Spark
>          Issue Type: Bug
>          Components: Web UI
>    Affects Versions: 1.6.0
>         Environment: Solaris 11.3, Oracle SPARC T-5 8 with 1024 hardware threads
>            Reporter: Erik O'Shaughnessy
>            Priority: Minor
>
> The Apache Spark Web UI fails to become available after starting a Spark master in stand-alone mode:
> $ ./sbin/start-master.sh
> The log file contains the following:
> {quote}
> cat spark-hadoop-org.apache.spark.deploy.master.Master-1-t5-8-002.out
> Spark Command: /usr/java/bin/java -cp /usr/local/spark-1.6.0_nohadoop/conf/:/usr/local/spark-1.6.0_nohadoop/assembly/target/scala-2.10/spark-assembly-1.6.0-hadoop2.2.0.jar:/usr/local/spark-1.6.0_nohadoop/lib_managed/jars/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.6.0_nohadoop/lib_managed/jars/datanucleus-rdbms-3.2.9.jar:/usr/local/spark-1.6.0_nohadoop/lib_managed/jars/datanucleus-core-3.2.10.jar -Xms1g -Xmx1g org.apache.spark.deploy.master.Master --ip t5-8-002 --port 7077 --webui-port 8080
> ========================================
> 16/01/27 12:00:42 WARN AbstractConnector: insufficient threads configured for SelectChannelConnector@0.0.0.0:8080
> 16/01/27 12:00:42 WARN AbstractConnector: insufficient threads configured for SelectChannelConnector@t5-8-002:6066
> {quote}
> I did some poking around and it seems the message is coming from Jetty, indicating a mismatch between Jetty's default maxThreads configuration and the actual number of CPUs available on the hardware (1024). I was not able to find a way to change Jetty's configuration at run-time (a rough sketch of this relationship follows below).
> Our workaround was to disable CPUs until the WARN messages no longer appeared in the log file, which happened when NCPUs = 504.
> I don't know for certain whether this is a known problem in Jetty; from looking at their bug reports, I wasn't able to locate a Jetty issue that describes it.
> While not specifically an Apache Spark problem, I thought documenting it would at least be helpful.
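For anyone else hitting this, here is a minimal standalone Scala sketch (against the Jetty 8 API bundled with Spark 1.6, not Spark's own UI code) of the relationship described above: the connector appears to size its acceptor/selector threads from Runtime.getRuntime.availableProcessors(), so on a machine reporting 1024 processors the thread pool's default maxThreads can be too small and Jetty logs the "insufficient threads" warning. The pool size used below is only an illustrative guess, not a tuned or recommended value, and I have not found a supported way to apply an equivalent setting to the pool Spark creates internally.

{code:scala}
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.server.nio.SelectChannelConnector
import org.eclipse.jetty.util.thread.QueuedThreadPool

// Standalone illustration, not Spark code: start a bare Jetty 8 server
// with an explicitly sized QueuedThreadPool so the connector has enough
// headroom even when availableProcessors() is very large.
object JettyPoolSizingSketch {
  def main(args: Array[String]): Unit = {
    val cores = Runtime.getRuntime.availableProcessors()
    println(s"availableProcessors = $cores")   // 1024 on the T5-8 described above

    // maxThreads here is a guess at "comfortably more than the connector
    // will ask for on a big machine"; it is not a recommended value.
    val pool = new QueuedThreadPool(math.max(256, cores / 2))
    pool.setDaemon(true)

    val server = new Server()
    server.setThreadPool(pool)                 // Jetty 8 API; removed in Jetty 9

    val connector = new SelectChannelConnector()
    connector.setPort(8080)
    server.addConnector(connector)

    server.start()                             // with a large enough pool, no WARN is expected
    server.join()
  }
}
{code}

Running something like this on the same host with a deliberately small pool should, I'd expect, reproduce the same WARN line from AbstractConnector independently of Spark.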



