Posted to issues@spark.apache.org by "Kaveen Raajan (JIRA)" <ji...@apache.org> on 2015/08/04 07:32:04 UTC

[jira] [Comment Edited] (SPARK-9587) Spark Web UI not displaying after switching to another network

    [ https://issues.apache.org/jira/browse/SPARK-9587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653068#comment-14653068 ] 

Kaveen Raajan edited comment on SPARK-9587 at 8/4/15 5:31 AM:
--------------------------------------------------------------

I also tried *set SPARK_LOCAL_IP=localhost*, but even with this property set the Spark Web UI never displays.

The same problem occurs when I connect through the Hadoop proxy server.
Is there an alternative way to make the Spark driver, launcher, and executors all use localhost instead of the machine's IP?

Alternatively, how can I change 'spark.driver.appUIAddress'? I set this property in spark-defaults.conf, but the change is not reflected. A sketch of the configuration I have been trying is below.
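For reference, a minimal sketch of the binding setup I have been experimenting with (property names are from the Spark 1.4 docs; propagating SPARK_LOCAL_IP to the YARN ApplicationMaster and executors via the spark.yarn.appMasterEnv.* and spark.executorEnv.* pass-throughs is my assumption, not a confirmed fix):

{code}
REM conf\spark-env.cmd (driver side, Windows)
REM assumes this file is sourced by this Spark version; otherwise set the
REM variable in the shell before launching spark-shell
set SPARK_LOCAL_IP=localhost
{code}

{code}
# conf/spark-defaults.conf
# bind the driver to localhost
spark.driver.host                        localhost
# propagate the same binding to the YARN AM and executors (assumption)
spark.yarn.appMasterEnv.SPARK_LOCAL_IP   localhost
spark.executorEnv.SPARK_LOCAL_IP         localhost
{code}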



> Spark Web UI not displaying after switching to another network
> --------------------------------------------------------------
>
>                 Key: SPARK-9587
>                 URL: https://issues.apache.org/jira/browse/SPARK-9587
>             Project: Spark
>          Issue Type: Bug
>          Components: Web UI
>    Affects Versions: 1.4.1
>         Environment: Windows,
> Hadoop-2.5.2
>            Reporter: Kaveen Raajan
>
> I want to start spark-shell with localhost instead of an IP address. I'm running spark-shell in yarn-client mode, and my Hadoop installation runs as a single-node cluster bound to localhost.
> I changed the following properties in spark-defaults.conf:
> {panel:title=spark-defaults.conf}
> spark.driver.host    localhost
> spark.driver.hosts   localhost
> {panel}
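> As far as I can tell, only spark.driver.host is a documented Spark property; the second line (spark.driver.hosts) is likely a typo and may simply be ignored. The same override can also be passed directly when launching the shell, for example:
> {code}
> spark-shell --master yarn-client --conf spark.driver.host=localhost
> {code}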
> Initially I start spark-shell while connected to a public network (172.16.xxx.yyy). If I then disconnect from that network, Spark jobs keep working without any problem, but the Spark Web UI stops working.
> The ApplicationMaster always connects using the current IP instead of localhost.
> My log is here:
> {code}
> 15/08/04 10:17:10 INFO spark.SecurityManager: Changing view acls to: SYSTEM
> 15/08/04 10:17:10 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
> 15/08/04 10:17:10 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
> 15/08/04 10:17:10 INFO spark.HttpServer: Starting HTTP Server
> 15/08/04 10:17:10 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/08/04 10:17:10 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:58416
> 15/08/04 10:17:10 INFO util.Utils: Successfully started service 'HTTP class server' on port 58416.
> 15/08/04 10:17:15 INFO spark.SparkContext: Running Spark version 1.4.0
> 15/08/04 10:17:15 INFO spark.SecurityManager: Changing view acls to: SYSTEM
> 15/08/04 10:17:15 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
> 15/08/04 10:17:15 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
>       /_/
> Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51)
> Type in expressions to have them evaluated.
> Type :help for more information.
> 15/08/04 10:17:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 15/08/04 10:17:15 INFO Remoting: Starting remoting
> 15/08/04 10:17:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:58439]
> 15/08/04 10:17:16 INFO util.Utils: Successfully started service 'sparkDriver' on port 58439.
> 15/08/04 10:17:16 INFO spark.SparkEnv: Registering MapOutputTracker
> 15/08/04 10:17:16 INFO spark.SparkEnv: Registering BlockManagerMaster
> 15/08/04 10:17:16 INFO storage.DiskBlockManager: Created local directory at C:\Windows\Temp\spark-86221988-7e8b-4340-be80-a2be283845e3\blockmgr-2c1b95de-936b-44f3-b98d-263c45e310ca
> 15/08/04 10:17:16 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
> 15/08/04 10:17:16 INFO spark.HttpFileServer: HTTP File server directory is C:\Windows\Temp\spark-86221988-7e8b-4340-be80-a2be283845e3\httpd-da7b686d-deb0-446d-af20-42ded6d6d035
> 15/08/04 10:17:16 INFO spark.HttpServer: Starting HTTP Server
> 15/08/04 10:17:16 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/08/04 10:17:16 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:58440
> 15/08/04 10:17:16 INFO util.Utils: Successfully started service 'HTTP file server' on port 58440.
> 15/08/04 10:17:16 INFO spark.SparkEnv: Registering OutputCommitCoordinator
> 15/08/04 10:17:16 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/08/04 10:17:16 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
> 15/08/04 10:17:16 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
> 15/08/04 10:17:16 INFO ui.SparkUI: Started SparkUI at http://172.16.123.123:4040
> 15/08/04 10:17:16 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
> 15/08/04 10:17:17 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
> 15/08/04 10:17:17 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)
> 15/08/04 10:17:17 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
> 15/08/04 10:17:17 INFO yarn.Client: Setting up container launch context for our AM
> 15/08/04 10:17:17 INFO yarn.Client: Preparing resources for our AM container
> 15/08/04 10:17:17 INFO yarn.Client: Uploading resource file:/C://Spark/lib/spark-assembly-1.4.0-hadoop2.5.2.jar -> hdfs://localhost:9000/user/SYSTEM/.sparkStaging/application_1438662854479_0001/spark-assembly-1.4.0-hadoop2.5.2.jar
> 15/08/04 10:17:20 INFO yarn.Client: Uploading resource file:/C:/Windows/Temp/spark-86221988-7e8b-4340-be80-a2be283845e3/__hadoop_conf__3573844093591295334.zip -> hdfs://localhost:9000/user/SYSTEM/.sparkStaging/application_1438662854479_0001/__hadoop_conf__3573844093591295334.zip
> 15/08/04 10:17:21 INFO yarn.Client: Setting up the launch environment for our AM container
> 15/08/04 10:17:21 INFO spark.SecurityManager: Changing view acls to: SYSTEM
> 15/08/04 10:17:21 INFO spark.SecurityManager: Changing modify acls to: SYSTEM
> 15/08/04 10:17:21 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(SYSTEM); users with modify permissions: Set(SYSTEM)
> 15/08/04 10:17:21 INFO yarn.Client: Submitting application 1 to ResourceManager
> 15/08/04 10:17:21 INFO impl.YarnClientImpl: Submitted application application_1438662854479_0001
> 15/08/04 10:17:22 INFO yarn.Client: Application report for application_1438662854479_0001 (state: ACCEPTED)
> 15/08/04 10:17:22 INFO yarn.Client: 
> 	 client token: N/A
> 	 diagnostics: N/A
> 	 ApplicationMaster host: N/A
> 	 ApplicationMaster RPC port: -1
> 	 queue: default
> 	 start time: 1438663641312
> 	 final status: UNDEFINED
> 	 tracking URL: http://MASTER:8088/proxy/application_1438662854479_0001/
> 	 user: SYSTEM
> 15/08/04 10:17:23 INFO yarn.Client: Application report for application_1438662854479_0001 (state: ACCEPTED)
> 15/08/04 10:17:24 INFO yarn.Client: Application report for application_1438662854479_0001 (state: ACCEPTED)
> 15/08/04 10:17:25 INFO yarn.Client: Application report for application_1438662854479_0001 (state: ACCEPTED)
> 15/08/04 10:17:26 INFO yarn.Client: Application report for application_1438662854479_0001 (state: ACCEPTED)
> 15/08/04 10:17:27 INFO yarn.Client: Application report for application_1438662854479_0001 (state: ACCEPTED)
> 15/08/04 10:17:28 INFO yarn.Client: Application report for application_1438662854479_0001 (state: ACCEPTED)
> 15/08/04 10:17:29 INFO yarn.Client: Application report for application_1438662854479_0001 (state: ACCEPTED)
> 15/08/04 10:17:29 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka.tcp://sparkYarnAM@172.16.123.123:58480/user/YarnAM#-1256718400])
> 15/08/04 10:17:29 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> MASTER, PROXY_URI_BASES -> http://mASTER:8088/proxy/application_1438662854479_0001), /proxy/application_1438662854479_0001
> 15/08/04 10:17:29 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 15/08/04 10:17:30 INFO yarn.Client: Application report for application_1438662854479_0001 (state: RUNNING)
> 15/08/04 10:17:30 INFO yarn.Client: 
> 	 client token: N/A
> 	 diagnostics: N/A
> 	 ApplicationMaster host: 172.16.123.123
> 	 ApplicationMaster RPC port: 0
> 	 queue: default
> 	 start time: 1438663641312
> 	 final status: UNDEFINED
> 	 tracking URL: http://MASTER:8088/proxy/application_1438662854479_0001/
> 	 user: SYSTEM
> 15/08/04 10:17:30 INFO cluster.YarnClientSchedulerBackend: Application application_1438662854479_0001 has started running.
> 15/08/04 10:17:31 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 58502.
> 15/08/04 10:17:31 INFO netty.NettyBlockTransferService: Server created on 58502
> 15/08/04 10:17:31 INFO storage.BlockManagerMaster: Trying to register BlockManager
> 15/08/04 10:17:31 INFO storage.BlockManagerMasterEndpoint: Registering block manager 172.16.123.123:58502 with 265.4 MB RAM, BlockManagerId(driver, 172.16.123.123, 58502)
> 15/08/04 10:17:31 INFO storage.BlockManagerMaster: Registered BlockManager
> 15/08/04 10:17:41 INFO cluster.YarnClientSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@127.0.0.1:58564/user/Executor#1372491846]) with ID 2
> 15/08/04 10:17:41 INFO cluster.YarnClientSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@127.0.0.1:58561/user/Executor#-306444795]) with ID 1
> 15/08/04 10:17:41 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
> 15/08/04 10:17:41 INFO repl.SparkILoop: Created spark context..
> 15/08/04 10:17:42 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
> 15/08/04 10:17:42 INFO storage.BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:58575 with 530.3 MB RAM, BlockManagerId(2, 127.0.0.1, 58575)
> 15/08/04 10:17:42 INFO storage.BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:58576 with 530.3 MB RAM, BlockManagerId(1, 127.0.0.1, 58576)
> 15/08/04 10:17:42 INFO hive.metastore: Trying to connect to metastore with URI thrift://localhost:9083
> Spark context available as sc.
> 15/08/04 10:17:42 INFO hive.metastore: Connected to metastore.
> 15/08/04 10:17:42 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
> 15/08/04 10:17:42 INFO repl.SparkILoop: Created sql context (with Hive support)..
> SQL context available as sqlContext.
> scala> 
> {code}
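> Note that in the log above the UI listener itself binds to all interfaces (SelectChannelConnector@0.0.0.0:4040) while the advertised address is the public IP. To confirm what is actually listening on the UI port after the network change, a quick generic check on Windows (not Spark-specific) is:
> {code}
> netstat -ano | findstr :4040
> {code}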


