Posted to issues@spark.apache.org by "shahid (JIRA)" <ji...@apache.org> on 2018/10/10 06:05:00 UTC

[jira] [Commented] (SPARK-25695) Spark history server event log store problem

    [ https://issues.apache.org/jira/browse/SPARK-25695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644503#comment-16644503 ] 

shahid commented on SPARK-25695:
--------------------------------

Hi, this error indicates that the eventLog directory does not exist on your cluster. Please create the directory and run the job again.
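
For reference, a minimal sketch of what that could look like with the file: paths from the configuration below. In yarn-cluster mode the driver runs inside the ApplicationMaster on an arbitrary NodeManager host, so the directory has to exist there as well and be writable by the user the application runs as. The owner, group, and permissions shown are assumptions; adjust them to your environment.

{code:bash}
# Run on every NodeManager host that might host the driver (yarn-cluster mode).
# The path comes from spark.eventLog.dir / spark.history.fs.logDirectory below.
mkdir -p /home/hdfs/event
chown hdfs:hadoop /home/hdfs/event   # assumed owner/group -- adjust to your setup
chmod 1777 /home/hdfs/event          # or stricter, as long as the submitting user can write
{code}

Alternatively, pointing both spark.eventLog.dir and spark.history.fs.logDirectory at a shared HDFS path (for example hdfs:///spark-history, assuming such a directory exists and is writable) removes the need to create a local directory on every node.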


> Spark history server event log store problem
> --------------------------------------------
>
>                 Key: SPARK-25695
>                 URL: https://issues.apache.org/jira/browse/SPARK-25695
>             Project: Spark
>          Issue Type: Bug
>          Components: Web UI
>    Affects Versions: 2.0.0
>            Reporter: Si Chen
>            Priority: Major
>
> Environment: Spark 2.0.0, Hadoop 2.7.3
> spark-defaults.conf
> {code:java}
> spark.driver.extraLibraryPath /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
> spark.eventLog.dir file:/home/hdfs/event
> spark.eventLog.enabled true
> spark.executor.extraLibraryPath /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
> spark.history.fs.logDirectory file:/home/hdfs/event
> spark.history.kerberos.keytab none
> spark.history.kerberos.principal none
> spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider
> spark.history.ui.port 18081
> spark.yarn.historyServer.address slave6.htdata.com:18081
> spark.yarn.queue default
> {code}
>  I want to save the event log to the local disk.
>  When I submit a Spark job in client deploy mode, the event log is written to the local disk.
>  But when I use cluster mode, the following problem arises. I am sure this path exists on all servers.
> {code:java}
> 18/10/10 13:10:13 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1538963194112_0033 and attemptId Some(appattempt_1538963194112_0033_000001)
> 18/10/10 13:10:13 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 63016.
> 18/10/10 13:10:13 INFO netty.NettyBlockTransferService: Server created on 192.168.0.78:63016
> 18/10/10 13:10:13 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.0.78, 63016)
> 18/10/10 13:10:13 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.0.78:63016 with 366.3 MB RAM, BlockManagerId(driver, 192.168.0.78, 63016)
> 18/10/10 13:10:13 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.0.78, 63016)
> 18/10/10 13:10:13 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@32844dd3{/metrics/json,null,AVAILABLE}
> 18/10/10 13:10:13 ERROR spark.SparkContext: Error initializing SparkContext.
> java.io.FileNotFoundException: File file:/home/hdfs/event does not exist
> 	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
> 	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
> 	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
> 	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:422)
> 	at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:93)
> 	at org.apache.spark.SparkContext.<init>(SparkContext.scala:516)
> 	at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:836)
> 	at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:84)
> 	at com.iiot.stream.spark.HTMonitorContext$.main(HTMonitorContext.scala:23)
> 	at com.iiot.stream.spark.HTMonitorContext.main(HTMonitorContext.scala)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
> 18/10/10 13:10:13 INFO server.ServerConnector: Stopped ServerConnector@4fd401cf{HTTP/1.1}{0.0.0.0:0}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@73fe17d4{/stages/stage/kill,null,UNAVAILABLE}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@11e5f9d4{/api,null,UNAVAILABLE}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5d6fa266{/,null,UNAVAILABLE}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5920363{/static,null,UNAVAILABLE}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4494dda9{/executors/threadDump/json,null,UNAVAILABLE}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5a32de89{/executors/threadDump,null,UNAVAILABLE}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7c448c23{/executors/json,null,UNAVAILABLE}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@b0949d9{/executors,null,UNAVAILABLE}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6a3e90c6{/environment/json,null,UNAVAILABLE}
> 18/10/10 13:10:13 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6979e53f{/environment,null,UNAVAILABLE}
> {code}
>  
>  


