Posted to user@spark.apache.org by t4ng0 <ma...@gmail.com> on 2015/08/16 14:47:35 UTC

Spark can't fetch application jar after adding it to HTTP server

Hi 

I am new to Spark and am trying to run a standalone application with
spark-submit. As far as I can tell from the logs, Spark cannot fetch the
jar file after adding it to its HTTP server. If that is the problem, do I
need to configure proxy settings separately for Spark? Please help;
thanks in advance.
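
For reference, a minimal probe along these lines (the URL and port are taken
from the log below; the object name and everything else are illustrative)
should show whether the 504 comes from an intermediate proxy, since a 504
Gateway Timeout is a proxy-generated status and Spark's own file server would
typically answer the request directly:

    import java.net.{HttpURLConnection, URL}

    // Hypothetical standalone probe: fetch the jar the same way the executor
    // does and report the status code plus any JVM proxy settings in effect.
    object FetchProbe {
      def main(args: Array[String]): Unit = {
        // Print the JVM-level proxy settings that java.net honours.
        for (key <- Seq("http.proxyHost", "http.proxyPort", "http.nonProxyHosts"))
          println(s"$key = ${sys.props.getOrElse(key, "<unset>")}")

        val url = new URL("http://10.133.201.51:63986/jars/spark_matrix_2.11-1.0.jar")
        val conn = url.openConnection().asInstanceOf[HttpURLConnection]
        conn.setConnectTimeout(5000)
        conn.setReadTimeout(5000)
        // 200 (or 404 for a wrong path) would come from Spark's file server;
        // a 504 would point at a proxy sitting in between.
        println(s"HTTP ${conn.getResponseCode} from ${url.getHost}:${url.getPort}")
        conn.disconnect()
      }
    }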

PS: I am attaching the logs here.

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/08/16 15:20:52 INFO SparkContext: Running Spark version 1.4.1
15/08/16 15:20:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/16 15:20:53 INFO SecurityManager: Changing view acls to: manvendratomar
15/08/16 15:20:53 INFO SecurityManager: Changing modify acls to: manvendratomar
15/08/16 15:20:53 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(manvendratomar); users with modify permissions: Set(manvendratomar)
15/08/16 15:20:53 INFO Slf4jLogger: Slf4jLogger started
15/08/16 15:20:53 INFO Remoting: Starting remoting
15/08/16 15:20:54 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.133.201.51:63985]
15/08/16 15:20:54 INFO Utils: Successfully started service 'sparkDriver' on port 63985.
15/08/16 15:20:54 INFO SparkEnv: Registering MapOutputTracker
15/08/16 15:20:54 INFO SparkEnv: Registering BlockManagerMaster
15/08/16 15:20:54 INFO DiskBlockManager: Created local directory at /private/var/folders/6z/2j2qj3rn3z32tymm_9ybz9600000gn/T/spark-bd1c72c8-5d27-4751-ae6f-f298451e4f66/blockmgr-ed753ee2-726e-45ae-97a2-0c82923262ef
15/08/16 15:20:54 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/08/16 15:20:54 INFO HttpFileServer: HTTP File server directory is /private/var/folders/6z/2j2qj3rn3z32tymm_9ybz9600000gn/T/spark-bd1c72c8-5d27-4751-ae6f-f298451e4f66/httpd-68c0afb2-92ef-4ddd-9ffc-639ccac81d6a
15/08/16 15:20:54 INFO HttpServer: Starting HTTP Server
15/08/16 15:20:54 INFO Utils: Successfully started service 'HTTP file server' on port 63986.
15/08/16 15:20:54 INFO SparkEnv: Registering OutputCommitCoordinator
15/08/16 15:20:54 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/08/16 15:20:54 INFO SparkUI: Started SparkUI at http://10.133.201.51:4040
15/08/16 15:20:54 INFO SparkContext: Added JAR target/scala-2.11/spark_matrix_2.11-1.0.jar at http://10.133.201.51:63986/jars/spark_matrix_2.11-1.0.jar with timestamp 1439718654603
15/08/16 15:20:54 INFO Executor: Starting executor ID driver on host localhost
15/08/16 15:20:54 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 63987.
15/08/16 15:20:54 INFO NettyBlockTransferService: Server created on 63987
15/08/16 15:20:54 INFO BlockManagerMaster: Trying to register BlockManager
15/08/16 15:20:54 INFO BlockManagerMasterEndpoint: Registering block manager localhost:63987 with 265.4 MB RAM, BlockManagerId(driver, localhost, 63987)
15/08/16 15:20:54 INFO BlockManagerMaster: Registered BlockManager
15/08/16 15:20:55 INFO MemoryStore: ensureFreeSpace(157248) called with curMem=0, maxMem=278302556
15/08/16 15:20:55 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 153.6 KB, free 265.3 MB)
15/08/16 15:20:55 INFO MemoryStore: ensureFreeSpace(14257) called with curMem=157248, maxMem=278302556
15/08/16 15:20:55 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 265.2 MB)
15/08/16 15:20:55 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:63987 (size: 13.9 KB, free: 265.4 MB)
15/08/16 15:20:55 INFO SparkContext: Created broadcast 0 from textFile at partition.scala:20
15/08/16 15:20:56 INFO FileInputFormat: Total input paths to process : 1
15/08/16 15:20:56 INFO SparkContext: Starting job: reduce at IndexedRowMatrix.scala:65
15/08/16 15:20:56 INFO DAGScheduler: Got job 0 (reduce at IndexedRowMatrix.scala:65) with 1 output partitions (allowLocal=false)
15/08/16 15:20:56 INFO DAGScheduler: Final stage: ResultStage 0(reduce at IndexedRowMatrix.scala:65)
15/08/16 15:20:56 INFO DAGScheduler: Parents of final stage: List()
15/08/16 15:20:56 INFO DAGScheduler: Missing parents: List()
15/08/16 15:20:56 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[6] at map at IndexedRowMatrix.scala:65), which has no missing parents
15/08/16 15:20:56 INFO MemoryStore: ensureFreeSpace(4064) called with curMem=171505, maxMem=278302556
15/08/16 15:20:56 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.0 KB, free 265.2 MB)
15/08/16 15:20:56 INFO MemoryStore: ensureFreeSpace(2249) called with curMem=175569, maxMem=278302556
15/08/16 15:20:56 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.2 KB, free 265.2 MB)
15/08/16 15:20:56 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:63987 (size: 2.2 KB, free: 265.4 MB)
15/08/16 15:20:56 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874
15/08/16 15:20:56 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[6] at map at IndexedRowMatrix.scala:65)
15/08/16 15:20:56 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
15/08/16 15:20:56 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, ANY, 1607 bytes)
15/08/16 15:20:56 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/08/16 15:20:56 INFO Executor: Fetching http://10.133.201.51:63986/jars/spark_matrix_2.11-1.0.jar with timestamp 1439718654603
15/08/16 15:21:26 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.io.IOException: Server returned HTTP response code: 504 for URL: http://10.133.201.51:63986/jars/spark_matrix_2.11-1.0.jar
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1627)
        at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:640)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:453)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:398)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:390)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:390)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:193)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
15/08/16 15:21:26 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.io.IOException: Server returned HTTP response code: 504 for URL: http://10.133.201.51:63986/jars/spark_matrix_2.11-1.0.jar
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1627)
        at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:640)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:453)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:398)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:390)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:390)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:193)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
15/08/16 15:21:26 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
15/08/16 15:21:26 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/08/16 15:21:26 INFO TaskSchedulerImpl: Cancelling stage 0
15/08/16 15:21:26 INFO DAGScheduler: ResultStage 0 (reduce at IndexedRowMatrix.scala:65) failed in 30.213 s
15/08/16 15:21:26 INFO DAGScheduler: Job 0 failed: reduce at IndexedRowMatrix.scala:65, took 30.313045 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.io.IOException: Server returned HTTP response code: 504 for URL: http://10.133.201.51:63986/jars/spark_matrix_2.11-1.0.jar
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1627)
        at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:640)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:453)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:398)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:390)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:390)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:193)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
15/08/16 15:21:26 INFO SparkContext: Invoking stop() from shutdown hook
15/08/16 15:21:26 INFO SparkUI: Stopped Spark web UI at http://10.133.201.51:4040
15/08/16 15:21:26 INFO DAGScheduler: Stopping DAGScheduler
15/08/16 15:21:26 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/08/16 15:21:26 INFO Utils: path = /private/var/folders/6z/2j2qj3rn3z32tymm_9ybz9600000gn/T/spark-bd1c72c8-5d27-4751-ae6f-f298451e4f66/blockmgr-ed753ee2-726e-45ae-97a2-0c82923262ef, already present as root for deletion.
15/08/16 15:21:26 INFO MemoryStore: MemoryStore cleared
15/08/16 15:21:26 INFO BlockManager: BlockManager stopped
15/08/16 15:21:26 INFO BlockManagerMaster: BlockManagerMaster stopped
15/08/16 15:21:26 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/08/16 15:21:26 INFO SparkContext: Successfully stopped SparkContext
15/08/16 15:21:26 INFO Utils: Shutdown hook called
15/08/16 15:21:26 INFO Utils: Deleting directory /private/var/folders/6z/2j2qj3rn3z32tymm_9ybz9600000gn/T/spark-bd1c72c8-5d27-4751-ae6f-f298451e4f66




--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-can-t-fetch-application-jar-after-adding-it-to-HTTP-server-tp24286.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: Spark can't fetch application jar after adding it to HTTP server

Posted by Rishi Yadav <ri...@infoobjects.com>.
Can you tell us more about your environment? I understand you are running it
on a single machine, but is a firewall enabled?
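
If a proxy is configured (for example via http.proxyHost on the JVM), one
thing worth trying is to exempt the driver's own address from proxying before
the SparkContext is created. A minimal sketch, assuming local mode as your log
suggests (the executor then shares the driver JVM, so the property also
governs the jar fetch); the object name is illustrative and the address is
taken from your log:

    import org.apache.spark.{SparkConf, SparkContext}

    object NoProxyForDriver {
      def main(args: Array[String]): Unit = {
        // Tell java.net to bypass any configured HTTP proxy for these hosts.
        // 10.133.201.51 is the driver address from the log; adjust as needed.
        System.setProperty("http.nonProxyHosts", "localhost|127.0.0.1|10.133.201.51")

        val sc = new SparkContext(new SparkConf().setAppName("spark_matrix"))
        // ... rest of the job unchanged ...
        sc.stop()
      }
    }

Without code changes, the equivalent experiment is to pass the property on the
command line, e.g. spark-submit --driver-java-options
"-Dhttp.nonProxyHosts=localhost|127.0.0.1|10.133.201.51" ..., or, if the proxy
instead comes from shell environment variables that your setup picks up, to
unset http_proxy/HTTP_PROXY before running spark-submit.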

On Sun, Aug 16, 2015 at 5:47 AM, t4ng0 <ma...@gmail.com> wrote:

> Hi
>
> I am new to Spark and am trying to run a standalone application with
> spark-submit. As far as I can tell from the logs, Spark cannot fetch the
> jar file after adding it to its HTTP server. If that is the problem, do I
> need to configure proxy settings separately for Spark? Please help;
> thanks in advance.
>
> PS: I am attaching the logs here.
>
> [log output snipped; it is identical to the log above]