Posted to dev@spark.apache.org by Naga Vij <nv...@gmail.com> on 2015/08/13 18:47:29 UTC

Fwd: - Spark 1.4.1 - run-example SparkPi - Failure ...

Has anyone run into this?

---------- Forwarded message ----------
From: Naga Vij <nv...@gmail.com>
Date: Wed, Aug 12, 2015 at 5:47 PM
Subject: - Spark 1.4.1 - run-example SparkPi - Failure ...
To: user@spark.apache.org


Hi,

I am evaluating Spark 1.4.1.

Any idea why run-example SparkPi fails?

Here's what I am encountering with Spark 1.4.1 on Mac OS X (10.9.5) ...

---------------------------------------------------------------------------------------------------------------

~/spark-1.4.1 $ bin/run-example SparkPi

Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties

15/08/12 17:20:20 INFO SparkContext: Running Spark version 1.4.1

15/08/12 17:20:20 WARN NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable

15/08/12 17:20:20 INFO SecurityManager: Changing view acls to: nv

15/08/12 17:20:20 INFO SecurityManager: Changing modify acls to: nv

15/08/12 17:20:20 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(nv); users
with modify permissions: Set(nv)

15/08/12 17:20:21 INFO Slf4jLogger: Slf4jLogger started

15/08/12 17:20:21 INFO Remoting: Starting remoting

15/08/12 17:20:21 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriver@10.0.0.6:53024]

15/08/12 17:20:21 INFO Utils: Successfully started service 'sparkDriver' on
port 53024.

15/08/12 17:20:21 INFO SparkEnv: Registering MapOutputTracker

15/08/12 17:20:21 INFO SparkEnv: Registering BlockManagerMaster

15/08/12 17:20:21 INFO DiskBlockManager: Created local directory at
/private/var/folders/0j/bkhg_dw17w96qxddkmryz63r0000gn/T/spark-52fc9b2e-52b1-4456-a6e4-36ee2505fa01/blockmgr-1a7c45b7-0839-420a-99db-737414f35bd7

15/08/12 17:20:21 INFO MemoryStore: MemoryStore started with capacity 265.4
MB

15/08/12 17:20:21 INFO HttpFileServer: HTTP File server directory is
/private/var/folders/0j/bkhg_dw17w96qxddkmryz63r0000gn/T/spark-52fc9b2e-52b1-4456-a6e4-36ee2505fa01/httpd-2ef0b6b9-8614-41be-bc73-6ba856694d5e

15/08/12 17:20:21 INFO HttpServer: Starting HTTP Server

15/08/12 17:20:21 INFO Utils: Successfully started service 'HTTP file
server' on port 53025.

15/08/12 17:20:21 INFO SparkEnv: Registering OutputCommitCoordinator

15/08/12 17:20:21 INFO Utils: Successfully started service 'SparkUI' on
port 4040.

15/08/12 17:20:21 INFO SparkUI: Started SparkUI at http://10.0.0.6:4040

15/08/12 17:20:21 INFO SparkContext: Added JAR
file:/Users/nv/spark-1.4.1/examples/target/scala-2.10/spark-examples-1.4.1-hadoop2.6.0.jar
at http://10.0.0.6:53025/jars/spark-examples-1.4.1-hadoop2.6.0.jar with
timestamp 1439425221758

15/08/12 17:20:21 INFO Executor: Starting executor ID driver on host
localhost

15/08/12 17:20:21 INFO Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 53026.

15/08/12 17:20:21 INFO NettyBlockTransferService: Server created on 53026

15/08/12 17:20:21 INFO BlockManagerMaster: Trying to register BlockManager

15/08/12 17:20:21 INFO BlockManagerMasterEndpoint: Registering block
manager localhost:53026 with 265.4 MB RAM, BlockManagerId(driver,
localhost, 53026)

15/08/12 17:20:21 INFO BlockManagerMaster: Registered BlockManager

15/08/12 17:20:22 INFO SparkContext: Starting job: reduce at
SparkPi.scala:35

15/08/12 17:20:22 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:35)
with 2 output partitions (allowLocal=false)

15/08/12 17:20:22 INFO DAGScheduler: Final stage: ResultStage 0(reduce at
SparkPi.scala:35)

15/08/12 17:20:22 INFO DAGScheduler: Parents of final stage: List()

15/08/12 17:20:22 INFO DAGScheduler: Missing parents: List()

15/08/12 17:20:22 INFO DAGScheduler: Submitting ResultStage 0
(MapPartitionsRDD[1] at map at SparkPi.scala:31), which has no missing
parents

15/08/12 17:20:22 INFO MemoryStore: ensureFreeSpace(1888) called with
curMem=0, maxMem=278302556

15/08/12 17:20:22 INFO MemoryStore: Block broadcast_0 stored as values in
memory (estimated size 1888.0 B, free 265.4 MB)

15/08/12 17:20:22 INFO MemoryStore: ensureFreeSpace(1202) called with
curMem=1888, maxMem=278302556

15/08/12 17:20:22 INFO MemoryStore: Block broadcast_0_piece0 stored as
bytes in memory (estimated size 1202.0 B, free 265.4 MB)

15/08/12 17:20:22 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory
on localhost:53026 (size: 1202.0 B, free: 265.4 MB)

15/08/12 17:20:22 INFO SparkContext: Created broadcast 0 from broadcast at
DAGScheduler.scala:874

15/08/12 17:20:22 INFO DAGScheduler: Submitting 2 missing tasks from
ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:31)

15/08/12 17:20:22 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks

15/08/12 17:20:22 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID
0, localhost, PROCESS_LOCAL, 1442 bytes)

15/08/12 17:20:22 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID
1, localhost, PROCESS_LOCAL, 1442 bytes)

15/08/12 17:20:22 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)

15/08/12 17:20:22 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)

15/08/12 17:20:22 INFO Executor: Fetching
http://10.0.0.6:53025/jars/spark-examples-1.4.1-hadoop2.6.0.jar with
timestamp 1439425221758

15/08/12 17:21:22 INFO Executor: Fetching
http://10.0.0.6:53025/jars/spark-examples-1.4.1-hadoop2.6.0.jar with
timestamp 1439425221758

15/08/12 17:21:22 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)

java.net.SocketTimeoutException: connect timed out

at java.net.PlainSocketImpl.socketConnect(Native Method)

at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)

at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)

at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)

at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)

at java.net.Socket.connect(Socket.java:579)

at sun.net.NetworkClient.doConnect(NetworkClient.java:175)

at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)

at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)

at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)

at sun.net.www.http.HttpClient.New(HttpClient.java:308)

at sun.net.www.http.HttpClient.New(HttpClient.java:326)

at
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)

at
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)

at
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)

at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:639)

at org.apache.spark.util.Utils$.fetchFile(Utils.scala:453)

at
org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:398)

at
org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:390)

at
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)

at
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)

at
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)

at
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)

at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)

at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)

at
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)

at org.apache.spark.executor.Executor.org
$apache$spark$executor$Executor$$updateDependencies(Executor.scala:390)

at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:193)

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)
---------------------------------------------------------------------------------------------------------------

Thanks
Naga

Re: - Spark 1.4.1 - run-example SparkPi - Failure ...

Posted by Dirceu Semighini Filho <di...@gmail.com>.
Hi Naga,
If you are trying to use classes from this jar, you will need to call the
addJar method on the SparkContext, which will put this jar in the context
of all the workers.
This applies even when you execute it in standalone mode.
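
For reference, a minimal sketch of that call in Scala (the app name, local
master, and jar path below are only illustrative placeholders, not taken
from this thread):

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative local setup; in a real deployment the master normally comes from spark-submit.
val conf = new SparkConf().setAppName("addjar-example").setMaster("local[2]")
val sc = new SparkContext(conf)

// Register the extra jar so every executor fetches it and can load its classes in tasks.
sc.addJar("/path/to/extra-classes.jar")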


2015-08-13 16:02 GMT-03:00 Naga Vij <nv...@gmail.com>:

> Hi Dirceu,
>
> Thanks for getting back to me on this.
>
> --
>
> I am just running standalone on my Mac and trying to understand what
> exactly is going on behind this line ...
>
> --
>
> 15/08/13 11:53:13 INFO Executor: Fetching
> http://10.0.0.6:55518/jars/spark-examples-1.4.1-hadoop2.6.0.jar with
> timestamp 1439491992525
>
> --
>
> It appears it is trying to retrieve the jar from a local URL.
>
> --
>
> "ifconfig -a" reveals my IP address as 10.0.0.6 but "nc -zv 10.0.0.6
> 55518" hung when tried from another Terminal window, whereas "nc -zv
> localhost 55518" succeeded.
>
> --
>
> I don't know how to overcome this.  Any ideas, as applicable to standalone mode on a Mac?
>
> --
>
> Regards
>
> Naga
>
> On Thu, Aug 13, 2015 at 11:46 AM, Dirceu Semighini Filho <
> dirceu.semighini@gmail.com> wrote:
>
>> Hi Naga,
>> This has happened here sometimes when the memory of the Spark cluster
>> wasn't enough and the Java GC entered an infinite loop trying to free
>> some memory. To fix it I just added more memory to the workers of my
>> cluster; alternatively, you can increase the number of partitions of
>> your RDD using the repartition method.
>>
>> Regards,
>> Dirceu
>>
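
For what it's worth, below is a sketch of a workaround sometimes tried when
the driver's advertised address is unreachable, as in the connectivity check
quoted above. SPARK_LOCAL_IP is a standard Spark environment variable for
choosing the address Spark binds to; whether it resolves this particular hang
is only an assumption.

# Pin Spark to the loopback interface, since localhost was reachable while 10.0.0.6 was not.
export SPARK_LOCAL_IP=127.0.0.1
bin/run-example SparkPi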

Re: - Spark 1.4.1 - run-example SparkPi - Failure ...

Posted by Dirceu Semighini Filho <di...@gmail.com>.
Hi Naga,
This has happened here sometimes when the memory of the Spark cluster wasn't
enough and the Java GC entered an infinite loop trying to free some memory.
To fix it I just added more memory to the workers of my cluster; alternatively,
you can increase the number of partitions of your RDD using the repartition
method.
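
For reference, a minimal sketch of that repartition call in Scala (the RDD
contents, partition count, and local master below are only illustrative):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("repartition-example").setMaster("local[2]"))

// A toy RDD standing in for the job's real data.
val rdd = sc.parallelize(1 to 1000000)

// Spread the data across more partitions so each task handles a smaller slice.
val repartitioned = rdd.repartition(8)
println(repartitioned.count())

sc.stop()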

Regards,
Dirceu
