Posted to user@spark.apache.org by Henggang Cui <cu...@gmail.com> on 2014/06/10 00:24:27 UTC

Setting spark memory limit

Hi,

I'm trying to run the SimpleApp example (
http://spark.apache.org/docs/latest/quick-start.html#a-standalone-app-in-scala)
on a larger dataset.

The input file is about 1 GB, but when I run the Spark program it fails
with "java.lang.OutOfMemoryError: GC overhead limit exceeded"; the full
error output is attached at the end of this e-mail.

Then I tried multiple ways of setting the memory limit.

In the SimpleApp.scala file, I set the following configuration:
    val conf = new SparkConf()
               .setAppName("Simple Application")
               .set("spark.executor.memory", "10g")

I have also tried appending the following configuration to the
conf/spark-defaults.conf file:
    spark.executor.memory   10g

But neither of them works. The log still reports "(estimated size
103.8 MB, free 191.1 MB)", so the total available memory is still only
about 300 MB. Why?

Thanks,
Cui


$ ~/spark-1.0.0-bin-hadoop1/bin/spark-submit --class "SimpleApp" --master
local[4] target/scala-2.10/simple-project_2.10-1.0.jar /tmp/mdata0-10.tsd
Spark assembly has been built with Hive, including Datanucleus jars on
classpath
14/06/09 15:06:29 INFO SecurityManager: Using Spark's default log4j
profile: org/apache/spark/log4j-defaults.properties
14/06/09 15:06:29 INFO SecurityManager: Changing view acls to: cuihe
14/06/09 15:06:29 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(cuihe)
14/06/09 15:06:29 INFO Slf4jLogger: Slf4jLogger started
14/06/09 15:06:29 INFO Remoting: Starting remoting
14/06/09 15:06:30 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://spark@131-1.bfc.hpl.hp.com:40779]
14/06/09 15:06:30 INFO Remoting: Remoting now listens on addresses:
[akka.tcp://spark@131-1.bfc.hpl.hp.com:40779]
14/06/09 15:06:30 INFO SparkEnv: Registering MapOutputTracker
14/06/09 15:06:30 INFO SparkEnv: Registering BlockManagerMaster
14/06/09 15:06:30 INFO DiskBlockManager: Created local directory at
/tmp/spark-local-20140609150630-eaa9
14/06/09 15:06:30 INFO MemoryStore: MemoryStore started with capacity 294.9
MB.
14/06/09 15:06:30 INFO ConnectionManager: Bound socket to port 47164 with
id = ConnectionManagerId(131-1.bfc.hpl.hp.com,47164)
14/06/09 15:06:30 INFO BlockManagerMaster: Trying to register BlockManager
14/06/09 15:06:30 INFO BlockManagerInfo: Registering block manager
131-1.bfc.hpl.hp.com:47164 with 294.9 MB RAM
14/06/09 15:06:30 INFO BlockManagerMaster: Registered BlockManager
14/06/09 15:06:30 INFO HttpServer: Starting HTTP Server
14/06/09 15:06:30 INFO HttpBroadcast: Broadcast server started at
http://16.106.36.131:48587
14/06/09 15:06:30 INFO HttpFileServer: HTTP File server directory is
/tmp/spark-35e1c47b-bfa1-4fba-bc64-df8eee287bb7
14/06/09 15:06:30 INFO HttpServer: Starting HTTP Server
14/06/09 15:06:30 INFO SparkUI: Started SparkUI at
http://131-1.bfc.hpl.hp.com:4040
14/06/09 15:06:30 INFO SparkContext: Added JAR
file:/data/cuihe/spark-app/target/scala-2.10/simple-project_2.10-1.0.jar at
http://16.106.36.131:35579/jars/simple-project_2.10-1.0.jar with timestamp
1402351590741
14/06/09 15:06:30 INFO MemoryStore: ensureFreeSpace(32856) called with
curMem=0, maxMem=309225062
14/06/09 15:06:30 INFO MemoryStore: Block broadcast_0 stored as values to
memory (estimated size 32.1 KB, free 294.9 MB)
14/06/09 15:06:30 WARN NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
14/06/09 15:06:30 WARN LoadSnappy: Snappy native library not loaded
14/06/09 15:06:30 INFO FileInputFormat: Total input paths to process : 1
14/06/09 15:06:30 INFO SparkContext: Starting job: count at
SimpleApp.scala:14
14/06/09 15:06:31 INFO DAGScheduler: Got job 0 (count at
SimpleApp.scala:14) with 7 output partitions (allowLocal=false)
14/06/09 15:06:31 INFO DAGScheduler: Final stage: Stage 0(count at
SimpleApp.scala:14)
14/06/09 15:06:31 INFO DAGScheduler: Parents of final stage: List()
14/06/09 15:06:31 INFO DAGScheduler: Missing parents: List()
14/06/09 15:06:31 INFO DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at
filter at SimpleApp.scala:14), which has no missing parents
14/06/09 15:06:31 INFO DAGScheduler: Submitting 7 missing tasks from Stage
0 (FilteredRDD[2] at filter at SimpleApp.scala:14)
14/06/09 15:06:31 INFO TaskSchedulerImpl: Adding task set 0.0 with 7 tasks
14/06/09 15:06:31 INFO TaskSetManager: Starting task 0.0:0 as TID 0 on
executor localhost: localhost (PROCESS_LOCAL)
14/06/09 15:06:31 INFO TaskSetManager: Serialized task 0.0:0 as 1839 bytes
in 2 ms
14/06/09 15:06:31 INFO TaskSetManager: Starting task 0.0:1 as TID 1 on
executor localhost: localhost (PROCESS_LOCAL)
14/06/09 15:06:31 INFO TaskSetManager: Serialized task 0.0:1 as 1839 bytes
in 0 ms
14/06/09 15:06:31 INFO TaskSetManager: Starting task 0.0:2 as TID 2 on
executor localhost: localhost (PROCESS_LOCAL)
14/06/09 15:06:31 INFO TaskSetManager: Serialized task 0.0:2 as 1839 bytes
in 1 ms
14/06/09 15:06:31 INFO TaskSetManager: Starting task 0.0:3 as TID 3 on
executor localhost: localhost (PROCESS_LOCAL)
14/06/09 15:06:31 INFO TaskSetManager: Serialized task 0.0:3 as 1839 bytes
in 1 ms
14/06/09 15:06:31 INFO Executor: Running task ID 0
14/06/09 15:06:31 INFO Executor: Running task ID 1
14/06/09 15:06:31 INFO Executor: Running task ID 2
14/06/09 15:06:31 INFO Executor: Running task ID 3
14/06/09 15:06:31 INFO Executor: Fetching
http://16.106.36.131:35579/jars/simple-project_2.10-1.0.jar with timestamp
1402351590741
14/06/09 15:06:31 INFO Utils: Fetching
http://16.106.36.131:35579/jars/simple-project_2.10-1.0.jar to
/tmp/fetchFileTemp7241193225836706654.tmp
14/06/09 15:06:31 INFO Executor: Adding
file:/tmp/spark-68aa13c8-8146-4e6a-80a1-c406a4cef89f/simple-project_2.10-1.0.jar
to class loader
14/06/09 15:06:31 INFO BlockManager: Found block broadcast_0 locally
14/06/09 15:06:31 INFO BlockManager: Found block broadcast_0 locally
14/06/09 15:06:31 INFO BlockManager: Found block broadcast_0 locally
14/06/09 15:06:31 INFO BlockManager: Found block broadcast_0 locally
14/06/09 15:06:31 INFO CacheManager: Partition rdd_1_2 not found, computing
it
14/06/09 15:06:31 INFO CacheManager: Partition rdd_1_0 not found, computing
it
14/06/09 15:06:31 INFO CacheManager: Partition rdd_1_1 not found, computing
it
14/06/09 15:06:31 INFO CacheManager: Partition rdd_1_3 not found, computing
it
14/06/09 15:06:31 INFO HadoopRDD: Input split:
file:/tmp/mdata0-10.tsd:67108864+33554432
14/06/09 15:06:31 INFO HadoopRDD: Input split:
file:/tmp/mdata0-10.tsd:33554432+33554432
14/06/09 15:06:31 INFO HadoopRDD: Input split:
file:/tmp/mdata0-10.tsd:100663296+33554432
14/06/09 15:06:31 INFO HadoopRDD: Input split:
file:/tmp/mdata0-10.tsd:0+33554432
14/06/09 15:06:50 INFO MemoryStore: ensureFreeSpace(108800293) called with
curMem=32856, maxMem=309225062
14/06/09 15:06:50 INFO MemoryStore: Block rdd_1_2 stored as values to
memory (estimated size 103.8 MB, free 191.1 MB)
14/06/09 15:06:50 INFO MemoryStore: ensureFreeSpace(108716407) called with
curMem=108833149, maxMem=309225062
14/06/09 15:06:50 ERROR Executor: Exception in task ID 1
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Arrays.copyOfRange(Arrays.java:2694)
        at java.lang.String.<init>(String.java:203)
        at java.nio.HeapCharBuffer.toString(HeapCharBuffer.java:561)
        at java.nio.CharBuffer.toString(CharBuffer.java:1201)
        at org.apache.hadoop.io.Text.decode(Text.java:350)
        at org.apache.hadoop.io.Text.decode(Text.java:327)
        at org.apache.hadoop.io.Text.toString(Text.java:254)
        at
org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:458)
        at
org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:458)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at
scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at
org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:107)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
        at org.apache.spark.rdd.FilteredRDD.compute(FilteredRDD.scala:34)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        at org.apache.spark.scheduler.Task.run(Task.scala:51)
        at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
14/06/09 15:06:50 INFO BlockManagerInfo: Added rdd_1_2 in memory on
131-1.bfc.hpl.hp.com:47164 (size: 103.8 MB, free: 191.1 MB)
14/06/09 15:06:50 INFO MemoryStore: Block rdd_1_3 stored as values to
memory (estimated size 103.7 MB, free 87.4 MB)
14/06/09 15:06:50 INFO BlockManagerMaster: Updated info of block rdd_1_2
14/06/09 15:06:50 INFO BlockManagerInfo: Added rdd_1_3 in memory on
131-1.bfc.hpl.hp.com:47164 (size: 103.7 MB, free: 87.5 MB)
14/06/09 15:06:50 INFO BlockManagerMaster: Updated info of block rdd_1_3
14/06/09 15:06:50 ERROR ExecutorUncaughtExceptionHandler: Uncaught
exception in thread Thread[Executor task launch worker-1,5,main]
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Arrays.copyOfRange(Arrays.java:2694)
        at java.lang.String.<init>(String.java:203)
        at java.nio.HeapCharBuffer.toString(HeapCharBuffer.java:561)
        at java.nio.CharBuffer.toString(CharBuffer.java:1201)
        at org.apache.hadoop.io.Text.decode(Text.java:350)
        at org.apache.hadoop.io.Text.decode(Text.java:327)
        at org.apache.hadoop.io.Text.toString(Text.java:254)
        at
org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:458)
        at
org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:458)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at
scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at
org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:107)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
        at org.apache.spark.rdd.FilteredRDD.compute(FilteredRDD.scala:34)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        at org.apache.spark.scheduler.Task.run(Task.scala:51)
        at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
14/06/09 15:06:50 INFO TaskSetManager: Starting task 0.0:4 as TID 4 on
executor localhost: localhost (PROCESS_LOCAL)

Re: Setting spark memory limit

Posted by Patrick Wendell <pw...@gmail.com>.
If you run locally then Spark doesn't launch remote executors. However,
in this case you can set the memory with the --driver-memory flag to
spark-submit. Does that work?

- Patrick
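
For reference, a minimal sketch of the suggested invocation, reusing the
jar and input path from the command above (in Spark 1.0, spark-submit's
flag is spelled --driver-memory, and spark.driver.memory is the matching
key for conf/spark-defaults.conf):

    $ ~/spark-1.0.0-bin-hadoop1/bin/spark-submit \
        --class "SimpleApp" \
        --master "local[4]" \
        --driver-memory 10g \
        target/scala-2.10/simple-project_2.10-1.0.jar /tmp/mdata0-10.tsd

This also accounts for the ~300 MB figure in the log: the MemoryStore is
capped at spark.storage.memoryFraction (0.6 by default) of the JVM's
reported max heap, and with the default ~512 MB driver heap that works
out to the maxMem=309225062 bytes (about 294.9 MB) seen above. Raising
the driver memory raises that cap accordingly.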
