Posted to user@spark.apache.org by Yifan LI <ia...@gmail.com> on 2014/07/21 14:48:21 UTC
java.lang.OutOfMemoryError: GC overhead limit exceeded
Hi,
I am trying to load the GraphX example dataset (LiveJournal, 1.08 GB) through the Scala shell on my standalone multicore machine (8 CPUs, 16 GB memory), but an OutOfMemoryError is returned when the code below runs:
val graph = GraphLoader.edgeListFile(sc, path, minEdgePartitions = 16).partitionBy(PartitionStrategy.RandomVertexCut)
I guess I should pass some parameters to the JVM, like "-Xmx5120m".
But how do I do this in the Scala shell?
I started Spark directly with "bin/spark-shell", and everything seems to work correctly in the WebUI.
Or should I set the parameters somewhere else (spark-1.0.1)?
Best,
Yifan LI
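For readers hitting the same error: one way to raise the heap for a local spark-shell in the Spark 1.0.x era is through the environment variables the launch scripts read before starting the JVM. This is a minimal sketch using the two variables that appear later in this thread (SPARK_JAVA_OPTS and SPARK_MEM); the 10g value is illustrative, chosen to leave headroom on a 16 GB machine, not a recommendation.

```shell
# Sketch: raise the JVM heap for a local spark-shell (Spark 1.0.x era).
# SPARK_JAVA_OPTS passes raw JVM flags to the launched process;
# SPARK_MEM sets the memory granted to Spark processes.
# On a 16 GB machine, leave headroom for the OS -- e.g. 10g, not 16g.
export SPARK_JAVA_OPTS="-Xmx10g"
export SPARK_MEM=10g
./bin/spark-shell
```

With more heap available, the "GC overhead limit exceeded" error (the JVM spending almost all its time collecting garbage while recovering little memory) typically goes away, or at least moves to a genuine out-of-memory condition that partitioning or caching changes must address.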
Re: java.lang.OutOfMemoryError: GC overhead limit exceeded
Posted by Yifan LI <ia...@gmail.com>.
Thanks, Abel.
Best,
Yifan LI
On Jul 21, 2014, at 4:16 PM, Abel Coronado Iruegas <ac...@gmail.com> wrote:
> Hi Yifan
>
> This works for me:
>
> export SPARK_JAVA_OPTS="-Xms10g -Xmx40g -XX:MaxPermSize=10g"
> export ADD_JARS=/home/abel/spark/MLI/target/MLI-assembly-1.0.jar
> export SPARK_MEM=40g
> ./spark-shell
>
>
> Regards
>
>
> On Mon, Jul 21, 2014 at 7:48 AM, Yifan LI <ia...@gmail.com> wrote:
> Hi,
>
> I am trying to load the GraphX example dataset (LiveJournal, 1.08 GB) through the Scala shell on my standalone multicore machine (8 CPUs, 16 GB memory), but an OutOfMemoryError is returned when the code below runs:
>
> val graph = GraphLoader.edgeListFile(sc, path, minEdgePartitions = 16).partitionBy(PartitionStrategy.RandomVertexCut)
>
> I guess I should pass some parameters to the JVM, like "-Xmx5120m".
> But how do I do this in the Scala shell?
> I started Spark directly with "bin/spark-shell", and everything seems to work correctly in the WebUI.
>
> Or should I set the parameters somewhere else (spark-1.0.1)?
>
>
>
> Best,
> Yifan LI
>
Re: java.lang.OutOfMemoryError: GC overhead limit exceeded
Posted by Abel Coronado Iruegas <ac...@gmail.com>.
Hi Yifan
This works for me:
export SPARK_JAVA_OPTS="-Xms10g -Xmx40g -XX:MaxPermSize=10g"
export ADD_JARS=/home/abel/spark/MLI/target/MLI-assembly-1.0.jar
export SPARK_MEM=40g
./spark-shell
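A side note on these settings: by Spark 1.0, environment variables like SPARK_JAVA_OPTS and SPARK_MEM were being superseded by configuration properties. A hedged sketch of the configuration-file route follows; spark.executor.memory is a documented property of that era, but the value shown is illustrative and should be sized to the actual machine (Abel's 40g fits his box, not a 16 GB one).

```shell
# Sketch: the configuration-file route (Spark 1.0.x).
# conf/spark-defaults.conf is read by spark-submit/spark-shell at startup.
cat >> conf/spark-defaults.conf <<'EOF'
spark.executor.memory   10g
EOF
./bin/spark-shell
```

Either route works for a quick fix; the properties file has the advantage of surviving across shell sessions and being visible in the WebUI's Environment tab.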
Regards
On Mon, Jul 21, 2014 at 7:48 AM, Yifan LI <ia...@gmail.com> wrote:
> Hi,
>
> I am trying to load the GraphX example dataset (LiveJournal, 1.08 GB)
> through the Scala shell on my standalone multicore machine (8 CPUs, 16 GB
> memory), but an OutOfMemoryError is returned when the code below runs:
>
> val graph = GraphLoader.edgeListFile(sc, path, minEdgePartitions =
> 16).partitionBy(PartitionStrategy.RandomVertexCut)
>
> I guess I should pass some parameters to the JVM, like "-Xmx5120m".
> But how do I do this in the Scala shell?
> I started Spark directly with "bin/spark-shell", and everything seems to
> work correctly in the WebUI.
>
> Or should I set the parameters somewhere else (spark-1.0.1)?
>
>
>
> Best,
> Yifan LI
>