Posted to issues@spark.apache.org by "Ofer Eliassaf (JIRA)" <ji...@apache.org> on 2016/09/08 05:12:20 UTC

[jira] [Created] (SPARK-17444) spark memory allocation makes workers non responsive

Ofer Eliassaf created SPARK-17444:
-------------------------------------

             Summary: spark memory allocation makes workers non responsive
                 Key: SPARK-17444
                 URL: https://issues.apache.org/jira/browse/SPARK-17444
             Project: Spark
          Issue Type: Bug
          Components: PySpark
    Affects Versions: 2.0.0
         Environment: spark standalone
            Reporter: Ofer Eliassaf
            Priority: Critical


I am running a Spark standalone cluster with 3 slaves and 2 masters,
with a total of 12 cores (4 on each worker machine).
The memory allocated to the executors and workers is 4.5GB, and each machine has 8GB in total.
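
For context, a minimal sketch of how the executor side of this memory setup might be expressed from pyspark; the master host names are hypothetical, and the 4.5GB worker memory would be set via SPARK_WORKER_MEMORY in conf/spark-env.sh on each worker rather than here:

from pyspark import SparkConf, SparkContext

# Hypothetical standalone-HA master URL; substitute the real master hosts.
# Worker memory (~4.5GB) is assumed to be set with SPARK_WORKER_MEMORY in
# conf/spark-env.sh on each worker; only the executor memory is shown here.
conf = (SparkConf()
        .setAppName("spark-17444-setup")
        .setMaster("spark://master1:7077,master2:7077")
        .set("spark.executor.memory", "4500m"))
sc = SparkContext(conf=conf)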

Steps to reproduce:
Open pyspark and point it to the masters.

Run the following command multiple times (a rough script version of these steps is sketched after them):
sc.parallelize(range(1,50000000), 12).count()
After a few runs, the Python shell stops responding.

Then exit the Python shell.
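
A rough, self-contained script equivalent of these steps, assuming the same hypothetical master URL as in the sketch above; the loop count of 20 is an arbitrary stand-in for "multiple times":

from pyspark import SparkConf, SparkContext

# Hypothetical master URL and app name; substitute the real cluster values.
conf = (SparkConf()
        .setAppName("spark-17444-repro")
        .setMaster("spark://master1:7077,master2:7077"))
sc = SparkContext(conf=conf)

for i in range(20):
    # Same job as reported: count 50M elements across 12 partitions.
    n = sc.parallelize(range(1, 50000000), 12).count()
    print("run %d: counted %d elements" % (i, n))

sc.stop()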

The critical issue is that after this happens, the cluster is no longer usable:
there is no way to submit applications or run any other commands on the cluster.


Hope this helps!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org