Posted to issues@spark.apache.org by "Christopher Bourez (JIRA)" <ji...@apache.org> on 2016/01/25 16:27:39 UTC

[jira] [Updated] (SPARK-12980) pyspark crash for large dataset - clone

     [ https://issues.apache.org/jira/browse/SPARK-12980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christopher Bourez updated SPARK-12980:
---------------------------------------
    Description: 
I installed Spark 1.6 on several different computers.

On Windows, the PySpark textFile method, followed by take(1), does not work on a 13 MB file.
If I set the number of partitions to 2000 or use a smaller file, the method works fine.
PySpark is configured with all of the machine's RAM through --conf spark.driver.memory=5g in local mode.
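For reference, a minimal way to reproduce this in the pyspark shell (the file path below is a placeholder, not taken from the report):

    # Shell launched as: pyspark --master local[*] --conf spark.driver.memory=5g
    rdd = sc.textFile("C:/data/file_13mb.txt")    # hypothetical 13 MB file
    rdd.take(1)                                   # crashes on Windows as described

    # Reported workaround: request many more partitions (minPartitions argument)
    rdd = sc.textFile("C:/data/file_13mb.txt", 2000)
    rdd.take(1)                                   # works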

On Mac OS, I can run the exact same program with PySpark and 16 GB of RAM on a much bigger file of 5 GB. Memory is correctly allocated, freed, etc.

On Ubuntu, no trouble either; I can also launch a cluster: http://christopher5106.github.io/big/data/2016/01/19/computation-power-as-you-need-with-EMR-auto-termination-cluster-example-random-forest-python.html

What could be the reason the Windows Spark textFile method fails?

  was:
I tried to import a local text file (over 100 MB) via textFile in PySpark. When I ran .take() on it, it failed with error messages including:
15/12/10 17:17:43 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "E:/spark_python/test3.py", line 9, in <module>
    lines.take(5)
  File "D:\spark\spark-1.5.2-bin-hadoop2.6\python\pyspark\rdd.py", line 1299, in take
    res = self.context.runJob(self, takeUpToNumLeft, p)
  File "D:\spark\spark-1.5.2-bin-hadoop2.6\python\pyspark\context.py", line 916, in runJob
    port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
  File "C:\Anaconda2\lib\site-packages\py4j\java_gateway.py", line 813, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "D:\spark\spark-1.5.2-bin-hadoop2.6\python\pyspark\sql\utils.py", line 36, in deco
    return f(*a, **kw)
  File "C:\Anaconda2\lib\site-packages\py4j\protocol.py", line 308, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.net.SocketException: Connection reset by peer: socket write error

Then I ran the same code on a small text file, and this time .take() worked fine.
How can I solve this problem?
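A commonly suggested workaround (a sketch, not a confirmed fix for this issue) is to read the file with a higher minimum partition count, so each task sends less data back to the driver over the local socket; the data path and partition count below are assumptions:

    # "lines" matches the variable name in the traceback above
    lines = sc.textFile("E:/spark_python/data.txt", minPartitions=200)
    lines.take(5)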


> pyspark crash for large dataset - clone
> ---------------------------------------
>
>                 Key: SPARK-12980
>                 URL: https://issues.apache.org/jira/browse/SPARK-12980
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.5.2
>         Environment: windows
>            Reporter: Christopher Bourez
>
> I installed Spark 1.6 on several different computers.
> On Windows, the PySpark textFile method, followed by take(1), does not work on a 13 MB file.
> If I set the number of partitions to 2000 or use a smaller file, the method works fine.
> PySpark is configured with all of the machine's RAM through --conf spark.driver.memory=5g in local mode.
> On Mac OS, I can run the exact same program with PySpark and 16 GB of RAM on a much bigger file of 5 GB. Memory is correctly allocated, freed, etc.
> On Ubuntu, no trouble either; I can also launch a cluster: http://christopher5106.github.io/big/data/2016/01/19/computation-power-as-you-need-with-EMR-auto-termination-cluster-example-random-forest-python.html
> What could be the reason the Windows Spark textFile method fails?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
