Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2014/11/20 00:47:34 UTC
[jira] [Resolved] (SPARK-4384) Too many open files during sort in pyspark
[ https://issues.apache.org/jira/browse/SPARK-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Josh Rosen resolved SPARK-4384.
-------------------------------
Resolution: Fixed
Fix Version/s: 1.2.0
Issue resolved by pull request 3252
[https://github.com/apache/spark/pull/3252]
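For context: the error class here arises because PySpark's external sorter spills
sorted runs to disk and later merges them, and a merge that opens every spill file
at once can exceed the per-process file-descriptor limit. The sketch below is
illustrative only and is not the code from pull request 3252; it shows one common
way to bound open descriptors, merging spill files in fixed-size batches. The names
merge_spills, read_sorted, and fan_in are hypothetical.

    # Hypothetical sketch: k-way merge of sorted spill files with a bounded
    # number of simultaneously open file descriptors. Not the actual fix.
    import heapq

    def read_sorted(path):
        # Yield one integer key per line from a sorted spill file.
        # The file stays open only while this generator is consumed.
        with open(path) as f:
            for line in f:
                yield int(line)

    def merge_spills(paths, fan_in=100):
        # If there are more spill files than fan_in, merge them in rounds:
        # each round merges fan_in files into one new file, until a single
        # pass over at most fan_in streams remains.
        round_id = 0
        while len(paths) > fan_in:
            next_paths = []
            for i in range(0, len(paths), fan_in):
                batch = paths[i:i + fan_in]
                out = "merged-%d-%d.txt" % (round_id, i)
                with open(out, "w") as f:
                    for key in heapq.merge(*(read_sorted(p) for p in batch)):
                        f.write("%d\n" % key)
                next_paths.append(out)
            paths = next_paths
            round_id += 1
        # Final pass: at most fan_in files open at once.
        return heapq.merge(*(read_sorted(p) for p in paths))

At no point does this scheme hold more than fan_in descriptors open, at the cost of
extra passes over the data when the spill count is large.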
> Too many open files during sort in pyspark
> ------------------------------------------
>
> Key: SPARK-4384
> URL: https://issues.apache.org/jira/browse/SPARK-4384
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 1.2.0
> Reporter: Davies Liu
> Priority: Blocker
> Fix For: 1.2.0
>
>
> Reported in maillist:
> On Thu, Nov 13, 2014 at 11:28 AM, santon <st...@gmail.com> wrote:
> > Thanks for the thoughts. I've been testing on Spark 1.1 and haven't seen the
> > IndexError yet. I've run into some other errors ("too many open files"), but
> > these issues seem to have been discussed already. The dataset, by the way,
> was about 40 GB and 188 million lines; I'm running a sort on 3 worker nodes
> > with a total of about 80 cores.
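For anyone trying to reproduce or diagnose this class of failure, a job of roughly
the shape described above is sketched here. This is a hypothetical reconstruction,
not the reporter's actual code; the input path, key delimiter, and output path are
placeholders.

    # Hypothetical reproduction of the reported workload: a disk-backed sort
    # over a large text dataset (~40 GB, ~188M lines in the report).
    from pyspark import SparkContext

    sc = SparkContext(appName="SortRepro")
    lines = sc.textFile("hdfs:///data/large-input")
    # Key each line by its first tab-separated field (illustrative choice).
    pairs = lines.map(lambda l: (l.split("\t")[0], l))
    # sortByKey spills sorted runs to disk when executors run low on memory;
    # merging many spill files is what can exhaust file descriptors.
    pairs.sortByKey().saveAsTextFile("hdfs:///data/sorted-output")

A common operational workaround for "too many open files" at the time was to raise
the file-descriptor limit on worker nodes (for example via ulimit -n); per the
resolution above, the underlying fix shipped in Spark 1.2.0.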
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)