Posted to issues@spark.apache.org by "uncleGen (JIRA)" <ji...@apache.org> on 2015/04/03 11:20:52 UTC

[jira] [Created] (SPARK-6695) Add an external iterator: a hadoop-like output collector

uncleGen created SPARK-6695:
-------------------------------

             Summary: Add an external iterator: a hadoop-like output collector
                 Key: SPARK-6695
                 URL: https://issues.apache.org/jira/browse/SPARK-6695
             Project: Spark
          Issue Type: New Feature
          Components: Spark Core
            Reporter: uncleGen


In practical use, we often need to build a very large iterator: one that is too big to hold in memory, or too long for a single array. On the one hand, it leads to too much memory consumption. On the other hand, a single `Array` may not be able to hold all the elements, since Java array indices are of type `int` (32 bits). So, IMHO, we could provide a `collector` that buffers elements in memory (100MB, or some other configurable threshold) and spills data to disk when the buffer fills up. The use case may look like:

```
rdd.mapPartitions { it =>
  ...
  // buffer elements, spilling to disk when the buffer is full
  val collector = new ExternalCollector()
  collector.collect(a)
  ...
  collector.iterator
}
```
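
To make the idea concrete, here is a minimal sketch of what such a collector might look like. Everything in it is illustrative: `ExternalCollector` is the proposed class (not an existing Spark API), and the count-based `spillThreshold` and plain Java serialization are simplifying assumptions.

```
import java.io._
import scala.collection.mutable.ArrayBuffer

// Hypothetical sketch only: the class name, threshold, and
// serialization scheme are assumptions, not an existing API.
class ExternalCollector[T](spillThreshold: Int = 100000) {
  private val buffer = new ArrayBuffer[T]
  private val spillFiles = new ArrayBuffer[File]

  def collect(elem: T): Unit = {
    buffer += elem
    if (buffer.size >= spillThreshold) spill()
  }

  // Serialize the current in-memory batch to a temp file, then clear it.
  private def spill(): Unit = {
    val file = File.createTempFile("external-collector", ".spill")
    file.deleteOnExit()
    val out = new ObjectOutputStream(
      new BufferedOutputStream(new FileOutputStream(file)))
    try {
      out.writeInt(buffer.size)
      // the cast boxes primitives so Java serialization accepts them
      buffer.foreach(e => out.writeObject(e.asInstanceOf[AnyRef]))
    } finally {
      out.close()
    }
    spillFiles += file
    buffer.clear()
  }

  // Replay spilled batches first, then the in-memory tail. Each batch
  // holds at most spillThreshold elements, so memory stays bounded.
  def iterator: Iterator[T] = {
    val spilled = spillFiles.iterator.flatMap { file =>
      val in = new ObjectInputStream(
        new BufferedInputStream(new FileInputStream(file)))
      try {
        val n = in.readInt()
        Vector.fill(n)(in.readObject().asInstanceOf[T]).iterator
      } finally {
        in.close()
      }
    }
    spilled ++ buffer.iterator
  }
}
```

A real implementation would presumably reuse Spark's configured serializer and track estimated byte size rather than element count, closer to how Spark's existing spillable collections behave.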

I have done some related work, and I would appreciate your opinions. Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org