Posted to commits@cassandra.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2013/06/28 02:15:22 UTC

[jira] [Resolved] (CASSANDRA-3945) Support incremental/batch sizes for BulkRecordWriter, due to GC overhead issues

     [ https://issues.apache.org/jira/browse/CASSANDRA-3945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-3945.
---------------------------------------

    Resolution: Duplicate
      Assignee:     (was: Chris Goffinet)

CASSANDRA-5555 fixes this
                
> Support incremental/batch sizes for BulkRecordWriter, due to GC overhead issues
> -------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-3945
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3945
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Chris Goffinet
>            Priority: Minor
>
> When loading large amounts of data, the BulkRecordWriter currently writes out all the sstables and only then streams them. This caused GC overhead issues for us because of the heap sizes we give our reducers: the number of SSTables on disk that had to be held open could cause the JVM process to die. We also wanted a way to stream the sstables incrementally as we created them. I added support for configuring this: the default behavior is still to wait for all sstables to be created, but setting the value to >= 1 lets you determine the batch size, i.e. how many sstables are written before each streaming pass.
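
For illustration, a minimal Java sketch of the batching idea described above -- this is not the committed patch, and the names (BatchingBulkWriter, onTableFlushed, streamPending, batchSize) are hypothetical. With batchSize == 0 it keeps the original write-everything-then-stream behavior; with batchSize >= 1 it streams after every batchSize sstables so completed tables do not accumulate for the life of the task:

    import java.util.ArrayList;
    import java.util.List;

    /**
     * Hypothetical sketch of incremental/batch streaming for a bulk writer.
     * batchSize == 0 reproduces the old behavior (stream everything in
     * close()); batchSize >= 1 streams every batchSize completed sstables.
     */
    public class BatchingBulkWriter {
        private final int batchSize;                 // 0 = wait until close()
        private final List<String> pendingTables = new ArrayList<>();

        public BatchingBulkWriter(int batchSize) {
            this.batchSize = batchSize;
        }

        /** Called each time an sstable has been fully written to disk. */
        public void onTableFlushed(String sstablePath) {
            pendingTables.add(sstablePath);
            if (batchSize >= 1 && pendingTables.size() >= batchSize) {
                streamPending();                     // incremental streaming
            }
        }

        /** Old behavior: whatever is left is streamed at the very end. */
        public void close() {
            streamPending();
        }

        private void streamPending() {
            for (String path : pendingTables) {
                // stand-in for the real streaming call
                System.out.println("streaming " + path);
            }
            pendingTables.clear();                   // release references early
        }
    }

The point of the design is that the writer never holds more than batchSize completed sstables at once, which bounds both heap retention and the number of files that must stay open in the reducer JVM.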

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira