Posted to commits@cassandra.apache.org by "David Boxenhorn (JIRA)" <ji...@apache.org> on 2011/01/12 13:01:47 UTC

[jira] Issue Comment Edited: (CASSANDRA-1608) Redesigned Compaction

    [ https://issues.apache.org/jira/browse/CASSANDRA-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12980654#action_12980654 ] 

David Boxenhorn edited comment on CASSANDRA-1608 at 1/12/11 7:00 AM:
---------------------------------------------------------------------

> Partitioning by token ranges is functionally equivalent to virtual nodes, no? Which in the OPP case means you now have to deal with intra-node load balancing. 

I don't see any reason why OPP has to be used within a node just because it is used between nodes. I think RP should always be used within a node, and the number of partitions should be chosen to keep SST size optimal. 

RP makes OPP range queries a little harder, but not much: all partitions must be queried to find the next row, but since range queries are done in batches (e.g. get the next 100 rows), I don't think it will slow things down.
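
For illustration only, here is a minimal sketch of that batched merge, assuming a node is split into several RP partitions. The interfaces and names below are made up to show the idea, not Cassandra's actual code:

{code}
import java.util.*;

// Hypothetical sketch: a node is split into several RP partitions, and an
// OPP-style range query asks each partition for its next rows, merges them
// in key order, and returns one batch at a time.
public class CrossPartitionRangeQuery {

    static class Row {
        final String key;
        Row(String key) { this.key = key; }
    }

    interface NodePartition {
        // Rows with key greater than startKey, in key order, at most 'limit' of them.
        List<Row> rowsAfter(String startKey, int limit);
    }

    // Next 'batchSize' rows in key order across all partitions of the node.
    static List<Row> nextBatch(List<NodePartition> partitions, String startKey, int batchSize) {
        PriorityQueue<Row> candidates =
            new PriorityQueue<>(Comparator.comparing((Row r) -> r.key));
        for (NodePartition p : partitions) {
            // Every partition is consulted, but only for one batch worth of rows,
            // so the extra work per batch is bounded by the number of partitions.
            candidates.addAll(p.rowsAfter(startKey, batchSize));
        }
        List<Row> batch = new ArrayList<>(batchSize);
        while (batch.size() < batchSize && !candidates.isEmpty()) {
            batch.add(candidates.poll());
        }
        return batch;
    }
}
{code}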

      was (Author: davidboxenhorn):
    > Partitioning by token ranges is functionally equivalent to virtual nodes, no? Which in the OPP case means you now have to deal with intra-node load balancing. 

I don't see any reason why OPP has to be used within a node just because it is used between nodes. I think RP should always be used within a node, and the number of partitions should be chosen to keep SST size optimal. 
  
> Redesigned Compaction
> ---------------------
>
>                 Key: CASSANDRA-1608
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1608
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chris Goffinet
>             Fix For: 0.7.1
>
>
> After seeing the I/O issues in CASSANDRA-1470, I've been doing some more thinking on this subject, which I wanted to lay out here.
> I propose we redo the concept of how compaction works in Cassandra. At the moment, compaction is kicked off based on the write access pattern, not the read access pattern. In most cases, you want the opposite. You want to be able to track how well each SSTable is performing in the system. If we kept in-memory statistics for each SSTable and prioritized them by access frequency and bloom filter hit/miss ratios, we could intelligently group the sstables that are read most often and schedule them for compaction. We could also schedule lower-priority maintenance on SSTables that are not accessed often.
> I also propose we limit each SSTable to a fixed size, which gives us the ability to better utilize our bloom filters in a predictable manner. At the moment, after a certain size, the bloom filters become less reliable. This would also allow us to group the most-accessed data. Currently an SSTable can grow to a point where large portions of its data might not actually be accessed very often.
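
To put the first idea in concrete terms, here is a minimal sketch of read-driven compaction scheduling, assuming only simple per-SSTable counters. The class and field names are purely illustrative, not the actual Cassandra code:

{code}
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical sketch of read-driven compaction scheduling: keep simple
// per-SSTable read statistics and pick the hottest tables as the next
// compaction bucket.
public class ReadDrivenCompaction {

    static class SSTableStats {
        final String name;
        long reads;               // rows served from this sstable
        long bloomFalsePositives; // bloom filter said "maybe" but the key was absent

        SSTableStats(String name) { this.name = name; }

        // Higher score = better candidate: hot sstables, and sstables whose
        // bloom filters are missing a lot, get compacted first.
        double priority() {
            double fpRate = reads == 0 ? 0.0 : (double) bloomFalsePositives / reads;
            return reads * (1.0 + fpRate);
        }
    }

    // Choose up to 'bucketSize' of the highest-priority sstables to compact;
    // everything else can wait for lower-priority maintenance.
    static List<SSTableStats> nextCompactionBucket(Collection<SSTableStats> all, int bucketSize) {
        return all.stream()
                  .sorted(Comparator.comparingDouble(SSTableStats::priority).reversed())
                  .limit(bucketSize)
                  .collect(Collectors.toList());
    }
}
{code}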
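
And a back-of-the-envelope illustration of the second point: with the standard bloom filter approximation, a filter sized for a fixed number of keys degrades quickly once the SSTable outgrows it. The sizes and parameters below are made-up examples, not Cassandra's actual filter settings:

{code}
// False positive rate of a bloom filter with n keys, m bits, k hash functions,
// using the standard approximation (1 - e^(-k*n/m))^k.
public class BloomFilterGrowth {

    static double falsePositiveRate(long n, long m, int k) {
        return Math.pow(1.0 - Math.exp(-(double) k * n / m), k);
    }

    public static void main(String[] args) {
        long bits = 10_000_000L * 10; // filter sized for 10M keys at ~10 bits per key
        int hashes = 7;
        for (long keys : new long[] {10_000_000L, 20_000_000L, 40_000_000L}) {
            // prints roughly 0.8%, 14%, and 64% as the sstable outgrows its filter
            System.out.printf("%,d keys -> false positive rate %.3f%n",
                              keys, falsePositiveRate(keys, bits, hashes));
        }
    }
}
{code}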

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.