Posted to commits@cassandra.apache.org by "Benedict (JIRA)" <ji...@apache.org> on 2014/07/16 13:35:05 UTC
[jira] [Resolved] (CASSANDRA-7549) Heavy Disk Read I/O
[ https://issues.apache.org/jira/browse/CASSANDRA-7549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Benedict resolved CASSANDRA-7549.
---------------------------------
Resolution: Invalid
What you're asking for is not generally optimal behaviour, so it is not something we would consider defaulting to in Cassandra. For a dataset larger than memory, reading more data than you need to answer the query will severely hurt performance. What you probably want is to set populate_io_cache_on_flush to true, so that the data is left in the page cache after a flush (by default in 2.0 it is evicted to ensure other live data is not affected, although in 2.1 this is no longer the case).
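For reference, populate_io_cache_on_flush is a per-table property in Cassandra 2.0, so it can be set via CQL. A minimal sketch (the keyspace and table names here are hypothetical examples, not from the original report):

```sql
-- Hypothetical table; setting populate_io_cache_on_flush = true tells
-- Cassandra 2.0 to leave freshly flushed SSTable data in the OS page
-- cache rather than evicting it after the flush completes.
ALTER TABLE my_keyspace.my_table
    WITH populate_io_cache_on_flush = true;
```

Note that this trades page-cache space for other data against faster reads of recently flushed data, which is why it is off by default in 2.0.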
> Heavy Disk Read I/O
> -------------------
>
> Key: CASSANDRA-7549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7549
> Project: Cassandra
> Issue Type: Improvement
> Environment: Cassandra 2.0.6
> Reporter: Hanson
>
> We observed heavy disk read I/O, sometimes approaching 100% disk I/O %util. The block size per read seems too small according to “iostat”:
> - DB Query: ~40KB per read
> - SSTables Compaction : ~120KB per read
> Could it use a larger block size for disk reads? (from Cassandra or OS disk driver tuning)
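> One OS-level knob that matches the "OS disk driver tuning" part of the question is the block device read-ahead setting, which controls how much extra data the kernel prefetches per read. A sketch of inspecting and adjusting it with standard Linux tools (the device path is an example; these commands require root, and larger read-ahead can hurt random-read query workloads even while helping sequential compaction reads):

```
# Show current read-ahead in 512-byte sectors (256 sectors = 128 KB)
blockdev --getra /dev/sda

# Raise read-ahead to 512 sectors (256 KB); revert if random-read
# latency for queries regresses.
blockdev --setra 512 /dev/sda

# avgrq-sz in extended iostat output reports the average request size
# in sectors, which is how per-read figures like those above are derived.
iostat -x 5
```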
--
This message was sent by Atlassian JIRA
(v6.2#6252)