Posted to user@cassandra.apache.org by Eric Plowe <er...@gmail.com> on 2014/03/05 07:44:39 UTC
Noticing really high read latency
Background info:
6-node cluster
24 GB of RAM per machine
8 GB of RAM dedicated to C*
4 quad-core CPUs
2 250 GB SSDs in RAID 0
Running C* 1.2.6
The CF is configured as follows:
CREATE TABLE behaviors (
uid text,
buid int,
name text,
expires text,
value text,
PRIMARY KEY (uid, buid, name)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
compaction={'sstable_size_in_mb': '160', 'class':
'LeveledCompactionStrategy'} AND
compression={'sstable_compression': 'SnappyCompressor'};
I am noticing that the read latency is very high when I look at the
output of nodetool cfstats.
This is the example output of one of the nodes:
Column Family: behaviors
SSTable count: 2
SSTables in each level: [1, 1, 0, 0, 0, 0, 0, 0, 0]
Space used (live): 171496198
Space used (total): 171496591
Number of Keys (estimate): 1153664
Memtable Columns Count: 14445
Memtable Data Size: 1048576
Memtable Switch Count: 1
Read Count: 1894
Read Latency: 0.497 ms.
Write Count: 7169
Write Latency: 0.041 ms.
Pending Tasks: 0
Bloom Filter False Positives: 4
Bloom Filter False Ratio: 0.00862
Bloom Filter Space Used: 3533152
Compacted row minimum size: 125
Compacted row maximum size: 9887
Compacted row mean size: 365
The write latency is awesome, but the read latency, not so much. The output
of iostat doesn't show anything out of the ordinary. CPU utilization is
between 1% and 5%.
All read queries are issued with a CL of ONE, and we always include "WHERE
uid = '<somevalue>'" in the queries.
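Given the schema above, a "WHERE uid = ..." query is a single-partition read:
uid is the partition key, and (buid, name) are clustering columns that order
rows within that partition. A toy Python sketch of that grouping (purely
illustrative; this is not how Cassandra stores data on disk, and the sample
uids/values are made up):

```python
# Toy model of PRIMARY KEY (uid, buid, name): uid selects one partition,
# and rows inside it are kept in clustering order by (buid, name).
from collections import defaultdict

# uid -> {(buid, name): (expires, value)}
partitions = defaultdict(dict)

def insert(uid, buid, name, expires, value):
    partitions[uid][(buid, name)] = (expires, value)

def select_by_uid(uid):
    # Equivalent in spirit to SELECT * FROM behaviors WHERE uid = ?:
    # every row in one partition, sorted by the clustering columns.
    return sorted(partitions[uid].items())

insert("user-1", 2, "theme", "never", "dark")
insert("user-1", 1, "lang", "never", "en")
insert("user-2", 1, "lang", "never", "fr")

rows = select_by_uid("user-1")
# rows are ordered by (buid, name): (1, 'lang') before (2, 'theme')
```

Because the whole query touches one partition, such reads should be served
from at most a handful of SSTables, which matches the low SSTable count in
the cfstats output below.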
If there is any more info I can provide, please let me know. At this point
in time, I am a bit stumped.
Regards,
Eric Plowe
Re: Noticing really high read latency
Posted by Eric Plowe <er...@gmail.com>.
Disregard... heh. Was reading the latency as SECONDS. Sorry, it's been one
of those weeks.
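For anyone skimming the archive: nodetool cfstats reports latency in
milliseconds, so the numbers in the original post are actually sub-millisecond.
A quick sanity check of the conversion:

```python
# cfstats figures from the original post, in milliseconds.
read_latency_ms = 0.497
write_latency_ms = 0.041

# Converted to seconds: both are well under one millisecond.
read_latency_s = read_latency_ms / 1000.0    # 0.000497 s
write_latency_s = write_latency_ms / 1000.0  # 0.000041 s
```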
On Wed, Mar 5, 2014 at 1:44 AM, Eric Plowe <er...@gmail.com> wrote: