Posted to commits@cassandra.apache.org by "Stefania (JIRA)" <ji...@apache.org> on 2015/08/17 09:57:48 UTC

[jira] [Updated] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

     [ https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania updated CASSANDRA-8630:
--------------------------------
    Attachment: flight_recorder_002_files.tar.gz

I'm attaching the flight recorder files (002) for a micro benchmark similar to the initial one, compacting 100 sstables of approximately 1.2 GB of data. The dtests to reproduce this scenario can be found on [this branch|https://github.com/stef1927/cassandra-dtest/tree/8630] and are called {{compressed_compaction_stress_test}} and {{uncompressed_compaction_stress_test}}.

Here are the results:

|| ||Total times (two runs)||% of total time spent reading||
|Trunk with compression|191.435s, 176.934s|RAR: 2.79% (isEOF, read, readBytes), AbstractDataInput: 0.17%|
|Trunk without compression|180.146s, 172.86s|RAR: 2.12% (isEOF, read, readBytes), AbstractDataInput: 0.22%|
|Patch with compression|173.011s, 195.094s|RAR: 0.43% (seek, readBytes, reBuffer), RebufferingInputStream: 0.02%|
|Patch without compression|233.211s, 231.750s|RAR: 0.79% (readBytes, getPath, Builder.build), RebufferingInputStream: 0.68%|

{{RebufferingInputStream}} is the new name of the old {{NIODataInputStream}} and is the only implementation of {{DataInputPlus}}, with the exception of {{BytesReadTracker}}, which is just a wrapper. This class has one abstract method, {{reBuffer}}, which is implemented by the new {{NIODataInputStream}}, {{RandomAccessReader}}, {{MemoryInputStream}} and {{DataInputBuffer}}. To make rebasing easier, I did not rename these classes or move them into a dedicated package, but this can be done just before committing if required.
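
For illustration only, here is a minimal sketch of the rebuffering pattern described above (not the actual Cassandra code; names and buffer handling are simplified): a single abstract {{reBuffer}} refills a shared buffer, and all multi-byte primitives are decoded from it instead of being read byte by byte.

{code:java}
// Hedged sketch of the rebuffering design, NOT the actual Cassandra classes:
// subclasses only implement reBuffer(); primitives are served from the buffer.
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;

public abstract class RebufferingSketch
{
    protected ByteBuffer buffer;   // filled by subclasses (file, memory, network, ...)

    /** Refill {@link #buffer} with at least one readable byte, or leave it empty at EOF. */
    protected abstract void reBuffer() throws IOException;

    public long readLong() throws IOException
    {
        if (buffer.remaining() >= 8)
            return buffer.getLong();        // fast path: no per-byte calls, no syscalls
        return readLongSlow();              // value spans a buffer boundary: refill as needed
    }

    private long readLongSlow() throws IOException
    {
        long value = 0;
        for (int i = 0; i < 8; i++)
        {
            if (!buffer.hasRemaining())
            {
                reBuffer();
                if (!buffer.hasRemaining())
                    throw new EOFException();
            }
            value = (value << 8) | (buffer.get() & 0xFFL);
        }
        return value;
    }
}
{code}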

I did not change anything in the writers, since both the buffered and unbuffered {{DataOutputStreamPlus}} already appear to be doing the right thing, and they did not show up as hotspots in flight recorder (the total time spent in {{SequentialWriter}} with compression is 1.06%, of which 0.7% is rebuffering and 0.3% is writing).
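
To illustrate what "doing the right thing" means on the write side (again a simplified sketch, not the actual {{DataOutputStreamPlus}} or {{SequentialWriter}} code): multi-byte primitives go straight into the internal buffer, and the channel is only touched when the buffer fills up.

{code:java}
// Hedged sketch of a buffered writer: one bulk put per primitive, no write(int) per byte.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

public class BufferedWriterSketch
{
    private final WritableByteChannel channel;
    private final ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);

    public BufferedWriterSketch(WritableByteChannel channel)
    {
        this.channel = channel;
    }

    public void writeLong(long v) throws IOException
    {
        ensureRemaining(8);
        buffer.putLong(v);                 // one bulk put into the in-memory buffer
    }

    private void ensureRemaining(int bytes) throws IOException
    {
        if (buffer.remaining() < bytes)
            flush();
    }

    public void flush() throws IOException
    {
        buffer.flip();
        while (buffer.hasRemaining())      // a channel write may be partial
            channel.write(buffer);
        buffer.clear();
    }
}
{code}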

As you can see from the numbers above, it is not very easy to reproduce the exact same numbers; I suspect this is because I ran the tests on my box, which is busy doing other things. Furthermore, the improvement in absolute numbers is not noticeable because other classes (e.g. {{UnfilteredRowMergeIterator}} and {{MergeIterator}}) are overshadowing the read methods. You can look at the flight recorder files with JMC for further details. However, I've taken from the flight recorder files the total % of time spent reading (RAR and its immediate base class), and this was reduced from approximately 2.3% to 1.5% for the uncompressed case and from 2.95% to 0.45% for the compressed case.



> Faster sequential IO (on compaction, streaming, etc)
> ----------------------------------------------------
>
>                 Key: CASSANDRA-8630
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core, Tools
>            Reporter: Oleg Anastasyev
>            Assignee: Stefania
>              Labels: compaction, performance
>             Fix For: 3.x
>
>         Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, flight_recorder_001_files.tar.gz, flight_recorder_002_files.tar.gz
>
>
> When a node is doing a lot of sequential IO (streaming, compacting, etc.), a lot of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
> This is because the default implementations of readShort, readLong, etc., as well as their matching write* methods, are implemented with numerous byte-by-byte read and write calls.
> This also results in a lot of syscalls.
> A quick microbenchmark shows that just reimplementing these methods gives an 8x speed increase.
> The attached patch implements the RandomAccessReader.read<Type> and SequentialWriter.write<Type> methods in a more efficient way.
> I also eliminated some extra byte copies in CompositeType.split and ColumnNameHelper.maxComponents, which were on my profiler's hotspot method list during tests.
> Stress tests on my laptop show that this patch makes compaction 25-30% faster on uncompressed sstables and 15% faster on compressed ones.
> A deployment to production shows much less CPU load for compaction.
> (I attached a CPU load graph from one of our production nodes; orange is niced CPU load, i.e. compaction; yellow is user CPU, i.e. tasks not related to compaction.)
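
For context on the quoted report above, the contrast it describes between the default byte-by-byte reads and a buffered read looks roughly like this (an illustrative sketch only, not code from the attached patch):

{code:java}
// Hedged illustration of the problem described in the report: the default
// DataInput-style readLong() issues eight single-byte read() calls, each of
// which may reach the OS, while a buffered reader decodes the long from memory.
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

final class ReadLongComparison
{
    // Roughly what the default byte-by-byte implementations do.
    static long readLongByteByByte(InputStream in) throws IOException
    {
        long value = 0;
        for (int i = 0; i < 8; i++)
        {
            int b = in.read();
            if (b < 0)
                throw new EOFException();
            value = (value << 8) | b;
        }
        return value;
    }

    // The buffered alternative: one bounds check and one bulk decode.
    static long readLongBuffered(ByteBuffer buffer) throws EOFException
    {
        if (buffer.remaining() < 8)
            throw new EOFException();
        return buffer.getLong();
    }
}
{code}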



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)