Posted to commits@cassandra.apache.org by "Benedict (JIRA)" <ji...@apache.org> on 2015/08/18 23:17:46 UTC

[jira] [Commented] (CASSANDRA-10117) FD Leak with DTCS

    [ https://issues.apache.org/jira/browse/CASSANDRA-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701996#comment-14701996 ] 

Benedict commented on CASSANDRA-10117:
--------------------------------------

I'm actually finding this tough to reproduce locally, which I was not expecting from your initial report. Can this be reproduced immediately after a restart, as the logs suggest?

> FD Leak with DTCS
> -----------------
>
>                 Key: CASSANDRA-10117
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10117
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Philip Thompson
>            Assignee: Benedict
>             Fix For: 2.1.x
>
>         Attachments: fd.log
>
>
> Using 2.1-HEAD, specifically commit 972ae147247a, I am experiencing issues in a one-node test with DTCS. These seem separate from CASSANDRA-9882.
> Using an ec2 i2.2xlarge node with all default settings and the following schema:
> {code}
> CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
> CREATE TABLE tab (
>     key uuid,
>     year int,
>     month int,
>     day int,
>     c0 blob,
>     c1 blob,
>     c2 blob,
>     c3 blob,
>     c4 blob,
>     c5 blob,
>     c6 blob,
>     c7 blob,
>     c8 blob,
>     PRIMARY KEY ((year, month, day), key)
> ) WITH compaction = {'class': 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy'};
> {code}
> I loaded 4500M rows via stress, which totaled ~1.2TB. I then ran a few mixed workloads via stress, each 50% inserts and 50% the following read: {{Select * from tab where year = ? and month = ? and day = ? limit 1000}}.
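> A mixed workload like this would typically be driven with cassandra-stress in user mode; a rough sketch of that kind of invocation (the profile file name and run sizes here are placeholders, not the exact command used) is:
> {code}
> # hypothetical invocation: profile.yaml would define the 'tab' schema above plus a
> # 'read1' query bound to: select * from tab where year = ? and month = ? and day = ? limit 1000
> cassandra-stress user profile=profile.yaml "ops(insert=1,read1=1)" n=10000000 -rate threads=100
> {code}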
> The workload was run to reproduce a separate issue for a user. The user then reported that they were seeing open FD counts per sstable in the thousands. With absolutely no load on my cluster, any sstable with open FDs had between 243 and 245 of them. Once I started a stress process performing the same read/write workload as before, I immediately saw FD counts as high as 6615 for a single sstable.
> I was determining FD counts per sstable with the following [example] call:
> {{lsof | grep '16119-Data.db' | wc -l}}.
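> To survey every sstable at once rather than grepping for a single generation, the same lsof approach can be aggregated; a minimal sketch (assuming default -Data.db file naming) is:
> {code}
> # count open FDs per -Data.db file, highest counts first
> lsof 2>/dev/null | grep -o '[^ ]*-Data\.db' | sort | uniq -c | sort -rn | head
> {code}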
> I still have this cluster running for you to examine. System.log is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)