Posted to commits@cassandra.apache.org by "Daiyi Yang (JIRA)" <ji...@apache.org> on 2016/09/09 19:08:20 UTC

[jira] [Created] (CASSANDRA-12623) Performance impact for the big collection

Daiyi Yang created CASSANDRA-12623:
--------------------------------------

             Summary: Performance impact for the big collection
                 Key: CASSANDRA-12623
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12623
             Project: Cassandra
          Issue Type: Bug
            Reporter: Daiyi Yang
             Fix For: 3.0.x


Hey there, we've recently been seeing performance issues with rows that contain huge collections of UDTs. We created a table with one row per partition, and the biggest row holds around 10 MB of data, which should be well below the expected limit. However, we did see a performance impact when updating rows that have a large number of UDTs in their collection.
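For reference, a minimal sketch of the kind of schema described above (the type, table, and column names are hypothetical, not taken from our actual schema):

```sql
-- Hypothetical layout: one row per partition, with a collection of UDTs
-- that can grow to roughly 10 MB in the worst case.
CREATE TYPE item (
    name text,
    value int
);

CREATE TABLE big_rows (
    id uuid PRIMARY KEY,
    items list<frozen<item>>
);

-- The problematic updates are of this general form:
UPDATE big_rows
SET items = items + [{name: 'x', value: 1}]
WHERE id = 00000000-0000-0000-0000-000000000001;
```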

From what we observed, whenever we wrote those rows, garbage collection was very aggressive and CPU usage was very high.

I'm wondering how collection types are handled internally, and whether there is a recommended size limit for them?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)