Posted to user@cassandra.apache.org by rock zhang <ro...@alohar.com> on 2015/08/10 18:52:45 UTC

OOM when Adding host

Hi All,

Currently I have three hosts. The data is not balanced: one host has 79 GB and the other two have 300 GB each. When I added a new host, I first got a "too many open files" error, so I raised the open file limit from 100,000 to 1,000,000. Then I got an OOM error.

Should I set the limit to 200,000 instead of 1M? My machine has 33 GB of memory; I am using EC2 c2*2xlarge instances. Ideally, even if the data is large, things should just get slower, not OOM, so I don't understand why this happens.
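For reference, here is where those limits usually live; a minimal sketch assuming a typical package install (the paths and the pgrep pattern are assumptions, adjust for your layout):

# Check the limit the running Cassandra JVM actually sees:
cassandraPID=$(pgrep -f CassandraDaemon | head -n 1)
grep 'open files' /proc/"$cassandraPID"/limits

# Make the nofile limit persistent for the cassandra user
# (assuming pam_limits applies to how the service starts):
echo 'cassandra - nofile 1000000' | sudo tee /etc/security/limits.d/cassandra.conf

# Heap sizing lives in cassandra-env.sh; an undersized heap under
# heavy streaming is a common source of OOM:
grep -E 'MAX_HEAP_SIZE|HEAP_NEWSIZE' /etc/cassandra/cassandra-env.sh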

I actually get this error pretty often. I guess the reason is that my data is pretty large? If Cassandra tries to split the data evenly across all hosts, then it needs to copy around 200 GB to the new host.
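In case it is useful, the streaming during bootstrap can be watched and throttled with stock nodetool commands; a sketch (the 50 Mb/s figure is just an example value):

# Watch incoming/outgoing streams while the new node bootstraps:
nodetool netstats

# Throttle outbound streaming on the existing nodes to ease memory
# and I/O pressure (value is megabits per second; 0 disables the cap):
nodetool setstreamthroughput 50

# Compactions queueing up behind the newly streamed SSTables also use memory:
nodetool compactionstats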

In my experience, an alternative that avoids this is to add the new host as a seed rather than using "Add host"; then no data is moved, so there is no OOM. But I am not sure whether data will be lost or become unlocatable that way.
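For context on why that avoids the copy: a node that lists itself in its own seed list skips bootstrap, so nothing is streamed to it; its token ranges then stay empty until a repair pulls the data over. A sketch of how to check and backfill (the config path is an assumption):

# A node present in its own seed list does not bootstrap on startup:
grep -A3 'seed_provider:' /etc/cassandra/cassandra.yaml

# Until this completes, reads for the new node's ranges can miss data:
nodetool repair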

Thanks
Rock 


Re: OOM when Adding host

Posted by rock zhang <ro...@alohar.com>.
I logged the open file counts every 10 minutes; the last record is:

lsof -p $cassandraPID | wc -l

74728

lsof | wc -l
5887913       # this is a very large number; I don't know why.

After the OOM, the open file count (lsof | wc -l) dropped back to a few hundred.
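One possible explanation for the huge unfiltered count (my assumption, not verified): on Linux, lsof can emit a row per task/thread per file, and a JVM with hundreds of threads multiplies every descriptor accordingly. Counting /proc entries avoids that:

# One entry per real descriptor, with no per-thread duplication:
cassandraPID=$(pgrep -f CassandraDaemon | head -n 1)
ls /proc/"$cassandraPID"/fd | wc -l

# Kernel-wide allocated file handles vs. the system maximum:
cat /proc/sys/fs/file-nr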






Re: OOM when Adding host

Posted by rock zhang <ro...@alohar.com>.
My Cassandra version is 2.1.4.

Thanks
Rock 
