Posted to user@cassandra.apache.org by Tom van der Woerdt <to...@booking.com> on 2018/06/27 22:28:03 UTC

Re: [External] Maximum SSTable size

I’ve had SSTables as big as 11TB. They work, and read performance is
fine. But compaction is hell: you’ll need twice that much free disk
space, and it will take many hours 🙂

Avoid large SSTables unless you really know what you’re doing. LCS
(LeveledCompactionStrategy) is a great default for almost every workload,
especially if your cluster has a single large table. STCS
(SizeTieredCompactionStrategy) is the actual Cassandra default, but it
often causes more trouble than it solves, precisely because it tends to
produce those large SSTables 🙂
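For reference, switching a table to LCS is a single schema change. A
minimal sketch (the keyspace and table names are placeholders;
'sstable_size_in_mb' is LCS's per-SSTable target size, which defaults to
160 MB):

```sql
-- Hypothetical keyspace/table names; adjust to your schema.
-- LCS keeps individual SSTables near sstable_size_in_mb, which is
-- what prevents the multi-TB SSTables STCS can produce over time.
ALTER TABLE my_keyspace.my_table
  WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'sstable_size_in_mb': 160
  };
```

Note that the switch triggers a recompaction of existing data, so expect
extra I/O on the cluster while it settles.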

Hope that helps!

Tom


On Wed, 27 Jun 2018 at 08:02, Lucas Benevides <lu...@maurobenevides.com.br>
wrote:

> Hello Community,
>
> Is there a maximum SSTable Size?
> If there is not, is it bounded only by operating system limits?
>
> Thanks in advance,
> Lucas Benevides
>
-- 
Tom van der Woerdt
Site Reliability Engineer

Booking.com B.V.
Vijzelstraat 66-80 Amsterdam 1017HL Netherlands
Direct +31207153426
The world's #1 accommodation site
43 languages, 198+ offices worldwide, 120,000+ global destinations,
1,550,000+ room nights booked every day
No booking fees, best price always guaranteed
Subsidiary of Booking Holdings Inc. (NASDAQ: BKNG)