Posted to users@kafka.apache.org by Vladoiu Catalin <vl...@gmail.com> on 2016/01/13 13:09:37 UTC

How to choose the size of a Kafka broker

Hi guys,

I've had a long discussion with my colleagues about the sizing of the
brokers for our new Kafka cluster, and we still haven't reached a final
conclusion.

Our main concern is the request size: 10-20 MB per request (the producer
will send big requests), maybe more, and we estimate that we will have
4-5 TB per day.
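
For scale, a quick back-of-envelope sketch (my own arithmetic, not from the discussion) converting that daily volume into a sustained write rate:

```python
# Back-of-envelope: convert the estimated daily ingest (4-5 TB/day, from
# the question) into a sustained write rate. The 3x peak multiplier below
# is an assumed rule of thumb, not a figure from the discussion.

def sustained_rate_mb_s(tb_per_day: float) -> float:
    """Average ingest rate in MB/s for a given TB/day estimate."""
    return tb_per_day * 1024 * 1024 / 86_400  # TB -> MB, over 24 h of seconds

low, high = sustained_rate_mb_s(4), sustained_rate_mb_s(5)
print(f"average ingest: {low:.0f}-{high:.0f} MB/s")           # ~49-61 MB/s
print(f"peak budget (3x): {3 * low:.0f}-{3 * high:.0f} MB/s")
```

Real traffic is bursty, so provisioning for a multiple of the average leaves room for peaks and for catch-up after downtime.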

Our debate is between:
1. A smaller cluster (fewer brokers) with a big per-broker config, something
like this:
Disk: 11 x 4 TB, CPU: 48 cores, RAM: 252 GB. We chose this configuration
because our Hadoop cluster has the same config and easily handles that
amount of data.
2. A bigger number of brokers with a smaller per-broker config.

I was hoping that somebody with more experience in using Kafka can advise
on this.

Thanks,
Catalin
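
A side note on the 10-20 MB requests: Kafka's default message-size limits are roughly 1 MB, so any of the proposed layouts would need settings along these lines raised. This is a sketch for a 0.9-era cluster; property names can differ between versions, and the 25 MB value is assumed headroom, not a figure from the discussion:

```properties
# server.properties (broker side) - defaults are roughly 1 MB
message.max.bytes=25000000           # largest message the broker accepts
replica.fetch.max.bytes=25000000     # must be >= message.max.bytes, or
                                     # followers cannot replicate big messages

# producer config
max.request.size=25000000            # largest request the producer will send

# consumer config (0.9-era name; newer clients use fetch.max.bytes)
fetch.message.max.bytes=25000000     # must cover at least one full message
```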

Re: How to choose the size of a Kafka broker

Posted by Jens Rantil <je...@tink.se>.
Hi Vladoiu,

I am by no means a Kafka expert, but what are you optimizing for?

   - Cost could be a variable.
   - Time to bring on a new broker could be another variable. For large
   machines that could take longer since they need to stream more data.

Cheers,
Jens
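
Jens's second point can be roughed out numerically. The sketch below assumes a replacement broker must re-stream the full 44 TB (11 x 4 TB) of a dead large broker; the NIC speeds and the 50% utilization cap are assumptions, not figures from the discussion:

```python
# Rough rebuild-time estimate for the "few large brokers" option: if a
# 44 TB broker (11 x 4 TB, from the question) dies, its replacement must
# re-replicate that data over the network. NIC speeds and the 50%
# utilization cap are assumptions, not figures from the discussion.

TB = 1024 ** 4  # bytes in a tebibyte

def rebuild_hours(data_tb: float, nic_gbit: float, utilization: float = 0.5) -> float:
    """Hours to stream data_tb onto a new broker over a nic_gbit NIC,
    using only a fraction of the link so live traffic still fits."""
    bytes_per_sec = nic_gbit * 1e9 / 8 * utilization
    return data_tb * TB / bytes_per_sec / 3600

for nic in (1, 10):
    print(f"{nic:>2} GbE: {rebuild_hours(44, nic):6.1f} h to restore 44 TB")
```

Under these assumptions a dense broker takes on the order of a day to rebuild even on 10 GbE, which is one concrete cost of the "fewer, bigger brokers" option.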

-- 
Jens Rantil
Backend engineer
Tink AB

Email: jens.rantil@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se


Re: How to choose the size of a Kafka broker

Posted by Stephen Powis <sp...@salesforce.com>.
I can't really answer your question, but you don't mention your network
layout/hardware. You may want to add that as a data point in your decision
(you wouldn't want to overrun the network devices on the brokers).
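
To put numbers on the network concern: each broker's NIC carries producer writes in, replica traffic in and out, and consumer reads out. A rough sketch, where the replication factor, consumer fanout, and broker counts are all illustrative assumptions; only the ~61 MB/s ingest (5 TB/day) traces back to the original estimate:

```python
# Per-broker NIC load: producer writes in, replica traffic in and out,
# consumer reads out. Replication factor 3, a single consumer fanout, and
# the 5- vs 15-broker split are illustrative assumptions; only the
# ~61 MB/s ingest (5 TB/day) comes from the original estimate.

def nic_mb_s_per_broker(ingest_mb_s: float, brokers: int,
                        replication: int = 3, consumer_fanout: int = 1) -> float:
    """Approximate steady-state MB/s through one broker's NIC (in + out)."""
    per_broker = ingest_mb_s / brokers               # producer writes in
    follower_in = per_broker * (replication - 1)     # replica copies received
    leader_out = per_broker * (replication - 1)      # replica copies sent
    consumer_out = per_broker * consumer_fanout      # reads served to consumers
    return per_broker + follower_in + leader_out + consumer_out

print(f" 5 big brokers:   {nic_mb_s_per_broker(61, 5):.0f} MB/s per NIC")
print(f"15 small brokers: {nic_mb_s_per_broker(61, 15):.0f} MB/s per NIC")
```

The per-NIC load concentrates as the broker count shrinks, which is why the network hardware matters more for the "few large brokers" option.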

