Posted to users@kafka.apache.org by adrien ruffie <ad...@hotmail.fr> on 2018/03/01 17:09:27 UTC

Hardware Guidance

Hi all,


On slide 5 of the following link:

https://fr.slideshare.net/HadoopSummit/apache-kafka-best-practices/1



The "Memory" mentions that "24GB+ (for small) and 64GB+ (for large)" Kafka Brokers

but is it 24 or 64 GB spread over all brokers ? Or 24 GB for example for each broker ?


Thank you very much,


and best regards,


Adrien

Re: Hardware Guidance

Posted by Thomas Aley <Th...@ibm.com>.
Hi Adrien,

Without asking the author directly I can't give the exact answer, but I
would interpret that as per broker. Kafka will make use of as much
hardware as you give it, so it's not uncommon to see many CPU cores and
lots of RAM per broker. That being said, how much hardware you require
is completely down to your use case.

Tom Aley
thomas.aley@ibm.com





RE: Hardware Guidance

Posted by adrien ruffie <ad...@hotmail.fr>.
Thanks all,


Okay, it's per broker. As you say, it really depends on the use case. Even if it seems huge to me, it will really depend on the usage in each information system and the throughput needed to carry out the project.


Once again, thank you all,


Adrien


Re: Hardware Guidance

Posted by Svante Karlsson <sv...@csi.se>.
It's per broker. Usually you run with 4-6GB of Java heap. The rest is used
as the OS page cache, and it's more that 64GB seems like a sweet spot between
memory cost and performance.

/svante
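
A rough back-of-the-envelope sketch of that split, assuming a 64GB broker,
a 6GB heap (typically set through the KAFKA_HEAP_OPTS environment variable
read by kafka-server-start.sh), an allowance for the OS, and an illustrative
peak write rate; none of these figures come from the slides:

    # Illustrative broker memory split (assumed numbers, not from the slides)
    total_ram_gb = 64        # per-broker RAM, the "large" figure from the slides
    heap_gb = 6              # typical Kafka JVM heap
    os_overhead_gb = 2       # assumed allowance for the OS and other processes

    page_cache_gb = total_ram_gb - heap_gb - os_overhead_gb   # ~56 GB

    # Heuristic: keep enough page cache to hold tens of seconds of peak writes,
    # so slightly lagging consumers still read from memory rather than disk.
    peak_write_mb_per_sec = 200        # assumed peak produce throughput
    buffer_seconds = page_cache_gb * 1024 / peak_write_mb_per_sec

    print(f"page cache ~{page_cache_gb} GB, covers ~{buffer_seconds:.0f}s of peak writes")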


Re: Hardware Guidance

Posted by Michal Michalski <mi...@zalando.ie>.
I'm quite sure it's per broker (it's a standard way to provide
recommendations on node sizes in systems like Kafka), but you should
definitely read it in the context of the data size and traffic the cluster
has to handle. I didn't read the presentation, so I'm not sure if it contains
such information (if it doesn't, maybe the video does?), but this context
is necessary to size Kafka properly (that includes cost efficiency). To
put that in context: I've been running a small Kafka cluster on AWS
m4.xlarge instances in the past with no issues (a low number of terabytes
stored in total, low single-digit thousands of messages produced per second
at peak) - I actually think it was oversized for that use case.
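
To see why such an instance can feel oversized for that load, here is a
rough sketch with assumed numbers (3,000 messages/s at ~1KB each, a 4GB
heap on a 16GB m4.xlarge); the message size, heap, and OS allowance are
assumptions, not figures from the thread:

    # Illustrative check of a small cluster's memory needs (assumed numbers)
    total_ram_gb = 16        # m4.xlarge
    heap_gb = 4              # assumed JVM heap
    os_overhead_gb = 2       # assumed allowance for the OS
    page_cache_gb = total_ram_gb - heap_gb - os_overhead_gb   # ~10 GB

    msgs_per_sec = 3000      # "low single-digit thousands" at peak
    avg_msg_kb = 1           # assumed average message size
    write_mb_per_sec = msgs_per_sec * avg_msg_kb / 1024       # ~3 MB/s

    minutes_buffered = page_cache_gb * 1024 / write_mb_per_sec / 60
    print(f"~{write_mb_per_sec:.1f} MB/s of writes; page cache holds ~{minutes_buffered:.0f} min")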
