Posted to hdfs-user@hadoop.apache.org by Xuri Nagarin <se...@gmail.com> on 2013/08/29 23:33:57 UTC

TB per core sweet spot

Hi,

I realize there is no perfect spec for data nodes, since a lot depends on use
cases and workloads, but I am curious whether there are any rules of thumb or
no-go zones for how many terabytes per core is okay.

So a few questions, assuming 1 core per HDD holds:
- Is there a no-go zone in terms of TB/core? I ask because I am seeing
4 TB/core nodes in some clusters and wondering if that's too much.
- Does TB/core depend on core speed? For example, while a 1.8 GHz core might
handle 1 TB, does going to 4 TB require a 3.6 GHz E5 Xeon core?
- Is the difference between a Xeon E3 and an E5 dramatic or incremental?
- Any comments on disk choice - SATA vs. SAS, 5.9k vs. 7.2k vs. 10k RPM,
SATA 2 vs. 3?
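For concreteness, the ratio I am asking about is just total raw disk per node
divided by core count. A quick sketch of the arithmetic (the node specs below
are made-up examples, not anything I am actually quoting):

```python
# TB-per-core for a few hypothetical data node configurations.
# Disk counts, disk sizes, and core counts here are illustrative only.
nodes = {
    "dense":    {"disks": 12, "tb_per_disk": 4, "cores": 12},
    "balanced": {"disks": 12, "tb_per_disk": 2, "cores": 16},
}

for name, spec in nodes.items():
    total_tb = spec["disks"] * spec["tb_per_disk"]
    ratio = total_tb / spec["cores"]
    print(f"{name}: {total_tb} TB raw, {ratio:.1f} TB/core")
```

So the "dense" example above works out to 4 TB/core, which is the kind of
ratio I am seeing and asking about.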

Again, I realize there is a huge YMMV factor here, but I would love to hear
about experiences or research people have done before picking specs for their
nodes, including vendors/models.


Thanks,

Xuri