Posted to hdfs-user@hadoop.apache.org by Zesheng Wu <wu...@gmail.com> on 2014/09/15 14:43:24 UTC

Support multiple block placement policies

Hi there,

According to the code, the current implementation of HDFS supports only one
block placement policy at a time, which is BlockPlacementPolicyDefault
unless configured otherwise.
The default policy is sufficient for most circumstances, but it does not
work well in some special cases.
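
For context, the policy is resolved once for the whole cluster from
configuration. Here is a minimal sketch of that lookup, modeled on
BlockPlacementPolicy.getInstance() and the dfs.block.replicator.classname
key (simplified, not the exact NameNode code):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy;
  import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
  import org.apache.hadoop.util.ReflectionUtils;

  public class PolicyLookup {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // One policy class for the whole cluster, read from configuration;
      // BlockPlacementPolicyDefault is used when nothing is configured.
      Class<? extends BlockPlacementPolicy> clazz = conf.getClass(
          "dfs.block.replicator.classname",
          BlockPlacementPolicyDefault.class,
          BlockPlacementPolicy.class);
      BlockPlacementPolicy policy = ReflectionUtils.newInstance(clazz, conf);
      System.out.println("Cluster-wide policy: " + policy.getClass().getName());
    }
  }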

For example, on a shared cluster we want to erasure-encode all the files
under certain directories, so the files under these directories need a new
placement policy, while all other files keep using the default one.
This means HDFS needs to support multiple placement policies at the same
time.

One straightforward idea: the default placement policy remains configured
as the cluster-wide default, and HDFS additionally lets users specify a
customized placement policy through extended attributes (xattrs). When HDFS
chooses replica targets, it first checks for a customized placement policy;
if none is specified, it falls back to the default one, as sketched below.
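
One possible shape for that lookup, as a standalone sketch (the xattr name,
the PolicyResolver class, and the name-to-class registry below are all
hypothetical, just to illustrate the fallback behavior):

  import java.nio.charset.StandardCharsets;
  import java.util.HashMap;
  import java.util.Map;

  public class PolicyResolver {
    // Hypothetical xattr that would carry the policy name on a file/dir.
    static final String POLICY_XATTR = "user.block.placement.policy";

    // Hypothetical registry mapping policy names to implementation classes.
    private final Map<String, String> policies = new HashMap<String, String>();
    private final String defaultPolicy;

    public PolicyResolver(String defaultPolicy) {
      this.defaultPolicy = defaultPolicy;
    }

    public void register(String name, String className) {
      policies.put(name, className);
    }

    // xattrValue: raw xattr bytes from the inode, or null if unset.
    public String resolve(byte[] xattrValue) {
      if (xattrValue != null) {
        String name = new String(xattrValue, StandardCharsets.UTF_8);
        String className = policies.get(name);
        if (className != null) {
          return className; // customized policy attached via xattr
        }
      }
      return defaultPolicy; // no xattr set: fall back to the default
    }
  }

A user could then attach the policy to a directory from the shell with
something like (again, the xattr name is only an example):

  hdfs dfs -setfattr -n user.block.placement.policy -v erasure /shared/ec-data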

Any thoughts?

-- 
Best Wishes!

Yours, Zesheng