Posted to user@hadoop.apache.org by ping wang <pi...@gmail.com> on 2018/01/17 04:03:16 UTC

performance issue when using "hdfs setfacl -R"

Hi advisers,
We use "hdfs setfacl -R"  for file ACL control. As the data directory is
big with 60,000+ sub-directories and files, the command is very
time-consuming. Seems it can not finish in hours, we can not image this
command will cost several days.
Any settings can help improve this?
Thanks a lot for any help!

Re: performance issue when using "hdfs setfacl -R"

Posted by Rushabh Shah <ru...@oath.com.INVALID>.
Try increasing the heap size of the client via HADOOP_CLIENT_OPTS. The
default is 128M IIRC.
This might improve the performance.
You can bump it up to 1G.
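
For example, a minimal sketch (the ACL spec and target path here are
hypothetical; substitute your own):

  # raise the client-side JVM heap to 1G before running the recursive setfacl
  export HADOOP_CLIENT_OPTS="-Xmx1g"
  hdfs dfs -setfacl -R -m user:alice:r-x /data/warehouse

The -R walk still has to visit every inode, but the larger heap should help
the client avoid GC thrashing while it enumerates 60,000+ paths.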

On Tue, Jan 16, 2018 at 10:03 PM, ping wang <pi...@gmail.com> wrote:

> Hi advisers,
> We use "hdfs setfacl -R" for file ACL control. The data directory is
> large, with 60,000+ sub-directories and files, so the command is very
> time-consuming. It seems it cannot finish within hours; we cannot imagine
> this command taking several days.
> Are there any settings that can help improve this?
> Thanks a lot for any help!
>
>