Posted to hdfs-dev@hadoop.apache.org by "JiangHua Zhu (Jira)" <ji...@apache.org> on 2021/10/13 05:35:00 UTC

[jira] [Created] (HDFS-16270) Improve NNThroughputBenchmark#printUsage() related to block size

JiangHua Zhu created HDFS-16270:
-----------------------------------

             Summary: Improve NNThroughputBenchmark#printUsage() related to block size
                 Key: HDFS-16270
                 URL: https://issues.apache.org/jira/browse/HDFS-16270
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: JiangHua Zhu


When running the NNThroughputBenchmark test with incorrect usage, we get a prompt message.
E.g.:
'
If connecting to a remote NameNode with -fs option, dfs.namenode.fs-limits.min-block-size should be set to 16.
21/10/13 11:55:32 INFO util.ExitUtil: Exiting with status -1: ExitException
'
This behavior is reasonable in itself.
However, even when 'dfs.blocksize' has already been set before execution, for example:
conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
we still get the above prompt, which is wrong.

In addition, the hint itself is misleading: it should refer to 'dfs.blocksize' rather than 'dfs.namenode.fs-limits.min-block-size',
because the NNThroughputBenchmark constructor has already set 'dfs.namenode.fs-limits.min-block-size' to 0 in advance.
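To make the interplay concrete, here is a minimal, self-contained sketch of the check described above. All names (BlockSizeHint, checkBlockSize) are illustrative, not the actual NNThroughputBenchmark code; it only models the idea that when the minimum block size is preset to 0, the user-facing knob the hint should name is 'dfs.blocksize'.

```java
import java.util.HashMap;
import java.util.Map;

public class BlockSizeHint {
    static final String MIN_BLOCK_SIZE_KEY = "dfs.namenode.fs-limits.min-block-size";
    static final String BLOCK_SIZE_KEY = "dfs.blocksize";

    // Returns a usage hint if the configured block size is below the minimum,
    // or null if the configuration is acceptable. The hint names the key the
    // caller can actually change (dfs.blocksize), per the report above.
    static String checkBlockSize(Map<String, Long> conf, long defaultMin) {
        long min = conf.getOrDefault(MIN_BLOCK_SIZE_KEY, defaultMin);
        long blockSize = conf.getOrDefault(BLOCK_SIZE_KEY, 128L * 1024 * 1024);
        if (blockSize < min) {
            return "Specified block size " + blockSize
                + " is less than the minimum " + min
                + "; set " + BLOCK_SIZE_KEY + " to at least " + min;
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, Long> conf = new HashMap<>();
        conf.put(MIN_BLOCK_SIZE_KEY, 0L); // preset by the benchmark constructor
        conf.put(BLOCK_SIZE_KEY, 16L);    // set by the caller before execution
        // With the minimum preset to 0, a 16-byte block size should pass,
        // so no prompt should be printed in this scenario.
        System.out.println(checkBlockSize(conf, 1024L * 1024) == null
            ? "block size accepted" : "block size rejected");
    }
}
```

Under this model, the erroneous behavior in the report corresponds to validating against a minimum other than the preset 0, and then printing a hint that names the wrong configuration key.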



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org