Posted to mapreduce-user@hadoop.apache.org by sudhakara st <su...@gmail.com> on 2015/07/07 14:29:01 UTC

Re: How to monitor what hdfs block is served to a client?

You have to customize the input format by extending FileInputFormat and
overriding the methods getSplits(JobContext jobc) and computeSplitSize(long
blockSize, long minSize, long maxSize).
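To illustrate the advice above, here is a minimal, dependency-free sketch of the split arithmetic that FileInputFormat's computeSplitSize() performs (max(minSize, min(maxSize, blockSize))), applied to the 167.08 MB / 128 MB-block example from the question below. The class name and the driver loop are illustrative, not Hadoop's own code; in a real job you would override these methods on a FileInputFormat subclass instead.

```java
// Illustrative sketch (no Hadoop dependency) of FileInputFormat's split
// arithmetic: splitSize = max(minSize, min(maxSize, blockSize)).
public class SplitSizeSketch {
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024; // 128 MB HDFS block size
        long fileSize  = 175193982L;         // ~167.08 MB (3 * 58397994)
        // Forcing maxSize below the block size yields smaller, more numerous
        // splits -- here it reproduces the three splits from the question.
        long splitSize = computeSplitSize(blockSize, 1L, 58397994L);
        System.out.println("split size = " + splitSize);
        // Enumerate split boundaries the way getSplits() walks the file.
        for (long off = 0; off < fileSize; off += splitSize) {
            long len = Math.min(splitSize, fileSize - off);
            System.out.println("split: offset=" + off + " length=" + len);
        }
    }
}
```

Running this prints three splits at offsets 0, 58397994, and 116795988, matching the myfile:offset+length splits quoted below.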

On Sat, Jun 20, 2015 at 4:55 AM, Shiyao Ma <i...@introo.me> wrote:

> Hi.
>
> How to monitor the block transmission log of datanodes?
>
>
> A more detailed example:
>
> My hdfs block size is 128MB. I have a file stored on hdfs with size
> 167.08MB.
>
> Also, I have a client, requesting the whole file with three splits, e.g.,
>
> hdfs://myserver:9000/myfile:0+58397994  (0-56MB)
>
> hdfs://myserver:9000/myfile:58397994+58397994 (56MB-112MB)
>
> hdfs://myserver:9000/myfile:116795988+58397994 (112MB-168MB)
>
>
> The situation is kinda fixed and I cannot modify the split size.
> Nevertheless, I'd like to know what block transmission is happening
> under the hood.
>
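On the monitoring side of the question, each DataNode logs one line per block read or write through its clienttrace logger (org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace), recording the client address, byte count, operation, and block id. A hedged sketch of filtering those lines is below; the sample line is illustrative of the format, not copied from a real log, and on a live cluster you would point the grep at the DataNode log under your distribution's HADOOP_LOG_DIR instead.

```shell
# Illustrative sample of a DataNode clienttrace line (format approximated,
# not copied from a real log); on a cluster, grep the DataNode log file
# under $HADOOP_LOG_DIR instead of this temp file.
sample_log=$(mktemp)
cat > "$sample_log" <<'EOF'
2015-06-20 04:55:01,123 INFO DataNode.clienttrace: src: /10.0.0.5:50010, dest: /10.0.0.9:40311, bytes: 58397994, op: HDFS_READ, cliID: DFSClient_1, blockid: BP-1:blk_1073741825_1001, duration: 91000000
EOF

# Keep only read operations and pull out the block id and byte count.
result=$(grep 'op: HDFS_READ' "$sample_log" \
  | sed -E 's/.*bytes: ([0-9]+).*blockid: ([^,]+),.*/block \2 served \1 bytes/')
echo "$result"
rm -f "$sample_log"
```

This answers "which block was served to which client" without touching the split size at all, since the DataNode logs the transfer regardless of how the client chose its offsets.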



-- 

Regards,
...sudhakara