Posted to common-user@hadoop.apache.org by Cagdas Gerede <ca...@gmail.com> on 2008/05/01 20:23:10 UTC

Re: Block reports: memory vs. file system, and Dividing offerService into 2 threads

 As far as I understand, the current focus is on how to reduce the namenode's
CPU time to process block reports from a large number of datanodes.

Aren't we missing another issue? Doesn't the way a block report is computed
delay the master's startup time? I have to make sure the master is up as
quickly as possible for maximum availability. The bottleneck seems to be the
scanning of the local disk. I wrote a simple Java program that only scanned
the datanode directories the way the Hadoop code does, and the time that
program took was about 90% of the time taken for block report generation and
sending. Scanning appears to be very costly: it takes about 2-4 minutes.
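
For reference, a minimal sketch of the kind of scan I timed looks roughly
like the following (the data directory argument and the "blk_" file-name
prefix are my assumptions about the on-disk layout, not the exact Hadoop
code):

    import java.io.File;

    public class ScanTimer {
        // Recursively count files that look like block files under dir.
        static int countBlocks(File dir) {
            int count = 0;
            File[] entries = dir.listFiles();
            if (entries == null) return 0;
            for (File f : entries) {
                if (f.isDirectory()) {
                    count += countBlocks(f);
                } else if (f.getName().startsWith("blk_")) {
                    count++;
                }
            }
            return count;
        }

        public static void main(String[] args) {
            File dataDir = new File(args[0]);   // path to the datanode data directory
            long start = System.currentTimeMillis();
            int blocks = countBlocks(dataDir);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(blocks + " block files scanned in " + elapsed + " ms");
        }
    }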

 To address the problem, can we have *two types of block reports*? One is
generated from memory and the other from the local fs. When the master
starts, we can trigger the block report that is generated from memory, and
for the periodic ones we can trigger the block report that is computed from
the local fs.
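
A rough sketch of what I have in mind (the class and method names below are
hypothetical placeholders, not existing Hadoop code):

    import java.util.ArrayList;
    import java.util.List;

    public class BlockReportSource {
        enum Source { MEMORY, LOCAL_FS }

        // Block ids the datanode already tracks in memory (illustrative only).
        private final List<Long> inMemoryBlockIds = new ArrayList<Long>();

        List<Long> getBlockReport(Source source) {
            if (source == Source.MEMORY) {
                // Fast path for namenode startup: no disk scan needed.
                return new ArrayList<Long>(inMemoryBlockIds);
            }
            // Periodic path: rebuild the report from the local file system,
            // which is the scan that takes minutes on my datanodes.
            return scanLocalFs();
        }

        private List<Long> scanLocalFs() {
            return new ArrayList<Long>();   // placeholder for the directory scan
        }
    }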



Another issue I have is that even if we only do block reports every 10 days,
once one happens it will almost freeze the datanode's functions. More
specifically, the datanode won't be able to report new blocks to the namenode
until this report is computed. This takes at least a couple of minutes in my
system for each datanode. As a result, the master thinks a block is not yet
sufficiently replicated and rejects the addition of a new block to the file.
Then, since it does not wait long enough, this eventually causes the write of
the file to fail. To address this problem, can we separate the process of
scanning the underlying disk into its own thread, apart from the reporting of
newly received blocks?
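
Something along these lines, where the expensive scan runs on its own thread
while the service loop keeps reporting received blocks (again, all names here
are hypothetical placeholders, not the actual DataNode code):

    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class AsyncBlockReport {
        private final ExecutorService scanner = Executors.newSingleThreadExecutor();

        void offerService() throws Exception {
            // Run the expensive localfs scan in the background.
            Future<List<Long>> fullReport = scanner.submit(new Callable<List<Long>>() {
                public List<Long> call() {
                    return scanLocalFs();
                }
            });

            while (!fullReport.isDone()) {
                // Keep telling the namenode about newly received blocks
                // instead of blocking until the scan finishes.
                reportReceivedBlocks();
                Thread.sleep(1000);
            }
            sendBlockReport(fullReport.get());   // send the full report once the scan is done
        }

        private List<Long> scanLocalFs() { return Collections.<Long>emptyList(); }
        private void reportReceivedBlocks() { /* placeholder */ }
        private void sendBlockReport(List<Long> report) { /* placeholder */ }
    }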

Dhruba points out
> This sequential nature is critical in ensuring that there is no erroneous
race condition in the Namenode

I do not have any insight into this.


Cagdas

-- 
------------
Best Regards, Cagdas Evren Gerede
Home Page: http://cagdasgerede.info