Posted to common-dev@hadoop.apache.org by "Konstantin Shvachko (JIRA)" <ji...@apache.org> on 2008/03/15 03:14:24 UTC
[jira] Updated: (HADOOP-2148) Inefficient FSDataset.getBlockFile()
[ https://issues.apache.org/jira/browse/HADOOP-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Konstantin Shvachko updated HADOOP-2148:
----------------------------------------
Attachment: getBlockFile.patch
This patch optimizes FSDataset.getBlockFile() and FSDataset.getLength() so that they perform the data-node
blockMap lookup only once. The patch is pretty straightforward, so I don't think it needs benchmarking.
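To illustrate the pattern the patch removes, here is a minimal sketch (not the actual Hadoop code): validating a block with a separate map query and then fetching its file costs two hash lookups, whereas a single get() whose null result signals an invalid block costs one. The class and method names here are hypothetical.

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a data-node block-to-file map, for illustration only.
class BlockMapSketch {
    static class Block {
        final long id;
        Block(long id) { this.id = id; }
        @Override public int hashCode() { return Long.hashCode(id); }
        @Override public boolean equals(Object o) {
            return o instanceof Block && ((Block) o).id == id;
        }
    }

    final Map<Block, File> blockMap = new HashMap<>();

    // Before: two lookups -- the validity check and the fetch each hash the key.
    File getBlockFileTwoLookups(Block b) {
        if (!blockMap.containsKey(b)) {            // lookup #1
            throw new IllegalStateException("invalid block");
        }
        return blockMap.get(b);                    // lookup #2
    }

    // After: a single get(); a null result means the block is not valid.
    File getBlockFileOneLookup(Block b) {
        File f = blockMap.get(b);                  // the only lookup
        if (f == null) {
            throw new IllegalStateException("invalid block");
        }
        return f;
    }
}
```

Both methods return the same result for valid blocks; the second simply folds the validity check into the retrieval.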
> Inefficient FSDataset.getBlockFile()
> ------------------------------------
>
> Key: HADOOP-2148
> URL: https://issues.apache.org/jira/browse/HADOOP-2148
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Affects Versions: 0.14.0
> Reporter: Konstantin Shvachko
> Attachments: getBlockFile.patch
>
>
> FSDataset.getBlockFile() first verifies that the block is valid and then returns the file name corresponding to the block.
> In doing so, it performs the data-node blockMap lookup twice; only one lookup is needed here.
> This is important since the data-node blockMap is big.
> Another observation is that data-nodes do not need the blockMap at all: file names can be derived from the block IDs,
> so there is no need to hold the Block-to-File mapping in memory.
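The observation above can be sketched as follows. If on-disk block files follow a deterministic naming scheme, the path is a pure function of the block ID and requires no in-memory map. The "blk_" prefix and flat directory layout here are assumptions for illustration, not a description of the actual on-disk format.

```java
import java.io.File;

// Hypothetical sketch: compute a block's file path directly from its ID,
// assuming a naming scheme of "blk_<id>" inside a given data directory.
class BlockPathSketch {
    static File blockFile(File dataDir, long blockId) {
        return new File(dataDir, "blk_" + blockId);
    }
}
```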
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.