Posted to common-dev@hadoop.apache.org by "Raghu Angadi (JIRA)" <ji...@apache.org> on 2007/10/12 01:29:51 UTC

[jira] Resolved: (HADOOP-620) replication factor should be calculated based on actual dfs block sizes at the NameNode.

     [ https://issues.apache.org/jira/browse/HADOOP-620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi resolved HADOOP-620.
---------------------------------

    Resolution: Won't Fix

> replication factor should be calculated based on actual dfs block sizes at the NameNode.
> ----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-620
>                 URL: https://issues.apache.org/jira/browse/HADOOP-620
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Minor
>
> Currently 'dfs -report' calculates the replication factor as follows:
>      (totalCapacity - totalDiskRemaining) / (total size of dfs files in the namespace).
> The problem is that this figure includes disk space used by non-dfs files (e.g. map/reduce job data) on the datanode. On my single-node test, I get a replication factor of 100, since I have a 1 GB dfs file without replication and there is 99 GB of unrelated data on the same volume.
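>
> For illustration, here is a minimal sketch of how the current calculation arrives at the inflated factor, using the numbers above (the class and variable names are hypothetical, not the actual DFSShell code):
>
>     // Hypothetical demo, not DFSShell code: the flawed ratio computed
>     // with the numbers from this report.
>     public class ReplicationFactorDemo {
>         public static void main(String[] args) {
>             long gb = 1024L * 1024 * 1024;
>             long totalCapacity      = 100 * gb; // 100 GB volume
>             long totalDiskRemaining = 0;        // volume is full
>             long totalDfsSize       = 1 * gb;   // one 1 GB dfs file, no replication
>             // Used space counts 99 GB of non-dfs data, inflating the ratio:
>             double factor = (double) (totalCapacity - totalDiskRemaining) / totalDfsSize;
>             System.out.println(factor); // prints 100.0; actual replication is 1
>         }
>     }
>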
> Ideally the namenode should calculate it as: (total size of all the blocks known to it) / (total size of files in the namespace).
> The initial proposal for keeping 'total size of all the blocks' updated is to track it in the datanode descriptor and update it when the namenode receives block reports from that datanode (and subtract it when the datanode is removed), as sketched below.
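>
> A minimal sketch of that bookkeeping, assuming the total is kept per datanode at the namenode (the class and method names here are illustrative, not the actual DatanodeDescriptor code):
>
>     // Hypothetical sketch of the proposed per-datanode bookkeeping.
>     public class DatanodeBlockTotals {
>         private long totalBlockBytes; // sum of sizes of blocks this node reported
>
>         // Called when the namenode receives a full block report from the node.
>         public void processBlockReport(long[] reportedBlockSizes) {
>             long sum = 0;
>             for (long size : reportedBlockSizes) {
>                 sum += size;
>             }
>             totalBlockBytes = sum; // a full report replaces the previous total
>         }
>
>         // Subtracted from the cluster-wide total when the node is removed.
>         public long getTotalBlockBytes() {
>             return totalBlockBytes;
>         }
>     }
>
> The cluster-wide sum of these per-node totals, divided by the total size of files in the namespace, would then give the actual replication factor.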

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.