Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2006/05/30 22:33:30 UTC

[jira] Commented: (HADOOP-158) dfs should allocate a random blockid range to a file, then assign ids sequentially to blocks in the file

    [ http://issues.apache.org/jira/browse/HADOOP-158?page=comments#action_12413893 ] 

Doug Cutting commented on HADOOP-158:
-------------------------------------

Why must the file-id part of the block id be random?  Can't that be sequential?
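
For concreteness, a rough sketch of how either variant of the proposed id layout could look (the class name, field names and the 40/24-bit split below are illustrative assumptions, not taken from the dfs code): the high bits of the 64-bit block id identify the file, the low bits number the blocks within that file, and the file-id part can come either from a random generator or from a counter.

    // Illustrative sketch only -- names and bit split are assumptions.
    import java.util.Random;

    public class BlockIdSketch {
      private static final int FILE_BITS = 40;            // assumed split of the 64-bit id
      private static final int BLOCK_BITS = 64 - FILE_BITS;

      private final Random random = new Random();
      private long nextFileId = 0;                         // for the sequential alternative

      // Random file-id part, as the issue summary proposes.
      public long randomFileId() {
        return random.nextLong() >>> BLOCK_BITS;           // keep FILE_BITS random bits
      }

      // Sequential file-id part, the alternative asked about above.
      public synchronized long sequentialFileId() {
        return nextFileId++;
      }

      // Block ids within one file are then assigned sequentially by index.
      public static long blockId(long fileId, int blockIndexInFile) {
        return (fileId << BLOCK_BITS) | (blockIndexInFile & ((1L << BLOCK_BITS) - 1));
      }
    }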

> dfs should allocate a random blockid range to a file, then assign ids sequentially to blocks in the file
> --------------------------------------------------------------------------------------------------------
>
>          Key: HADOOP-158
>          URL: http://issues.apache.org/jira/browse/HADOOP-158
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.1.0
>     Reporter: Doug Cutting
>     Assignee: Konstantin Shvachko
>      Fix For: 0.4
>
> A random number generator is used to allocate block ids in dfs.  Sometimes a block id is allocated that is already used in the filesystem, which causes filesystem corruption.
> A short-term fix is simply to check, when allocating a block id, whether any file already uses the newly generated id and, if so, to generate another one.  Collisions are still possible under some rare conditions, but those are harder to fix and can wait, since this simple check handles the vast majority of cases.
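
For what it's worth, the short-term fix described above amounts to a retry loop along these lines (the class name and the in-memory id set are illustrative assumptions; the actual namenode code differs):

    // Illustrative sketch of the short-term fix: keep drawing random ids
    // until one is found that no existing block already uses.
    import java.util.HashSet;
    import java.util.Random;
    import java.util.Set;

    public class CollisionCheckingAllocator {
      private final Random random = new Random();
      private final Set<Long> idsInUse = new HashSet<Long>();   // ids of all live blocks

      // Allocate a block id that no file in the filesystem is using yet.
      public synchronized long allocateBlockId() {
        long id;
        do {
          id = random.nextLong();
        } while (!idsInUse.add(id));    // add() returns false on a collision, so retry
        return id;
      }

      // Forget an id when its block is deleted, so it can be reused.
      public synchronized void releaseBlockId(long id) {
        idsInUse.remove(id);
      }
    }

Collisions become rarer but never impossible with purely random ids, which is why the per-file range scheme in the summary is the longer-term fix.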
