Posted to common-dev@hadoop.apache.org by "Konstantin Shvachko (JIRA)" <ji...@apache.org> on 2008/11/06 01:08:44 UTC

[jira] Created: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
---------------------------------------------------------------------------------------

                 Key: HADOOP-4597
                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.18.0
            Reporter: Konstantin Shvachko
            Assignee: Konstantin Shvachko
            Priority: Blocker
             Fix For: 0.18.3


Currently, during name-node startup, under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization: otherwise all blocks would first go into the under-replicated queue, and most of them would then be removed from it.
When the name-node leaves safe mode automatically, it checks that all blocks have the correct number of replicas ({{processMisReplicatedBlocks()}}). 
When the name-node leaves safe mode manually, it does not perform this check.
In the latter case all under-replicated blocks remain under-replicated forever, because there is no alternative mechanism to trigger replication.
The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode, whether automatically or manually.
In addition to solving this problem, the call could serve as an alternative mechanism for refreshing the {{neededReplications}} and {{excessReplicateMap}} sets.
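
As a rough sketch of the proposed behavior (illustrative only; the class, field, and method names below are simplified assumptions, not the actual FSNamesystem code), every exit from safe mode would trigger the mis-replication scan:

{code:java}
// Illustrative sketch only, not the real FSNamesystem code.
// Assumption: leaveSafeMode() is reached both from the automatic safe-mode
// monitor and from "dfsadmin -safemode leave"; processMisReplicatedBlocks()
// rebuilds neededReplications and excessReplicateMap from the block map.
class SafeModeExitSketch {
  private boolean inSafeMode = true;

  synchronized void leaveSafeMode() {
    if (!inSafeMode) {
      return;                        // already out of safe mode
    }
    inSafeMode = false;
    // Before this change the scan ran only on the automatic exit path;
    // the proposal is to run it on the manual (forced) exit path as well.
    processMisReplicatedBlocks();
  }

  void processMisReplicatedBlocks() {
    // Walk the block map, queueing under-replicated blocks for replication
    // and over-replicated blocks for excess-replica removal (details omitted).
  }
}
{code}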

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12645628#action_12645628 ] 

Konstantin Shvachko commented on HADOOP-4597:
---------------------------------------------

I did manual testing, which confirms the change works as expected.
# Create a new file system by starting a name-node and 2 data-nodes and loading a couple of files into it. Then stop the cluster.
# Start the name-node with {{dfs.safemode.threshold.pct = 1.1}}.
# Start one data-node, which contains exactly one copy of each block.
# Call {{dfsadmin -metasave tmp.txt}}. File tmp.txt will show "Blocks waiting for replication: 0".
# Call {{dfsadmin -safemode leave}}. The name-node will leave safe-mode.
# Call {{dfsadmin -metasave tmp.txt}}. File tmp.txt will show "Blocks waiting for replication:" greater than 0, 
and will list all blocks of the file system because they are all under-replicated.

Without the patch the last step would still show "Blocks waiting for replication: 0".
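
For reference, step 5 can also be driven from a small client program instead of the dfsadmin shell. This is only a sketch; the package and class names follow trunk-era HDFS (the 0.18 branch keeps these classes under org.apache.hadoop.dfs), and error handling is omitted:

{code:java}
// Illustrative client that forces the name-node out of safe mode,
// mirroring "dfsadmin -safemode leave" in step 5 above.
// Package names are trunk-era assumptions; adjust for the 0.18 branch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.FSConstants.SafeModeAction;

public class ForceLeaveSafeMode {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();          // picks up fs.default.name
    FileSystem fs = FileSystem.get(conf);
    if (!(fs instanceof DistributedFileSystem)) {
      System.err.println("Not an HDFS file system: " + fs.getUri());
      return;
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    if (dfs.setSafeMode(SafeModeAction.SAFEMODE_GET)) { // true if in safe mode
      dfs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);   // forced (manual) exit
    }
  }
}
{code}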

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-4597:
----------------------------------------

    Attachment: NeededRepl-18.patch

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-4597:
----------------------------------------

    Attachment: NeededRepl.patch

Yes, then we are going to always verify mis-replicated blocks.
I am removing the boolean parameter, since it now always has the same value (true).
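
For clarity, here is a hypothetical before/after view of that simplification (the real method sits in FSNamesystem's safe-mode logic and may differ in name and detail):

{code:java}
// Hypothetical sketch of removing the boolean parameter described above.
class LeaveSafeModeSketch {
  // Before: the caller decided whether to scan for mis-replicated blocks.
  void leaveOld(boolean checkMisReplicatedBlocks) {
    if (checkMisReplicatedBlocks) {
      processMisReplicatedBlocks();
    }
  }

  // After: every exit from safe mode performs the scan, so the flag is gone.
  void leaveNew() {
    processMisReplicatedBlocks();
  }

  void processMisReplicatedBlocks() {
    // rebuild neededReplications and excessReplicateMap (details omitted)
  }
}
{code}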

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12645612#action_12645612 ] 

Hadoop QA commented on HADOOP-4597:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12393414/NeededRepl.patch
  against trunk revision 711734.

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    -1 core tests.  The patch failed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3543/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3543/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3543/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3543/console

This message is automatically generated.

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12645642#action_12645642 ] 

Raghu Angadi commented on HADOOP-4597:
--------------------------------------


Does the call to leaveSafeMode() in checkMode() also need to pass 'true' for the second arg?

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12645772#action_12645772 ] 

Hudson commented on HADOOP-4597:
--------------------------------

Integrated in Hadoop-trunk #654 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/654/])
    . Calculate mis-replicated blocks when safe-mode is turned off manually. Contributed by Konstantin Shvachko.


> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-4597:
----------------------------------------

    Attachment:     (was: NeededRepl.patch)

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-4597:
----------------------------------------

    Attachment: NeededRepl.patch

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12645659#action_12645659 ] 

Konstantin Shvachko commented on HADOOP-4597:
---------------------------------------------

I'll fix the point Raghu raised in a subsequent issue.

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-4597:
----------------------------------------

    Status: Patch Available  (was: Open)

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-4597:
----------------------------------------

    Attachment:     (was: NeededRepl-18.patch)

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-4597:
----------------------------------------

    Attachment: NeededRepl-18.patch

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl-18.patch, NeededRepl.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4597) Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-4597:
----------------------------------------

      Resolution: Fixed
    Hadoop Flags: [Reviewed]
          Status: Resolved  (was: Patch Available)

I just committed this.

> Under-replicated blocks are not calculated if the name-node is forced out of safe-mode.
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4597
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4597
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: NeededRepl-18.patch, NeededRepl.patch
>
>
> Currently during name-node startup under-replicated blocks are not added to the neededReplications queue until the name-node leaves safe mode. This is an optimization since otherwise all blocks will first go into the under-replicated queue and then most of them will be removed from it.
> When the name-node leaves safe-mode automatically it checks all blocks to have a correct number of replicas ({{processMisReplicatedBlocks()}}). 
> When the name-node leaves safe-mode manually it does not perform the checkup.
> In the latter case all under-replicated blocks remain not replicated forever because there is no alternative mechanism to trigger replications.
> The proposal is to call {{processMisReplicatedBlocks()}} any time the name-node leaves safe mode - automatically or manually.
> In addition to solving that problem this could be an alternative mechanism for refreshing {{neededReplications}} and {{excessReplicateMap}} sets.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.