Posted to dev@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2008/08/12 23:33:44 UTC

[jira] Created: (HBASE-823) Concurrent "open mapfile reader" limit

Concurrent "open mapfile reader" limit
--------------------------------------

                 Key: HBASE-823
                 URL: https://issues.apache.org/jira/browse/HBASE-823
             Project: Hadoop HBase
          Issue Type: Improvement
            Reporter: stack


Over in HBASE-745, Luo Ning's profiling found that the number of open Readers has a direct impact on memory used.  This issue is about putting an upper bound on the number of open Readers, doing something like a bounded pool w/ an LRU eviction policy.
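A minimal sketch of the bounded pool with LRU eviction described above, built on java.util.LinkedHashMap's access-order mode. The generic Reader type and pool class are illustrative stand-ins, not HBase code; a real pool would hold MapFile.Reader instances.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a bounded reader pool. "R" stands in for a
// reader type (e.g. MapFile.Reader); AutoCloseable lets us release the
// underlying file handle on eviction.
class BoundedReaderPool<K, R extends AutoCloseable> {
    private final Map<K, R> pool;

    BoundedReaderPool(final int maxOpen) {
        // accessOrder=true makes iteration order least-recently-used first.
        this.pool = new LinkedHashMap<K, R>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, R> eldest) {
                if (size() > maxOpen) {
                    try {
                        eldest.getValue().close(); // evict LRU reader, free its handle
                    } catch (Exception e) {
                        // log and continue; eviction must not fail the caller
                    }
                    return true;
                }
                return false;
            }
        };
    }

    synchronized R get(K key) { return pool.get(key); }       // marks key as recently used
    synchronized void put(K key, R reader) { pool.put(key, reader); }
    synchronized int size() { return pool.size(); }
}
```

The synchronized methods keep the sketch thread-safe but coarse; a production pool would also need to guard against evicting a reader that is mid-read.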

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HBASE-823) Concurrent "open mapfile reader" limit

Posted by "stack (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12657985#action_12657985 ] 

stack commented on HBASE-823:
-----------------------------

Need a new file format to be able to do things like limit indices or store them in soft references.
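The soft-reference idea mentioned above can be sketched as follows: hold the in-memory index behind a java.lang.ref.SoftReference so the JVM may reclaim it under memory pressure, and reload lazily on a miss. The class, the long[] index representation, and the loader are all illustrative assumptions, not the actual mapfile layout.

```java
import java.lang.ref.SoftReference;

// Illustrative only: a mapfile index held softly so the GC can drop it
// when heap is tight; callers transparently reload it on the next access.
class SoftIndexHolder {
    private SoftReference<long[]> indexRef = new SoftReference<>(null);

    long[] getIndex() {
        long[] index = indexRef.get();
        if (index == null) {              // cleared by GC, or never loaded
            index = loadIndexFromDisk();
            indexRef = new SoftReference<>(index);
        }
        return index;
    }

    private long[] loadIndexFromDisk() {
        // stand-in for re-reading the index block from the underlying file
        return new long[] {0L, 1024L, 2048L};
    }
}
```

The trade-off is that a reclaimed index costs a disk re-read on the next lookup, which is why the comment above ties this to a new file format with cheaper index loading.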



[jira] Resolved: (HBASE-823) Concurrent "open mapfile reader" limit

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack resolved HBASE-823.
-------------------------

    Resolution: Won't Fix

This issue has gone stale.  We no longer use MapFiles, and while we should probably bound the set of HFiles open in the regionserver, the number of concurrently open resources has come down a good bit since this issue was opened.  We'll likely go other routes to conserve datanode resource usage (nio) before we put back an upper limit on open hfiles.



[jira] Updated: (HBASE-823) Concurrent "open mapfile reader" limit

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-823:
------------------------


Moving this out of 0.20.0 -- I'm thinking we won't get to it.



[jira] Updated: (HBASE-823) Concurrent "open mapfile reader" limit

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-823:
------------------------

    Fix Version/s:     (was: 0.20.0)

Actually move this out.



[jira] Commented: (HBASE-823) Concurrent "open mapfile reader" limit

Posted by "stack (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12671677#action_12671677 ] 

stack commented on HBASE-823:
-----------------------------

Luo: On 1., there is also the HADOOP-3856 tactic that Andrew Purtell is working on to get over the xceiver limit, and work in HBASE-61 should shrink index size in heap dramatically.  But even then your point stands: we need to be counting the indices and capping their total size.  Your HBASE-24 patch, where we limit the number of open store files, is probably the only avenue open to us until HADOOP-3856 is done (it's a blocker on hbase 0.20.0).



[jira] Commented: (HBASE-823) Concurrent "open mapfile reader" limit

Posted by "Luo Ning (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12671562#action_12671562 ] 

Luo Ning commented on HBASE-823:
--------------------------------

I think we need to achieve 2 goals with the 'open reader limit':
1. fit within the xceiverCount limitation of hadoop, see HBASE-24
2. control the memory usage of mapfile indexes.

We may get 2. by controlling the number of concurrently open readers, but that is not efficient, because mapfile index sizes vary widely. For regionserver stability, we must make sure 'open file limit' * 'max mapfile index size' < 'memory limit of regionserver', which means we have to set the 'open file limit' very low.

So besides the 'open reader limit', there should be a 'max index size limit' governing the concurrently open mapfile readers, and the eviction policy should consider these 2 limits together.






[jira] Issue Comment Edited: (HBASE-823) Concurrent "open mapfile reader" limit

Posted by "Luo Ning (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12671562#action_12671562 ] 

ln@webcate.net edited comment on HBASE-823 at 2/7/09 10:18 PM:
---------------------------------------------------------

I think we need to achieve 2 goals with the 'open reader limit':
1. fit within the xceiverCount limitation of hadoop, see HBASE-24
2. control the memory usage of mapfile indexes.

We may get 2. by controlling the number of concurrently open readers, but that is not efficient, because mapfile index sizes vary widely. For regionserver stability, we should make sure 'open file limit' * 'max mapfile index size' < 'memory limit of regionserver', which means setting the 'open file limit' very low.

So besides the 'open reader limit', there should be a 'max index size limit' governing the concurrently open mapfile readers, and the eviction policy needs to consider these 2 limits together.






[jira] Updated: (HBASE-823) Concurrent "open mapfile reader" limit

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-823:
------------------------

         Priority: Blocker  (was: Major)
    Fix Version/s: 0.20.0
