Posted to dev@hbase.apache.org by "Andrew Purtell (JIRA)" <ji...@apache.org> on 2008/04/19 01:08:22 UTC

[jira] Created: (HBASE-594) tables are being reassigned instead of deleted

tables are being reassigned instead of deleted
----------------------------------------------

                 Key: HBASE-594
                 URL: https://issues.apache.org/jira/browse/HBASE-594
             Project: Hadoop HBase
          Issue Type: Bug
          Components: master, regionserver
    Affects Versions: 0.2.0
         Environment: Linux CentOS 5.1 x86_64 / JDK 1.6.0_03
            Reporter: Andrew Purtell


We are running HBase TRUNK (updated yesterday) and Hadoop TRUNK (updated a few days ago) on a 15 node cluster. One node doubles as master and region server. The remainder are region servers. 

I have been trying to use the HBase shell to drop tables for quite a few minutes now. 

The master schedules the table for deletion and the region server processes the deletion:

08/04/18 16:57:29 INFO master.HMaster: deleted table: content.20b16c29
08/04/18 16:57:34 INFO master.ServerManager: 10.30.94.35:60020 no longer serving regionname: content.20b16c29,,1208549961323, startKey: <>, endKey: <>, encodedName: 385178593, tableDesc: {name: content.20b16c29, families: {content:={name: content, max versions: 1, compression: RECORD, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}, info:={name: info, max versions: 1, compression: NONE, in memory: false, block cache enabled: true, maxlength: 2147483647, bloom filter: none}}}
08/04/18 16:57:34 INFO master.ProcessRegionClose$1: region closed: content.20b16c29,,1208549961323

but then a META scan happens and the table is reassigned to another server to live on as a zombie:

08/04/18 16:57:48 INFO master.BaseScanner: RegionManager.metaScanner scanning meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020}
08/04/18 16:57:48 INFO master.BaseScanner: RegionManager.metaScanner scan of meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020} complete
08/04/18 16:57:48 INFO master.BaseScanner: all meta regions scanned
08/04/18 16:57:49 INFO master.RegionManager: assigning region content.20b16c29,,1208549961323 to server 10.30.94.39:60020
08/04/18 16:57:52 INFO master.BaseScanner: RegionManager.rootScanner scanning meta region {regionname: -ROOT-,,0, startKey: <>, server: 10.30.94.31:60020}
08/04/18 16:57:52 INFO master.BaseScanner: RegionManager.rootScanner scan of meta region {regionname: -ROOT-,,0, startKey: <>, server: 10.30.94.31:60020} complete
08/04/18 16:57:52 INFO master.ServerManager: 10.30.94.39:60020 serving content.20b16c29,,1208549961323
08/04/18 16:57:52 INFO master.ProcessRegionOpen$1: regionname: content.20b16c29,,1208549961323, startKey: <>, endKey: <>, encodedName: 385178593, tableDesc: {name: content.20b16c29, families: {content:={name: content, max versions: 1, compression: RECORD, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}, info:={name: info, max versions: 1, compression: NONE, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}}} open on 10.30.94.39:60020
08/04/18 16:57:52 INFO master.ProcessRegionOpen$1: updating row content.20b16c29,,1208549961323 in table .META.,,1 with startcode 1208552149355 and server 10.30.94.39:60020

Approximately 50 META region scans then happen; after that, the following occurs and recurs over many subsequent META scans:

08/04/18 17:26:48 INFO master.BaseScanner: RegionManager.metaScanner scanning meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020}
08/04/18 17:26:48 WARN master.HMaster: info:regioninfo is empty for row: content.20b16c29,,1208549961323; has keys: [info:server, info:serverstartcode]
08/04/18 17:26:48 WARN master.BaseScanner: Found 1 rows with empty HRegionInfo while scanning meta region .META.,,1
08/04/18 17:26:48 WARN master.HMaster: Removed region: content.20b16c29,,1208549961323 from meta region: .META.,,1 because HRegionInfo was empty
08/04/18 17:26:48 INFO master.BaseScanner: RegionManager.metaScanner scan of meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020} complete
08/04/18 17:26:48 INFO master.BaseScanner: all meta regions scanned

Eventually the table does disappear, but for a reason that does not appear in the logs, at least in this particular example. Another table is simply refusing to die.
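
For the record, the zombie region rows are visible from the HQL shell by scanning .META. after the drop. A minimal check of the kind I have been running looks roughly like this (output elided; the table name is the one from the logs above):

{code}
hql > drop table 'content.20b16c29';
hql > show tables;
hql > select * from .META.;
{code}

When the drop has fully taken effect, the table no longer appears under show tables and the select on .META. returns no rows for it; while the region is a zombie, its row is still there.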





[jira] Commented: (HBASE-594) tables are being reassigned instead of deleted

Posted by "stack (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591361#action_12591361 ] 

stack commented on HBASE-594:
-----------------------------

Thread dumps when the server is hung would be good too (you can take them from the UI).  Thanks for reporting (and for cleaning up after you couldn't reproduce).
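
(If the UI is unresponsive, a dump can also be grabbed from the command line; a couple of standard JDK options, assuming you know the master's pid:)

{code}
# Both print the Java thread stacks of the running master process.
# <pid> is the master's process id -- fill it in yourself.
jstack <pid>
# or ask the JVM to dump the stacks to its own stdout/log:
kill -QUIT <pid>
{code}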



[jira] Issue Comment Edited: (HBASE-594) tables are being reassigned instead of deleted

Posted by "Andrew Purtell (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591184#action_12591184 ] 

apurtell edited comment on HBASE-594 at 4/21/08 8:18 PM:
---------------------------------------------------------------

Not loaded at all. 1 region. No entries.

Eventually the tables do disappear. I logged on to our cluster the following day and all zombie tables were gone. 


      was (Author: apurtell):
    Not loaded at all. 1 region. No entries.
  


[jira] Commented: (HBASE-594) tables are being reassigned instead of deleted

Posted by "Andrew Purtell (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591184#action_12591184 ] 

Andrew Purtell commented on HBASE-594:
--------------------------------------

Not loaded at all. 1 region. No entries.



[jira] Commented: (HBASE-594) tables are being reassigned instead of deleted

Posted by "stack (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591182#action_12591182 ] 

stack commented on HBASE-594:
-----------------------------

How loaded was your table?  How many regions?



[jira] Resolved: (HBASE-594) tables are being reassigned instead of deleted

Posted by "Andrew Purtell (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell resolved HBASE-594.
----------------------------------

    Resolution: Cannot Reproduce

Thanks for looking into this. I thought it was a reproducible bug, but I was wrong. With your example I was unable to reproduce the behavior I saw earlier, and going back and retrying with the same schema as before did not trigger it either.

Thinking about it further, I recall that between then and now I restarted our cluster, and that during the shutdown the master would not quit normally. I had to manually shut down the region servers using "ssh hadoop@foo '/path/to/hbase/bin/hbase-daemon.sh stop regionserver'" and then kill the master with kill -9. I'm not sure how the master came to be in that bad state.
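
For what it's worth, the manual shutdown amounted to looping that ssh command over the region server hosts and then killing the master by hand; a rough sketch (hostnames and the install path are placeholders):

{code}
#!/bin/sh
# Stop each region server over ssh; hostnames and the HBase path are placeholders.
for host in rs01 rs02 rs03; do
  ssh hadoop@${host} '/path/to/hbase/bin/hbase-daemon.sh stop regionserver'
done
# Then, on the master node, once the region servers are down:
#   kill -9 <master pid>
{code}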

If our cluster enters this state again, I will restart with DEBUG logging enabled and try to catch it again.
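
(For DEBUG logging I expect bumping the HBase logger in conf/log4j.properties is enough, something along these lines; the exact default log4j.properties in TRUNK may differ:)

{code}
# Sketch: raise logging for all HBase classes to DEBUG in conf/log4j.properties.
log4j.logger.org.apache.hadoop.hbase=DEBUG
{code}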




[jira] Commented: (HBASE-594) tables are being reassigned instead of deleted

Posted by "stack (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591196#action_12591196 ] 

stack commented on HBASE-594:
-----------------------------

I just tried it on a 4 node cluster (not a 15 node cluster).

{code}
  --> create table 'x' ('x');
Table created successfully.
hql >
  --> show tables;
+--------------------------------------+--------------------------------------+
| Name                                 | Descriptor                           |
+--------------------------------------+--------------------------------------+
| x                                    | name: x, families: {x:={name: x, max |
|                                      | versions: 3, compression: NONE, in me|
|                                      | mory: false, block cache enabled: fal|
|                                      | se, max length: 2147483647, bloom fil|
|                                      | ter: none}}                          |
+--------------------------------------+--------------------------------------+
1 table(s) in set. (0.02 sec)
hql > drop table 'x';
1 table(s) dropped successfully. (10.19 sec)
hql > show tables;
No tables found.
hql > select * from .META.;
0 row(s) in set. (0.05 sec) 
{code}

What do you think the difference is?
