Posted to dev@hbase.apache.org by "Andrew Purtell (JIRA)" <ji...@apache.org> on 2008/04/22 05:19:21 UTC

[jira] Issue Comment Edited: (HBASE-594) tables are being reassigned instead of deleted

    [ https://issues.apache.org/jira/browse/HBASE-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591184#action_12591184 ] 

apurtell edited comment on HBASE-594 at 4/21/08 8:18 PM:
---------------------------------------------------------------

Not loaded at all. 1 region. No entries.

Eventually the tables do disappear. I logged on to our cluster the following day and all zombie tables were gone. 


      was (Author: apurtell):
    Not loaded at all. 1 region. No entries.
  
> tables are being reassigned instead of deleted
> ----------------------------------------------
>
>                 Key: HBASE-594
>                 URL: https://issues.apache.org/jira/browse/HBASE-594
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: master, regionserver
>    Affects Versions: 0.2.0
>         Environment: Linux CentOS 5.1 x86_64 / JDK 1.6.0_03
>            Reporter: Andrew Purtell
>
> We are running HBase TRUNK (updated yesterday) and Hadoop TRUNK (updated a few days ago) on a 15 node cluster. One node doubles as master and region server. The remainder are region servers. 
> I have been trying to use the HBase shell to drop tables for quite a few minutes now. 
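> For reference, the same drop can be issued from the Java client; a minimal sketch, assuming the 0.2-era HBaseAdmin API (names and argument types may differ slightly on TRUNK, and older releases took Text rather than String):
>
>     // Minimal sketch: dropping a table via the Java client rather than the
>     // shell. Assumes the 0.2-era org.apache.hadoop.hbase.client.HBaseAdmin
>     // API; argument types are an assumption and may differ on TRUNK.
>     import org.apache.hadoop.hbase.HBaseConfiguration;
>     import org.apache.hadoop.hbase.client.HBaseAdmin;
>
>     public class DropTable {
>       public static void main(String[] args) throws Exception {
>         HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
>         admin.disableTable("content.20b16c29"); // table must be offline first
>         admin.deleteTable("content.20b16c29");  // master schedules the delete
>       }
>     }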
> The master schedules the table for deletion and the region server processes the deletion:
> 08/04/18 16:57:29 INFO master.HMaster: deleted table: content.20b16c29
> 08/04/18 16:57:34 INFO master.ServerManager: 10.30.94.35:60020 no longer serving regionname: content.20b16c29,,1208549961323, startKey: <>, endKey: <>, encodedName: 385178593, tableDesc: {name: content.20b16c29, families: {content:={name: content, max versions: 1, compression: RECORD, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}, info:={name: info, max versions: 1, compression: NONE, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}}}
> 08/04/18 16:57:34 INFO master.ProcessRegionClose$1: region closed: content.20b16c29,,1208549961323
> but then a META scan happens and the table is reassigned to another server to live on as a zombie:
> 08/04/18 16:57:48 INFO master.BaseScanner: RegionManager.metaScanner scanning meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020}
> 08/04/18 16:57:48 INFO master.BaseScanner: RegionManager.metaScanner scan of meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020} complete
> 08/04/18 16:57:48 INFO master.BaseScanner: all meta regions scanned
> 08/04/18 16:57:49 INFO master.RegionManager: assigning region content.20b16c29,,1208549961323 to server 10.30.94.39:60020
> 08/04/18 16:57:52 INFO master.BaseScanner: RegionManager.rootScanner scanning meta region {regionname: -ROOT-,,0, startKey: <>, server: 10.30.94.31:60020}
> 08/04/18 16:57:52 INFO master.BaseScanner: RegionManager.rootScanner scan of meta region {regionname: -ROOT-,,0, startKey: <>, server: 10.30.94.31:60020} complete
> 08/04/18 16:57:52 INFO master.ServerManager: 10.30.94.39:60020 serving content.20b16c29,,1208549961323
> 08/04/18 16:57:52 INFO master.ProcessRegionOpen$1: regionname: content.20b16c29,,1208549961323, startKey: <>, endKey: <>, encodedName: 385178593, tableDesc: {name: content.20b16c29, families: {content:={name: content, max versions: 1, compression: RECORD, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}, info:={name: info, max versions: 1, compression: NONE, in memory: false, block cache enabled: true, max length: 2147483647, bloom filter: none}}} open on 10.30.94.39:60020
> 08/04/18 16:57:52 INFO master.ProcessRegionOpen$1: updating row content.20b16c29,,1208549961323 in table .META.,,1 with startcode 1208552149355 and server 10.30.94.39:60020
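> My best guess at the mechanism, as a simplified hypothetical model (this is not HBase source; the class and method names below are invented for illustration): the delete drops the region's descriptor row from .META., but the region-close notification is handled generically and requeues the region for assignment, so the next scan cycle resurrects it and the open handler writes server and startcode back into the now-descriptorless row:
>
>     // Hypothetical model of the suspected race -- NOT HBase source code.
>     import java.util.HashMap;
>     import java.util.HashSet;
>     import java.util.Map;
>     import java.util.Set;
>
>     class MasterModel {
>       // .META. modeled as region name -> serialized descriptor.
>       final Map<String, String> meta = new HashMap<String, String>();
>       final Set<String> unassigned = new HashSet<String>();
>
>       void deleteTable(String region) {
>         meta.remove(region); // 1. descriptor row dropped from .META.
>         // 2. hosting region server is asked to close the region (async)
>       }
>
>       void regionClosed(String region) {
>         // 3. the generic close handler does not know this close came from
>         //    a delete, so it queues the region for reassignment
>         unassigned.add(region);
>       }
>
>       void metaScanAndAssign() {
>         for (String region : new HashSet<String>(unassigned)) {
>           unassigned.remove(region);
>           assignTo(region, "10.30.94.39:60020"); // 4. zombie comes back
>         }
>       }
>
>       void assignTo(String region, String server) {
>         // On open, server and startcode are written back into .META. --
>         // for a row whose info:regioninfo was already deleted, leaving
>         // only [info:server, info:serverstartcode], as in the WARNs below.
>       }
>     }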
> Approximately 50 META region scans then happen, then the following occurs, and recurs over many subsequent META scans:
> 08/04/18 17:26:48 INFO master.BaseScanner: RegionManager.metaScanner scanning meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020}
> 08/04/18 17:26:48 WARN master.HMaster: info:regioninfo is empty for row: content.20b16c29,,1208549961323; has keys: [info:server, info:serverstartcode]
> 08/04/18 17:26:48 WARN master.BaseScanner: Found 1 rows with empty HRegionInfo while scanning meta region .META.,,1
> 08/04/18 17:26:48 WARN master.HMaster: Removed region: content.20b16c29,,1208549961323 from meta region: .META.,,1 because HRegionInfo was empty
> 08/04/18 17:26:48 INFO master.BaseScanner: RegionManager.metaScanner scan of meta region {regionname: .META.,,1, startKey: <>, server: 10.30.94.37:60020} complete
> 08/04/18 17:26:48 INFO master.BaseScanner: all meta regions scanned
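> A hypothetical sketch of the check those WARNs suggest (again invented for illustration, not the actual BaseScanner code): a .META. row that still has server columns but an empty info:regioninfo is treated as garbage and removed:
>
>     // Hypothetical sketch of the empty-regioninfo check -- not the
>     // actual BaseScanner source.
>     import java.util.Map;
>
>     class MetaRowCheck {
>       /** A .META. row with server columns but no descriptor is garbage. */
>       static boolean isEmptyRegionInfoRow(Map<String, byte[]> row) {
>         byte[] info = row.get("info:regioninfo");
>         return info == null || info.length == 0;
>       }
>     }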
> yet finally the table does disappear, for reasons that do not appear in the logs... at least for this particular example. There is another table that is simply refusing to die...

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.