Posted to issues@hbase.apache.org by "fulin wang (JIRA)" <ji...@apache.org> on 2011/07/14 05:23:59 UTC

[jira] [Created] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

When verifyAndAssignRoot throws exception, the deadServers state cannot be changed
----------------------------------------------------------------------------------

                 Key: HBASE-4093
                 URL: https://issues.apache.org/jira/browse/HBASE-4093
             Project: HBase
          Issue Type: Bug
          Components: master
    Affects Versions: 0.90.3
            Reporter: fulin wang


When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed.
The HMaster log then fills with 'Not running balancer because processing dead regionserver(s): []' messages.


HMaster log:
2011-07-09 01:38:31,820 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Closed path hdfs://162.2.16.6:9000/hbase/Htable_UFDR_035/fe7e51c0a74fac096cea8cdb3c9497a6/recovered.edits/0000000000204525422 (wrote 8 edits in 61583ms)
2011-07-09 01:38:31,836 ERROR org.apache.hadoop.hbase.master.MasterFileSystem: Failed splitting hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056
java.io.IOException: hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352, entryStart=1878997244, pos=1879048192, end=2003890606, edit=80274
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.addFileInfoToException(SequenceFileLogReader.java:244)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:200)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:172)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:429)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:262)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:188)
	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:201)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:114)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Could not obtain block: blk_1310107715558_225636 file=/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2491)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2256)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2441)
	at java.io.DataInputStream.read(DataInputStream.java:132)
	at java.io.DataInputStream.readFully(DataInputStream.java:178)
	at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
	at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1984)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1884)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1930)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:198)
	... 10 more
2011-07-09 01:38:33,052 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): [162-2-6-187,20020,1310107719056]
2011-07-09 01:39:29,946 WARN org.apache.hadoop.hbase.master.CatalogJanitor: Failed scan of catalog table
java.net.SocketTimeoutException: Call to /162.2.6.187:20020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
	at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
	at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
	at $Proxy6.getRegionInfo(Unknown Source)
	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:424)
	at org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:272)
	at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:331)
	at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMetaServerConnectionDefault(CatalogTracker.java:364)
	at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:255)
	at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:237)
	at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:116)
	at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:85)
	at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
	at java.io.FilterInputStream.read(FilterInputStream.java:116)
	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:299)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
	at java.io.DataInputStream.readInt(DataInputStream.java:370)
	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:539)
	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:477)
2011-07-09 01:39:29,946 ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable while processing event M_META_SERVER_SHUTDOWN
java.net.SocketTimeoutException: Call to /162.2.6.187:20020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
	at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
	at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
	at $Proxy6.getRegionInfo(Unknown Source)
	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:424)
	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRootRegionLocation(CatalogTracker.java:471)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:90)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:126)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
	at java.io.FilterInputStream.read(FilterInputStream.java:116)
	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:299)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
	at java.io.DataInputStream.readInt(DataInputStream.java:370)
	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:539)
	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:477)
2011-07-09 01:40:26,474 DEBUG org.apache.hadoop.hbase.master.ServerManager: Server 162-2-6-187,20020,1310146825674 came back up, removed it from the dead servers list
2011-07-09 01:40:26,515 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=162-2-6-187,20020,1310146825674, regionCount=0, userLoad=false
2011-07-09 01:40:28,410 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=162-2-6-187:20020; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
...
2011-07-09 01:53:33,052 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []
2011-07-09 01:58:33,060 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []
2011-07-09 02:03:33,061 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []
2011-07-09 02:08:33,061 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []


[jira] [Resolved] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "Ted Yu (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved HBASE-4093.
---------------------------

    Resolution: Fixed

> When verifyAndAssignRoot throws exception, the deadServers state cannot be changed
> ----------------------------------------------------------------------------------
>
>                 Key: HBASE-4093
>                 URL: https://issues.apache.org/jira/browse/HBASE-4093
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 0.90.3
>            Reporter: fulin wang
>            Assignee: fulin wang
>         Attachments: HBASE-4093-0.90.patch, HBASE-4093-0.90_V2.patch, HBASE-4093-0.90_V3.patch, HBASE-4093-trunk_V2.patch, HBASE-4093-trunk_V3.patch, surefire-report.html
>
>   Original Estimate: 8h
>  Remaining Estimate: 8h

[jira] [Updated] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "fulin wang (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-4093:
------------------------------

    Attachment: surefire-report.html
                HBASE-4093-trunk_V2.patch
                HBASE-4093-0.90_V2.patch

I made two patches, one for 0.90 and one for trunk.
Unit testing has passed. Please check, thanks.


[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "Ted Yu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13066910#comment-13066910 ] 

Ted Yu commented on HBASE-4093:
-------------------------------

For patch version 3, I was thinking of using some code similar to the following:
{code}
      wait(remaining);
      remaining = timeout - (System.currentTimeMillis() - startTime);
{code}
so that we can avoid introducing the hbase.catalog.verification.times parameter.
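
For illustration only, a minimal sketch of that deadline-based loop might look like the following; tryVerifyRootLocation() is a stand-in name, and the enclosing object is assumed to be notified (via notifyAll()) when the root location changes:
{code}
// Sketch only: names are illustrative, not the actual HBase code.
// The loop retries until the overall deadline expires instead of counting
// attempts, so no extra retry-count parameter is needed.
synchronized boolean waitForVerifiedRoot(final long timeout)
    throws InterruptedException {
  final long startTime = System.currentTimeMillis();
  long remaining = timeout;
  while (remaining > 0) {
    if (tryVerifyRootLocation()) {
      return true;               // verified before the deadline
    }
    wait(remaining);             // woken early by notifyAll(), else times out
    remaining = timeout - (System.currentTimeMillis() - startTime);
  }
  return false;                  // deadline exhausted
}
{code}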


[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067572#comment-13067572 ] 

Hudson commented on HBASE-4093:
-------------------------------

Integrated in HBase-TRUNK #2039 (See [https://builds.apache.org/job/HBase-TRUNK/2039/])
    HBASE-4093  When verifyAndAssignRoot throws exception, the deadServers state
               cannot be changed (fulin wang via Ted Yu)

tedyu : 
Files : 
* /hbase/trunk/CHANGES.txt
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java



[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "Ted Yu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067507#comment-13067507 ] 

Ted Yu commented on HBASE-4093:
-------------------------------

I changed the default retry count to 10.
Applied to branch and TRUNK.

Thanks for the patch, fulin.
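
For illustration, a retry wrapper along the following lines is one way such a retry count could be applied around verifyAndAssignRoot(); the method name, configuration keys, and defaults are assumptions for this sketch, not a quote of the committed patch:
{code}
// Hypothetical sketch only; key names and structure are illustrative.
private void verifyAndAssignRootWithRetries(final Configuration conf)
    throws IOException, InterruptedException {
  int retries = conf.getInt("hbase.catalog.verification.retries", 10);
  long pause = conf.getLong("hbase.catalog.verification.pause", 1000);
  while (true) {
    try {
      verifyAndAssignRoot();   // the existing verification/assignment step
      return;                  // success: shutdown processing can continue
    } catch (Exception e) {
      if (--retries <= 0) {
        // Give up after the configured number of attempts and let the
        // caller decide how to handle the failure.
        throw new IOException("verifyAndAssignRoot failed after retries", e);
      }
      Thread.sleep(pause);     // back off before the next attempt
    }
  }
}
{code}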


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Assigned] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "Ted Yu (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu reassigned HBASE-4093:
-----------------------------

    Assignee: fulin wang


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "Ted Yu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067464#comment-13067464 ] 

Ted Yu commented on HBASE-4093:
-------------------------------

From your earlier comment:
The verifyAndAssignRoot failed many times after the HMaster was restarted.

I wonder if 5 retries are enough. Shall we increase it?


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "fulin wang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067512#comment-13067512 ] 

fulin wang commented on HBASE-4093:
-----------------------------------

Thanks, Ted Yu.


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "Ted Yu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067046#comment-13067046 ] 

Ted Yu commented on HBASE-4093:
-------------------------------

verifyAndAssignRoot() already honors the verification timeout, so there is no need for an additional wait in verifyAndAssignRootWithRetries().
You can remove the call to sleep() and increase the default value of hbase.catalog.verification.retries.
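
To make this concrete, a rough, self-contained sketch of the retry-wrapper pattern being described. The RootVerifier interface and the method shape below are stand-ins for ServerShutdownHandler.verifyAndAssignRoot(); the committed HBASE-4093 patch may differ in its details.

    // Sketch only, not the committed patch. Each verification attempt is assumed
    // to enforce its own timeout, so the loop adds no sleep() between attempts.
    import java.io.IOException;

    public class VerifyAndAssignRootRetrySketch {
      interface RootVerifier {
        void verifyAndAssignRoot() throws IOException;
      }

      static void verifyAndAssignRootWithRetries(RootVerifier verifier, int retries)
          throws IOException {
        IOException lastFailure = new IOException("verifyAndAssignRoot never attempted");
        for (int attempt = 1; attempt <= retries; attempt++) {
          try {
            verifier.verifyAndAssignRoot();  // waits up to its own verification timeout
            return;                          // success: shutdown processing can continue
          } catch (IOException e) {
            lastFailure = e;                 // remember the last cause for the rethrow
          }
        }
        // Retries exhausted: surface the failure instead of silently leaving the
        // server stuck in deadServers, which is the symptom reported in this issue.
        throw lastFailure;
      }

      public static void main(String[] args) throws IOException {
        final int[] calls = {0};
        // Toy usage: fail twice, then succeed on the third attempt.
        verifyAndAssignRootWithRetries(() -> {
          if (++calls[0] < 3) throw new IOException("simulated verification timeout");
          System.out.println("verified and assigned on attempt " + calls[0]);
        }, 10);  // 10 matches the default discussed for hbase.catalog.verification.retries
      }
    }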


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "fulin wang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067488#comment-13067488 ] 

fulin wang commented on HBASE-4093:
-----------------------------------

Yes, I wrote it incorrectly; it should be 'hbase.catalog.verification.retries'.
Can you suggest a reasonable value for it?
I am not sure whether 5 retries are enough.

> When verifyAndAssignRoot throw exception, The deadServers state can not be changed.
> -----------------------------------------------------------------------------------
>
>                 Key: HBASE-4093
>                 URL: https://issues.apache.org/jira/browse/HBASE-4093
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 0.90.3
>            Reporter: fulin wang
>            Assignee: fulin wang
>         Attachments: HBASE-4093-0.90.patch, HBASE-4093-0.90_V2.patch, HBASE-4093-0.90_V3.patch, HBASE-4093-trunk_V2.patch, HBASE-4093-trunk_V3.patch, surefire-report.html
>
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> When verifyAndAssignRoot throw exception, The deadServers state can not be changed.
> The Hmaster log has a lot of 'Not running balancer because processing dead regionserver(s): []' information.
> HMaster log:
> 2011-07-09 01:38:31,820 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Closed path hdfs://162.2.16.6:9000/hbase/Htable_UFDR_035/fe7e51c0a74fac096cea8cdb3c9497a6/recovered.edits/0000000000204525422 (wrote 8 edits in 61583ms)
> 2011-07-09 01:38:31,836 ERROR org.apache.hadoop.hbase.master.MasterFileSystem: Failed splitting hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056
> java.io.IOException: hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352, entryStart=1878997244, pos=1879048192, end=2003890606, edit=80274
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.addFileInfoToException(SequenceFileLogReader.java:244)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:200)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:172)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:429)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:262)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:188)
> 	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:201)
> 	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:114)
> 	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Could not obtain block: blk_1310107715558_225636 file=/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2491)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2256)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2441)
> 	at java.io.DataInputStream.read(DataInputStream.java:132)
> 	at java.io.DataInputStream.readFully(DataInputStream.java:178)
> 	at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
> 	at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
> 	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1984)
> 	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1884)
> 	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1930)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:198)
> 	... 10 more
> 2011-07-09 01:38:33,052 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): [162-2-6-187,20020,1310107719056]
> 2011-07-09 01:39:29,946 WARN org.apache.hadoop.hbase.master.CatalogJanitor: Failed scan of catalog table
> java.net.SocketTimeoutException: Call to /162.2.6.187:20020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
> 	at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
> 	at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> 	at $Proxy6.getRegionInfo(Unknown Source)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:424)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:272)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:331)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMetaServerConnectionDefault(CatalogTracker.java:364)
> 	at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:255)
> 	at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:237)
> 	at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:116)
> 	at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:85)
> 	at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
> Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
> 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> 	at java.io.FilterInputStream.read(FilterInputStream.java:116)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:299)
> 	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> 	at java.io.DataInputStream.readInt(DataInputStream.java:370)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:539)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:477)
> 2011-07-09 01:39:29,946 ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable while processing event M_META_SERVER_SHUTDOWN
> java.net.SocketTimeoutException: Call to /162.2.6.187:20020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
> 	at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
> 	at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> 	at $Proxy6.getRegionInfo(Unknown Source)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:424)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRootRegionLocation(CatalogTracker.java:471)
> 	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:90)
> 	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:126)
> 	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
> 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> 	at java.io.FilterInputStream.read(FilterInputStream.java:116)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:299)
> 	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> 	at java.io.DataInputStream.readInt(DataInputStream.java:370)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:539)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:477)
> 2011-07-09 01:40:26,474 DEBUG org.apache.hadoop.hbase.master.ServerManager: Server 162-2-6-187,20020,1310146825674 came back up, removed it from the dead servers list
> 2011-07-09 01:40:26,515 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=162-2-6-187,20020,1310146825674, regionCount=0, userLoad=false
> 2011-07-09 01:40:28,410 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=162-2-6-187:20020; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
> ...
> 2011-07-09 01:53:33,052 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []
> 2011-07-09 01:58:33,060 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []
> 2011-07-09 02:03:33,061 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []
> 2011-07-09 02:08:33,061 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Updated] (HBASE-4093) When verifyAndAssignRoot throw exception, The deadServers state can not be changed.

Posted by "fulin wang (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-4093:
------------------------------

    Attachment: HBASE-4093-trunk_V3.patch
                HBASE-4093-0.90_V3.patch

Based on the review comments, I have made two patches, one for trunk and one for 0.90.
Please check them. Thanks.


[jira] [Updated] (HBASE-4093) When verifyAndAssignRoot throws exception, the deadServers state cannot be changed

Posted by "Ted Yu (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated HBASE-4093:
--------------------------

    Summary: When verifyAndAssignRoot throws exception, the deadServers state cannot be changed  (was: When verifyAndAssignRoot throw exception, The deadServers state can not be changed.)


[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throw exception, The deadServers state can not be changed.

Posted by "fulin wang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13066984#comment-13066984 ] 

fulin wang commented on HBASE-4093:
-----------------------------------

I'm not sure how long the verifyAndAssignRoot method may take to execute, so a fixed wait time cannot be used.
The 'hbase.catalog.verification.retries' property controls how many times the verification is retried.
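
For illustration only, a minimal sketch (not the actual patch code) of how such a configurable retry count could be read with the standard Hadoop Configuration API; the property name is the one mentioned above, and the default of 5 is an assumption for this sketch:

import org.apache.hadoop.conf.Configuration;

public class VerificationRetriesSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Bounded retry count for root verification, instead of a fixed wait time.
    // The default of 5 here is an assumption, not a documented HBase default.
    int retries = conf.getInt("hbase.catalog.verification.retries", 5);
    System.out.println("verification retries = " + retries);
  }
}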


[jira] [Updated] (HBASE-4093) When verifyAndAssignRoot throw exception, The deadServers state can not be changed.

Posted by "fulin wang (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-4093:
------------------------------

    Attachment: HBASE-4093-0.90.patch

verifyAndAssignRoot failed many times after the HMaster was restarted.


[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throw exception, The deadServers state can not be changed.

Posted by "fulin wang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067457#comment-13067457 ] 

fulin wang commented on HBASE-4093:
-----------------------------------

This is a protection for when the system is in a faulty state.
When 'this.data' in blockUntilAvailable is null, verifyAndAssignRoot() waits; when 'this.data' is not null, it does not wait.
This issue happened in the verifyRegionLocation() method and the exception was a SocketTimeoutException,
so I think sleeping one second and retrying five times should handle this fault state.
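
As a rough illustration of that idea (a sketch under stated assumptions, not the actual patch), the loop below sleeps one second between attempts and gives up after five tries; verifyRoot() is a hypothetical stand-in for the real call to CatalogTracker.verifyRootRegionLocation() made from ServerShutdownHandler.verifyAndAssignRoot():

import java.io.IOException;
import java.net.SocketTimeoutException;

public class VerifyAndAssignRootRetrySketch {

  // Hypothetical stand-in for the real verification call.
  interface RootVerifier {
    boolean verifyRoot() throws IOException;
  }

  static boolean verifyWithRetries(RootVerifier verifier) throws IOException {
    final int maxRetries = 5;        // retry five times
    final long sleepMillis = 1000L;  // sleep one second between attempts
    SocketTimeoutException last = null;
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        return verifier.verifyRoot();
      } catch (SocketTimeoutException e) {
        // Transient fault (e.g. the region server is still restarting): wait and retry.
        last = e;
        try {
          Thread.sleep(sleepMillis);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException("Interrupted while retrying root verification");
        }
      }
    }
    // All attempts timed out; rethrow so the caller can still finish its cleanup.
    throw last;
  }
}

Bounding the retries like this keeps the shutdown handler from blocking forever, while a single transient timeout no longer leaves the dead-server processing stuck.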

> When verifyAndAssignRoot throw exception, The deadServers state can not be changed.
> -----------------------------------------------------------------------------------
>
>                 Key: HBASE-4093
>                 URL: https://issues.apache.org/jira/browse/HBASE-4093
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 0.90.3
>            Reporter: fulin wang
>            Assignee: fulin wang
>         Attachments: HBASE-4093-0.90.patch, HBASE-4093-0.90_V2.patch, HBASE-4093-0.90_V3.patch, HBASE-4093-trunk_V2.patch, HBASE-4093-trunk_V3.patch, surefire-report.html
>
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> When verifyAndAssignRoot throw exception, The deadServers state can not be changed.
> The Hmaster log has a lot of 'Not running balancer because processing dead regionserver(s): []' information.
> HMaster log:
> 2011-07-09 01:38:31,820 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Closed path hdfs://162.2.16.6:9000/hbase/Htable_UFDR_035/fe7e51c0a74fac096cea8cdb3c9497a6/recovered.edits/0000000000204525422 (wrote 8 edits in 61583ms)
> 2011-07-09 01:38:31,836 ERROR org.apache.hadoop.hbase.master.MasterFileSystem: Failed splitting hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056
> java.io.IOException: hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352, entryStart=1878997244, pos=1879048192, end=2003890606, edit=80274
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.addFileInfoToException(SequenceFileLogReader.java:244)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:200)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:172)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:429)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:262)
> 	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:188)
> 	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:201)
> 	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:114)
> 	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Could not obtain block: blk_1310107715558_225636 file=/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2491)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2256)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2441)
> 	at java.io.DataInputStream.read(DataInputStream.java:132)
> 	at java.io.DataInputStream.readFully(DataInputStream.java:178)
> 	at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
> 	at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
> 	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1984)
> 	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1884)
> 	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1930)
> 	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:198)
> 	... 10 more
> 2011-07-09 01:38:33,052 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): [162-2-6-187,20020,1310107719056]
> 2011-07-09 01:39:29,946 WARN org.apache.hadoop.hbase.master.CatalogJanitor: Failed scan of catalog table
> java.net.SocketTimeoutException: Call to /162.2.6.187:20020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
> 	at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
> 	at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> 	at $Proxy6.getRegionInfo(Unknown Source)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:424)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:272)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:331)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMetaServerConnectionDefault(CatalogTracker.java:364)
> 	at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:255)
> 	at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:237)
> 	at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:116)
> 	at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:85)
> 	at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
> Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
> 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> 	at java.io.FilterInputStream.read(FilterInputStream.java:116)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:299)
> 	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> 	at java.io.DataInputStream.readInt(DataInputStream.java:370)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:539)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:477)
> 2011-07-09 01:39:29,946 ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable while processing event M_META_SERVER_SHUTDOWN
> java.net.SocketTimeoutException: Call to /162.2.6.187:20020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
> 	at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
> 	at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> 	at $Proxy6.getRegionInfo(Unknown Source)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:424)
> 	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRootRegionLocation(CatalogTracker.java:471)
> 	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:90)
> 	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:126)
> 	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
> 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> 	at java.io.FilterInputStream.read(FilterInputStream.java:116)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:299)
> 	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> 	at java.io.DataInputStream.readInt(DataInputStream.java:370)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:539)
> 	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:477)
> 2011-07-09 01:40:26,474 DEBUG org.apache.hadoop.hbase.master.ServerManager: Server 162-2-6-187,20020,1310146825674 came back up, removed it from the dead servers list
> 2011-07-09 01:40:26,515 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=162-2-6-187,20020,1310146825674, regionCount=0, userLoad=false
> 2011-07-09 01:40:28,410 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=162-2-6-187:20020; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1
> ...
> 2011-07-09 01:53:33,052 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []
> 2011-07-09 01:58:33,060 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []
> 2011-07-09 02:03:33,061 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []
> 2011-07-09 02:08:33,061 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): []


[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throw exception, The deadServers state can not be changed.

Posted by "fulin wang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067480#comment-13067480 ] 

fulin wang commented on HBASE-4093:
-----------------------------------

I think the 'hbase.catalog.verification.times' property is not strictly necessary. It only controls the number of retries, and if, say, 5 retries all fail we would have to restart the HMaster anyway.
Can we remove 'hbase.catalog.verification.times'? If you agree, I will make a patch. Thanks.
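
For reference, a minimal sketch of what the knob in question looks like when read from the master configuration. The property name comes from this discussion, but the class, default value, and surrounding code below are illustrative assumptions, not code from the patch:
{code}
import org.apache.hadoop.conf.Configuration;

// Illustrative only: shows how a retry-count property is typically read.
public class VerificationRetriesExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Keeping the property lets operators tune the count without a rebuild;
    // removing it, as proposed above, means hard-coding this value.
    int retries = conf.getInt("hbase.catalog.verification.times", 5);
    System.out.println("verification retries = " + retries);
  }
}
{code}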



[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throw exception, The deadServers state can not be changed.

Posted by "Ted Yu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067483#comment-13067483 ] 

Ted Yu commented on HBASE-4093:
-------------------------------

I think you're referring to hbase.catalog.verification.retries in patch 3.
We should keep it.


[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throw exception, The deadServers state can not be changed.

Posted by "Ted Yu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13066429#comment-13066429 ] 

Ted Yu commented on HBASE-4093:
-------------------------------

w.r.t. how much time loopVerifyAndAssignRoot() should wait:
{code}
+        "hbase.catalog.verification.times", 10);
+
+    long waitTime = this.server.getConfiguration().getLong(
+        "hbase.catalog.verification.loop.time", 1000);
{code}
Since you're not using exponential backoff, one parameter should be enough.

In fact, we already have the following in HMaster:
{code}
    long timeout = this.conf.getLong("hbase.catalog.verification.timeout", 1000);
{code}
Take a look at ZooKeeperNodeTracker.blockUntilAvailable().

I suggest renaming loopVerifyAndAssignRoot() to verifyAndAssignRootWithRetries().

w.r.t. InterruptedException handling:
{code}
+        } catch (InterruptedException e1) {
+          LOG.warn("Thread exception.", e1);
+        }
{code}
Current practice is to wrap e1 in an InterruptedIOException and rethrow it.
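
To make the suggestions above concrete, here is a minimal, self-contained sketch of a retry wrapper along the lines being discussed: a single configurable retry/wait pair, the verifyAndAssignRootWithRetries() name suggested above, and the InterruptedIOException rethrow idiom. The doVerifyAndAssignRoot() stub and the exact property names and defaults are assumptions for illustration, not the actual patch:
{code}
import java.io.IOException;
import java.io.InterruptedIOException;

import org.apache.hadoop.conf.Configuration;

public class RootVerificationRetrySketch {
  private final Configuration conf;

  public RootVerificationRetrySketch(Configuration conf) {
    this.conf = conf;
  }

  // Retry wrapper: fixed wait between attempts, no exponential backoff,
  // so one retry count plus the existing timeout property is enough.
  void verifyAndAssignRootWithRetries() throws IOException {
    int retries = conf.getInt("hbase.catalog.verification.retries", 10);
    long waitTime = conf.getLong("hbase.catalog.verification.timeout", 1000);
    while (true) {
      try {
        doVerifyAndAssignRoot();
        return;
      } catch (IOException e) {
        if (--retries <= 0) {
          throw e;  // out of attempts: let the caller see the failure
        }
        try {
          Thread.sleep(waitTime);
        } catch (InterruptedException e1) {
          // Rethrow as InterruptedIOException rather than only logging,
          // per the comment above.
          throw (InterruptedIOException) new InterruptedIOException().initCause(e1);
        }
      }
    }
  }

  // Stand-in for the single-attempt verifyAndAssignRoot(); always fails here
  // so the retry path can be exercised.
  private void doVerifyAndAssignRoot() throws IOException {
    throw new IOException("simulated root verification failure");
  }
}
{code}
Hypothetical usage: new RootVerificationRetrySketch(conf).verifyAndAssignRootWithRetries() would attempt the verification up to 10 times, one second apart, before surfacing the failure.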
