Posted to issues@hbase.apache.org by chen peng <ch...@hotmail.com> on 2010/06/14 20:25:13 UTC

Too many open files

hi, all:
        I've run into a problem after my program had been running for 28+ hours on a cluster of three machines, each with ulimit set to 32K.
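For reference, a quick way to confirm the limit the regionserver process actually runs under, and how many descriptors it currently holds (a minimal sketch; the pgrep pattern is an assumption, adjust it to your deployment):

    # PID of the region server (the match pattern is a guess)
    RS_PID=$(pgrep -f HRegionServer | head -1)
    # the ceiling the live process really has, regardless of limits.conf
    grep 'open files' /proc/$RS_PID/limits
    # descriptors the process holds right now
    ls /proc/$RS_PID/fd | wc -l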
............
2010-06-13 02:06:14,812 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971
2010-06-13 02:06:15,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177/739848001 available; sequence id is 15639538
2010-06-13 02:06:15,373 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN: nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
2010-06-13 02:06:15,589 INFO org.apache.hadoop.hbase.regionserver.HRegion: region nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177/1831848882 available; sequence id is 15639539
2010-06-13 02:06:15,645 INFO org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META updated, and report to master all successful. Old region=REGION => {NAME => 'nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971', STARTKEY => 'com.pacsun.shop:http/js_external/sj_flyout.js', ENDKEY => 'com.samash.www:http/webapp/wcs/stores/servlet/search_-1_10052_10002_UTF-8___t\x253A3\x252F\x252F\x253Assl\x252F\x252Fsa\x2Bbundle\x2Btaxonomy\x252F\x252F\x253AAccessories\x253ARecording\x2BAccessories\x253AAcoustic\x2BTreatment\x253ABass\x2BTraps__UnitsSold\x252F\x252F1_-1_20__________0_-1__DrillDown___182428_', ENCODED => 908568317, OFFLINE => true, SPLIT => true, TABLE => {{NAME => 'nutchtabletest', FAMILIES => [{NAME => 'bas', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'cnt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'cnttyp', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'fchi', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'fcht', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'hdrs', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ilnk', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'modt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'mtdt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'olnk', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prsstt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prtstt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prvfch', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prvsig', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'repr', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'rtrs', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'scr', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'sig', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'stt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ttl', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'txt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new regions: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724, nutchtabletest,com.samash.www:http/p/BR15M 15 2 Way Passive  Floor Monitor_-49972869,1276391174724. Split took 0sec
2010-06-13 02:06:15,645 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680
2010-06-13 02:06:15,663 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724
2010-06-13 02:06:15,663 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN: nutchtabletest,com.samash.www:http/p/BR15M 15 2 Way Passive  Floor Monitor_-49972869,1276391174724
2010-06-13 02:06:15,664 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724
2010-06-13 02:06:16,123 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-5104950836598570436_20226 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:19,582 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-5104950836598570436_20226 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:22,814 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-6330529819693039456_20275 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:23,474 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680 in 7sec
2010-06-13 02:06:23,474 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
2010-06-13 02:06:26,376 INFO org.apache.hadoop.hbase.regionserver.HRegion: region nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724/232099566 available; sequence id is 15639825
2010-06-13 02:06:26,376 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN: nutchtabletest,com.samash.www:http/p/BR15M 15 2 Way Passive  Floor Monitor_-49972869,1276391174724
2010-06-13 02:06:26,598 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-5772421768525630859_20164 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:29,612 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-1227684848175029882_20172 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:32,618 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-8420981703314551273_20168 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:32,672 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-2559191036262569688_20333 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:35,619 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-8420981703314551273_20168 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:35,674 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-2559191036262569688_20333 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:38,637 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-5563912881422417996_20180 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:38,675 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-2559191036262569688_20333 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:41,650 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_2343005765236386064_20192 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:41,677 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_-2559191036262569688_20333 file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
    at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
    at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
    at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)


2010-06-13 02:06:41,677 ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction/Split failed for region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
java.io.IOException: Could not obtain block: blk_-2559191036262569688_20333 file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
    at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
    at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
    at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)
2010-06-13 02:06:41,677 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
2010-06-13 02:06:41,693 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
2010-06-13 02:06:41,694 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-4724151989818868275_20334
2010-06-13 02:06:44,652 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_2343005765236386064_20192 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:47,653 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_2343005765236386064_20192 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_2197619404089718071_20334
2010-06-13 02:06:50,655 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_2343005765236386064_20192 file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
    at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
    at java.lang.Thread.run(Thread.java:619)


2010-06-13 02:06:50,655 WARN org.apache.hadoop.hbase.regionserver.Store: Failed open of hdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317; presumption is that file was corrupted at flush and lost edits picked up by commit log replay. Verify!
java.io.IOException: Could not obtain block: blk_2343005765236386064_20192 file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
    at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
    at java.lang.Thread.run(Thread.java:619)
2010-06-13 02:06:50,659 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-153353228097894218_20196 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:51,804 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:53,668 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_4832263854844000864_20200 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-8490642742553142526_20334
2010-06-13 02:06:54,806 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:56,669 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_4832263854844000864_20200 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:57,808 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:59,670 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_4832263854844000864_20200 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_8167205924627743813_20334
2010-06-13 02:07:00,809 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_-3334740230832671768_20314 file=/hbase/.META./1028785192/info/515957856915851220
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:105)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1291)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:98)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:68)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:72)
    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1304)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.initHeap(HRegion.java:1850)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1883)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1906)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1877)
    at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)


2010-06-13 02:07:00,809 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:07:02,671 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_4832263854844000864_20200 file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
    at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
    at java.lang.Thread.run(Thread.java:619)


2010-06-13 02:07:02,674 WARN org.apache.hadoop.hbase.regionserver.Store: Failed open of hdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317; presumption is that file was corrupted at flush and lost edits picked up by commit log replay. Verify!
java.io.IOException: Could not obtain block: blk_4832263854844000864_20200 file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
    at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
    at java.lang.Thread.run(Thread.java:619)
2010-06-13 02:07:02,676 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_8179737564656994784_20204 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:07:03,817 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /172.0.8.251:50010 for file /hbase/.META./1028785192/info/515957856915851220 for block -3334740230832671768: java.net.SocketException: Too many open files
    at sun.nio.ch.Net.socket0(Native Method)
    at sun.nio.ch.Net.socket(Net.java:94)
    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
    at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
    at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
    at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
    at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
    at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
    at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)


2010-06-13 02:07:03,820 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /172.0.8.248:50010 for file /hbase/.META./1028785192/info/515957856915851220 for block -3334740230832671768: java.net.SocketException: Too many open files
    at sun.nio.ch.Net.socket0(Native Method)
    at sun.nio.ch.Net.socket(Net.java:94)
    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
    at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
    at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
    at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
    at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
    at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
    at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)


2010-06-13 02:07:03,820 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: 
java.net.SocketException: Too many open files
    at sun.nio.ch.Net.socket0(Native Method)
    at sun.nio.ch.Net.socket(Net.java:94)
    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
    at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
    at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
    at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
    at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
    at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
    at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
2010-06-13 02:07:03,820 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:07:03,827 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020, call getClosestRowBefore([B@d0a973, [B@124c6ab, [B@16f25f6) from 172.0.8.251:36613: error: java.net.SocketException: Too many open files
java.net.SocketException: Too many open files
    at sun.nio.ch.Net.socket0(Native Method)
    at sun.nio.ch.Net.socket(Net.java:94)
    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
    at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
    at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
    at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
    at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
    at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
    at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
2010-06-13 02:07:05,677 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_8179737564656994784_20204 from any node: java.io.IOException: No live nodes contain current block
2010-06-13 02:07:05,699 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288) 		 	   		  
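
The pattern above (socket creation fails with "Too many open files", block reads then fail against every datanode, and the compactions abort) suggests the descriptor count crept upward over the 28 hours until it hit the 32K ceiling. A minimal sketch for watching that on the regionserver host, reusing the hypothetical RS_PID from above; a count that climbs steadily without levelling off points at a descriptor leak rather than a limit that is merely too low:

    # log the open-descriptor count once a minute
    while true; do
        echo "$(date '+%F %T') $(ls /proc/$RS_PID/fd | wc -l) open fds"
        sleep 60
    done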

Re: Too many open files

Posted by Ted Yu <yu...@gmail.com>.
This is what I see:
[sjc1-hadoop3.sjc1:hadoop 1249]grep ulimit hbase-hadoop-regionserver-sjc1-hadoop3.sjc1.carrieriq.com.log
ulimit -n 65535
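
Besides nofile, the troubleshooting page linked further down in this thread also calls out the HDFS datanode transceiver ceiling, which runs out with very similar symptoms. A sketch of the two settings as usually recommended together (the values are illustrative only, and 'hadoop' stands in for whichever user runs the daemons):

    # /etc/security/limits.conf
    hadoop  soft  nofile  32768
    hadoop  hard  nofile  32768

    # hdfs-site.xml on each datanode (note the property's historical spelling)
    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>4096</value>
    </property>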

On Tue, Jan 18, 2011 at 10:49 AM, Stack <st...@duboce.net> wrote:

> Ted:
>
> The first line in hbase logs is what hbase sees for ulimit.  Check
> your log.  What's it say?  There is a bit on ulimit on ubuntu, if that
> is what you are running, here in the hbase book:
>
> http://people.apache.org/~stack/hbase-0.90.0-candidate-3/docs/notsoquick.html#ulimit
>
> St.Ack
>
> On Tue, Jan 18, 2011 at 9:48 AM, Ted Yu <yu...@gmail.com> wrote:
> > In /etc/security/limits.conf on all nodes, I see:
> > *       soft    nofile  65535
> > *       hard    nofile  65535
> >
> > HBase 0.90 RC3 is deployed on this cluster, in regionserver log I see:
> >
> > 2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_628272724324759643_2784573 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
> > 2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-266346913956002831_2784643 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
> > ...
> > 2011-01-18 07:41:32,889 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_5858710028860745380_2785280 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
> >
> > Is there any other parameter I should tune?
> >
> > Thanks
> >
> > On Mon, Jun 14, 2010 at 12:02 PM, Ryan Rawson <ry...@gmail.com> wrote:
> >
> >> Please don't email the 'issues' list.
> >>
> >> http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A6
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
> >> >    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
> >> >    at
> org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
> >> >    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
> >> >    at
> org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
> >> >    at java.lang.Thread.run(Thread.java:619)
> >> > 2010-06-13 02:07:02,676 INFO org.apache.hadoop.hdfs.DFSClient:
> Couldnot
> >> obtain block blk_8179737564656994784_20204 from any node:
> >> java.io.IOException: No live nodes contain current block
> >> > 2010-06-13 02:07:03,817 WARN org.apache.hadoop.hdfs.DFSClient:
> Failedto
> >> connect to /172.0.8.251:50010 for
> >> file/hbase/.META./1028785192/info/515957856915851220 for
> >> block-3334740230832671768:java.net.SocketException: Too many open files
> >> >    at sun.nio.ch.Net.socket0(Native Method)
> >> >    at sun.nio.ch.Net.socket(Net.java:94)
> >> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
> >> >    at
> >>
> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
> >> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
> >> >    at
> >>
> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
> >> >    at
> >> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
> >> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
> >> >    at
> >> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
> >> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
> >> >    at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >> >    at
> org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
> >> >    at
> >>
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> >> >
> >> >
> >> > 2010-06-13 02:07:03,820 WARN org.apache.hadoop.hdfs.DFSClient:
> Failedto
> >> connect to /172.0.8.248:50010 for
> >> file/hbase/.META./1028785192/info/515957856915851220 for
> >> block-3334740230832671768:java.net.SocketException: Too many open files
> >> >    at sun.nio.ch.Net.socket0(Native Method)
> >> >    at sun.nio.ch.Net.socket(Net.java:94)
> >> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
> >> >    at
> >>
> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
> >> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
> >> >    at
> >>
> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
> >> >    at
> >> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
> >> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
> >> >    at
> >> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
> >> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
> >> >    at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >> >    at
> org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
> >> >    at
> >>
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> >> >
> >> >
> >> > 2010-06-13 02:07:03,820 ERROR
> >> org.apache.hadoop.hbase.regionserver.HRegionServer:
> >> > java.net.SocketException: Too many open files
> >> >    at sun.nio.ch.Net.socket0(Native Method)
> >> >    at sun.nio.ch.Net.socket(Net.java:94)
> >> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
> >> >    at
> >>
> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
> >> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
> >> >    at
> >>
> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
> >> >    at
> >> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
> >> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
> >> >    at
> >> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
> >> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
> >> >    at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >> >    at
> org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
> >> >    at
> >>
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> >> > 2010-06-13 02:07:03,820 INFO org.apache.hadoop.hdfs.DFSClient:
> Couldnot
> >> obtain block blk_-3334740230832671768_20314 from any node:
> >> java.io.IOException: No live nodes contain current block
> >> > 2010-06-13 02:07:03,827 INFO org.apache.hadoop.ipc.HBaseServer:
> IPCServer
> >> handler 6 on 60020, call getClosestRowBefore([B@d0a973,[B@124c6ab,
> >> [B@16f25f6) from 172.0.8.251:36613: error:java.net.SocketException: Too
> >> many open files
> >> > java.net.SocketException: Too many open files
> >> >    at sun.nio.ch.Net.socket0(Native Method)
> >> >    at sun.nio.ch.Net.socket(Net.java:94)
> >> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
> >> >    at
> >>
> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
> >> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
> >> >    at
> >>
> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
> >> >    at
> >> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
> >> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
> >> >    at
> >> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
> >> >    at
> >>
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
> >> >    at
> >>
> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
> >> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
> >> >    at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >> >    at
> org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
> >> >    at
> >>
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> >> > 2010-06-13 02:07:05,677 INFO org.apache.hadoop.hdfs.DFSClient:
> Couldnot
> >> obtain block blk_8179737564656994784_20204 from any node:
> >> java.io.IOException: No live nodes contain current block
> >> > 2010-06-13 02:07:05,699 WARN
> >> org.apache.hadoop.hdfs.DFSClient:DataStreamer Exception:
> >> java.io.IOException: Unable to create new block.
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
> >> >    at
> >>
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> >> > _________________________________________________________________
> >> > Hotmail: Trusted email with Microsoft’s powerful SPAM protection.
> >> > https://signup.live.com/signup.aspx?id=60969
> >>
> >
>

Re: Too many open files

Posted by Ted Yu <yu...@gmail.com>.
dfs.datanode.max.xcievers was set to 8023.
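
(For reference: this property goes in hdfs-site.xml on each datanode and only
takes effect after a datanode restart. A minimal sketch, with an illustrative
value rather than a recommendation:

    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>8192</value>
    </property>

Note the historical misspelling "xcievers" -- the property name must match it
exactly.)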

I will channel future questions, if any, on this subject to the Hadoop
mailing list.

Thanks

On Tue, Jan 18, 2011 at 11:51 AM, Stack <st...@duboce.net> wrote:

> Is the issue xceivers then, Ted?  Check your DN logs.
> St.Ack
>
> On Tue, Jan 18, 2011 at 10:49 AM, Stack <st...@duboce.net> wrote:
> > Ted:
> >
> > The first line in hbase logs is what hbase sees for ulimit.  Check
> > your log.  What's it say?  There is a bit on ulimit on Ubuntu, if that
> > is what you are running, in the hbase book:
> >
> http://people.apache.org/~stack/hbase-0.90.0-candidate-3/docs/notsoquick.html#ulimit
> >
> > St.Ack
> >
> > On Tue, Jan 18, 2011 at 9:48 AM, Ted Yu <yu...@gmail.com> wrote:
> >> In /etc/security/limits.conf on all nodes, I see:
> >> *       soft    nofile  65535
> >> *       hard    nofile  65535
> >>
> >> HBase 0.90 RC3 is deployed on this cluster; in the regionserver log I see:
> >>
> >> 2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not
> >> obtain block blk_628272724324759643_2784573 from any node:
> >> java.io.IOException: No live nodes contain current block. Will get new
> >> block locations from namenode and retry...
> >> 2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not
> >> obtain block blk_-266346913956002831_2784643 from any node:
> >> java.io.IOException: No live nodes contain current block. Will get new
> >> block locations from namenode and retry...
> >> ...
> >> 2011-01-18 07:41:32,889 INFO org.apache.hadoop.hdfs.DFSClient: Could not
> >> obtain block blk_5858710028860745380_2785280 from any node:
> >> java.io.IOException: No live nodes contain current block. Will get new
> >> block locations from namenode and retry...
> >>
> >> Is there any other parameter I should tune?
> >>
> >> Thanks
> >>
> >> On Mon, Jun 14, 2010 at 12:02 PM, Ryan Rawson <ry...@gmail.com> wrote:
> >>
> >>> Please don't email the 'issues' list.
> >>>
> >>> http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A6
> >>>
> >>> 2010/6/14 chen peng <ch...@hotmail.com>:
> >>> > [quoted original message and regionserver log snipped]
> >>>
> >>
> >
>

Re: Too many open files

Posted by Stack <st...@duboce.net>.
Is the issue xceivers then, Ted?  Check your DN logs.
St.Ack
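
(When a datanode hits its xceiver ceiling, its log carries a message along
these lines -- the exact wording varies by Hadoop version, so treat it as a
sketch:

    java.io.IOException: xceiverCount 4097 exceeds the limit of concurrent
    xcievers 4096

Grepping the datanode logs for "xceiver" is a quick check; the log directory
below is an assumption:

    grep -i xceiver /var/log/hadoop/*datanode*.log

If that message shows up, dfs.datanode.max.xcievers is the knob to raise.)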

On Tue, Jan 18, 2011 at 10:49 AM, Stack <st...@duboce.net> wrote:
> Ted:
>
> The first line in hbase logs is what hbase sees for ulimit.  Check
> your log.  What's it say?  There is a bit on ulimit on Ubuntu, if that
> is what you are running, in the hbase book:
> http://people.apache.org/~stack/hbase-0.90.0-candidate-3/docs/notsoquick.html#ulimit
>
> St.Ack
>
> On Tue, Jan 18, 2011 at 9:48 AM, Ted Yu <yu...@gmail.com> wrote:
>> In /etc/security/limits.conf on all nodes, I see:
>> *       soft    nofile  65535
>> *       hard    nofile  65535
>>
>> HBase 0.90 RC3 is deployed on this cluster, in regionserver log I see:
>>
>> 2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_628272724324759643_2784573 from any node:
>> java.io.IOException: No live nodes contain current block. Will get new block
>> locations from namenode and retry...
>> 2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-266346913956002831_2784643 from any node:
>> java.io.IOException: No live nodes contain current block. Will get new block
>> locations from namenode and retry...
>> ...
>> 2011-01-18 07:41:32,889 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_5858710028860745380_2785280 from any node:
>> java.io.IOException: No live nodes contain current block. Will get new block
>> locations from namenode and retry...
>>
>> Is there any other parameter I should tune?
>>
>> Thanks
>>
>> On Mon, Jun 14, 2010 at 12:02 PM, Ryan Rawson <ry...@gmail.com> wrote:
>>
>>> Please don't email the 'issues' list.
>>>
>>> http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A6
>>>
>>> 2010/6/14 chen peng <ch...@hotmail.com>:
>>> >
>>> > hi, all:
>>> >        I ran into a problem after my program had been running for 28+
>>> hours on a three-machine cluster with the ulimit set to 32K.
>>> > ............
>>> > 2010-06-13 02:06:14,812 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>> Closed nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971
>>> > 2010-06-13 02:06:15,373 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>> region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177/739848001 available;
>>> sequence id is 15639538
>>> > 2010-06-13 02:06:15,373
>>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> Worker: MSG_REGION_OPEN: nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
>>> > 2010-06-13 02:06:15,589 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>> region nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177/1831848882 available;
>>> sequence id is 15639539
>>> > 2010-06-13 02:06:15,645
>>> INFO org.apache.hadoop.hbase.regionserver.CompactSplitThread: region
>>> split, META updated, and report to master all successful. Old region=REGION =>
>>> {NAME
>>> =>'nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971',STARTKEY
>>> => 'com.pacsun.shop:http/js_external/sj_flyout.js',
>>> ENDKEY=>'com.samash.www:http/webapp/wcs/stores/servlet/search_-1_10052_10002_UTF-8___t\x253A3\x252F\x252F\x253Assl\x252F\x252Fsa\x2Bbundle\x2Btaxonomy\x252F\x252F\x253AAccessories\x253ARecording\x2BAccessories\x253AAcoustic\x2BTreatment\x253ABass\x2BTraps__UnitsSold\x252F\x252F1_-1_20__________0_-1__DrillDown___182428_',ENCODED
>>> => 908568317, OFFLINE => true, SPLIT => true, TABLE=> {{NAME =>
>>> 'nutchtabletest', FAMILIES => [{NAME => 'bas',COMPRESSION => 'NONE',
>>> VERSIONS => '3', TTL => '2147483647',BLOCKSIZE => '65536', IN_MEMORY =>
>>> 'false', BLOCKCACHE =>'true'}, {NAME => 'cnt', COMPRESSION => 'NONE',
>>> VERSIONS =>'3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY =
>>> >  >'false', BLOCKCACHE => 'true'}, {NAME => 'cnttyp', COMPRESSION=>
>>> 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE=> '65536', IN_MEMORY
>>> => 'false', BLOCKCACHE => 'true'}, {NAME=> 'fchi', COMPRESSION => 'NONE',
>>> VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE => '65536', IN_MEMORY =>
>>> 'false',BLOCKCACHE => 'true'}, {NAME => 'fcht', COMPRESSION =>
>>> 'NONE',VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536',IN_MEMORY
>>> => 'false', BLOCKCACHE => 'true'}, {NAME => 'hdrs',COMPRESSION => 'NONE',
>>> VERSIONS => '3', TTL => '2147483647',BLOCKSIZE => '65536', IN_MEMORY =>
>>> 'false', BLOCKCACHE =>'true'}, {NAME => 'ilnk', COMPRESSION => 'NONE',
>>> VERSIONS =>'3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY
>>> =>'false', BLOCKCACHE => 'true'}, {NAME => 'modt', COMPRESSION=> 'NONE',
>>> VERSIONS => '3', TTL => '2147483647', BLOCKSIZE=> '65536', IN_MEMORY =>
>>> 'false', BLOCKCACHE => 'true'}, {NAME=> 'mtdt', COMPRESSION => 'NONE',
>>> VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE => '65536',
>>> >  IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'olnk', COMPRESSION
>>> => 'NONE',VERSIONS => '3', TTL => '2147483647', BLOCKSIZE =>
>>> '65536',IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>'prsstt',
>>> COMPRESSION => 'NONE', VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE =>
>>> '65536', IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'prtstt',
>>> COMPRESSION =>'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE
>>> =>'65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>'prvfch',
>>> COMPRESSION => 'NONE', VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE =>
>>> '65536', IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'prvsig',
>>> COMPRESSION =>'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE
>>> =>'65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>'repr',
>>> COMPRESSION => 'NONE', VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE =>
>>> '65536', IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'rtrs',
>>> COMPRESSION => 'NONE',VERSIONS => '3', TTL => '2147483647', BLOCK
>>> >  SIZE => '65536',IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>
>>> 'scr',COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647',BLOCKSIZE
>>> => '65536', IN_MEMORY => 'false', BLOCKCACHE =>'true'}, {NAME => 'sig',
>>> COMPRESSION => 'NONE', VERSIONS =>'3', TTL => '2147483647', BLOCKSIZE =>
>>> '65536', IN_MEMORY =>'false', BLOCKCACHE => 'true'}, {NAME => 'stt',
>>> COMPRESSION =>'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE
>>> =>'65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>'ttl',
>>> COMPRESSION => 'NONE', VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE =>
>>> '65536', IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'txt',
>>> COMPRESSION => 'NONE',VERSIONS => '3', TTL => '2147483647', BLOCKSIZE =>
>>> '65536',IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new
>>> regions: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724, nutchtabletest,com.samash.www:http/p/BR15M
>>> 15 2 Way Passive  Floor Monitor_-49972869,1276391174724. Split took 0sec
>>> > 2010-06-13 02:06:15,645 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>> Starting compaction
>>> on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680
>>> > 2010-06-13 02:06:15,663
>>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> MSG_REGION_OPEN: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724
>>> > 2010-06-13 02:06:15,663
>>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> MSG_REGION_OPEN: nutchtabletest,com.samash.www:http/p/BR15M 15 2 Way Passive
>>>  Floor Monitor_-49972869,1276391174724
>>> > 2010-06-13 02:06:15,664
>>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> Worker: MSG_REGION_OPEN: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724
>>> > 2010-06-13 02:06:16,123 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-5104950836598570436_20226 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:19,582 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-5104950836598570436_20226 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:22,814 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-6330529819693039456_20275 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:23,474 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>> compaction completed
>>> on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680 in
>>> 7sec
>>> > 2010-06-13 02:06:23,474 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>> Starting compaction
>>> on region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
>>> > 2010-06-13 02:06:26,376 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>> region nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724/232099566 available;
>>> sequence id is 15639825
>>> > 2010-06-13 02:06:26,376
>>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> Worker: MSG_REGION_OPEN: nutchtabletest,com.samash.www:http/p/BR15M 15 2
>>> Way Passive  Floor Monitor_-49972869,1276391174724
>>> > 2010-06-13 02:06:26,598 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-5772421768525630859_20164 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:29,612 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-1227684848175029882_20172 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:32,618 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-8420981703314551273_20168 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:32,672 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-2559191036262569688_20333 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:35,619 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-8420981703314551273_20168 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:35,674 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-2559191036262569688_20333 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:38,637 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-5563912881422417996_20180 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:38,675 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-2559191036262569688_20333 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:41,650 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_2343005765236386064_20192 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:41,677 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read:
>>> java.io.IOException: Could not obtain
>>> block: blk_-2559191036262569688_20333 file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>>> >    at java.io.DataInputStream.readFully(DataInputStream.java:178)
>>> >    at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
>>> >    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)
>>> >
>>> >
>>> > 2010-06-13 02:06:41,677
>>> ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction/Split
>>> failed for
>>> region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
>>> > java.io.IOException: Could not obtain
>>> block: blk_-2559191036262569688_20333 file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>>> >    at java.io.DataInputStream.readFully(DataInputStream.java:178)
>>> >    at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
>>> >    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)
>>> > 2010-06-13 02:06:41,677 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>>> Starting compaction
>>> on region nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
>>> > 2010-06-13 02:06:41,693 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in createBlockOutputStream java.net.SocketException: Too many open files
>>> > 2010-06-13 02:06:41,694 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_-4724151989818868275_20334
>>> > 2010-06-13 02:06:44,652 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_2343005765236386064_20192 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:47,653 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_2343005765236386064_20192 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in createBlockOutputStream java.net.SocketException: Too many open files
>>> > 2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_2197619404089718071_20334
>>> > 2010-06-13 02:06:50,655 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read:
>>> java.io.IOException: Could not obtain
>>> block: blk_2343005765236386064_20192 file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>>> >    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>>> >    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>>> >    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>>> >    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>>> >    at java.lang.Thread.run(Thread.java:619)
>>> >
>>> >
>>> > 2010-06-13 02:06:50,655 WARN org.apache.hadoop.hbase.regionserver.Store:
>>> Failed open
>>> of hdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317; presumption
>>> is that file was corrupted at flush and lost edits picked up by commit log
>>> replay. Verify!
>>> > java.io.IOException: Could not obtain
>>> block: blk_2343005765236386064_20192 file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>>> >    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>>> >    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>>> >    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>>> >    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>>> >    at java.lang.Thread.run(Thread.java:619)
>>> > 2010-06-13 02:06:50,659 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-153353228097894218_20196 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:51,804 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-3334740230832671768_20314 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:53,668 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_4832263854844000864_20200 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in createBlockOutputStream java.net.SocketException: Too many open files
>>> > 2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_-8490642742553142526_20334
>>> > 2010-06-13 02:06:54,806 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-3334740230832671768_20314 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:56,669 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_4832263854844000864_20200 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:57,808 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-3334740230832671768_20314 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:59,670 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_4832263854844000864_20200 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>>> in createBlockOutputStream java.net.SocketException: Too many open files
>>> > 2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>>> block blk_8167205924627743813_20334
>>> > 2010-06-13 02:07:00,809 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read:
>>> java.io.IOException: Could not obtain
>>> block: blk_-3334740230832671768_20314 file=/hbase/.META./1028785192/info/515957856915851220
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>>> >    at java.io.DataInputStream.read(DataInputStream.java:132)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:105)
>>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1291)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:98)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:68)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:72)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1304)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.initHeap(HRegion.java:1850)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1883)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1906)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1877)
>>> >    at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
>>> >    at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>>> >    at
>>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>>> >
>>> >
>>> > 2010-06-13 02:07:00,809 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-3334740230832671768_20314 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:07:02,671 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read:
>>> java.io.IOException: Could not obtain
>>> block: blk_4832263854844000864_20200 file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>>> >    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>>> >    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>>> >    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>>> >    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>>> >    at java.lang.Thread.run(Thread.java:619)
>>> >
>>> >
>>> > 2010-06-13 02:07:02,674 WARN org.apache.hadoop.hbase.regionserver.Store:
>>> Failed open
>>> of hdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317; presumption
>>> is that file was corrupted at flush and lost edits picked up by commit log
>>> replay. Verify!
>>> > java.io.IOException: Could not obtain
>>> block: blk_4832263854844000864_20200 file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>>> >    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>>> >    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>>> >    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>>> >    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>>> >    at java.lang.Thread.run(Thread.java:619)
>>> > 2010-06-13 02:07:02,676 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_8179737564656994784_20204 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:07:03,817 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
>>> connect to /172.0.8.251:50010 for
>>> file /hbase/.META./1028785192/info/515957856915851220 for
>>> block -3334740230832671768: java.net.SocketException: Too many open files
>>> >    at sun.nio.ch.Net.socket0(Native Method)
>>> >    at sun.nio.ch.Net.socket(Net.java:94)
>>> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>>> >    at
>>> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>>> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>>> >    at
>>> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>>> >    at
>>> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>>> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>>> >    at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>>> >    at
>>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>>> >
>>> >
>>> > 2010-06-13 02:07:03,820 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
>>> connect to /172.0.8.248:50010 for
>>> file /hbase/.META./1028785192/info/515957856915851220 for
>>> block -3334740230832671768: java.net.SocketException: Too many open files
>>> >    at sun.nio.ch.Net.socket0(Native Method)
>>> >    at sun.nio.ch.Net.socket(Net.java:94)
>>> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>>> >    at
>>> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>>> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>>> >    at
>>> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>>> >    at
>>> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>>> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>>> >    at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>>> >    at
>>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>>> >
>>> >
>>> > 2010-06-13 02:07:03,820 ERROR
>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> > java.net.SocketException: Too many open files
>>> >    at sun.nio.ch.Net.socket0(Native Method)
>>> >    at sun.nio.ch.Net.socket(Net.java:94)
>>> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>>> >    at
>>> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>>> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>>> >    at
>>> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>>> >    at
>>> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>>> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>>> >    at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>>> >    at
>>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>>> > 2010-06-13 02:07:03,820 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_-3334740230832671768_20314 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:07:03,827 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
>>> handler 6 on 60020, call getClosestRowBefore([B@d0a973,[B@124c6ab,
>>> [B@16f25f6) from 172.0.8.251:36613: error: java.net.SocketException: Too
>>> many open files
>>> > java.net.SocketException: Too many open files
>>> >    at sun.nio.ch.Net.socket0(Native Method)
>>> >    at sun.nio.ch.Net.socket(Net.java:94)
>>> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>>> >    at
>>> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>>> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>>> >    at
>>> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>>> >    at
>>> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>>> >    at
>>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>>> >    at
>>> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>>> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>>> >    at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>>> >    at
>>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>>> > 2010-06-13 02:07:05,677 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>>> obtain block blk_8179737564656994784_20204 from any node:
>>> java.io.IOException: No live nodes contain current block
>>> > 2010-06-13 02:07:05,699 WARN
>>> org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception:
>>> java.io.IOException: Unable to create new block.
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>>> >    at
>>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>>>
>>
>

Re: Too many open files

Posted by Stack <st...@duboce.net>.
Ted:

The first line in hbase logs is what hbase sees for ulimit.  Check
your log.  What's it say?  There is a bit on ulimit on Ubuntu, if that
is what you are running, here in the hbase book:
http://people.apache.org/~stack/hbase-0.90.0-candidate-3/docs/notsoquick.html#ulimit
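
A quick way to double-check, sketched assuming Linux and that jps can
find the regionserver pid (substitute your own pid for <pid>):

  $ ulimit -n                              # limit for the current shell/user
  $ jps | grep HRegionServer               # pid of the regionserver
  $ grep 'open files' /proc/<pid>/limits   # limit the live JVM actually has

If /proc/<pid>/limits still reports 1024, the limits.conf change never
reached the process that matters.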

St.Ack
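
P.S. On Ubuntu, limits.conf entries are commonly ignored unless PAM
applies them to the session that launches HBase, so per the ulimit
section of that doc you usually also need

  session required  pam_limits.so

in /etc/pam.d/common-session, plus a fresh login for the user running
HBase, before the new nofile values take effect.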

On Tue, Jan 18, 2011 at 9:48 AM, Ted Yu <yu...@gmail.com> wrote:
> In /etc/security/limits.conf on all nodes, I see:
> *       soft    nofile  65535
> *       hard    nofile  65535
>
> HBase 0.90 RC3 is deployed on this cluster, in regionserver log I see:
>
> 2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not
> obtain block blk_628272724324759643_2784573 from any node:
> java.io.IOException: No live nodes contain current block. Will get new block
> locations from namenode and retry...
> 2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not
> obtain block blk_-266346913956002831_2784643 from any node:
> java.io.IOException: No live nodes contain current block. Will get new block
> locations from namenode and retry...
> ...
> 2011-01-18 07:41:32,889 INFO org.apache.hadoop.hdfs.DFSClient: Could not
> obtain block blk_5858710028860745380_2785280 from any node:
> java.io.IOException: No live nodes contain current block. Will get new block
> locations from namenode and retry...
>
> Is there any other parameter I should tune?
>
> Thanks
>
> On Mon, Jun 14, 2010 at 12:02 PM, Ryan Rawson <ry...@gmail.com> wrote:
>
>> Please don't email the 'issues' list.
>>
>> http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A6
>>
>> 2010/6/14 chen peng <ch...@hotmail.com>:
>> >
>> > hi, all:
>> >        I ran into a problem after my program had been running for 28+
>> hours on a three-machine cluster with the ulimit set to 32K.
>> > ............
>> > 2010-06-13 02:06:14,812 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>> Closed nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971
>> > 2010-06-13 02:06:15,373 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>> region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177/739848001 available;
>> sequence id is 15639538
>> > 2010-06-13 02:06:15,373
>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>> Worker: MSG_REGION_OPEN: nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
>> > 2010-06-13 02:06:15,589 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>> region nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177/1831848882 available;
>> sequence id is 15639539
>> > 2010-06-13 02:06:15,645
>> INFO org.apache.hadoop.hbase.regionserver.CompactSplitThread: region
>> split, META updated, and report to master all successful. Old region=REGION =>
>> {NAME
>> =>'nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971',STARTKEY
>> => 'com.pacsun.shop:http/js_external/sj_flyout.js',
>> ENDKEY=>'com.samash.www:http/webapp/wcs/stores/servlet/search_-1_10052_10002_UTF-8___t\x253A3\x252F\x252F\x253Assl\x252F\x252Fsa\x2Bbundle\x2Btaxonomy\x252F\x252F\x253AAccessories\x253ARecording\x2BAccessories\x253AAcoustic\x2BTreatment\x253ABass\x2BTraps__UnitsSold\x252F\x252F1_-1_20__________0_-1__DrillDown___182428_',ENCODED
>> => 908568317, OFFLINE => true, SPLIT => true, TABLE=> {{NAME =>
>> 'nutchtabletest', FAMILIES => [{NAME => 'bas',COMPRESSION => 'NONE',
>> VERSIONS => '3', TTL => '2147483647',BLOCKSIZE => '65536', IN_MEMORY =>
>> 'false', BLOCKCACHE =>'true'}, {NAME => 'cnt', COMPRESSION => 'NONE',
>> VERSIONS =>'3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY =
>> >  >'false', BLOCKCACHE => 'true'}, {NAME => 'cnttyp', COMPRESSION=>
>> 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE=> '65536', IN_MEMORY
>> => 'false', BLOCKCACHE => 'true'}, {NAME=> 'fchi', COMPRESSION => 'NONE',
>> VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE => '65536', IN_MEMORY =>
>> 'false',BLOCKCACHE => 'true'}, {NAME => 'fcht', COMPRESSION =>
>> 'NONE',VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536',IN_MEMORY
>> => 'false', BLOCKCACHE => 'true'}, {NAME => 'hdrs',COMPRESSION => 'NONE',
>> VERSIONS => '3', TTL => '2147483647',BLOCKSIZE => '65536', IN_MEMORY =>
>> 'false', BLOCKCACHE =>'true'}, {NAME => 'ilnk', COMPRESSION => 'NONE',
>> VERSIONS =>'3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY
>> =>'false', BLOCKCACHE => 'true'}, {NAME => 'modt', COMPRESSION=> 'NONE',
>> VERSIONS => '3', TTL => '2147483647', BLOCKSIZE=> '65536', IN_MEMORY =>
>> 'false', BLOCKCACHE => 'true'}, {NAME=> 'mtdt', COMPRESSION => 'NONE',
>> VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE => '65536',
>> >  IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'olnk', COMPRESSION
>> => 'NONE',VERSIONS => '3', TTL => '2147483647', BLOCKSIZE =>
>> '65536',IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>'prsstt',
>> COMPRESSION => 'NONE', VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE =>
>> '65536', IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'prtstt',
>> COMPRESSION =>'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE
>> =>'65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>'prvfch',
>> COMPRESSION => 'NONE', VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE =>
>> '65536', IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'prvsig',
>> COMPRESSION =>'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE
>> =>'65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>'repr',
>> COMPRESSION => 'NONE', VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE =>
>> '65536', IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'rtrs',
>> COMPRESSION => 'NONE',VERSIONS => '3', TTL => '2147483647', BLOCK
>> >  SIZE => '65536',IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>
>> 'scr',COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647',BLOCKSIZE
>> => '65536', IN_MEMORY => 'false', BLOCKCACHE =>'true'}, {NAME => 'sig',
>> COMPRESSION => 'NONE', VERSIONS =>'3', TTL => '2147483647', BLOCKSIZE =>
>> '65536', IN_MEMORY =>'false', BLOCKCACHE => 'true'}, {NAME => 'stt',
>> COMPRESSION =>'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE
>> =>'65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME =>'ttl',
>> COMPRESSION => 'NONE', VERSIONS => '3', TTL =>'2147483647', BLOCKSIZE =>
>> '65536', IN_MEMORY => 'false',BLOCKCACHE => 'true'}, {NAME => 'txt',
>> COMPRESSION => 'NONE',VERSIONS => '3', TTL => '2147483647', BLOCKSIZE =>
>> '65536',IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new
>> regions: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724, nutchtabletest,com.samash.www:http/p/BR15M
>> 15 2 Way Passive  Floor Monitor_-49972869,1276391174724. Split took 0sec
>> > 2010-06-13 02:06:15,645 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>> Starting compaction
>> on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680
>> > 2010-06-13 02:06:15,663
>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>> MSG_REGION_OPEN: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724
>> > 2010-06-13 02:06:15,663
>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>> MSG_REGION_OPEN: nutchtabletest,com.samash.www:http/p/BR15M 15 2 Way Passive
>>  Floor Monitor_-49972869,1276391174724
>> > 2010-06-13 02:06:15,664
>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>> Worker: MSG_REGION_OPEN: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724
>> > 2010-06-13 02:06:16,123 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-5104950836598570436_20226 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:19,582 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-5104950836598570436_20226 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:22,814 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-6330529819693039456_20275 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:23,474 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>> compaction completed
>> on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680 in
>> 7sec
>> > 2010-06-13 02:06:23,474 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>> Starting compaction
>> on region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
>> > 2010-06-13 02:06:26,376 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>> region nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724/232099566 available;
>> sequence id is 15639825
>> > 2010-06-13 02:06:26,376
>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer:
>> Worker: MSG_REGION_OPEN: nutchtabletest,com.samash.www:http/p/BR15M 15 2
>> Way Passive  Floor Monitor_-49972869,1276391174724
>> > 2010-06-13 02:06:26,598 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-5772421768525630859_20164 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:29,612 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-1227684848175029882_20172 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:32,618 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-8420981703314551273_20168 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:32,672 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-2559191036262569688_20333 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:35,619 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-8420981703314551273_20168 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:35,674 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-2559191036262569688_20333 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:38,637 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-5563912881422417996_20180 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:38,675 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-2559191036262569688_20333 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:41,650 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_2343005765236386064_20192 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:41,677 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read:
>> java.io.IOException: Could not obtain
>> block: blk_-2559191036262569688_20333 file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>> >    at java.io.DataInputStream.readFully(DataInputStream.java:178)
>> >    at java.io.DataInputStream.readFully(DataInputStream.java:152)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
>> >    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
>> >    at
>> org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)
>> >
>> >
>> > 2010-06-13 02:06:41,677
>> ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction/Split
>> failed for
>> region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
>> > java.io.IOException: Could not obtain
>> block: blk_-2559191036262569688_20333 file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>> >    at java.io.DataInputStream.readFully(DataInputStream.java:178)
>> >    at java.io.DataInputStream.readFully(DataInputStream.java:152)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
>> >    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
>> >    at
>> org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)
>> > 2010-06-13 02:06:41,677 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>> Starting compaction
>> on region nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
>> > 2010-06-13 02:06:41,693 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>> in createBlockOutputStream java.net.SocketException: Too many open files
>> > 2010-06-13 02:06:41,694 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>> block blk_-4724151989818868275_20334
>> > 2010-06-13 02:06:44,652 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_2343005765236386064_20192 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:47,653 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_2343005765236386064_20192 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>> in createBlockOutputStream java.net.SocketException: Too many open files
>> > 2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>> block blk_2197619404089718071_20334
>> > 2010-06-13 02:06:50,655 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read:
>> java.io.IOException: Could not obtain
>> block: blk_2343005765236386064_20192 file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>> >    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>> >    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>> >    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>> >    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>> >    at java.lang.Thread.run(Thread.java:619)
>> >
>> >
>> > 2010-06-13 02:06:50,655 WARN org.apache.hadoop.hbase.regionserver.Store:
>> Failed open
>> of hdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317; presumption
>> is that file was corrupted at flush and lost edits picked up by commit log
>> replay. Verify!
>> > java.io.IOException: Could not obtain
>> block: blk_2343005765236386064_20192 file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>> >    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>> >    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>> >    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>> >    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>> >    at java.lang.Thread.run(Thread.java:619)
>> > 2010-06-13 02:06:50,659 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-153353228097894218_20196 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:51,804 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-3334740230832671768_20314 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:53,668 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_4832263854844000864_20200 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>> in createBlockOutputStream java.net.SocketException: Too many open files
>> > 2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>> block blk_-8490642742553142526_20334
>> > 2010-06-13 02:06:54,806 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-3334740230832671768_20314 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:56,669 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_4832263854844000864_20200 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:57,808 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_-3334740230832671768_20314 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:59,670 INFO org.apache.hadoop.hdfs.DFSClient: Could not
>> obtain block blk_4832263854844000864_20200 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient: Exception
>> in createBlockOutputStream java.net.SocketException: Too many open files
>> > 2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning
>> block blk_8167205924627743813_20334
>> > 2010-06-13 02:07:00,809 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read:
>> java.io.IOException: Could not obtain
>> block: blk_-3334740230832671768_20314 file=/hbase/.META./1028785192/info/515957856915851220
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>> >    at java.io.DataInputStream.read(DataInputStream.java:132)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:105)
>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1291)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:98)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:68)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:72)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1304)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.initHeap(HRegion.java:1850)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1883)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1906)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1877)
>> >    at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
>> >    at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>> >    at
>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>> >
>> >
>> > 2010-06-13 02:07:00,809 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot
>> obtain block blk_-3334740230832671768_20314 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:07:02,671 WARN org.apache.hadoop.hdfs.DFSClient: DFSRead:
>> java.io.IOException: Could not obtain
>> block:blk_4832263854844000864_20200file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>> >    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>> >    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>> >    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>> >    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>> >    at java.lang.Thread.run(Thread.java:619)
>> >
>> >
>> > 2010-06-13 02:07:02,674 WARNorg.apache.hadoop.hbase.regionserver.Store:
>> Failed open
>> ofhdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317;presumption
>> is that file was corrupted at flush and lost edits pickedup by commit log
>> replay. Verify!
>> > java.io.IOException: Could not obtain
>> block:blk_4832263854844000864_20200file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>> >    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>> >    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>> >    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>> >    at
>> org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>> >    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>> >    at java.lang.Thread.run(Thread.java:619)
>> > 2010-06-13 02:07:02,676 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot
>> obtain block blk_8179737564656994784_20204 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:07:03,817 WARN org.apache.hadoop.hdfs.DFSClient: Failedto
>> connect to /172.0.8.251:50010 for
>> file/hbase/.META./1028785192/info/515957856915851220 for
>> block-3334740230832671768:java.net.SocketException: Too many open files
>> >    at sun.nio.ch.Net.socket0(Native Method)
>> >    at sun.nio.ch.Net.socket(Net.java:94)
>> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>> >    at
>> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>> >    at
>> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>> >    at
>> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>> >    at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>> >    at
>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>> >
>> >
>> > 2010-06-13 02:07:03,820 WARN org.apache.hadoop.hdfs.DFSClient: Failedto
>> connect to /172.0.8.248:50010 for
>> file/hbase/.META./1028785192/info/515957856915851220 for
>> block-3334740230832671768:java.net.SocketException: Too many open files
>> >    at sun.nio.ch.Net.socket0(Native Method)
>> >    at sun.nio.ch.Net.socket(Net.java:94)
>> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>> >    at
>> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>> >    at
>> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>> >    at
>> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>> >    at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>> >    at
>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>> >
>> >
>> > 2010-06-13 02:07:03,820 ERROR
>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>> > java.net.SocketException: Too many open files
>> >    at sun.nio.ch.Net.socket0(Native Method)
>> >    at sun.nio.ch.Net.socket(Net.java:94)
>> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>> >    at
>> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>> >    at
>> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>> >    at
>> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>> >    at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>> >    at
>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>> > 2010-06-13 02:07:03,820 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot
>> obtain block blk_-3334740230832671768_20314 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:07:03,827 INFO org.apache.hadoop.ipc.HBaseServer: IPCServer
>> handler 6 on 60020, call getClosestRowBefore([B@d0a973,[B@124c6ab,
>> [B@16f25f6) from 172.0.8.251:36613: error:java.net.SocketException: Too
>> many open files
>> > java.net.SocketException: Too many open files
>> >    at sun.nio.ch.Net.socket0(Native Method)
>> >    at sun.nio.ch.Net.socket(Net.java:94)
>> >    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>> >    at
>> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>> >    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>> >    at
>> org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>> >    at
>> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>> >    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>> >    at
>> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>> >    at
>> org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>> >    at
>> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>> >    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>> >    at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >    at java.lang.reflect.Method.invoke(Method.java:597)
>> >    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>> >    at
>> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>> > 2010-06-13 02:07:05,677 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot
>> obtain block blk_8179737564656994784_20204 from any node:
>> java.io.IOException: No live nodes contain current block
>> > 2010-06-13 02:07:05,699 WARN
>> org.apache.hadoop.hdfs.DFSClient:DataStreamer Exception:
>> java.io.IOException: Unable to create new block.
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>> >    at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>> > _________________________________________________________________
>> > Hotmail: Trusted email with Microsoft’s powerful SPAM protection.
>> > https://signup.live.com/signup.aspx?id=60969
>>
>

Re: Too many open files

Posted by Ted Yu <yu...@gmail.com>.
In /etc/security/limits.conf on all nodes, I see:
*       soft    nofile  65535
*       hard    nofile  65535
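
The limits.conf entries only take effect for sessions started after the change, so it is worth confirming what the running RegionServer actually inherited. A quick sketch of the check (the jps pattern and /proc paths are assumptions; adjust for your install and run as the user that owns the process):

# pid of the RegionServer (pattern is a guess; verify with plain jps)
RS_PID=$(jps | awk '/HRegionServer/ {print $1}')
# per-process cap as the kernel sees it
grep 'Max open files' /proc/$RS_PID/limits
# descriptors currently in use
ls /proc/$RS_PID/fd | wc -l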

HBase 0.90 RC3 is deployed on this cluster; in the regionserver log I see:

2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not
obtain block blk_628272724324759643_2784573 from any node:
java.io.IOException: No live nodes contain current block. Will get new block
locations from namenode and retry...
2011-01-18 07:41:32,887 INFO org.apache.hadoop.hdfs.DFSClient: Could not
obtain block blk_-266346913956002831_2784643 from any node:
java.io.IOException: No live nodes contain current block. Will get new block
locations from namenode and retry...
...
2011-01-18 07:41:32,889 INFO org.apache.hadoop.hdfs.DFSClient: Could not
obtain block blk_5858710028860745380_2785280 from any node:
java.io.IOException: No live nodes contain current block. Will get new block
locations from namenode and retry...

Are there other parameters I should tune?
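
For reference, the knob most often raised together with nofile on HBase-on-HDFS clusters of this vintage is the datanode transceiver ceiling in hdfs-site.xml; the property name really is spelled "xcievers", and 4096 below is only a commonly cited starting point, not a prescription:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>

The datanodes need a restart for the new ceiling to take effect.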

Thanks

On Mon, Jun 14, 2010 at 12:02 PM, Ryan Rawson <ry...@gmail.com> wrote:

> Please don't email the 'issues' list.
>
> http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A6
>
> 2010/6/14 chen peng <ch...@hotmail.com>:
> >
> > hi, all:
> >        I had met a question after my program continued for 28+ hours
> under the circumstances of cluster which have three machine that had set
> ulimit to 32K.
> > [...]

Re: Too many open files

Posted by Ryan Rawson <ry...@gmail.com>.
Please don't email the 'issues' list.

http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A6

2010/6/14 chen peng <ch...@hotmail.com>:
>
> hi, all:
>        I had met a question after my program continued for 28+ hours under the circumstances of cluster which have three machine that had set ulimit to 32K.
> [...]
> 2010-06-13 02:06:16,123 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-5104950836598570436_20226 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:19,582 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-5104950836598570436_20226 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:22,814 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-6330529819693039456_20275 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:23,474 INFOorg.apache.hadoop.hbase.regionserver.HRegion: compaction completed onregionnutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680in 7sec
> 2010-06-13 02:06:23,474 INFOorg.apache.hadoop.hbase.regionserver.HRegion: Starting compaction onregionnutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
> 2010-06-13 02:06:26,376 INFOorg.apache.hadoop.hbase.regionserver.HRegion: regionnutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724/232099566available; sequence id is 15639825
> 2010-06-13 02:06:26,376 INFOorg.apache.hadoop.hbase.regionserver.HRegionServer: Worker:MSG_REGION_OPEN: nutchtabletest,com.samash.www:http/p/BR15M 15 2 WayPassive  Floor Monitor_-49972869,1276391174724
> 2010-06-13 02:06:26,598 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-5772421768525630859_20164 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:29,612 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-1227684848175029882_20172 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:32,618 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-8420981703314551273_20168 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:32,672 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-2559191036262569688_20333 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:35,619 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-8420981703314551273_20168 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:35,674 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-2559191036262569688_20333 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:38,637 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-5563912881422417996_20180 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:38,675 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-2559191036262569688_20333 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:41,650 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_2343005765236386064_20192 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:41,677 WARN org.apache.hadoop.hdfs.DFSClient: DFSRead: java.io.IOException: Could not obtain block:blk_-2559191036262569688_20333file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>    at java.io.DataInputStream.readFully(DataInputStream.java:178)
>    at java.io.DataInputStream.readFully(DataInputStream.java:152)
>    at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
>    at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
>    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
>    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
>    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
>    at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)
>
>
> 2010-06-13 02:06:41,677 ERRORorg.apache.hadoop.hbase.regionserver.CompactSplitThread:Compaction/Split failed for regionnutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
> java.io.IOException: Could not obtain block:blk_-2559191036262569688_20333file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>    at java.io.DataInputStream.readFully(DataInputStream.java:178)
>    at java.io.DataInputStream.readFully(DataInputStream.java:152)
>    at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
>    at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
>    at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
>    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
>    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
>    at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)
> 2010-06-13 02:06:41,677 INFOorg.apache.hadoop.hbase.regionserver.HRegion: Starting compaction onregionnutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
> 2010-06-13 02:06:41,693 INFO org.apache.hadoop.hdfs.DFSClient:Exception in createBlockOutputStream java.net.SocketException: Too manyopen files
> 2010-06-13 02:06:41,694 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-4724151989818868275_20334
> 2010-06-13 02:06:44,652 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_2343005765236386064_20192 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:47,653 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_2343005765236386064_20192 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient:Exception in createBlockOutputStream java.net.SocketException: Too manyopen files
> 2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_2197619404089718071_20334
> 2010-06-13 02:06:50,655 WARN org.apache.hadoop.hdfs.DFSClient: DFSRead: java.io.IOException: Could not obtain block:blk_2343005765236386064_20192file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>    at java.lang.Thread.run(Thread.java:619)
>
>
> 2010-06-13 02:06:50,655 WARNorg.apache.hadoop.hbase.regionserver.Store: Failed open ofhdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317;presumption is that file was corrupted at flush and lost edits pickedup by commit log replay. Verify!
> java.io.IOException: Could not obtain block:blk_2343005765236386064_20192file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>    at java.lang.Thread.run(Thread.java:619)
> 2010-06-13 02:06:50,659 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-153353228097894218_20196 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:51,804 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:53,668 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_4832263854844000864_20200 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient:Exception in createBlockOutputStream java.net.SocketException: Too manyopen files
> 2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-8490642742553142526_20334
> 2010-06-13 02:06:54,806 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:56,669 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_4832263854844000864_20200 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:57,808 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:59,670 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_4832263854844000864_20200 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient:Exception in createBlockOutputStream java.net.SocketException: Too manyopen files
> 2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_8167205924627743813_20334
> 2010-06-13 02:07:00,809 WARN org.apache.hadoop.hdfs.DFSClient: DFSRead: java.io.IOException: Could not obtain block:blk_-3334740230832671768_20314file=/hbase/.META./1028785192/info/515957856915851220
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:105)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1291)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:98)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:68)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:72)
>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1304)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.initHeap(HRegion.java:1850)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1883)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1906)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1877)
>    at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>
>
> 2010-06-13 02:07:00,809 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:07:02,671 WARN org.apache.hadoop.hdfs.DFSClient: DFSRead: java.io.IOException: Could not obtain block:blk_4832263854844000864_20200file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>    at java.lang.Thread.run(Thread.java:619)
>
>
> 2010-06-13 02:07:02,674 WARNorg.apache.hadoop.hbase.regionserver.Store: Failed open ofhdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317;presumption is that file was corrupted at flush and lost edits pickedup by commit log replay. Verify!
> java.io.IOException: Could not obtain block:blk_4832263854844000864_20200file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>    at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>    at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>    at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>    at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>    at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>    at java.lang.Thread.run(Thread.java:619)
> 2010-06-13 02:07:02,676 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_8179737564656994784_20204 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:07:03,817 WARN org.apache.hadoop.hdfs.DFSClient: Failedto connect to /172.0.8.251:50010 for file/hbase/.META./1028785192/info/515957856915851220 for block-3334740230832671768:java.net.SocketException: Too many open files
>    at sun.nio.ch.Net.socket0(Native Method)
>    at sun.nio.ch.Net.socket(Net.java:94)
>    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>    at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>    at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>    at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>    at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>    at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>
>
> 2010-06-13 02:07:03,820 WARN org.apache.hadoop.hdfs.DFSClient: Failedto connect to /172.0.8.248:50010 for file/hbase/.META./1028785192/info/515957856915851220 for block-3334740230832671768:java.net.SocketException: Too many open files
>    at sun.nio.ch.Net.socket0(Native Method)
>    at sun.nio.ch.Net.socket(Net.java:94)
>    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>    at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>    at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>    at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>    at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>    at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>
>
> 2010-06-13 02:07:03,820 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.net.SocketException: Too many open files
>    at sun.nio.ch.Net.socket0(Native Method)
>    at sun.nio.ch.Net.socket(Net.java:94)
>    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>    at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>    at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>    at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>    at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>    at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> 2010-06-13 02:07:03,820 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:07:03,827 INFO org.apache.hadoop.ipc.HBaseServer: IPCServer handler 6 on 60020, call getClosestRowBefore([B@d0a973,[B@124c6ab, [B@16f25f6) from 172.0.8.251:36613: error:java.net.SocketException: Too many open files
> java.net.SocketException: Too many open files
>    at sun.nio.ch.Net.socket0(Native Method)
>    at sun.nio.ch.Net.socket(Net.java:94)
>    at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>    at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>    at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>    at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>    at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>    at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>    at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>    at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> 2010-06-13 02:07:05,677 INFO org.apache.hadoop.hdfs.DFSClient: Couldnot obtain block blk_8179737564656994784_20204 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:07:05,699 WARN org.apache.hadoop.hdfs.DFSClient:DataStreamer Exception: java.io.IOException: Unable to create new block.
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> _________________________________________________________________
> Hotmail: Trusted email with Microsoft’s powerful SPAM protection.
> https://signup.live.com/signup.aspx?id=60969