Posted to user@hbase.apache.org by Lucas Nazário dos Santos <na...@gmail.com> on 2009/06/17 15:45:24 UTC

Can't list table that exists inside HBase

Hi all,

I'm running HBase 0.19.3 with Hadoop 0.19.1 on a cluster of 2 machines
running Ubuntu Linux. Files are not being stored inside the /tmp folder.

The problem, which has already occurred 3 times, is that all data stored
in my table suddenly disappears after the entire cluster is restarted,
either because the computers shut down due to a power failure (in which
case the machines themselves are also restarted) or because the connection
between the computers was lost (sometimes the router crashes, and in that
case the machines are not restarted, only the Hadoop/HBase cluster).

Does anybody have a clue about what may be happening? Any help is appreciated.

Cheers,
Lucas


Some facts:

* The problem happens one to two days after the cluster is set up and Hadoop
jobs begin to insert data into HBase.

* When I list all HBase data inside HDFS, the table seems to be there, as
shown below. The table name is "document".

root@server2:/usr/local/hadoop-0.19.1# bin/hadoop dfs -ls /hbase
Found 8 items
drwxr-xr-x   - root supergroup          0 2009-06-16 10:00 /hbase/-ROOT-
drwxr-xr-x   - root supergroup          0 2009-06-16 10:00 /hbase/.META.
drwxr-xr-x   - root supergroup          0 2009-06-16 15:20 /hbase/document
-rw-r--r--   3 root supergroup          3 2009-06-16 10:00
/hbase/hbase.version
drwxr-xr-x   - root supergroup          0 2009-06-17 04:00
/hbase/log_192.168.1.2_1245157275572_60020
drwxr-xr-x   - root supergroup          0 2009-06-17 09:26
/hbase/log_192.168.1.2_1245241565117_60020
drwxr-xr-x   - root supergroup          0 2009-06-16 11:00
/hbase/log_192.168.1.3_1245157227327_60020
drwxr-xr-x   - root supergroup          0 2009-06-17 09:26
/hbase/log_192.168.1.3_1245241512351_60020


* HBase logs don't seem to indicate anything wrong.


Master log (the region server log is below):

2009-06-17 04:40:27,097 INFO org.apache.hadoop.hbase.master.BaseScanner: All
1 .META. region(s) scanned
2009-06-17 04:41:27,007 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.rootScanner scanning meta region {regionname: -ROOT-,,0,
startKey: <>, server: 192.168.1.3:60020}
2009-06-17 04:41:27,034 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.rootScanner scan of 1 row(s) of meta region {regionname:
-ROOT-,,0, startKey: <>, server: 192.168.1.3:60020} complete
2009-06-17 04:41:27,092 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.metaScanner scanning meta region {regionname: .META.,,1,
startKey: <>, server: 192.168.1.2:60020}
2009-06-17 04:41:27,109 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.metaScanner scan of 1 row(s) of meta region {regionname:
.META.,,1, startKey: <>, server: 192.168.1.2:60020} complete
2009-06-17 04:41:27,109 INFO org.apache.hadoop.hbase.master.BaseScanner: All
1 .META. region(s) scanned
Wed Jun 17 09:25:09 BRT 2009 Starting master on server2
ulimit -n 1024
2009-06-17 09:25:10,443 INFO org.apache.hadoop.hbase.master.HMaster:
vmName=Java HotSpot(TM) Server VM, vmVendor=Sun Microsystems Inc.,
vmVersion=10.0-b23
2009-06-17 09:25:10,443 INFO org.apache.hadoop.hbase.master.HMaster:
vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError,
-Dhbase.log.dir=/usr/local/hbase-0.19.3/bin/../logs,
-Dhbase.log.file=hbase-root-master-server2.log,
-Dhbase.home.dir=/usr/local/hbase-0.19.3/bin/.., -Dhbase.id.str=root,
-Dhbase.root.logger=INFO,DRFA,
-Djava.library.path=/usr/local/hbase-0.19.3/bin/../lib/native/Linux-i386-32]
2009-06-17 09:25:11,574 INFO org.apache.hadoop.hbase.master.HMaster: Waiting
for dfs to exit safe mode...
2009-06-17 09:25:21,577 INFO org.apache.hadoop.hbase.master.HMaster: Waiting
for dfs to exit safe mode...
2009-06-17 09:25:31,580 INFO org.apache.hadoop.hbase.master.HMaster: Waiting
for dfs to exit safe mode...
2009-06-17 09:25:41,583 INFO org.apache.hadoop.hbase.master.HMaster: Waiting
for dfs to exit safe mode...
2009-06-17 09:25:51,586 INFO org.apache.hadoop.hbase.master.HMaster: Waiting
for dfs to exit safe mode...
2009-06-17 09:26:01,656 INFO org.apache.hadoop.hbase.master.HMaster: Root
region dir: hdfs://server2:9000/hbase/-ROOT-/70236052
2009-06-17 09:26:02,183 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
Initializing RPC Metrics with hostName=HMaster, port=60000
2009-06-17 09:26:02,421 INFO org.apache.hadoop.hbase.master.HMaster: HMaster
initialized on 192.168.1.3:60000
2009-06-17 09:26:02,432 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=Master, sessionId=HMaster
2009-06-17 09:26:02,433 INFO
org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
2009-06-17 09:26:03,029 INFO org.mortbay.util.Credential: Checking Resource
aliases
2009-06-17 09:26:03,062 INFO org.mortbay.http.HttpServer: Version
Jetty/5.1.4
2009-06-17 09:26:03,063 INFO org.mortbay.util.Container: Started
HttpContext[/logs,/logs]
2009-06-17 09:26:03,561 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.WebApplicationHandler@162e295
2009-06-17 09:26:03,667 INFO org.mortbay.util.Container: Started
WebApplicationContext[/static,/static]
2009-06-17 09:26:03,865 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.WebApplicationHandler@3c9c31
2009-06-17 09:26:03,869 INFO org.mortbay.util.Container: Started
WebApplicationContext[/,/]
2009-06-17 09:26:04,085 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.WebApplicationHandler@a23610
2009-06-17 09:26:04,113 INFO org.mortbay.util.Container: Started
WebApplicationContext[/api,rest]
2009-06-17 09:26:04,198 INFO org.mortbay.http.SocketListener: Started
SocketListener on 0.0.0.0:60010
2009-06-17 09:26:04,199 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.Server@da3a1e
2009-06-17 09:26:04,199 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
Responder: starting
2009-06-17 09:26:04,264 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 1 on 60000: starting
2009-06-17 09:26:04,241 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 0 on 60000: starting
2009-06-17 09:26:04,239 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
listener on 60000: starting
2009-06-17 09:26:04,265 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 2 on 60000: starting
2009-06-17 09:26:04,289 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 3 on 60000: starting
2009-06-17 09:26:04,289 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 4 on 60000: starting
2009-06-17 09:26:04,294 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 5 on 60000: starting
2009-06-17 09:26:04,303 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 6 on 60000: starting
2009-06-17 09:26:04,305 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 7 on 60000: starting
2009-06-17 09:26:04,310 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 8 on 60000: starting
2009-06-17 09:26:04,310 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 9 on 60000: starting
2009-06-17 09:26:04,312 INFO org.apache.hadoop.hbase.master.ServerManager:
Received start message from: 192.168.1.3:60020
2009-06-17 09:26:04,317 INFO org.apache.hadoop.hbase.master.ServerManager:
Received start message from: 192.168.1.2:60020
2009-06-17 09:26:05,266 INFO org.apache.hadoop.hbase.master.RegionManager:
Assigning region -ROOT-,,0 to 192.168.1.3:60020
2009-06-17 09:26:06,961 INFO org.apache.hadoop.hbase.master.RegionManager:
in safe mode
2009-06-17 09:26:08,277 INFO org.apache.hadoop.hbase.master.ServerManager:
Received MSG_REPORT_OPEN: -ROOT-,,0: safeMode=false from 192.168.1.3:60020
2009-06-17 09:26:08,278 INFO org.apache.hadoop.hbase.master.RegionManager:
exiting safe mode
2009-06-17 09:26:08,279 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.rootScanner scanning meta region {regionname: -ROOT-,,0,
startKey: <>, server: 192.168.1.3:60020}
2009-06-17 09:26:08,372 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.rootScanner scan of 1 row(s) of meta region {regionname:
-ROOT-,,0, startKey: <>, server: 192.168.1.3:60020} complete
2009-06-17 09:26:09,966 INFO org.apache.hadoop.hbase.master.RegionManager:
Assigning region .META.,,1 to 192.168.1.2:60020
2009-06-17 09:26:12,973 INFO org.apache.hadoop.hbase.master.ServerManager:
Received MSG_REPORT_OPEN: .META.,,1: safeMode=false from 192.168.1.2:60020
2009-06-17 09:26:12,978 INFO
org.apache.hadoop.hbase.master.ProcessRegionOpen$1: .META.,,1 open on
192.168.1.2:60020
2009-06-17 09:26:12,978 INFO
org.apache.hadoop.hbase.master.ProcessRegionOpen$1: updating row .META.,,1
in region -ROOT-,,0 with startcode 1245241565117 and server
192.168.1.2:60020
2009-06-17 09:27:02,434 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.rootScanner scanning meta region {regionname: -ROOT-,,0,
startKey: <>, server: 192.168.1.3:60020}
2009-06-17 09:27:02,435 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.metaScanner scanning meta region {regionname: .META.,,1,
startKey: <>, server: 192.168.1.2:60020}
2009-06-17 09:27:02,471 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.rootScanner scan of 1 row(s) of meta region {regionname:
-ROOT-,,0, startKey: <>, server: 192.168.1.3:60020} complete
2009-06-17 09:27:02,485 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.metaScanner scan of 0 row(s) of meta region {regionname:
.META.,,1, startKey: <>, server: 192.168.1.2:60020} complete
2009-06-17 09:27:02,485 INFO org.apache.hadoop.hbase.master.BaseScanner: All
1 .META. region(s) scanned


Region server log:

Tue Jun 16 10:00:26 BRT 2009 Starting regionserver on server2
ulimit -n 1024
2009-06-16 10:00:27,107 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer:
vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError,
-Dhbase.log.dir=/usr/local/hbase-0.19.3/bin/../logs,
-Dhbase.log.file=hbase-root-regionserver-server2.log,
-Dhbase.home.dir=/usr/local/hbase-0.19.3/bin/.., -Dhbase.id.str=root,
-Dhbase.root.logger=INFO,DRFA,
-Djava.library.path=/usr/local/hbase-0.19.3/bin/../lib/native/Linux-i386-32]
2009-06-16 10:00:27,242 INFO
org.apache.hadoop.hbase.regionserver.MemcacheFlusher:
globalMemcacheLimit=385.2m, globalMemcacheLimitLowMark=240.8m,
maxHeap=963.0m
2009-06-16 10:00:27,246 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 10000000ms
2009-06-16 10:00:27,323 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
Initializing RPC Metrics with hostName=HRegionServer, port=60020
2009-06-16 10:00:28,287 INFO org.apache.hadoop.hbase.regionserver.HLog: HLog
configuration: blocksize=67108864, maxlogentries=100000,
flushlogentries=100, optionallogflushinternal=10000ms
2009-06-16 10:00:28,323 INFO org.apache.hadoop.hbase.regionserver.HLog: New
log writer:
/hbase/log_192.168.1.3_1245157227327_60020/hlog.dat.1245157228288
2009-06-16 10:00:28,326 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=RegionServer,
sessionId=regionserver/0:0:0:0:0:0:0:0:60020
2009-06-16 10:00:28,327 INFO
org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics:
Initialized
2009-06-16 10:00:28,421 INFO org.mortbay.util.Credential: Checking Resource
aliases
2009-06-16 10:00:28,425 INFO org.mortbay.http.HttpServer: Version
Jetty/5.1.4
2009-06-16 10:00:28,426 INFO org.mortbay.util.Container: Started
HttpContext[/logs,/logs]
2009-06-16 10:00:28,736 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.WebApplicationHandler@d75415
2009-06-16 10:00:28,792 INFO org.mortbay.util.Container: Started
WebApplicationContext[/static,/static]
2009-06-16 10:00:28,909 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.WebApplicationHandler@1e78c96
2009-06-16 10:00:28,912 INFO org.mortbay.util.Container: Started
WebApplicationContext[/,/]
2009-06-16 10:00:28,914 INFO org.mortbay.http.SocketListener: Started
SocketListener on 0.0.0.0:60030
2009-06-16 10:00:28,914 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.Server@737371
2009-06-16 10:00:28,914 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
Responder: starting
2009-06-16 10:00:28,916 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
listener on 60020: starting
2009-06-16 10:00:28,916 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 0 on 60020: starting
2009-06-16 10:00:28,916 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 1 on 60020: starting
2009-06-16 10:00:28,917 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 2 on 60020: starting
2009-06-16 10:00:28,917 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 3 on 60020: starting
2009-06-16 10:00:28,917 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 4 on 60020: starting
2009-06-16 10:00:28,917 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 5 on 60020: starting
2009-06-16 10:00:28,917 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 6 on 60020: starting
2009-06-16 10:00:28,918 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 7 on 60020: starting
2009-06-16 10:00:28,938 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 8 on 60020: starting
2009-06-16 10:00:28,965 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: HRegionServer started
at: 192.168.1.3:60020
2009-06-16 10:00:28,966 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 9 on 60020: starting
2009-06-16 10:00:28,970 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
-ROOT-,,0: safeMode=false
2009-06-16 10:00:28,971 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
-ROOT-,,0: safeMode=false
2009-06-16 10:00:29,093 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded
the native-hadoop library
2009-06-16 10:00:29,095 INFO org.apache.hadoop.io.compress.zlib.ZlibFactory:
Successfully loaded & initialized native-zlib library
2009-06-16 10:00:29,096 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2009-06-16 10:00:29,096 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2009-06-16 10:00:29,096 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2009-06-16 10:00:29,096 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2009-06-16 10:00:29,109 INFO org.apache.hadoop.hbase.regionserver.HRegion:
region -ROOT-,,0/70236052 available
2009-06-16 10:00:48,329 INFO org.apache.hadoop.hbase.regionserver.HRegion:
starting  compaction on region -ROOT-,,0
2009-06-16 10:00:48,370 INFO org.apache.hadoop.hbase.regionserver.HRegion:
compaction completed on region -ROOT-,,0 in 0sec
2009-06-16 10:10:08,972 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: compactions no longer
limited
2009-06-16 11:00:28,451 INFO org.apache.hadoop.hbase.regionserver.HLog:
Closed
hdfs://server2:9000/hbase/log_192.168.1.3_1245157227327_60020/hlog.dat.0,
entries=2. New log writer:
/hbase/log_192.168.1.3_1245157227327_60020/hlog.dat.1245160828447
Wed Jun 17 09:25:11 BRT 2009 Starting regionserver on server2
ulimit -n 1024
2009-06-17 09:25:12,100 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer:
vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError,
-Dhbase.log.dir=/usr/local/hbase-0.19.3/bin/../logs,
-Dhbase.log.file=hbase-root-regionserver-server2.log,
-Dhbase.home.dir=/usr/local/hbase-0.19.3/bin/.., -Dhbase.id.str=root,
-Dhbase.root.logger=INFO,DRFA,
-Djava.library.path=/usr/local/hbase-0.19.3/bin/../lib/native/Linux-i386-32]
2009-06-17 09:25:12,224 INFO
org.apache.hadoop.hbase.regionserver.MemcacheFlusher:
globalMemcacheLimit=385.2m, globalMemcacheLimitLowMark=240.8m,
maxHeap=963.0m
2009-06-17 09:25:12,228 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 10000000ms
2009-06-17 09:25:12,347 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
Initializing RPC Metrics with hostName=HRegionServer, port=60020
2009-06-17 09:25:13,406 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 0 time(s).
2009-06-17 09:25:14,406 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 1 time(s).
2009-06-17 09:25:15,407 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 2 time(s).
2009-06-17 09:25:16,408 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 3 time(s).
2009-06-17 09:25:17,409 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 4 time(s).
2009-06-17 09:25:18,409 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 5 time(s).
2009-06-17 09:25:19,410 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 6 time(s).
2009-06-17 09:25:20,411 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 7 time(s).
2009-06-17 09:25:21,411 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 8 time(s).
2009-06-17 09:25:22,412 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 9 time(s).
2009-06-17 09:25:22,415 INFO org.apache.hadoop.ipc.HbaseRPC: Server at
server2/192.168.1.3:60000 not available yet, Zzzzz...
2009-06-17 09:25:24,416 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 0 time(s).
2009-06-17 09:25:25,416 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 1 time(s).
2009-06-17 09:25:26,417 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 2 time(s).
2009-06-17 09:25:27,473 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 3 time(s).
2009-06-17 09:25:28,473 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 4 time(s).
2009-06-17 09:25:29,474 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 5 time(s).
2009-06-17 09:25:30,475 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 6 time(s).
2009-06-17 09:25:31,475 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 7 time(s).
2009-06-17 09:25:32,476 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 8 time(s).
2009-06-17 09:25:33,477 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 9 time(s).
2009-06-17 09:25:33,478 INFO org.apache.hadoop.ipc.HbaseRPC: Server at
server2/192.168.1.3:60000 not available yet, Zzzzz...
2009-06-17 09:25:35,479 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 0 time(s).
2009-06-17 09:25:36,479 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 1 time(s).
2009-06-17 09:25:37,480 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 2 time(s).
2009-06-17 09:25:38,481 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 3 time(s).
2009-06-17 09:25:39,481 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 4 time(s).
2009-06-17 09:25:40,482 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 5 time(s).
2009-06-17 09:25:41,483 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 6 time(s).
2009-06-17 09:25:42,484 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 7 time(s).
2009-06-17 09:25:43,484 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 8 time(s).
2009-06-17 09:25:44,485 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 9 time(s).
2009-06-17 09:25:44,486 INFO org.apache.hadoop.ipc.HbaseRPC: Server at
server2/192.168.1.3:60000 not available yet, Zzzzz...
2009-06-17 09:25:46,487 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 0 time(s).
2009-06-17 09:25:47,487 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 1 time(s).
2009-06-17 09:25:48,488 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 2 time(s).
2009-06-17 09:25:49,489 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 3 time(s).
2009-06-17 09:25:50,490 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 4 time(s).
2009-06-17 09:25:51,490 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 5 time(s).
2009-06-17 09:25:52,491 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 6 time(s).
2009-06-17 09:25:53,492 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 7 time(s).
2009-06-17 09:25:54,492 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 8 time(s).
2009-06-17 09:25:55,493 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 9 time(s).
2009-06-17 09:25:55,494 INFO org.apache.hadoop.ipc.HbaseRPC: Server at
server2/192.168.1.3:60000 not available yet, Zzzzz...
2009-06-17 09:25:57,495 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 0 time(s).
2009-06-17 09:25:58,495 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 1 time(s).
2009-06-17 09:25:59,496 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 2 time(s).
2009-06-17 09:26:00,497 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 3 time(s).
2009-06-17 09:26:01,497 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 4 time(s).
2009-06-17 09:26:02,498 INFO org.apache.hadoop.ipc.HBaseClass: Retrying
connect to server: server2/192.168.1.3:60000. Already tried 5 time(s).
2009-06-17 09:26:04,480 INFO org.apache.hadoop.hbase.regionserver.HLog: HLog
configuration: blocksize=67108864, maxlogentries=100000,
flushlogentries=100, optionallogflushinternal=10000ms
2009-06-17 09:26:04,545 INFO org.apache.hadoop.hbase.regionserver.HLog: New
log writer:
/hbase/log_192.168.1.3_1245241512351_60020/hlog.dat.1245241564480
2009-06-17 09:26:04,548 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=RegionServer,
sessionId=regionserver/0:0:0:0:0:0:0:0:60020
2009-06-17 09:26:04,548 INFO
org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics:
Initialized
2009-06-17 09:26:04,644 INFO org.mortbay.util.Credential: Checking Resource
aliases
2009-06-17 09:26:04,649 INFO org.mortbay.http.HttpServer: Version
Jetty/5.1.4
2009-06-17 09:26:04,650 INFO org.mortbay.util.Container: Started
HttpContext[/logs,/logs]
2009-06-17 09:26:05,025 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.WebApplicationHandler@109ea96
2009-06-17 09:26:05,053 INFO org.mortbay.util.Container: Started
WebApplicationContext[/static,/static]
2009-06-17 09:26:05,193 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.WebApplicationHandler@15d3388
2009-06-17 09:26:05,196 INFO org.mortbay.util.Container: Started
WebApplicationContext[/,/]
2009-06-17 09:26:05,217 INFO org.mortbay.http.SocketListener: Started
SocketListener on 0.0.0.0:60030
2009-06-17 09:26:05,217 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.Server@131303f
2009-06-17 09:26:05,235 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
Responder: starting
2009-06-17 09:26:05,258 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
listener on 60020: starting
2009-06-17 09:26:05,259 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 0 on 60020: starting
2009-06-17 09:26:05,261 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 1 on 60020: starting
2009-06-17 09:26:05,261 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 2 on 60020: starting
2009-06-17 09:26:05,261 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 3 on 60020: starting
2009-06-17 09:26:05,261 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 4 on 60020: starting
2009-06-17 09:26:05,261 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 5 on 60020: starting
2009-06-17 09:26:05,261 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 6 on 60020: starting
2009-06-17 09:26:05,261 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 7 on 60020: starting
2009-06-17 09:26:05,262 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 8 on 60020: starting
2009-06-17 09:26:05,262 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: HRegionServer started
at: 192.168.1.3:60020
2009-06-17 09:26:05,262 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
handler 9 on 60020: starting
2009-06-17 09:26:05,268 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
-ROOT-,,0: safeMode=false
2009-06-17 09:26:05,269 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
-ROOT-,,0: safeMode=false
2009-06-17 09:26:05,686 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded
the native-hadoop library
2009-06-17 09:26:05,689 INFO org.apache.hadoop.io.compress.zlib.ZlibFactory:
Successfully loaded & initialized native-zlib library
2009-06-17 09:26:05,690 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2009-06-17 09:26:05,691 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2009-06-17 09:26:05,706 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2009-06-17 09:26:05,707 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2009-06-17 09:26:05,717 INFO org.apache.hadoop.hbase.regionserver.HRegion:
region -ROOT-,,0/70236052 available
2009-06-17 09:26:24,551 INFO org.apache.hadoop.hbase.regionserver.HRegion:
starting  compaction on region -ROOT-,,0
2009-06-17 09:26:24,576 INFO org.apache.hadoop.hbase.regionserver.HRegion:
compaction completed on region -ROOT-,,0 in 0sec
2009-06-17 09:35:45,278 INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: compactions no longer
limited
2009-06-17 10:25:14,623 INFO org.apache.hadoop.hbase.regionserver.HLog:
Closed
hdfs://server2:9000/hbase/log_192.168.1.3_1245241512351_60020/hlog.dat.0,
entries=2. New log writer:
/hbase/log_192.168.1.3_1245241512351_60020/hlog.dat.1245245114620

Re: Can't list table that exists inside HBase

Posted by Erik Holstad <er...@gmail.com>.
Hi Lucas!
Yeah, have a look at HBaseAdmin and you will find flush and compact. Not
sure that compact is going to make a big difference in your case, since you
only have one flush or so per day, but it might be nice for you to run it
too. Running a compaction means that all your flushed files will be
rewritten into a single file.

Regards Erik
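
A minimal sketch of the suggestion above, forcing a flush (and then a
compaction) of the "document" table through HBaseAdmin. The flush and
compact method names come from Erik's reply; the exact signatures below are
an assumption for the 0.19-era client API, so check the javadoc for your
release before relying on them.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class FlushDocumentTable {
      public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath.
        HBaseConfiguration conf = new HBaseConfiguration();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Push the in-memory edits (memcache) of the table out to HDFS.
        admin.flush("document");

        // Optionally rewrite the flushed store files into a single file.
        admin.compact("document");
      }
    }

Run once a day (from cron, for instance), this would persist the day's
inserts even if the cluster later dies uncleanly.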

Re: Can't list table that exists inside HBase

Posted by stack <st...@duboce.net>.
See also 'tools' in the hbase shell.  There is a tool to flush an entire
table or an individual region.

I also need to roll a 0.19.4 candidate.  It has a few fixes that have us
flushing catalog tables way more frequently than we used to.

St.Ack
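
A hedged sketch of the same thing from the shell. The 'tools' listing and
the flush tool are described in the reply above, but the exact syntax below
is an assumption; run 'tools' in your shell version to see the real form.

    hbase> tools               # lists the available admin tools
    hbase> flush 'document'    # flush all regions of the 'document' table
    hbase> flush '.META.'      # catalog tables can be flushed the same way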

On Wed, Jun 17, 2009 at 10:04 AM, Lucas Nazário dos Santos <
nazario.lucas@gmail.com> wrote:

> Helped a lot! Thanks for the replies. I'll keep coding and move to newer
> versions of HBase and Hadoop as soon as they are out. I'll also have a look
> at the flush operation from HBaseAdmin.
>
> Lucas
>
>
>
> On Wed, Jun 17, 2009 at 1:58 PM, Erik Holstad <er...@gmail.com>
> wrote:
>
> > Hi Lucas!
> > Not sure if you have had a look at the BigTable paper; the link at the
> > beginning of http://hadoop.apache.org/hbase/ might clear up some of the
> > confusion. But basically, to support fast writes we only write to memory
> > and periodically flush this data to disk. While data is still in memory
> > it is not persisted; it needs to be written to disk/HDFS for that to be
> > true. We have a second mechanism for dealing with not losing data while
> > it sits in memory, called the WriteAheadLog. We are still waiting for
> > Hadoop to support one of the features needed to make this happen, which
> > hopefully will not take too long.
> >
> > Hope this helped.
> >
> > Erik
> >
>

Re: Can't list table that exists inside HBase

Posted by Lucas Nazário dos Santos <na...@gmail.com>.
Helped a lot! Thanks for the replies. I'll keep coding and move to newer
versions of HBase and Hadoop as soon as they are out. I'll also have a look
at the flush operation from HBaseAdmin.

Lucas



On Wed, Jun 17, 2009 at 1:58 PM, Erik Holstad <er...@gmail.com> wrote:

> Hi Lucas!
> Not sure if you have had a look at the BigTable paper; the link at the
> beginning of http://hadoop.apache.org/hbase/ might clear up some of the
> confusion. But basically, to support fast writes we only write to memory
> and periodically flush this data to disk. While data is still in memory it
> is not persisted; it needs to be written to disk/HDFS for that to be true.
> We have a second mechanism for dealing with not losing data while it sits
> in memory, called the WriteAheadLog. We are still waiting for Hadoop to
> support one of the features needed to make this happen, which hopefully
> will not take too long.
>
> Hope this helped.
>
> Erik
>

Re: Can't list table that exists inside HBase

Posted by Erik Holstad <er...@gmail.com>.
Hi Lucas!
Not sure if you have had a look at the BigTable paper; the link at the
beginning of http://hadoop.apache.org/hbase/ might clear up some of the
confusion. But basically, to support fast writes we only write to memory and
periodically flush this data to disk. While data is still in memory it is
not persisted; it needs to be written to disk/HDFS for that to be true. We
have a second mechanism for dealing with not losing data while it sits in
memory, called the WriteAheadLog. We are still waiting for Hadoop to support
one of the features needed to make this happen, which hopefully will not
take too long.

Hope this helped.

Erik
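
To make the failure mode concrete, here is a toy sketch of the write path
described above. It is not HBase's actual code, only an illustration of why
edits still sitting in the memcache vanish when the process dies before a
flush.

    import java.util.ArrayList;
    import java.util.List;

    // Toy model of a region server's write path -- NOT real HBase code.
    class ToyRegionServer {
      private final List<String> memcache = new ArrayList<String>();
      private static final int FLUSH_THRESHOLD = 3; // stand-in for ~64MB

      void put(String edit) {
        // The edit lands in memory only. (Real HBase also appends it to a
        // write-ahead log, but as noted above that log is not yet durable,
        // since Hadoop cannot yet sync appended data to HDFS.)
        memcache.add(edit);
        // Nothing reaches HDFS until enough edits accumulate to flush.
        if (memcache.size() >= FLUSH_THRESHOLD) {
          flush();
        }
      }

      void flush() {
        // Stand-in for writing a store file out to HDFS.
        System.out.println("persisted to HDFS: " + memcache);
        memcache.clear();
      }

      public static void main(String[] args) {
        ToyRegionServer rs = new ToyRegionServer();
        rs.put("row1");
        rs.put("row2");
        // If the machine loses power here, row1 and row2 are gone: the
        // flush threshold was never reached, so HDFS never saw them.
      }
    }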

Re: Can't list table that exists inside HBase

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Lucas,

Your table is "missing" because the edits in the META table aren't flush, in
0.20 we "fix" this by setting a very small maximum memcache size on both
ROOT and META tables so that the edits go to disk often. If all the nodes in
your cluster are shutdown at the same moment, another problem that happens
is that the region servers logs won't be processed by the Master.

J-D

On Wed, Jun 17, 2009 at 12:50 PM, Lucas Nazário dos Santos <
nazario.lucas@gmail.com> wrote:

> But isn't it strange that the whole table suddenly became unavailable?
> Especially because it's inside HDFS.
>
> Also, I've already created tables with very few rows, 250 for instance,
> that remained available after shutting HBase down and starting it again.
> Is it because data is flushed to disk when HBase is shut down properly?
>
> Lucas
>
>
>
> On Wed, Jun 17, 2009 at 1:41 PM, Lucas Nazário dos Santos <
> nazario.lucas@gmail.com> wrote:
>
> > Hi Erik,
> >
> > I have only a small amount of data, something between 1500 and 3000
> > documents. Is there a way to force a flush of those documents?
> >
> > 1500 to 3000 is the number of new documents that the application I'm
> > currently working on inserts every day, so I think it would be nice to
> > flush them all to disk at least once a day.
> >
> > Thanks!
> > Lucas
> >
> >
> >
> >
> > On Wed, Jun 17, 2009 at 1:28 PM, Erik Holstad <erikholstad@gmail.com
> >wrote:
> >
> >> Hi Lucas!
> >> Just a quick thought. Do you have a lot of data in your cluster or
> >> just a few things in there? If you don't have that much data in HBase,
> >> it might not have been flushed to disk/HDFS yet and therefore only sits
> >> in the internal memcache in HBase, so when your machines are turned
> >> off, that data is lost.
> >>
> >> Regards Erik
> >>
> >
> >
>

Re: Can't list table that exists inside HBase

Posted by Lucas Nazário dos Santos <na...@gmail.com>.
But isn't it strange that the whole table suddenly became unavailable?
Especially because it's inside HDFS.

Also, I've already created tables with very few rows, 250 for instance, that
remained available after shutting HBase down and starting it again. Is it
because data is flushed to disk when HBase is shut down properly?

Lucas



On Wed, Jun 17, 2009 at 1:41 PM, Lucas Nazário dos Santos <
nazario.lucas@gmail.com> wrote:

> Hi Erik,
>
> I have only a small amount of data, something between 1500 and 3000
> documents. Is there a way to force a flush of those documents?
>
> 1500 to 3000 is the number of new documents that the application I'm
> currently working on inserts every day, so I think it would be nice to
> flush them all to disk at least once a day.
>
> Thanks!
> Lucas
>
>
>
>
> On Wed, Jun 17, 2009 at 1:28 PM, Erik Holstad <er...@gmail.com>wrote:
>
>> Hi Lucas!
>> Just a quick thought. Do you have a lot of data in your cluster or just a
>> few things in there? If you don't have that much data in HBase, it might
>> not have been flushed to disk/HDFS yet and therefore only sits in the
>> internal memcache in HBase, so when your machines are turned off, that
>> data is lost.
>>
>> Regards Erik
>>
>
>

Re: Can't list table that exists inside HBase

Posted by Lucas Nazário dos Santos <na...@gmail.com>.
Hi Erik,

I have only a small amount of data, something between 1500 and 3000
documents. Is there a way to force a flush of those documents?

1500 to 3000 is the number of new documents that the application I'm
currently working on inserts every day, so I think it would be nice to flush
them all to disk at least once a day.

Thanks!
Lucas



On Wed, Jun 17, 2009 at 1:28 PM, Erik Holstad <er...@gmail.com> wrote:

> Hi Lucas!
> Just a quick thought. Do you have a lot of data in your cluster or just a
> few things in there? If you don't have that much data in HBase, it might
> not have been flushed to disk/HDFS yet and therefore only sits in the
> internal memcache in HBase, so when your machines are turned off, that
> data is lost.
>
> Regards Erik
>

Re: Can't list table that exists inside HBase

Posted by Erik Holstad <er...@gmail.com>.
Hi Lucas!
Just a quick thought. Do you have a lot of data in your cluster or just a
few things in there? If you don't have that much data in HBase, it might not
have been flushed to disk/HDFS yet and therefore only sits in the internal
memcache in HBase, so when your machines are turned off, that data is lost.

Regards Erik