Posted to user@hbase.apache.org by Ted Yu <yu...@gmail.com> on 2010/03/09 07:02:39 UTC

SafeModeException

Hi,
I saw this in master server log:
2010-03-08 21:13:47,428 INFO  [Thread-14]
master.ServerManager$ServerMonitor(130): 3 region servers, 0 dead, average
load 0.0
2010-03-08 21:13:50,505 INFO  [WrapperSimpleAppMain-EventThread]
master.ServerManager$ServerExpirer(813):
snv-it-lin-010.projectrialto.com,60020,1268109747635
znode expired
2010-03-08 21:13:52,854 DEBUG [HMaster] regionserver.HLog(912): Pushed=50725
entries from hdfs://
snv-it-lin-006.projectrialto.com:9000/hbase/.logs/snv-it-lin-011.projectrialto.com,60020,1267695848509/hlog.dat.1268083819081
2010-03-08 21:13:52,856 DEBUG [HMaster] regionserver.HLog(885): Splitting
hlog 5 of 21: hdfs://
snv-it-lin-006.projectrialto.com:9000/hbase/.logs/snv-it-lin-011.projectrialto.com,60020,1267695848509/hlog.dat.1268083833046,
length=58788942
2010-03-08 21:14:47,441 INFO  [Thread-14]
master.ServerManager$ServerMonitor(130): 2 region servers, 1 dead, average
load 0.0[snv-it-lin-010.projectrialto.com,60020,1268109747635]
....
2010-03-08 22:01:10,078 DEBUG [HMaster] regionserver.HLog(1024): Waiting for
hlog writers to terminate, iteration #143
2010-03-08 22:01:15,080 DEBUG [HMaster] regionserver.HLog(1024): Waiting for
hlog writers to terminate, iteration #144
2010-03-08 22:01:20,082 DEBUG [HMaster] regionserver.HLog(1024): Waiting for
hlog writers to terminate, iteration #145
2010-03-08 22:01:25,085 DEBUG [HMaster] regionserver.HLog(1024): Waiting for
hlog writers to terminate, iteration #146
2010-03-08 22:01:30,087 DEBUG [HMaster] regionserver.HLog(1024): Waiting for
hlog writers to terminate, iteration #147


And this in region server log on snv-it-lin-011:
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.server.namenode.SafeModeException:
Cannot renew lease for DFSClient_-1882710079. Name node is in safe mode.
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
mode will be turned off automatically in 1 seconds.
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:1972)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.renewLease(NameNode.java:550)
        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:739)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy1.renewLease(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.renewLease(Unknown Source)
        at
org.apache.hadoop.hdfs.DFSClient$LeaseChecker.renew(DFSClient.java:1046)
        at
org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1058)
        at java.lang.Thread.run(Thread.java:619)
2010-03-08 20:50:38,805 WARN  [regionserver/10.10.31.136:60020]
regionserver.HRegionServer(583): Attempt=14
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)

But I was able to browse hdfs using:
http://snv-it-lin-011.projectrialto.com:50075/browseDirectory.jsp?dir=%2Fdatacatalog&namenodeInfoPort=50070

and:
su -m hadoopadmin -c "bin/hadoop dfsadmin  -report"
Configured Capacity: 8675356323840 (7.89 TB)
Present Capacity: 8141537836173 (7.4 TB)
DFS Remaining: 7662688919552 (6.97 TB)
DFS Used: 478848916621 (445.96 GB)
DFS Used%: 5.88%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 10.10.31.135:50010
Decommission Status : Normal
Configured Capacity: 2891785441280 (2.63 TB)
DFS Used: 159618845809 (148.66 GB)
Non DFS Used: 174569169807 (162.58 GB)
DFS Remaining: 2557597425664(2.33 TB)
DFS Used%: 5.52%
DFS Remaining%: 88.44%
Last contact: Mon Mar 08 21:59:37 PST 2010


Name: 10.10.31.136:50010
Decommission Status : Normal
Configured Capacity: 2891785441280 (2.63 TB)
DFS Used: 159616450574 (148.65 GB)
Non DFS Used: 175689850866 (163.62 GB)
DFS Remaining: 2556479139840(2.33 TB)
DFS Used%: 5.52%
DFS Remaining%: 88.4%
Last contact: Mon Mar 08 21:59:36 PST 2010


Name: 10.10.31.137:50010
Decommission Status : Normal
Configured Capacity: 2891785441280 (2.63 TB)
DFS Used: 159613620238 (148.65 GB)
Non DFS Used: 183559466994 (170.95 GB)
DFS Remaining: 2548612354048(2.32 TB)
DFS Used%: 5.52%
DFS Remaining%: 88.13%
Last contact: Mon Mar 08 21:59:37 PST 2010
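
Safe mode status itself is not part of the dfsadmin -report output; it can be
checked directly with the safemode subcommand. A minimal check, reusing the su
wrapper from the report command above (same hadoopadmin user and bin/hadoop
path as on this setup):

# ask the NameNode whether safe mode is currently ON or OFF
su -m hadoopadmin -c "bin/hadoop dfsadmin -safemode get"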

How can I bring the HBase cluster fully up?

Thanks

Re: SafeModeException

Posted by Stack <st...@duboce.net>.
It can take hdfs a while before it leaves safe mode.  Is this what
happened?  Usually hbase will wait on hdfs to leave safe mode.  If you
look in your logs, can you figure out what happened around hbase startup?
Did it not wait long enough?
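
A quick way to check is to grep the master log for safe-mode messages around
the startup timestamp, and to ask the namenode for its current state. Sketch
only -- the log file name below is a guess at the usual naming; substitute
whatever is under your hbase logs directory:

# did the master notice hdfs in safe mode, and how long did it wait?
grep -i "safe mode" logs/hbase-*-master-*.log

# what the namenode says right now
bin/hadoop dfsadmin -safemode get
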
St.Ack

On Mon, Mar 8, 2010 at 11:03 PM, Ted Yu <yu...@gmail.com> wrote:
> This happened after we restarted our servers.
> The load on the servers was light.
> We have 3 data nodes.

Re: SafeModeException

Posted by Ted Yu <yu...@gmail.com>.
This happened after we restarted our servers.
The load on the servers was light.
We have 3 data nodes.

On Monday, March 8, 2010, Stack <st...@duboce.net> wrote:
> Your namenode flipped your hdfs into safe mode -- i.e. read-only mode.
>   This happens on startup -- did you restart the hdfs under your
> hbase? -- or it can happen if hdfs suffers extreme duress such as
> losing a good proportion of all datanodes.  Did something like the
> latter happen in your case?  You seem to have 3 hbase nodes.  Do you
> have 3 datanodes only?  What kind of a loading were you running?
>
> St.Ack

Re: SafeModeException

Posted by Stack <st...@duboce.net>.
Your namenode flipped your hdfs into safe mode -- i.e. read-only mode.
This happens on startup -- did you restart the hdfs under your
hbase? -- or it can happen if hdfs suffers extreme duress such as
losing a good proportion of all datanodes.  Did something like the
latter happen in your case?  You seem to have 3 hbase nodes.  Do you
have only 3 datanodes?  What kind of load were you running?
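
If it is just the namenode taking its time on startup, you can watch it and,
as a last resort, push it out of safe mode by hand. Sketch only; run it as
whichever user owns the hadoop install (hadoopadmin in your case):

# block until the namenode leaves safe mode on its own
bin/hadoop dfsadmin -safemode wait

# force it out manually -- only if all blocks really have been reported
bin/hadoop dfsadmin -safemode leave

The threshold in your log (0.9990) comes from dfs.safemode.threshold.pct, and
the extra delay after it is reached from dfs.safemode.extension, both in
hdfs-site.xml, should you find startup safe mode is consistently slow.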

St.Ack
