Posted to user@hbase.apache.org by Gaojinchao <ga...@huawei.com> on 2011/05/31 10:51:41 UTC

about deploying HBase processes

For one of our applications, there are 3 nodes.
The process layout and machine configuration are as below.

Who has experience with this?

CPU utilization is about 70%~80%. Does it starve HBase or ZooKeeper?



Machine:
CPU:    8 cores, 2 GHz
Memory: 48 GB
Disk:   8 x 2 TB = 16 TB

Node 1:
DataNode
HAJobTracker
TaskTracker
QuorumPeerMain
HMaster
HRegionServer

Node 2:
NameNode
DataNode
HAJobTracker
TaskTracker
HMaster
HRegionServer
QuorumPeerMain

Node 3:
QuorumPeerMain
HRegionServer
TaskTracker
DataNode
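
As a rough spot check on whether ZooKeeper is starved under this load, a minimal sketch that times a few requests against the quorum from a client machine (the quorum address is a placeholder; /hbase and port 2181 are the defaults):

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooKeeper;

public class ZkLatencyCheck {
    public static void main(String[] args) throws Exception {
        final CountDownLatch connected = new CountDownLatch(1);
        // Placeholder quorum address; substitute the real node names.
        ZooKeeper zk = new ZooKeeper("node1:2181,node2:2181,node3:2181", 30000,
                new Watcher() {
                    public void process(WatchedEvent event) {
                        if (event.getState() == KeeperState.SyncConnected) {
                            connected.countDown();
                        }
                    }
                });
        connected.await();
        for (int i = 0; i < 10; i++) {
            long start = System.currentTimeMillis();
            zk.exists("/hbase", false);  // /hbase is HBase's default root znode
            System.out.println("exists() took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
        zk.close();
    }
}

If these calls regularly take hundreds of milliseconds, or the connection keeps dropping, the quorum peers (or the disks they share with the other daemons) are probably under pressure.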

Re: about deploying HBase processes

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Inline.

J-D

2011/5/31 Gaojinchao <ga...@huawei.com>:
> As far as I know:
> 1. ZooKeeper is sensitive to resources (memory, disk, CPU, network).

Mostly just disk.

> If the server is underprovisioned, then:
> a) the server may not respond to client requests in time;
> b) the client assumes the server is down, closes the socket and connects to another server.
>
> 2. HBase is sensitive to RAM, but uses relatively little CPU.

Unless you swap, in which case your performance is bye bye.

>
> 3. M/R (which runs against HDFS) is sensitive to CPU and memory.

Usually MR is IO-bound.

>
> My questions are:
> 1. Is there resource contention between HDFS, ZK, M/R and HBase?

MR can steal IO from HBase and ZK; in the latter case it's really bad.
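
One lever on the HBase side is to make MR scan jobs go easy on the region servers: a small scanner cache and block caching turned off for the scan. A minimal sketch, assuming a table named "mytable" and a do-nothing row-counting mapper:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class GentleScanJob {
    // Trivial mapper: just counts rows; real map logic would go here.
    static class RowCountMapper extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            context.getCounter("scan", "rows").increment(1);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "gentle-scan");
        job.setJarByClass(GentleScanJob.class);

        Scan scan = new Scan();
        scan.setCaching(100);        // modest number of rows per RPC
        scan.setCacheBlocks(false);  // don't churn the region server block cache for a one-off scan

        TableMapReduceUtil.initTableMapperJob("mytable", scan, RowCountMapper.class,
                NullWritable.class, NullWritable.class, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.setNumReduceTasks(0);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

For the disk contention with ZK, the usual levers are fewer task slots per node (mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in mapred-site.xml) and keeping the ZK dataDir on a disk the DataNode and TaskTracker don't write to.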

>
> 2. Suppose one machine fails; the cluster should stay safe.
>  What do I need to consider (e.g. CPU, network utilization, or configuration)?

More machines; I'd never run HBase in production on just three nodes.

>
> 3. Has anyone used the configuration above in production?

On such a small cluster there's no good configuration IMO. You'd need to get a
few more machines; then you would put ZK, NameNode, JobTracker and HMaster on
one node, and all the other nodes would run DataNode, TaskTracker and
HRegionServer.


Re: about deploying HBase processes

Posted by Gaojinchao <ga...@huawei.com>.
As far as I know:
1. ZooKeeper is sensitive to resources (memory, disk, CPU, network).
If the server is underprovisioned, then:
a) the server may not respond to client requests in time;
b) the client assumes the server is down, closes the socket and connects to another server (see the sketch after this list).

2. HBase is sensitive to RAM, but uses relatively little CPU.

3. M/R (which runs against HDFS) is sensitive to CPU and memory.
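
To make 1.b concrete, a minimal sketch of a client watcher that logs the session state changes ZooKeeper reports when a server stops answering (the quorum address and timeout are placeholders); this is the same Expired event that makes an HBase region server shut itself down when its session is lost:

import java.io.IOException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooKeeper;

public class SessionStateLogger implements Watcher {
    public void process(WatchedEvent event) {
        KeeperState state = event.getState();
        if (state == KeeperState.SyncConnected) {
            System.out.println("connected");
        } else if (state == KeeperState.Disconnected) {
            // Lost the socket to the current server; the client library retries the others.
            System.out.println("disconnected, trying other servers in the quorum");
        } else if (state == KeeperState.Expired) {
            // The quorum gave up on the session; the handle is dead and must be recreated.
            System.out.println("session expired, need a new ZooKeeper handle");
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder quorum address and a 30 s session timeout.
        ZooKeeper zk = new ZooKeeper("node1:2181,node2:2181,node3:2181", 30000,
                new SessionStateLogger());
        Thread.sleep(10 * 60 * 1000);  // watch the output while the cluster is under load
        zk.close();
    }
}

Kill or overload one quorum peer while this runs and you should see the disconnect/reconnect behaviour from a) and b) above.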

My questions are:
1. Is there resource contention between HDFS, ZK, M/R and HBase?

2. Suppose one machine fails; the cluster should stay safe.
   What do I need to consider (e.g. CPU, network utilization, or configuration)?

3. Has anyone used the configuration above in production?




Re: about deploying HBase processes

Posted by Stack <st...@duboce.net>.
Sorry Gao, what is your question?
St.Ack
