Posted to user@hadoop.apache.org by gu...@zte.com.cn on 2017/04/26 06:20:52 UTC

HDFS HA (Based on QJM) Failover Frequently with Large FSimage and Busy Requests

Hi All,

    HDFS HA (Based on QJM), 5 journalnodes, Apache 2.5.0 on Redhat 6.5 with JDK1.7.

    Put 1P+ of data into HDFS with an FSimage of about 10G, then kept making more requests to this HDFS; the namenodes fail over frequently. I want to know the following:

    1. ANN (active namenode) downloading fsimage.ckpt_* from the SNN (standby namenode) leads to very high disk io; at the same time, zkfc fails to monitor the health of the ann due to a timeout. Is there any relationship between high disk io and the zkfc monitor request timeout? Every failover happened during a ckpt download, but not every ckpt download leads to a failover.

2017-03-15 09:27:05,750 WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to monitor health of NameNode at nn1/ip:8020: Call From nn1/ip to nn1:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/ip:48536 remote=nn1/ip:8020]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout

2017-03-15 09:27:05,750 INFO org.apache.hadoop.ha.HealthMonitor: Entering state SERVICE_NOT_RESPONDING

    2. Due to SERVICE_NOT_RESPONDING, the other zkfc fences the old ann (sshfence is configured); before being restarted by my additional monitor, the old ann log sometimes shows entries like the following. What is "Rescan of postponedMisreplicatedBlocks"? Does this have any relationship with the failover?

2017-03-15 04:36:00,866 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds

2017-03-15 04:36:00,931 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 65 millisecond(s).

2017-03-15 04:36:01,127 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of postponedMisreplicatedBlocks completed in 23 msecs. 247361 blocks are left. 0 blocks are removed.

2017-03-15 04:36:04,145 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of postponedMisreplicatedBlocks completed in 17 msecs. 247361 blocks are left. 0 blocks are removed.

2017-03-15 04:36:07,159 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks are left. 0 blocks are removed.

2017-03-15 04:36:10,173 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks are left. 0 blocks are removed.

2017-03-15 04:36:13,188 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks are left. 0 blocks are removed.

2017-03-15 04:36:16,211 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of postponedMisreplicatedBlocks completed in 23 msecs. 247361 blocks are left. 0 blocks are removed.

2017-03-15 04:36:19,234 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Rescan of postponedMisreplicatedBlocks completed in 22 msecs. 247361 blocks are left. 0 blocks are removed.

2017-03-15 04:36:28,994 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:

    3. I configured two dfs.namenode.name.dir directories and one dfs.journalnode.edits.dir (which shares a disk with the nn); is that suitable? Or does this have any disadvantage?

<property>
<name>dfs.namenode.name.dir.nameservice.nn1</name>
<value>/data1/hdfs/dfs/name,/data2/hdfs/dfs/name</value>
</property>

<property>
<name>dfs.namenode.name.dir.nameservice.nn2</name>
<value>/data1/hdfs/dfs/name,/data2/hdfs/dfs/name</value>
</property>

<property>
<name>dfs.journalnode.edits.dir</name>
<value>/data1/hdfs/dfs/journal</value>
</property>

    4. I am interested in the design of checkpoint and edit log transmission; any explanation, issues, or documents?

Thanks in advance,

Doris


Re: HDFS HA (Based on QJM) Failover Frequently with Large FSimage and Busy Requests

Posted by Chackravarthy Esakkimuthu <ch...@gmail.com>.
Client failures due to failover get handled seamlessly by retries, so you
need not worry about that.

And by increasing ha.health-monitor.rpc-timeout.ms to a slightly larger
value, you are just avoiding unnecessary failovers when the namenode is busy
processing other client/service requests. The larger timeout only comes into
effect when the namenode is busy and unable to process zkfc rpc calls; at
other times, when the active namenode shuts down for some reason, failover
will be instant and will not wait for the full configured timeout.
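
The seamless client-side retries mentioned above come from the standard HDFS
HA client configuration rather than anything specific to this cluster. As a
minimal sketch (assuming the nameservice id "nameservice" used in the original
post's property names), hdfs-site.xml on the clients would carry something
like:

<property>
<name>dfs.client.failover.proxy.provider.nameservice</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

With this proxy provider configured, a client call that fails because the
active namenode stepped down is retried against the other namenode instead of
surfacing an error to the application.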

On Thu, Apr 27, 2017 at 5:46 PM, <gu...@zte.com.cn> wrote:

> 1. Is service-rpc configured in namenode?
>
> *Not yet. I have considered configuring servicerpc, but I was thinking
> about the possible disadvantages as well.*
>
> *When a failover happens because of too many waiting rpcs, if the zkfc
> request gets processed normally on another port, is it possible that the
> clients get a lot of failures?*
>
>
> 2. ha.health-monitor.rpc-timeout.ms - Also consider increasing zkfc rpc
> call timeout to namenode.
>
> *The same worry: is it possible that the clients get a lot of failures?*
>
>
> Thanks very much,
>
> Doris
>
>
>
> ---------------------------------------------------------------------------------------
>
>
> 1. Is service-rpc configured in namenode?
> (dfs.namenode.servicerpc-address - this will create another RPC server
> listening on another port (say 8021) to handle all service (non-client)
> requests and hence default rpc address (say 8020) will handle only client
> requests.)
>
> By doing it this way, you would be able to decouple client and service
> requests. Here service requests correspond to rpc calls from the DN, ZKFC, etc.
> Hence when the cluster is too busy because of too many client operations, ZKFC
> requests will get processed by a different rpc server and hence need not wait
> in the same queue as client requests.
>
> 2. ha.health-monitor.rpc-timeout.ms - Also consider increasing zkfc rpc
> call timeout to namenode.
>
> By default this is 45 secs. You can consider increasing it to 1 or 2 mins
> depending upon your cluster usage.
>
> Thanks,
> Chackra
>
> On Wed, Apr 26, 2017 at 11:50 AM,  <gu.yizhou@zte.com.cn> wrote:
>
>>
>> *Hi All,*
>>
>>     HDFS HA (Based on QJM) , 5 journalnodes, Apache 2.5.0 on Redhat 6.5
>> with JDK1.7.
>>
>>     Put 1P+ of data into HDFS with an FSimage of about 10G, then kept making
>> more requests to this HDFS; the namenodes fail over frequently. I want to
>> know the following:
>>
>>
>>  *   1. ANN (active namenode) downloading fsimage.ckpt_* from the SNN (standby
>> namenode) leads to very high disk io; at the same time, zkfc fails to
>> monitor the health of the ann due to a timeout. Is there any relationship
>> between high disk io and the zkfc monitor request timeout? Every failover
>> happened during a ckpt download, but not every ckpt download leads to a failover.*
>>
>>
>>
>> 2017-03-15 09:27:05,750 WARN org.apache.hadoop.ha.HealthMonitor:
>> Transport-level exception trying to monitor health of NameNode at
>> nn1/ip:8020: Call From nn1/ip to nn1:8020 failed on socket timeout
>> exception: java.net.SocketTimeoutException: 45000 millis timeout while
>> waiting for channel to be ready for read. ch :
>> java.nio.channels.SocketChannel[connected local=/ip:48536
>> remote=nn1/ip:8020]; For more details see:  http://wiki.apache.org/hadoop
>> /SocketTimeout
>>
>> 2017-03-15 09:27:05,750 INFO org.apache.hadoop.ha.HealthMonitor:
>> Entering state SERVICE_NOT_RESPONDING
>>
>>
>> *    2. Due to SERVICE_NOT_RESPONDING, the other zkfc fences the old
>> ann (sshfence is configured); before being restarted by my additional monitor,
>> the old ann log sometimes shows entries like the following. What is "Rescan of
>> postponedMisreplicatedBlocks"? Does this have any relationship with the
>> failover?*
>>
>> 2017-03-15 04:36:00,866 INFO org.apache.hadoop.hdfs.server.
>> blockmanagement.CacheReplicationMonitor: Rescanning after 30000
>> milliseconds
>>
>> 2017-03-15 04:36:00,931 INFO org.apache.hadoop.hdfs.server.
>> blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0
>> block(s) in 65 millisecond(s).
>>
>> 2017-03-15 04:36:01,127 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 23 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:04,145 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 17 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:07,159 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:10,173 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:13,188 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:16,211 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 23 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:19,234 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>> Rescan of postponedMisreplicatedBlocks completed in 22 msecs. 247361 blocks
>> are left. 0 blocks are removed.
>>
>> 2017-03-15 04:36:28,994 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
>> STARTUP_MSG:
>>
>>
>>     *3. I configured two dfs.namenode.name.dir directories and
>> one dfs.journalnode.edits.dir (which shares a disk with the nn); is that
>> suitable? Or does this have any disadvantage?*
>>
>>
>> <property>
>>
>> <name>dfs.namenode.name.dir.nameservice.nn1</name>
>>
>> <value>/data1/hdfs/dfs/name,/data2/hdfs/dfs/name</value>
>>
>> </property>
>>
>> <property>
>>
>> <name>dfs.namenode.name.dir.nameservice.nn2</name>
>>
>> <value>/data1/hdfs/dfs/name,/data2/hdfs/dfs/name</value>
>>
>> </property>
>>
>>
>> <property>
>>
>> <name>dfs.journalnode.edits.dir</name>
>>
>> <value>/data1/hdfs/dfs/journal</value>
>>
>> </property>
>>
>>
>>
>>    * 4. I am interested in the design of checkpoint and edit log transmission;
>> any explanation, issues, or documents?*
>>
>>
>> *Thanks in advance,*
>>
>> *Doris*
>>
>
>
>

Re: HDFS HA (Based on QJM) Failover Frequently with Large FSimage and Busy Requests

Posted by gu...@zte.com.cn.
1. Is service-rpc configured in namenode?

Not yet. I have considered configuring servicerpc, but I was thinking about the possible disadvantages as well.

When a failover happens because of too many waiting rpcs, if the zkfc request gets processed normally on another port, is it possible that the clients get a lot of failures?

2. ha.health-monitor.rpc-timeout.ms - Also consider increasing zkfc rpc call timeout to namenode.

The same worry: is it possible that the clients get a lot of failures?

Thanks very much,

Doris


Re: HDFS HA (Based on QJM) Failover Frequently with Large FSimage and Busy Requests

Posted by Chackravarthy Esakkimuthu <ch...@gmail.com>.
1. Is service-rpc configured in namenode?

(dfs.namenode.servicerpc-address - this will create another RPC server
listening on another port (say 8021) to handle all service (non-client)
requests and hence default rpc address (say 8020) will handle only client
requests.)

By doing it this way, you would be able to decouple client and service
requests. Here service requests correspond to rpc calls from the DN, ZKFC, etc.
Hence when the cluster is too busy because of too many client operations, ZKFC
requests will get processed by a different rpc server and hence need not wait
in the same queue as client requests.
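
For illustration, a minimal hdfs-site.xml sketch of such a setup (assuming the
nameservice id "nameservice" and namenode ids nn1/nn2 from the original post's
property names, with nn1/nn2 standing in for the namenode hosts and 8021 used
only as an example port) might look like:

<property>
<name>dfs.namenode.servicerpc-address.nameservice.nn1</name>
<value>nn1:8021</value>
</property>

<property>
<name>dfs.namenode.servicerpc-address.nameservice.nn2</name>
<value>nn2:8021</value>
</property>

DataNodes and ZKFC would then talk to port 8021, while client RPCs stay on 8020.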

2. ha.health-monitor.rpc-timeout.ms - Also consider increasing zkfc rpc
call timeout to namenode.

By default this is 45 secs. You can consider increasing it to 1 or 2 mins
depending upon your cluster usage.
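
As a rough sketch only (90000 ms is not a recommendation from this thread,
just a value between the 45-second default and the 1-2 minutes suggested
above), the corresponding core-site.xml entry would look like:

<property>
<name>ha.health-monitor.rpc-timeout.ms</name>
<value>90000</value>
</property>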

Thanks,
Chackra

On Wed, Apr 26, 2017 at 11:50 AM, <gu...@zte.com.cn> wrote:

>
> *Hi All,*
>
>     HDFS HA (Based on QJM) , 5 journalnodes, Apache 2.5.0 on Redhat 6.5
> with JDK1.7.
>
>     Put 1P+ of data into HDFS with an FSimage of about 10G, then kept making
> more requests to this HDFS; the namenodes fail over frequently. I want to
> know the following:
>
>
>  *   1. ANN (active namenode) downloading fsimage.ckpt_* from the SNN (standby
> namenode) leads to very high disk io; at the same time, zkfc fails to
> monitor the health of the ann due to a timeout. Is there any relationship
> between high disk io and the zkfc monitor request timeout? Every failover
> happened during a ckpt download, but not every ckpt download leads to a failover.*
>
>
>
> 2017-03-15 09:27:05,750 WARN org.apache.hadoop.ha.HealthMonitor:
> Transport-level exception trying to monitor health of NameNode at
> nn1/ip:8020: Call From nn1/ip to nn1:8020 failed on socket timeout
> exception: java.net.SocketTimeoutException: 45000 millis timeout while
> waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
> local=/ip:48536 remote=nn1/ip:8020]; For more details see:
> http://wiki.apache.org/hadoop/SocketTimeout
>
> 2017-03-15 09:27:05,750 INFO org.apache.hadoop.ha.HealthMonitor: Entering
> state SERVICE_NOT_RESPONDING
>
>
> *    2. Due to SERVICE_NOT_RESPONDING, the other zkfc fences the old
> ann (sshfence is configured); before being restarted by my additional monitor,
> the old ann log sometimes shows entries like the following. What is "Rescan of
> postponedMisreplicatedBlocks"? Does this have any relationship with the
> failover?*
>
> 2017-03-15 04:36:00,866 INFO org.apache.hadoop.hdfs.server.
> blockmanagement.CacheReplicationMonitor: Rescanning after 30000
> milliseconds
>
> 2017-03-15 04:36:00,931 INFO org.apache.hadoop.hdfs.server.
> blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0
> block(s) in 65 millisecond(s).
>
> 2017-03-15 04:36:01,127 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> Rescan of postponedMisreplicatedBlocks completed in 23 msecs. 247361 blocks
> are left. 0 blocks are removed.
>
> 2017-03-15 04:36:04,145 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> Rescan of postponedMisreplicatedBlocks completed in 17 msecs. 247361 blocks
> are left. 0 blocks are removed.
>
> 2017-03-15 04:36:07,159 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks
> are left. 0 blocks are removed.
>
> 2017-03-15 04:36:10,173 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks
> are left. 0 blocks are removed.
>
> 2017-03-15 04:36:13,188 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> Rescan of postponedMisreplicatedBlocks completed in 14 msecs. 247361 blocks
> are left. 0 blocks are removed.
>
> 2017-03-15 04:36:16,211 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> Rescan of postponedMisreplicatedBlocks completed in 23 msecs. 247361 blocks
> are left. 0 blocks are removed.
>
> 2017-03-15 04:36:19,234 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
> Rescan of postponedMisreplicatedBlocks completed in 22 msecs. 247361 blocks
> are left. 0 blocks are removed.
>
> 2017-03-15 04:36:28,994 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
> STARTUP_MSG:
>
>
>     *3. I configured two dfs.namenode.name.dir directories and
> one dfs.journalnode.edits.dir (which shares a disk with the nn); is that
> suitable? Or does this have any disadvantage?*
>
>
> <property>
>
> <name>dfs.namenode.name.dir.nameservice.nn1</name>
>
> <value>/data1/hdfs/dfs/name,/data2/hdfs/dfs/name</value>
>
> </property>
>
> <property>
>
> <name>dfs.namenode.name.dir.nameservice.nn2</name>
>
> <value>/data1/hdfs/dfs/name,/data2/hdfs/dfs/name</value>
>
> </property>
>
>
> <property>
>
> <name>dfs.journalnode.edits.dir</name>
>
> <value>/data1/hdfs/dfs/journal</value>
>
> </property>
>
>
>
>    * 4. I am interested in the design of checkpoint and edit log transmission;
> any explanation, issues, or documents?*
>
>
> *Thanks in advance,*
>
> *Doris*
>
>
>
>
>
>
>
>