Posted to issues@hbase.apache.org by "Bo Cui (JIRA)" <ji...@apache.org> on 2018/12/27 10:08:00 UTC

[jira] [Commented] (HBASE-21651) Exception occurred when splitting HLog

    [ https://issues.apache.org/jira/browse/HBASE-21651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16729498#comment-16729498 ] 

Bo Cui commented on HBASE-21651:
--------------------------------

RegionServer log:

2018-12-25 13:58:08,825 | ERROR | split-log-closeStream-2 | Couldn't close log at hdfs://hacluster/hbase/data/default/petest03/fbe3cd1087031b509d78313fe71730c3/recovered.edits/0000000000000005058-8-5-242-3%2C21302%2C1545293978156.1545294750014.temp | org.apache.hadoop.hbase.wal.WALSplitter$LogRecoveredEditsOutputSink$2.call(WALSplitter.java:1398)
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[8.5.242.4:25009,DS-e16fdff2-d169-4691-9894-0d0b27b30b89,DISK]], original=[DatanodeInfoWithStorage[8.5.242.4:25009,DS-e16fdff2-d169-4691-9894-0d0b27b30b89,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1326)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1411)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1637)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1537)
        at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1283)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2018-12-25 13:58:09,144 | WARN  | Thread-33749 | Abandoning BP-549235534-8.5.242.1-1533603373960:blk_1076266307_2585583 | org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1727)
2018-12-25 13:58:09,144 | ERROR | split-log-closeStream-2 | Couldn't close log at hdfs://hacluster/hbase/data/default/petest03/2fe692a873b57d79cf0c66812b89ea32/recovered.edits/0000000000000002140-8-5-242-4%2C21302%2C1545293977631.1545294511802.temp | org.apache.hadoop.hbase.wal.WALSplitter$LogRecoveredEditsOutputSink$2.call(WALSplitter.java:1398)
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[8.5.242.4:25009,DS-e16fdff2-d169-4691-9894-0d0b27b30b89,DISK]], original=[DatanodeInfoWithStorage[8.5.242.4:25009,DS-e16fdff2-d169-4691-9894-0d0b27b30b89,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1326)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1411)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1637)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1537)
        at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1283)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)

> Exception occurred when splitting HLog
> ---------------------------------------
>
>                 Key: HBASE-21651
>                 URL: https://issues.apache.org/jira/browse/HBASE-21651
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.3.1, 2.1.0
>            Reporter: Bo Cui
>            Priority: Critical
>
> If the HLog contains edits for too many regions, WALSplitter will open a recovered.edits writer (an FSDataOutputStream) for each region when splitting the log:
> {code:title=WALSplitter.java|borderStyle=solid}
> // one SinkWriter (and therefore one open FSDataOutputStream) per region, keyed by encoded region name
> protected Map<byte[], SinkWriter> writers = Collections
>         .synchronizedMap(new TreeMap<byte[], SinkWriter>(Bytes.BYTES_COMPARATOR));
> {code}
> But each DataNode has a limit on concurrent block transfer threads (dfs.datanode.max.transfer.threads), so holding that many streams open can make the HLog split very slow or even fail.
> Proposed solution: add a configurable maximum number of open recovered.edits writers; when that limit is exceeded, close the oldest FSDataOutputStream (a sketch of this idea follows below).
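
To make the proposal concrete, below is a minimal, illustrative sketch of a bounded writer cache that closes the least recently used writer once a configurable limit is exceeded. This is not the actual WALSplitter change: the class name, the generic Closeable writer type, and the eviction via LinkedHashMap.removeEldestEntry are assumptions for illustration only, and a real patch would also have to reopen or roll an evicted region's recovered.edits writer when more edits for that region arrive.

{code:title=BoundedWriterCache.java (illustrative sketch only)|borderStyle=solid}
import java.io.Closeable;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of the proposed "max open recovered.edits writers" idea: keep at most
 * maxOpenWriters writers open and close the least recently used one when the
 * limit is exceeded. Not the actual WALSplitter implementation.
 */
public class BoundedWriterCache<K, W extends Closeable> {
  private final Map<K, W> writers;

  public BoundedWriterCache(final int maxOpenWriters) {
    // access-ordered LinkedHashMap so the eldest entry is the least recently used writer
    this.writers = new LinkedHashMap<K, W>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<K, W> eldest) {
        if (size() > maxOpenWriters) {
          try {
            // close the oldest FSDataOutputStream-backed writer so the
            // DataNode transfer-thread slot it holds is released
            eldest.getValue().close();
          } catch (IOException e) {
            // the real code would log and surface this failure
          }
          return true; // drop the evicted entry from the map
        }
        return false;
      }
    };
  }

  public synchronized W get(K region) {
    return writers.get(region);
  }

  public synchronized void put(K region, W writer) {
    writers.put(region, writer);
  }

  public synchronized int openWriters() {
    return writers.size();
  }
}
{code}

With something like a (hypothetical) setting such as hbase.wal.split.max.open.writers wired to maxOpenWriters, the output sink could consult a bounded structure like this instead of the unbounded synchronized TreeMap shown above, keeping the number of concurrently open streams below the DataNode's transfer-thread limit.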



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)