Posted to issues@hbase.apache.org by "Liu Zheng (Jira)" <ji...@apache.org> on 2020/07/09 01:42:00 UTC
[jira] [Commented] (HBASE-16212) Many connections to datanode are created when doing a large scan
[ https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154114#comment-17154114 ]
Liu Zheng commented on HBASE-16212:
-----------------------------------
This issue seems to be reproducible in HBase 1.5.0:
2020-07-09 09:13:24,744 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:605)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:647)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:830)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:778)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:594)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:145)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:100)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:288)
at java.lang.Thread.run(Thread.java:748)
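To illustrate the mechanism described in the issue below: a backward seek (targetPos smaller than the stream's current pos) cannot be served by the existing buffered block reader, so the old datanode connection is torn down and a new one opened. The following is a simplified, hypothetical model of that decision; the class and field names are illustrative and are not the actual DFSInputStream implementation.

```java
// Hypothetical sketch of the seek/reconnect behavior; not real HDFS code.
public class SeekModel {
    long pos;          // current position in the stream
    int reconnects;    // how many new datanode connections were opened

    public SeekModel(long initialPos) {
        this.pos = initialPos;
    }

    public void seek(long targetPos) {
        if (targetPos < pos) {
            // Backward seek: the buffered reader cannot rewind, so the
            // existing connection is dropped and a fresh one is created.
            reconnects++;
        }
        // Forward seeks within the current block can often be served by
        // skipping bytes on the existing connection (no reconnect).
        pos = targetPos;
    }
}
```

With the positions from the log line below (pos 111506876, targetPos 111506843), each such seek would cost one reconnect; repeated over a large scan, that produces the connection churn and log spam being reported.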
> Many connections to datanode are created when doing a large scan
> -----------------------------------------------------------------
>
> Key: HBASE-16212
> URL: https://issues.apache.org/jira/browse/HBASE-16212
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 1.1.2
> Reporter: Zhihua Deng
> Priority: Major
> Attachments: HBASE-16212.patch, HBASE-16212.v2.patch
>
>
> As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode suffers from logging the same message repeatedly. After adding logging to DFSInputStream, it outputs the following:
> 2016-07-10 21:31:42,147 INFO [B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK] for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 111506876, targetPos: 111506843
> ...
> Because the current pos of the input stream is larger than targetPos (the position being sought), a new connection to the datanode is created and the older one is closed as a consequence. When such wrong seek operations are frequent, the datanode's block scanner spams the logs with info messages, and many connections to the same datanode are created.
> hadoop version: 2.7.1
--
This message was sent by Atlassian Jira
(v8.3.4#803005)