Posted to issues@hbase.apache.org by "Ashish Singhi (JIRA)" <ji...@apache.org> on 2017/03/31 12:31:42 UTC

[jira] [Commented] (HBASE-9393) HBase does not close a closed socket, resulting in many CLOSE_WAIT

    [ https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15950785#comment-15950785 ] 

Ashish Singhi commented on HBASE-9393:
--------------------------------------

Attached patch v16 for further review. Once the master branch patch is accepted I will attach patches for the other branches.
Response to [~busbey]'s comments:
{quote}
The addition of the unbuffer call here means that we need to update the javadocs for HFile.createReader(FileSystem, Path, FSDataInputStreamWrapper, long, CacheConfig, Configuration) and HFile.createReaderFromStream(Path, FSDataInputStream, long, CacheConfig, Configuration) to note that callers need to ensure no other threads have access to the passed FSDISW instance.
We should also ensure that existing calls to those methods are safely passing the FSDISW instance.
{quote}
No need; a new FSDISW instance is created and passed from within these methods, so no other threads have access to it.
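
For context, a minimal sketch (not the patch itself; class and method names here are hypothetical) of what unbuffering looks like once a caller is done reading. It assumes the wrapped stream supports the Hadoop CanUnbuffer interface, as HDFS streams do on recent Hadoop versions.
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.CanUnbuffer;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical illustration only: after a read completes, ask the stream to
// drop its buffers so the cached DataNode socket is released instead of
// lingering in CLOSE_WAIT. The stream object itself stays open for later reads.
public final class UnbufferSketch {

  private UnbufferSketch() {
  }

  public static void readAndUnbuffer(FileSystem fs, Path path) throws IOException {
    FSDataInputStream in = fs.open(path);
    try {
      byte[] buf = new byte[4096];
      in.read(buf); // ... do the actual work with the stream ...
    } finally {
      // Only streams whose wrapped stream implements CanUnbuffer can
      // release their socket; otherwise unbuffer() is not supported.
      if (in.getWrappedStream() instanceof CanUnbuffer) {
        in.unbuffer();
      }
    }
  }
}
{code}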

{quote}
Just want to make sure I'm following the rationale correctly here. This won't actually take care of unbuffering if the lock is held e.g. for reading. I think this is fine, since it implies someone else is still using the stream and presumably they will also attempt to unbuffer when they are done.
{quote}
Yes.
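
To make that rationale concrete, a hypothetical sketch (not the actual FSDataInputStreamWrapper code) of the "skip if busy" behaviour: unbuffer only when the stream's lock can be taken without waiting, and otherwise rely on the current holder to attempt the unbuffer when it finishes.
{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

import org.apache.hadoop.fs.FSDataInputStream;

// Hypothetical sketch of the locking rationale discussed above.
public class GuardedUnbuffer {

  private final ReentrantReadWriteLock streamLock = new ReentrantReadWriteLock();
  private final FSDataInputStream stream;

  public GuardedUnbuffer(FSDataInputStream stream) {
    this.stream = stream;
  }

  public void unbufferIfIdle() {
    // Do not block if another thread holds the lock (e.g. for reading);
    // that thread is still using the stream and is expected to attempt
    // the unbuffer itself once it is done.
    if (streamLock.writeLock().tryLock()) {
      try {
        stream.unbuffer();
      } finally {
        streamLock.writeLock().unlock();
      }
    }
  }
}
{code}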

> HBase does not close a closed socket, resulting in many CLOSE_WAIT
> --------------------------------------------------------------------
>
>                 Key: HBASE-9393
>                 URL: https://issues.apache.org/jira/browse/HBASE-9393
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.94.2, 0.98.0, 1.0.1.1, 1.1.2
>         Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 7279 regions
>            Reporter: Avi Zrachya
>            Assignee: Ashish Singhi
>            Priority: Critical
>             Fix For: 2.0.0
>
>         Attachments: HBASE-9393.patch, HBASE-9393.v10.patch, HBASE-9393.v11.patch, HBASE-9393.v12.patch, HBASE-9393.v13.patch, HBASE-9393.v14.patch, HBASE-9393.v15.patch, HBASE-9393.v15.patch, HBASE-9393.v16.patch, HBASE-9393.v1.patch, HBASE-9393.v2.patch, HBASE-9393.v3.patch, HBASE-9393.v4.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v7.patch, HBASE-9393.v8.patch, HBASE-9393.v9.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K sockets in CLOSE_WAIT, and at some point HBase cannot connect to the datanode because there are too many mapped sockets from one host to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart HBase to solve the problem; over time it will increase to 60-100K sockets in CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root     17255 17219  0 12:26 pts/0    00:00:00 grep 21592
> hbase    21592     1 17 Aug29 ?        03:29:06 /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)