Posted to issues@hbase.apache.org by "Albert Lee (JIRA)" <ji...@apache.org> on 2017/04/12 15:08:41 UTC

[jira] [Commented] (HBASE-17906) When a huge amount of data is written to HBase through thrift2, a deadlock error occurs.

    [ https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15966020#comment-15966020 ] 

Albert Lee commented on HBASE-17906:
------------------------------------

I found that this happens because the htablePools cache and the connection cache have the same timeout, so both can expire at the same time.
I added a refresh mechanism and it works for me now.
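The refresh mechanism described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual ThriftHBaseServiceHandler patch: the class name `RefreshingCache`, the injectable `clock`, and all method names are assumptions made for the example. The idea it demonstrates is the one in the comment: if the table-pool cache and the connection cache share one idle timeout, an entry that is still in use can expire together with its peer; bumping the entry's timestamp on every access ("refreshing" it) prevents that.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

// Hypothetical sketch of a TTL cache that refreshes entries on access,
// so an entry still being used never idles out alongside a sibling cache
// that shares the same timeout.
class RefreshingCache<K, V> {
    private final long ttlMillis;
    private final LongSupplier clock;  // injectable time source, for testability
    private final Map<K, V> values = new ConcurrentHashMap<>();
    private final Map<K, Long> lastUsed = new ConcurrentHashMap<>();

    RefreshingCache(long ttlMillis, LongSupplier clock) {
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    void put(K key, V value) {
        values.put(key, value);
        lastUsed.put(key, clock.getAsLong());
    }

    /** Returns the cached value, or null if it sat idle past the TTL. */
    V get(K key) {
        Long ts = lastUsed.get(key);
        if (ts == null) {
            return null;
        }
        if (clock.getAsLong() - ts > ttlMillis) {
            values.remove(key);            // idled out: drop the stale entry
            lastUsed.remove(key);
            return null;
        }
        lastUsed.put(key, clock.getAsLong());  // the "refresh" on access
        return values.get(key);
    }
}
```

With this scheme, an entry accessed at least once per TTL interval stays alive indefinitely, while a truly idle one still expires. The same effect can be had with Guava's `CacheBuilder.expireAfterAccess` instead of hand-rolling the bookkeeping.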

> When a huge amount of data is written to HBase through thrift2, a deadlock error occurs.
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-17906
>                 URL: https://issues.apache.org/jira/browse/HBASE-17906
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>    Affects Versions: 0.98.21
>         Environment: hadoop 2.5.2, hbase 0.98.20 jdk1.8.0_77
>            Reporter: Albert Lee
>             Fix For: 1.2.2, 0.98.21
>
>
> When a huge amount of data is written to HBase through thrift2, a deadlock error occurs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)