Posted to dev@phoenix.apache.org by "Josh Elser (JIRA)" <ji...@apache.org> on 2018/03/05 20:17:00 UTC

[jira] [Commented] (PHOENIX-4247) Phoenix/Spark/ZK connection

    [ https://issues.apache.org/jira/browse/PHOENIX-4247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386685#comment-16386685 ] 

Josh Elser commented on PHOENIX-4247:
-------------------------------------

Likely related to PHOENIX-4489. Please reopen if you have more details to provide, or can reproduce this problem after patching your installation with PHOENIX-4489. Thanks.
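
For anyone trying to confirm whether the connection growth described below is still present after applying PHOENIX-4489, a minimal sketch (the ZooKeeper host and port are placeholders, not taken from this report) that uses ZooKeeper's built-in "cons" four-letter command to list the open client connections:

    import java.net.Socket
    import scala.io.Source

    // Sketch only: ask ZooKeeper for its open client connections via the
    // "cons" four-letter command (assumes four-letter commands are enabled).
    // Run it between Spark batches; if the count from the driver host keeps
    // climbing, the leak is still present.
    object ZkConnectionCount {
      def main(args: Array[String]): Unit = {
        val socket = new Socket("zk-host", 2181) // placeholder quorum member
        try {
          socket.getOutputStream.write("cons".getBytes("UTF-8"))
          socket.getOutputStream.flush()
          val reply = Source.fromInputStream(socket.getInputStream, "UTF-8").mkString
          val connections = reply.split("\n").filter(_.trim.nonEmpty)
          connections.foreach(println) // one line per client connection
          println(s"Open connections: ${connections.length}")
        } finally {
          socket.close()
        }
      }
    }

The per-IP ceiling that eventually gets hit is ZooKeeper's maxClientCnxns setting, so raising it only postpones the failure; the leaked connections themselves have to be released.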

> Phoenix/Spark/ZK connection
> ---------------------------
>
>                 Key: PHOENIX-4247
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4247
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.10.0
>         Environment: HBase 1.2 
> Spark 1.6 
> Phoenix 4.10 
>            Reporter: Kumar Palaniappan
>            Priority: Major
>
> After upgrading from CDH 5.5.2/Phoenix 4.6/Spark 1.5 to CDH 5.9.1/Phoenix 4.10/Spark 1.6, streaming jobs that read data from Phoenix no longer release their ZooKeeper connections. The number of connections from the driver grows with each batch until ZooKeeper's per-IP connection limit is reached, at which point the Spark streaming job can no longer read data from Phoenix.
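
To make the pattern concrete, here is a minimal sketch of the kind of per-batch Phoenix read described above (the table name, ZooKeeper URL, and socket stream source are hypothetical, not from this report). Each batch goes through the phoenix-spark DataSource; if the underlying HBase/ZooKeeper connection is never released, the driver accumulates one connection per batch until the per-IP limit is reached:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Sketch only: a Spark 1.6-style streaming job that reads from Phoenix
    // on every batch via the phoenix-spark DataSource.
    object PhoenixStreamingRead {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("phoenix-zk-connection-check")
        val sc = new SparkContext(conf)
        val sqlContext = new SQLContext(sc)
        val ssc = new StreamingContext(sc, Seconds(30))

        val events = ssc.socketTextStream("stream-host", 9999) // placeholder source

        events.foreachRDD { rdd =>
          // Per-batch read against Phoenix; each batch sets up a new client.
          val df = sqlContext.read
            .format("org.apache.phoenix.spark")
            .option("table", "MY_TABLE")       // hypothetical table
            .option("zkUrl", "zk-host:2181")   // hypothetical quorum
            .load()
          println(s"Batch of ${rdd.count()} events; Phoenix rows visible: ${df.count()}")
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }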



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)