Posted to dev@phoenix.apache.org by "iduanyingjie (JIRA)" <ji...@apache.org> on 2018/02/06 06:26:00 UTC

[jira] [Updated] (PHOENIX-4584) When reading Phoenix data with phoenix-spark, rows can be lost from the Spark DataFrame

     [ https://issues.apache.org/jira/browse/PHOENIX-4584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

iduanyingjie updated PHOENIX-4584:
----------------------------------
    Description: 
{code:java}
scala> val df = sqlContext.read.format("org.apache.phoenix.spark").option("table", "xx.xx").option("zkUrl", "xxxx:2181").load().cache()

scala> df.filter(df("locationid") === 523714).count()
res0: Long = 0
{code}
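
A diagnostic sketch to help narrow the report down (assumes the same placeholder table name "xx.xx" and zkUrl from the repro above; the comparison of counts with and without cache() is my addition, not part of the original report). If the uncached count is correct and only the cached count is zero, the loss happens when the DataFrame is materialized into the cache rather than in the Phoenix scan itself:

{code:java}
// Diagnostic sketch (hypothetical): compare counts with and without cache().
// "xx.xx" and "xxxx:2181" are the placeholders from the repro above.
val raw = sqlContext.read.format("org.apache.phoenix.spark")
  .option("table", "xx.xx")
  .option("zkUrl", "xxxx:2181")
  .load()

// Count straight from the Phoenix scan, before any caching.
val uncachedCount = raw.filter(raw("locationid") === 523714).count()

// Count after caching; a mismatch here points at the cached
// (serialized) plan rather than the Phoenix read path.
val cached = raw.cache()
val cachedCount = cached.filter(cached("locationid") === 523714).count()

println(s"uncached=$uncachedCount cached=$cachedCount")
{code}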

> When reading Phoenix data with phoenix-spark, rows can be lost from the Spark DataFrame
> ---------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-4584
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4584
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.8.0
>            Reporter: iduanyingjie
>            Priority: Major
>         Attachments: 微信图片_20180206142059.png, 微信图片_20180206142106.png
>
>
> {code:java}
> scala> val df = sqlContext.read.format("org.apache.phoenix.spark").option("table", "xx.xx").option("zkUrl", "xxxx:2181").load().cache()
> scala> df.filter(df("locationid") === 523714).count()
> res0: Long = 0
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)