Posted to issues@phoenix.apache.org by "mohankrishna (Jira)" <ji...@apache.org> on 2020/11/18 14:01:00 UTC

[jira] [Commented] (PHOENIX-6226) Records are not loading fully into phoenix table

    [ https://issues.apache.org/jira/browse/PHOENIX-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17234643#comment-17234643 ] 

mohankrishna commented on PHOENIX-6226:
---------------------------------------

Writing data using Spark, like:

    df.write.format("org.apache.phoenix")
        .option("table", "namespace.table_name")
        .option("query", "select * from namespace.table_name where id = 123")
        .option("zkUrl", "url-provided-in-conf")
        .save()
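For reference, a minimal write sketch against the documented phoenix-spark 4.x API, assuming PySpark, the connector jar on the classpath, and placeholder table/quorum/column names (none of these are from the issue). Note the documented data source name is "org.apache.phoenix.spark" rather than "org.apache.phoenix", the write path takes only the "table" and "zkUrl" options (there is no documented "query" write option), and the connector requires SaveMode.Overwrite:

    from pyspark.sql import SparkSession

    # Any existing SparkSession works; the app name is a placeholder.
    spark = SparkSession.builder.appName("phoenix-write-sketch").getOrCreate()

    # Hypothetical two-column frame; columns must match the Phoenix table schema.
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["ID", "COL1"])

    (df.write
        .format("org.apache.phoenix.spark")       # phoenix-spark 4.x data source
        .mode("overwrite")                        # required by the connector; rows are upserted, not truncated
        .option("table", "NAMESPACE.TABLE_NAME")  # placeholder target table
        .option("zkUrl", "zkhost:2181:/hbase")    # placeholder ZooKeeper quorum
        .save())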

> Records are not loading fully into phoenix table
> ------------------------------------------------
>
>                 Key: PHOENIX-6226
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6226
>             Project: Phoenix
>          Issue Type: Bug
>          Components: core, python
>    Affects Versions: 4.7.0
>         Environment: Linux server with HBase 1.1.2 and Phoenix 4.7.0
>            Reporter: mohankrishna
>            Priority: Major
>             Fix For: 4.7.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> My Phoenix table has 5,500,000 records. When I wrote a Parquet file containing 1,600,000 records into it, only 700,000 of those records were actually written to the table.
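
A hedged diagnostic sketch (editor's suggestion; the path, table name, and key column are placeholders, not from the issue): since Phoenix UPSERTs collapse rows that share a primary key, comparing the distinct-key count in the Parquet source against the Phoenix row count can show whether records were actually dropped or merely deduplicated on the key:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("phoenix-count-check").getOrCreate()

    # Placeholder path and key column for the Parquet source.
    parquet_df = spark.read.parquet("/path/to/source.parquet")
    print("parquet rows: ", parquet_df.count())
    print("distinct keys:", parquet_df.select("ID").distinct().count())

    # Row count as Phoenix sees it after the load.
    phoenix_df = (spark.read
        .format("org.apache.phoenix.spark")
        .option("table", "NAMESPACE.TABLE_NAME")
        .option("zkUrl", "zkhost:2181:/hbase")
        .load())
    print("phoenix rows: ", phoenix_df.count())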



--
This message was sent by Atlassian Jira
(v8.3.4#803005)