Posted to dev@phoenix.apache.org by Istvan Toth <st...@cloudera.com.INVALID> on 2021/08/02 06:14:34 UTC

Re: Apache spark plugin - Optional SELECT columns in push down predicates

These kinds of questions are better asked on the user@phoenix.apache.org
list.

You can check the tests in

https://github.com/apache/phoenix-connectors/blob/master/phoenix-spark-base/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala

to get an idea of how to use push-down support. (It is automatic for the
supported cases.)
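For PySpark specifically, a minimal sketch could look like the following. The table name, column names, and ZooKeeper URL are hypothetical, and the exact `format`/option names depend on the connector version you use (older releases use `org.apache.phoenix.spark`; check the phoenix-connectors README for your version):

```python
# Hypothetical PySpark usage of the Phoenix connector.
# Assumes a running Phoenix/HBase cluster and a Phoenix table named
# MY_TABLE with columns ID and STATE; names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("phoenix-pushdown").getOrCreate()

df = (
    spark.read.format("phoenix")          # or "org.apache.phoenix.spark" on older versions
    .option("table", "MY_TABLE")
    .option("zkUrl", "zookeeper-host:2181")
    .load()
)

# Column pruning and filters are pushed down to Phoenix automatically
# for the supported cases; only matching rows/columns reach Spark.
filtered = df.select("ID", "STATE").filter(df.STATE == "active")

# The physical plan shows PushedFilters when pushdown actually applies.
filtered.explain()
```

There is no separate pushdown API to call: you write ordinary DataFrame `select`/`filter` expressions and the connector translates the supported ones into the Phoenix query.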

I don't know of any specific issue with using the Phoenix connector from
PySpark (as opposed to Scala or Java).

Istvan

On Mon, Jul 26, 2021 at 4:19 PM Karthigeyan r <vi...@gmail.com>
wrote:

> Hello Team,
>
>
>
> We are working on an Apache Phoenix table with Spark integration. There is
> a table with millions of records, and we want to apply pushdown predicates
> for efficient filtering before pulling all records from the underlying
> Apache Phoenix table.
>
>
>
> When I looked into the documentation, I couldn't find an example of this
> under the Apache Spark Plugin documentation. Could you please update the
> documentation with an example of pushdown predicates, especially for
> PySpark?
>
>
>
> Regards,
>
> Karthigeyan
>


-- 
*István Tóth* | Staff Software Engineer
stoty@cloudera.com