Posted to dev@phoenix.apache.org by "Andrew Purtell (JIRA)" <ji...@apache.org> on 2014/07/09 01:06:05 UTC

[jira] [Created] (PHOENIX-1071) Provide integration for exposing Phoenix tables as Spark RDDs

Andrew Purtell created PHOENIX-1071:
---------------------------------------

             Summary: Provide integration for exposing Phoenix tables as Spark RDDs
                 Key: PHOENIX-1071
                 URL: https://issues.apache.org/jira/browse/PHOENIX-1071
             Project: Phoenix
          Issue Type: New Feature
            Reporter: Andrew Purtell


A core concept of Apache Spark is the resilient distributed dataset (RDD), a "fault-tolerant collection of elements that can be operated on in parallel". An RDD can reference a dataset in any external storage system offering a Hadoop InputFormat, such as HBase's TableInputFormat or TableSnapshotInputFormat. Phoenix, as a JDBC driver supporting a SQL dialect, can provide deeper and more interesting integration.
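As a sketch of the stock InputFormat route that exists today, before any Phoenix-specific integration (the class names and the INPUT_TABLE property are the standard HBase/Spark ones; the table name is illustrative):
{code}
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.SparkContext

// Plain HBase route: scan the table backing a Phoenix schema as
// (rowkey, Result) pairs, with no knowledge of Phoenix types or columns.
val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "COFFEES")

val sc = new SparkContext("local", "phoenix-rdd-sketch")
val rdd = sc.newAPIHadoopRDD(conf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])
{code}
A Phoenix-aware integration would improve on this by returning typed column values instead of raw HBase Result objects.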

Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}} action, implicitly creating the necessary schema on demand.
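A minimal sketch of what such an action might do under the hood (the action itself and the RDD element type are hypothetical; the CREATE TABLE IF NOT EXISTS and UPSERT statements are standard Phoenix SQL, which is what gives the schema-on-demand behavior):
{code}
import java.sql.DriverManager

// Hypothetical sketch: persist an RDD[(String, Int)] through the
// Phoenix JDBC driver, one connection per partition.
rdd.foreachPartition { rows =>
  val conn = DriverManager.getConnection("jdbc:phoenix:localhost")
  conn.createStatement().execute(
    "CREATE TABLE IF NOT EXISTS COFFEES (NAME VARCHAR PRIMARY KEY, PRICE INTEGER)")
  val stmt = conn.prepareStatement("UPSERT INTO COFFEES VALUES (?, ?)")
  rows.foreach { case (name, price) =>
    stmt.setString(1, name)
    stmt.setInt(2, price)
    stmt.executeUpdate()
  }
  conn.commit()
  conn.close()
}
{code}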

Add support for {{filter}} transformations that push predicates to the server.
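One way the pushdown could work (all names here are hypothetical): instead of filtering rows client-side with a closure, the RDD accumulates predicates as SQL text and folds them into the WHERE clause of the SELECT that each partition runs on the server:
{code}
// Hypothetical: a PhoenixRDD accumulates pushed-down predicates and only
// materializes rows matching the combined WHERE clause on the server.
class PhoenixRDD(table: String, predicates: Seq[String] = Nil) {
  def filter(predicate: String): PhoenixRDD =
    new PhoenixRDD(table, predicates :+ predicate)

  // The query each partition would issue, scan ranges aside.
  def query: String = {
    val where =
      if (predicates.isEmpty) ""
      else predicates.mkString(" WHERE ", " AND ", "")
    s"SELECT * FROM $table$where"
  }
}

// new PhoenixRDD("COFFEES").filter("ORIGIN = 'GT'").query
// yields: SELECT * FROM COFFEES WHERE ORIGIN = 'GT'
{code}
A real implementation would translate typed Scala predicates rather than raw strings, but the server-side evaluation is the point.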

Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
{code}
// Count the number of different coffee varieties offered by each
// supplier from Guatemala
phoenixTable("coffees")
    .select("supplier")(c =>
        where(c.origin == "GT"))
    .countByKey()
    .foreach(r => println(r._1 + "=" + r._2))
{code} 

Support conversions between Scala and Java types and Phoenix table data.



--
This message was sent by Atlassian JIRA
(v6.2#6252)