Posted to dev@phoenix.apache.org by "Josh Mahonin (JIRA)" <ji...@apache.org> on 2014/07/09 15:31:05 UTC
[jira] [Commented] (PHOENIX-1071) Provide integration for exposing Phoenix tables as Spark RDDs
[ https://issues.apache.org/jira/browse/PHOENIX-1071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14056222#comment-14056222 ]
Josh Mahonin commented on PHOENIX-1071:
---------------------------------------
Hi Andrew,
With the phoenix-pig module, there's a PhoenixInputFormat and a PhoenixOutputFormat that Spark can use to create an RDD. I'm able to both read and write Phoenix data from Spark in this way.
Example:
{code}
// Imports assume the phoenix-pig module is on the classpath
// (package names may vary across Phoenix versions):
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.NullWritable
import org.apache.phoenix.pig.PhoenixPigConfiguration
import org.apache.phoenix.pig.PhoenixPigConfiguration.SchemaType
import org.apache.phoenix.pig.hadoop.{PhoenixInputFormat, PhoenixRecord}

// Configure the query to run and the connection details
val phoenixConf = new PhoenixPigConfiguration(new Configuration())
phoenixConf.setSelectStatement("SOME SELECT STATEMENT")
phoenixConf.setSelectColumns("COMMA,SEPARATED,COLUMNS")
phoenixConf.setSchemaType(SchemaType.QUERY)
phoenixConf.configure("db-server", "SOME_TABLE", 100L)

// Create an RDD of (NullWritable, PhoenixRecord) pairs via the Hadoop InputFormat
val phoenixRDD = sc.newAPIHadoopRDD(phoenixConf.getConfiguration(),
  classOf[PhoenixInputFormat],
  classOf[NullWritable],
  classOf[PhoenixRecord])
{code}
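As a rough sketch of consuming the result (the element type is assumed from the InputFormat's key/value classes; the field-access details of PhoenixRecord are not shown in the original and would depend on the Phoenix version):
{code}
// Each element is a (NullWritable, PhoenixRecord) pair; discard the key
// and operate on the records, e.g. count the rows returned by the query.
val records = phoenixRDD.map { case (_, record) => record }
val rowCount = records.count()
{code}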
> Provide integration for exposing Phoenix tables as Spark RDDs
> -------------------------------------------------------------
>
> Key: PHOENIX-1071
> URL: https://issues.apache.org/jira/browse/PHOENIX-1071
> Project: Phoenix
> Issue Type: New Feature
> Reporter: Andrew Purtell
>
> A core concept of Apache Spark is the resilient distributed dataset (RDD), a "fault-tolerant collection of elements that can be operated on in parallel". One can create an RDD referencing a dataset in any external storage system offering a Hadoop InputFormat, like HBase's TableInputFormat and TableSnapshotInputFormat. Phoenix, as a JDBC driver supporting a SQL dialect, can provide interesting and deep integration.
> Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}} action, implicitly creating necessary schema on demand.
> Add support for {{filter}} transformations that push predicates to the server.
> Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
> {code}
> // Count the number of different coffee varieties offered by each
> // supplier from Guatemala
> phoenixTable("coffees")
>   .select(c =>
>     where(c.origin == "GT"))
>   .countByKey()
>   .foreach(r => println(r._1 + "=" + r._2))
> {code}
> Support conversions between Scala and Java types and Phoenix table data.
--
This message was sent by Atlassian JIRA
(v6.2#6252)