Posted to user@spark.apache.org by DanteSama <ch...@sojo.com> on 2014/09/11 19:20:53 UTC

Spark SQL and running parquet tables?

I've been under the impression that creating and registering a parquet table
will pick up updates to that table, such as inserts. I have a program
running that does the following:

// Create context (conf is a SparkConf defined elsewhere)
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)

// Register the parquet directory as a table
sqlContext
  .parquetFile("hdfs://somewhere/users/sql/")
  .registerAsTable("mytable")

This program is continuously running. Over time, queries get fired off to
that sqlContext:

// Query the registered table, collect and return
sqlContext.sql(query)
  .collect()
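
To make "continuously running" concrete, here's a stand-in for that query
path. The loop itself is hypothetical (in the real setup queries arrive some
other way), but the use of the shared sqlContext is the same:

// Hypothetical driver loop; the query text, interval, and println are
// placeholders -- only the sql(...).collect() pattern mirrors the real job.
while (true) {
  val rows = sqlContext.sql("SELECT COUNT(*) FROM mytable").collect()
  println(rows.mkString(", "))
  Thread.sleep(60 * 1000)   // poll once a minute
}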


Then, elsewhere, I have processes which insert data into that same table,
like so:

// Create context (conf as above)
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(conf, Seconds(3600))
val sqlContext = new SQLContext(ssc.sparkContext)

// Register table (createParquetFile and createSchemaRDD are SQLContext methods)
sqlContext
  .createParquetFile[Row]("hdfs://somewhere/users/sql/")
  .registerAsTable("mytable")

// Insert into (rdd exists and is filled with type Row)
sqlContext
  .createSchemaRDD(rdd)
  .coalesce(1)
  .insertInto("mytable")


In a local test, the first program does become aware of the changes the
second program makes. But when deployed with real data, outside of that
local test case, the running table "mytable" doesn't get updated. If I kill
the query program and restart it, it comes back with the current state of
"mytable".

Thoughts?





Re: Spark SQL and running parquet tables?

Posted by DanteSama <ch...@sojo.com>.
Turns out it was Spray with a bad route -- the results weren't updating
even though the table was. This thread can be ignored.





Re: Spark SQL and running parquet tables?

Posted by DanteSama <ch...@sojo.com>.
So, after toying around a bit, here's what I ended up with. First off,
there's no "registerTempTable" in 1.0.2 -- "registerAsTable" seems to be
enough to work (it's the same whether called directly on a SchemaRDD or on a
SQLContext being handed an RDD). The problem I ran into next was reloading a
table in one actor and referencing it in another.
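
For reference, the two forms I mean (assuming I'm reading the 1.0.2 scaladoc
right, registerRDDAsTable is the SQLContext-side equivalent; dataDir and
tableName are placeholders):

// 1) Directly on the SchemaRDD returned by parquetFile:
sqlContext.parquetFile(dataDir).registerAsTable(tableName)

// 2) Via the SQLContext, handing it the SchemaRDD:
sqlContext.registerRDDAsTable(sqlContext.parquetFile(dataDir), tableName)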

The environment I set up has two types of Akka actors, a Query and a
Refresher. They share a SQLContext reference (passed in on creation via
Props(classOf[Actor], sqlContext)). The Refresher would simply reload the
parquet file and re-register the table:

sqlContext
  .parquetFile(dataDir)
  .registerAsTable(tableName)

The Query actor (behind the web service) would query it:

sqlContext.sql("query with tableName").collect()

This would break: the Refresher actor worked and could query the table, but
the Query actor would report that the table doesn't exist.


For now, I've removed the Refresher and just updated the Query actor to
refresh its own table if it's stale.
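
Roughly, the Query actor now looks like this (a sketch -- the staleness
check, the names, and the one-hour threshold are illustrative):

import akka.actor.Actor
import org.apache.spark.sql.SQLContext

class Query(sqlContext: SQLContext, dataDir: String, tableName: String)
    extends Actor {

  private val maxAgeMs = 60 * 60 * 1000L   // illustrative staleness threshold
  private var lastRefresh = 0L

  // Re-read the parquet directory and re-register the table if it's stale.
  private def refreshIfStale(): Unit =
    if (System.currentTimeMillis() - lastRefresh > maxAgeMs) {
      sqlContext.parquetFile(dataDir).registerAsTable(tableName)
      lastRefresh = System.currentTimeMillis()
    }

  def receive = {
    case query: String =>
      refreshIfStale()
      sender ! sqlContext.sql(query).collect()
  }
}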





Re: Spark SQL and running parquet tables?

Posted by Yin Huai <hu...@gmail.com>.
It is in SQLContext (http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.SQLContext).
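
For example (a minimal sketch against the 1.1 API, where registerAsTable is
deprecated in favor of registerTempTable; the path and table name are the
ones from your earlier snippets):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("refresh-sketch"))
val sqlContext = new SQLContext(sc)

// Re-reading the directory and re-registering the name picks up
// files written since the last load.
sqlContext
  .parquetFile("hdfs://somewhere/users/sql/")
  .registerTempTable("mytable")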

On Thu, Sep 11, 2014 at 3:21 PM, DanteSama <ch...@sojo.com> wrote:

> Michael Armbrust wrote
> > You'll need to run parquetFile("path").registerTempTable("name") to
> > refresh the table.
>
> I'm not seeing that function on SchemaRDD in 1.0.2, is there something I'm
> missing?
>
> SchemaRDD Scaladoc:
> http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.SchemaRDD

Re: Spark SQL and running parquet tables?

Posted by DanteSama <ch...@sojo.com>.
Michael Armbrust wrote
> You'll need to run parquetFile("path").registerTempTable("name") to
> refresh the table.

I'm not seeing that function on SchemaRDD in 1.0.2, is there something I'm
missing?

SchemaRDD Scaladoc:
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.SchemaRDD


