Posted to user@ignite.apache.org by kant kodali <ka...@gmail.com> on 2018/06/21 02:32:32 UTC

Question on Ignite and Spark Structured Streaming Integration.

Hi All,

I am wondering if Ignite has a Spark Structured Streaming sink? If I can use
any JDBC sink, then my question really is: does Spark Structured Streaming
have a JDBC sink, or do I need to use ForeachWriter? I see the following
code in this link
<https://databricks.com/blog/2016/07/28/structured-streaming-in-apache-spark.html>
and I can see that the database name can be passed in the JDBC connection
string; however, I wonder how to pass a table name? Thanks!

inputDF.groupBy($"action", window($"time", "1 hour")).count()
       .writeStream.format("jdbc")
       .save("jdbc:mysql://…")
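For reference, in Spark's batch JDBC writer the table name is not part of the URL; it goes in the "dbtable" option (the "url" and "dbtable" option names come from Spark's JDBC data source). A sketch, with a hypothetical table name "action_counts":

```scala
// Batch-mode JDBC write: the target table is passed via "dbtable",
// the database via the connection URL. (Streaming writes do not
// accept format("jdbc") — see the reply below.)
aggDF.write
  .format("jdbc")
  .option("url", "jdbc:mysql://host:3306/mydb")
  .option("dbtable", "action_counts")
  .option("user", "app")
  .option("password", "secret")
  .save()
```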

Re: Question on Ignite and Spark Structured Streaming Integration.

Posted by aealexsandrov <ae...@gmail.com>.
Hi,

As far as I know, there is no special sink for streaming to Ignite.
Also, I don't see a "jdbc" format in the official Spark documentation (only
file, kafka, console, memory and foreach):

https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#output-sinks

However, you can integrate it with Ignite using the foreach sink and a
custom ForeachWriter implementation:

https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#using-foreach
https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/ForeachWriter.html
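To make the foreach approach concrete: Spark's ForeachWriter[T] exposes three lifecycle methods — open(partitionId, version) once per partition/epoch, process(value) per row, and close(errorCause) at the end. Below is a simplified, self-contained model of that lifecycle. The abstract class mirrors the real Spark interface, and the JDBC work is replaced by an in-memory buffer so the sketch runs standalone; in a real writer, open() would get a connection (e.g. via Ignite's thin JDBC driver, "jdbc:ignite:thin://host"), process() would issue an INSERT or MERGE per row, and close() would release the connection. The class and field names here are made up for illustration.

```scala
import scala.collection.mutable.ArrayBuffer

// Simplified stand-in for org.apache.spark.sql.ForeachWriter[T]:
// Spark calls open() once per partition/epoch, process() for each
// row in that partition, and close() when the partition is done.
abstract class SketchForeachWriter[T] {
  def open(partitionId: Long, version: Long): Boolean
  def process(value: T): Unit
  def close(errorCause: Throwable): Unit
}

// A writer that would normally hold a JDBC connection to Ignite and
// MERGE each row in process(); here it appends to a buffer instead
// so the lifecycle can be exercised without a running cluster.
class IgniteSinkSketch(sink: ArrayBuffer[(String, Long)])
    extends SketchForeachWriter[(String, Long)] {
  override def open(partitionId: Long, version: Long): Boolean = true
  override def process(value: (String, Long)): Unit = sink += value
  override def close(errorCause: Throwable): Unit = ()
}

// Exercise the lifecycle the way Spark would for one partition.
val rows = ArrayBuffer.empty[(String, Long)]
val writer = new IgniteSinkSketch(rows)
if (writer.open(partitionId = 0L, version = 0L)) {
  Seq(("click", 42L), ("view", 7L)).foreach(writer.process)
}
writer.close(null)
```

In the real pipeline you would pass your writer to .writeStream.foreach(...) instead of format("jdbc"); returning false from open() tells Spark to skip that partition (useful for deduplicating on retries).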

BR,
Andrei

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/