Posted to user@spark.apache.org by Koert Kuipers <ko...@tresata.com> on 2018/08/06 16:31:38 UTC

spark structured streaming with file-based sources and sinks

Has anyone used Spark structured streaming from/to files (json, csv,
parquet, avro) in a non-test setting?

I realize Kafka is probably the way to go, but let's say I have a situation
where Kafka is not available for reasons out of my control, and I want to
do micro-batching. Could I use files to do so in a production setting?
Basically:

files on HDFS => Spark structured streaming => files on HDFS
  => Spark structured streaming => files on HDFS => etc.
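
To make it concrete, here's a rough, untested sketch of what I mean by one
such stage, using the standard file source and file sink (the paths, schema,
and trigger interval below are placeholders I made up):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.types.{LongType, StringType, StructType}

object FileToFileStage {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("file-to-file-stage")
      .getOrCreate()

    // file sources require an explicit schema up front
    val schema = new StructType()
      .add("id", LongType)
      .add("value", StringType)

    // the file source picks up new files as they land in the input dir
    val in = spark.readStream
      .schema(schema)
      .parquet("hdfs:///data/stage1/out")

    // the file sink writes each micro-batch as new part files; the
    // checkpoint dir records stream progress (which input files were
    // already seen) so the job can resume after a restart
    val query = in.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/stage2/out")
      .option("checkpointLocation", "hdfs:///data/stage2/_checkpoint")
      .trigger(Trigger.ProcessingTime("5 minutes"))
      .start()

    query.awaitTermination()
  }
}

So each stage's output directory would be the next stage's input directory,
with each stage keeping its own checkpoint.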

I assumed this is not a good idea, but I'm interested to hear otherwise.