Posted to issues@spark.apache.org by "Amit Baghel (JIRA)" <ji...@apache.org> on 2017/03/01 04:14:45 UTC
[jira] [Commented] (SPARK-19768) Error for both aggregate and non-aggregate queries in Structured Streaming - "This query does not support recovering from checkpoint location"
[ https://issues.apache.org/jira/browse/SPARK-19768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889472#comment-15889472 ]
Amit Baghel commented on SPARK-19768:
-------------------------------------
Thanks [~zsxwing] for the clarification. The Structured Streaming documentation is missing this piece of information, and the error thrown when a console sink is used with a checkpoint should be more meaningful. I have one more question: does a file sink using "parquet" with a checkpoint work only for non-aggregate queries? I tried it for both aggregate and non-aggregate queries, and the aggregate query throws an exception.
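For anyone hitting the same error, the workaround the AnalysisException itself names is to delete only the offsets log under the checkpoint location before restarting. A minimal shell sketch, using the /tmp path from the examples (the sources subdirectory here is illustrative of a typical checkpoint layout, not taken from this report):

```shell
# Simulate a checkpoint directory as a first run might leave it
# (the sources/ subdirectory and offsets/0 file are illustrative).
mkdir -p /tmp/checkpoint-data/offsets /tmp/checkpoint-data/sources
touch /tmp/checkpoint-data/offsets/0

# The workaround named in the AnalysisException: remove only the
# offsets log so the query starts over on the next run.
rm -rf /tmp/checkpoint-data/offsets

# The rest of the checkpoint directory is left in place.
ls /tmp/checkpoint-data
```

Note this discards the recorded stream progress, so the restarted query begins from its configured starting offsets rather than resuming.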
> Error for both aggregate and non-aggregate queries in Structured Streaming - "This query does not support recovering from checkpoint location"
> -------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-19768
> URL: https://issues.apache.org/jira/browse/SPARK-19768
> Project: Spark
> Issue Type: Question
> Components: Structured Streaming
> Affects Versions: 2.1.0
> Reporter: Amit Baghel
>
> I am running the JavaStructuredKafkaWordCount.java example with a checkpointLocation. The output mode is "complete". The relevant code is below.
> {code}
> // Generate running word count
> Dataset<Row> wordCounts = lines.flatMap(new FlatMapFunction<String, String>() {
>     @Override
>     public Iterator<String> call(String x) {
>         return Arrays.asList(x.split(" ")).iterator();
>     }
> }, Encoders.STRING()).groupBy("value").count();
>
> // Start running the query that prints the running counts to the console
> StreamingQuery query = wordCounts.writeStream()
>     .outputMode("complete")
>     .format("console")
>     .option("checkpointLocation", "/tmp/checkpoint-data")
>     .start();
> {code}
> This example runs successfully and writes data to the checkpoint directory. When I re-run the program, it throws the exception below:
> {code}
> Exception in thread "main" org.apache.spark.sql.AnalysisException: This query does not support recovering from checkpoint location. Delete /tmp/checkpoint-data/offsets to start over.;
> at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:219)
> at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:269)
> at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:262)
> at com.spark.JavaStructuredKafkaWordCount.main(JavaStructuredKafkaWordCount.java:85)
> {code}
> Then I modified JavaStructuredKafkaWordCount.java to use a non-aggregate query with output mode "append". Please see the code below.
> {code}
> // no aggregations
> Dataset<Row> wordCounts = lines.flatMap(new FlatMapFunction<String, String>() {
>     @Override
>     public Iterator<String> call(String x) {
>         return Arrays.asList(x.split(" ")).iterator();
>     }
> }, Encoders.STRING()).select("value");
>
> // append mode with console
> StreamingQuery query = wordCounts.writeStream()
>     .outputMode("append")
>     .format("console")
>     .option("checkpointLocation", "/tmp/checkpoint-data")
>     .start();
> {code}
> This modified code runs successfully and writes data to the checkpoint directory. When I re-run the program, it throws the same exception:
> {code}
> Exception in thread "main" org.apache.spark.sql.AnalysisException: This query does not support recovering from checkpoint location. Delete /tmp/checkpoint-data/offsets to start over.;
> at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:219)
> at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:269)
> at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:262)
> at com.spark.JavaStructuredKafkaWordCount.main(JavaStructuredKafkaWordCount.java:85)
> {code}
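The tokenization step inside the FlatMapFunction in both examples above can be exercised in plain Java, independent of Spark. A minimal sketch (the TokenizeDemo class name and tokenize helper are made up for illustration; the body mirrors the call method in the report):

```java
import java.util.Arrays;
import java.util.Iterator;

public class TokenizeDemo {
    // Mirrors the FlatMapFunction body from the examples above:
    // split a line on single spaces and return the words as an iterator.
    static Iterator<String> tokenize(String x) {
        return Arrays.asList(x.split(" ")).iterator();
    }

    public static void main(String[] args) {
        // Each word is printed on its own line, as the flatMap would
        // emit one record per word for the downstream groupBy/select.
        tokenize("apache spark structured streaming")
            .forEachRemaining(System.out::println);
    }
}
```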
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org