Posted to issues@spark.apache.org by "Lukáš (Jira)" <ji...@apache.org> on 2021/10/12 08:17:00 UTC
[jira] [Created] (SPARK-36984) Misleading Spark Streaming source documentation
Lukáš created SPARK-36984:
-----------------------------
Summary: Misleading Spark Streaming source documentation
Key: SPARK-36984
URL: https://issues.apache.org/jira/browse/SPARK-36984
Project: Spark
Issue Type: Documentation
Components: Documentation, PySpark, Structured Streaming
Affects Versions: 3.1.2, 3.1.1, 3.1.0, 3.0.3, 3.0.2, 3.0.1
Reporter: Lukáš
The documentation at [https://spark.apache.org/docs/latest/streaming-programming-guide.html#advanced-sources] clearly states that *Kafka* (and Kinesis) are available as advanced sources in the Python API (v3.1.2) for *Spark Streaming (DStreams)*.
However, there is no way to create a DStream from Kafka in PySpark >= 3.0.0, as the `kafka.py` file is missing from [https://github.com/apache/spark/tree/master/python/pyspark/streaming]. I'm coming from PySpark 2.4.4, where this was possible. _Should Kafka be removed as an advanced source for Spark Streaming in the Python API in the docs?_
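One way to see the gap concretely is to probe for the old connector module at runtime. This is a minimal sketch (the helper name is mine, not a Spark API); it assumes only that the DStream Kafka bindings lived under `pyspark.streaming.kafka` in PySpark 2.4:

```python
import importlib.util


def has_dstream_kafka():
    """Return True if pyspark.streaming.kafka is importable (PySpark < 3.0),
    False if PySpark is installed but the module was removed (PySpark >= 3.0),
    or None if PySpark itself is not installed."""
    try:
        return importlib.util.find_spec("pyspark.streaming.kafka") is not None
    except ModuleNotFoundError:
        # importlib raises when the parent package (pyspark) is absent.
        return None
```

On a PySpark 3.x installation this returns False, which is exactly the situation the report describes.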
Note that I'm aware of the Kafka integration guide at [https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html], but I'm not interested in Structured Streaming, as it doesn't support arbitrary stateful operations in Python. DStreams support this functionality via `updateStateByKey`.
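For context, the `updateStateByKey` semantics in question can be sketched locally in plain Python: a user-supplied update function receives the batch's new values for a key plus the previous state and returns the new state. The `apply_batch` helper below is a hypothetical local simulation for illustration, not a Spark API:

```python
from collections import defaultdict


def update_count(new_values, running_count):
    # updateStateByKey-style update function: fold this micro-batch's
    # values into the running state (None means "no previous state").
    return sum(new_values) + (running_count or 0)


def apply_batch(state, batch_pairs):
    # Hypothetical helper: simulate one micro-batch on a local dict.
    # Group (key, value) pairs by key, then apply the update function
    # to every key seen in either the old state or this batch.
    grouped = defaultdict(list)
    for key, value in batch_pairs:
        grouped[key].append(value)
    keys = set(state) | set(grouped)
    return {k: update_count(grouped.get(k, []), state.get(k)) for k in keys}
```

For example, `apply_batch({}, [("a", 1), ("a", 2), ("b", 3)])` yields `{"a": 3, "b": 3}`, and feeding that state a second batch `[("a", 1)]` yields `{"a": 4, "b": 3}` — the running count persists across batches, which is the capability the reporter says Structured Streaming's Python API lacks.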
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org