Posted to user@spark.apache.org by Jeyhun Karimov <je...@gmail.com> on 2016/10/23 14:28:34 UTC
Spark streaming crashes with high throughput
Hi,
I am getting the following error with a Spark Streaming job (Spark 2.0.0):

*Remote RPC client disassociated. Likely due to containers exceeding
thresholds, or network issues. Check driver logs for WARN messages.*

The job is a simple windowed aggregation, and the stream is read from a
socket. Average throughput is 220K tuples per second. The job runs as
expected for a while (approx. 10 minutes), then I get the message above.
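
For reference, a job of the general shape described (windowed aggregation
over a socket stream) can be sketched as below using the classic DStream
API. This is only an illustrative sketch; the host, port, window sizes,
and word-count aggregation are assumptions, not the actual job:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WindowedAgg {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WindowedAgg")
    // 1-second batch interval (hypothetical)
    val ssc = new StreamingContext(conf, Seconds(1))
    // Checkpointing is required for windowed state
    ssc.checkpoint("/tmp/checkpoint")

    // Read lines from a socket source (host/port are placeholders)
    val lines = ssc.socketTextStream("localhost", 9999)

    // Windowed aggregation: 60s window, sliding every 10s,
    // using an inverse function for incremental reduction
    val counts = lines
      .flatMap(_.split(" "))
      .map(word => (word, 1L))
      .reduceByKeyAndWindow(_ + _, _ - _, Seconds(60), Seconds(10))

    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```
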
There is a similar thread on the mailing list, but no conclusion was
reached there.
Thanks
Jeyhun
--
-Cheers
Jeyhun