Posted to user@spark.apache.org by Shay Elbaz <sh...@gm.com> on 2022/05/09 14:55:19 UTC
Spark on K8s - repeating annoying exception
Hi all,
I apologize for reposting this from Stack Overflow, but it got very little attention and no comments.
I'm using a Spark 3.2.1 image built from the official distribution via `docker-image-tool.sh`, on a Kubernetes 1.18 cluster.
Everything works fine, except for this error message on stdout every 90 seconds:
WARN WatcherWebSocketListener: Exec Failure
java.io.EOFException
at okio.RealBufferedSource.require(RealBufferedSource.java:61)
at okio.RealBufferedSource.readByte(RealBufferedSource.java:74)
at okhttp3.internal.ws.WebSocketReader.readHeader(WebSocketReader.java:117)
at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:101)
at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:274)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:214)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This message does not affect the application, but it is really annoying, especially for Jupyter users. The lack of detail makes it very hard to debug.
It appears with every submission method: spark-submit, pyspark, and spark-shell.
I've found traces of it on the internet, but all occurrences were from older versions of Spark and were resolved by using a "newer" version of fabric8 (4.x).
Spark 3.2.1 already uses fabric8 version 5.4.1.
I wonder if anyone else still sees this error in Spark 3.x, and has a resolution.
Thanks,
Shay.
RE: [EXTERNAL] Re: Spark on K8s - repeating annoying exception
Posted by Shay Elbaz <sh...@gm.com>.
Hi Martin,
Thanks for the help :) I tried setting both of those keys to a high value, but the error still appears every 90 seconds.
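For reference, raising those two intervals on submission might look like the following sketch (the master URL is a placeholder and the 300s values are purely illustrative; both property names come from the Spark on Kubernetes documentation):

```shell
# Sketch: raise the K8s API polling interval and the missing-pod detection
# delta well past the 90s period of the warning, to test Martin's guess.
spark-submit \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.executor.apiPollingInterval=300s \
  --conf spark.kubernetes.executor.missingPodDetectDelta=300s \
  --conf spark.kubernetes.container.image=<your-spark-3.2.1-image> \
  ...
```

If the warning still arrives on a 90-second cadence after this change, that would suggest the cause is not executor-pod polling at all.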
Shay
From: Martin Grigorov <mg...@apache.org>
Sent: Friday, May 13, 2022 4:15 PM
To: Shay Elbaz <sh...@gm.com>
Cc: user@spark.apache.org
Subject: [EXTERNAL] Re: Spark on K8s - repeating annoying exception
Hi,
On Mon, May 9, 2022 at 5:57 PM Shay Elbaz <sh...@gm.com> wrote:
Hi all,
I apologize for reposting this from Stack Overflow, but it got very little attention and no comments.
I'm using a Spark 3.2.1 image built from the official distribution via `docker-image-tool.sh`, on a Kubernetes 1.18 cluster.
Everything works fine, except for this error message on stdout every 90 seconds:
Wild guess: K8S API polling?!
https://spark.apache.org/docs/latest/running-on-kubernetes.html#spark-properties
- spark.kubernetes.executor.apiPollingInterval
- spark.kubernetes.executor.missingPodDetectDelta
but for both settings the default is 30s, not 90s.
WARN WatcherWebSocketListener: Exec Failure
java.io.EOFException
at okio.RealBufferedSource.require(RealBufferedSource.java:61)
at okio.RealBufferedSource.readByte(RealBufferedSource.java:74)
at okhttp3.internal.ws.WebSocketReader.readHeader(WebSocketReader.java:117)
at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:101)
at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:274)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:214)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This message does not affect the application, but it is really annoying, especially for Jupyter users. The lack of detail makes it very hard to debug.
It appears with every submission method: spark-submit, pyspark, and spark-shell.
I've found traces of it on the internet, but all occurrences were from older versions of Spark and were resolved by using a "newer" version of fabric8 (4.x).
Spark 3.2.1 already uses fabric8 version 5.4.1.
I wonder if anyone else still sees this error in Spark 3.x, and has a resolution.
Thanks,
Shay.
Re: Spark on K8s - repeating annoying exception
Posted by Martin Grigorov <mg...@apache.org>.
Hi,
On Mon, May 9, 2022 at 5:57 PM Shay Elbaz <sh...@gm.com> wrote:
> Hi all,
>
> I apologize for reposting this from Stack Overflow, but it got very little
> attention and no comments.
>
> I'm using a Spark 3.2.1 image built from the official distribution via
> `docker-image-tool.sh`, on a Kubernetes 1.18 cluster.
>
> Everything works fine, except for this error message on stdout every 90
> seconds:
Wild guess: K8S API polling?!
https://spark.apache.org/docs/latest/running-on-kubernetes.html#spark-properties
- spark.kubernetes.executor.apiPollingInterval
- spark.kubernetes.executor.missingPodDetectDelta
but for both settings the default is 30s, not 90s.

> WARN WatcherWebSocketListener: Exec Failure
> java.io.EOFException
> at okio.RealBufferedSource.require(RealBufferedSource.java:61)
> at okio.RealBufferedSource.readByte(RealBufferedSource.java:74)
> at okhttp3.internal.ws.WebSocketReader.readHeader(WebSocketReader.java:117)
> at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:101)
> at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:274)
> at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:214)
> at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
> at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
> This message does not affect the application, but it is really annoying,
> especially for Jupyter users. The lack of detail makes it very hard to
> debug.
>
> It appears with every submission method: spark-submit, pyspark, and
> spark-shell.
>
> I've found traces of it on the internet, but all occurrences were from
> older versions of Spark and were resolved by using a "newer" version of
> fabric8 (4.x).
>
> Spark 3.2.1 already uses fabric8 version 5.4.1.
>
> I wonder if anyone else still sees this error in Spark 3.x, and has a
> resolution.
>
> Thanks,
> Shay.
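Since the warning is emitted through log4j and reportedly does not affect the application, one pragmatic option is to silence the emitting logger rather than fix the disconnect. Spark 3.2.x still ships with log4j 1.x-style properties configuration; a sketch of such an override follows. The exact logger name is an assumption based on the fabric8 5.x package layout for `WatcherWebSocketListener`; the broader `io.fabric8.kubernetes.client` override is the safer fallback if the narrow one does not match:

```properties
# conf/log4j.properties (sketch; logger names are assumptions, verify against
# your fabric8 version before relying on them)

# Narrow: mute only the watcher websocket listener's WARN output
log4j.logger.io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener=ERROR

# Broad fallback: raise the whole fabric8 client to ERROR
#log4j.logger.io.fabric8.kubernetes.client=ERROR
```

This hides the symptom only; the EOFException on the watch connection itself would still occur underneath.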