Posted to user@flink.apache.org by Meghajit Mazumdar <me...@gojek.com> on 2022/11/04 09:10:24 UTC

Getting error stacktrace during job submission on Flink Operator

Hello folks,

Our team currently runs Flink clusters in standalone, session mode on
Kubernetes (Flink 1.14.3).
We want to migrate to a *Flink Operator* + application mode deployment
setup (still on Flink 1.14.3).


In the current setup, we upload a jar once and then keep submitting
different jobs via the *run* POST API:
<https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/#jars-jarid-run>
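For context, our submission flow looks roughly like the sketch below (the
host, jar id, and helper names are placeholders, not our actual code; the
endpoint and the "programArgs" field are from the REST API linked above):

```python
import json
from urllib import request, error

def run_job_url(base_url: str, jar_id: str) -> str:
    # Endpoint for running a job from a previously uploaded jar:
    # POST /jars/:jarid/run
    return f"{base_url}/jars/{jar_id}/run"

def submit_job(base_url: str, jar_id: str, program_args: str) -> dict:
    # Hypothetical helper: posts the run request and returns the parsed
    # JSON body whether the submission succeeded or failed.
    body = json.dumps({"programArgs": program_args}).encode()
    req = request.Request(
        run_job_url(base_url, jar_id),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    try:
        with request.urlopen(req) as resp:
            return json.load(resp)
    except error.HTTPError as e:
        # On a bad submission the error body carries the exception
        # message and stacktrace, which we surface in our UI.
        return json.load(e)
```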
During job creation, if we submit a job with an incorrect configuration
or invalid SQL syntax, we get the exception message as well as the
complete stacktrace in the response. For example, here is a snippet:

{"statusCode":400,"error":"Bad Request","message":" [\"Internal server
error.\",\"<Exception on server
side:\\norg.apache.flink.client.program.ProgramInvocationException:
org.apache.flink.table.api.ValidationException: SQL validation failed. From
line 26, column 3 to line 26, column 22: . . .

This helps us, as we can show this message to our users in our self-made
UI as soon as they submit a job.


However, in the case of the Flink Operator, the setup is slightly
different, as you may already know: job details such as the program args
are hardcoded in the FlinkDeployment resource itself, and applying that
yaml creates an entire cluster for the job. So no success/error response
is received.
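To illustrate, our intended deployment would look roughly like this
(field names follow the operator's FlinkDeployment schema as we
understand it; all values are placeholders):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-sql-job                  # placeholder
spec:
  image: flink:1.14.3               # placeholder
  flinkVersion: v1_14
  jobManager:
    resource: {memory: "2048m", cpu: 1}
  taskManager:
    resource: {memory: "2048m", cpu: 1}
  job:
    jarURI: local:///opt/flink/usrlib/job.jar   # placeholder path
    args: ["--sql", "SELECT ..."]   # program args baked into the resource
    parallelism: 1
    upgradeMode: stateless
```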

What, then, is the recommended way of getting error stacktraces like the
above (or other job-submission runtime exceptions) when using the Flink
Operator?


-- 
*Regards,*
*Meghajit*