Posted to dev@flink.apache.org by Great Info <gu...@gmail.com> on 2022/05/07 05:21:18 UTC

Flink job is throwing a dependency issue when submitted to the cluster

I have one Flink job which reads files from S3 and processes them.
Currently it runs on Flink 1.9.0. I need to upgrade my cluster to 1.13.5,
so I have made the changes in my job POM and brought up the Flink cluster
using the 1.13.5 dist.

When I submit my application, I get the error below when it tries to
connect to S3. I have updated the S3 SDK version to the latest, but I
still get the same error.

Caused by: java.lang.invoke.LambdaConversionException: Invalid receiver
type interface org.apache.http.Header; not a subtype of implementation type
interface org.apache.http.NameValuePair

It works when I run it as a mini-cluster (just java -jar <myjob.jar>) and
also when I submit it to the Flink cluster with 1.9.0.

I am not able to understand where the dependency mismatch is happening.

Re: Flink job is throwing a dependency issue when submitted to the cluster

Posted by Konstantin Knauf <kn...@apache.org>.
Hi there,

Are you using any of the Flink S3 filesystems? If so, where do you load them from:

a) lib/
b) plugins/
c) bundled with your Job in a fat JAR

b) would be the right way to do it in Flink 1.13. I don't know if this
fixes the issue, but IIRC, since we introduced the plugin mechanism, we
no longer relocate dependencies in the filesystems.
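For reference, installing the S3 filesystem as a plugin on a 1.13.x dist
boils down to giving the JAR its own subdirectory under plugins/. A minimal
sketch (a temporary directory stands in for the real FLINK_HOME, and a dummy
file stands in for the flink-s3-fs-hadoop JAR shipped in opt/ of the dist):

```shell
# Each plugin lives in its own subdirectory under plugins/ and is loaded
# with an isolated classloader, so its dependencies cannot clash with the
# job's. A temporary directory stands in for the real FLINK_HOME here.
FLINK_HOME=$(mktemp -d)
mkdir -p "$FLINK_HOME/opt"
touch "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.13.5.jar"  # stand-in for the dist's JAR

# The actual step: copy the filesystem JAR from opt/ into plugins/<name>/.
mkdir -p "$FLINK_HOME/plugins/s3-fs-hadoop"
cp "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.13.5.jar" "$FLINK_HOME/plugins/s3-fs-hadoop/"
ls "$FLINK_HOME/plugins/s3-fs-hadoop"
```

With that in place, the fat JAR should not bundle the filesystem (or the
AWS SDK that comes with it) at all.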

Cheers,

Konstantin





On Sat, 7 May 2022 at 07:47, 张立志 <zh...@163.com> wrote:

> Unsubscribe
>
>
>
> zh_harry@163.com
> Email: zh_harry@163.com
>
>
>
>
> ---- Original message ----
> | From | Great Info<gu...@gmail.com> |
> | Date | 2022-05-07 13:21 |
> | To | dev@flink.apache.org<de...@flink.apache.org>, user<
> user@flink.apache.org> |
> | Cc | |
> | Subject | Flink job is throwing a dependency issue when submitted to the cluster |
> I have one Flink job which reads files from S3 and processes them.
> Currently it runs on Flink 1.9.0. I need to upgrade my cluster to 1.13.5,
> so I have made the changes in my job POM and brought up the Flink cluster
> using the 1.13.5 dist.
>
> When I submit my application, I get the error below when it tries to
> connect to S3. I have updated the S3 SDK version to the latest, but I
> still get the same error.
>
> Caused by: java.lang.invoke.LambdaConversionException: Invalid receiver
> type interface org.apache.http.Header; not a subtype of implementation type
> interface org.apache.http.NameValuePair
>
> It works when I run it as a mini-cluster (just java -jar <myjob.jar>) and
> also when I submit it to the Flink cluster with 1.9.0.
>
> I am not able to understand where the dependency mismatch is happening.
>


-- 
https://twitter.com/snntrable
https://github.com/knaufk


Re: Flink job is throwing a dependency issue when submitted to the cluster

Posted by 张立志 <zh...@163.com>.
Unsubscribe



zh_harry@163.com
Email: zh_harry@163.com




---- Original message ----
| From | Great Info<gu...@gmail.com> |
| Date | 2022-05-07 13:21 |
| To | dev@flink.apache.org<de...@flink.apache.org> |
| Cc | |
| Subject | Flink job is throwing a dependency issue when submitted to the cluster |
I have one Flink job which reads files from S3 and processes them.
Currently it runs on Flink 1.9.0. I need to upgrade my cluster to 1.13.5,
so I have made the changes in my job POM and brought up the Flink cluster
using the 1.13.5 dist.

When I submit my application, I get the error below when it tries to
connect to S3. I have updated the S3 SDK version to the latest, but I
still get the same error.

Caused by: java.lang.invoke.LambdaConversionException: Invalid receiver
type interface org.apache.http.Header; not a subtype of implementation type
interface org.apache.http.NameValuePair

It works when I run it as a mini-cluster (just java -jar <myjob.jar>) and
also when I submit it to the Flink cluster with 1.9.0.

I am not able to understand where the dependency mismatch is happening.