Posted to user@flink.apache.org by Simon Su <ba...@163.com> on 2019/06/26 09:49:19 UTC
Best Flink SQL length proposal
Hi all,
Currently I am facing a problem caused by a long Flink SQL statement.
My SQL looks like "insert into tableA select a, b, c ... from sourceTable", with more than 1000 columns in the select list. That is the problem: the Flink code generator produces a RichMapFunction class whose map method exceeds the JVM's maximum method size (64 KB). It throws an exception like:
Caused by: java.lang.RuntimeException: Compiling "DataStreamSinkConversion$3055": Code of method "map(Ljava/lang/Object;)Ljava/lang/Object;" of class "DataStreamSinkConversion$3055" grows beyond 64 KB
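The 64 KB ceiling here is a JVM class-file limit on a single method's bytecode, not a Flink setting, so it can be reproduced without Flink at all. A minimal sketch using only the JDK's in-memory compiler API (the class name, method shape, and statement counts are illustrative, not Flink's actual generated code):

```java
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;
import java.net.URI;
import java.util.List;

public class MethodLimitDemo {

    /** Builds a class with one method containing n long additions, mimicking
     *  a generated map() over a very wide row. */
    static String buildSource(int n) {
        StringBuilder src = new StringBuilder(
            "public class Generated { public Object map(Object r) { long acc = 0; ");
        for (int i = 0; i < n; i++) {
            src.append("acc += ").append(i).append("L; ");
        }
        return src.append("return acc; } }").toString();
    }

    /** Compiles the source in memory; returns true if javac accepts it. */
    static boolean compiles(String source) {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        JavaFileObject file = new SimpleJavaFileObject(
                URI.create("string:///Generated.java"), JavaFileObject.Kind.SOURCE) {
            @Override public CharSequence getCharContent(boolean ignoreEncodingErrors) {
                return source;
            }
        };
        // Write any class files to the temp dir so the demo leaves no litter.
        List<String> options = List.of("-d", System.getProperty("java.io.tmpdir"));
        return javac.getTask(null, null, null, options, null, List.of(file)).call();
    }

    public static void main(String[] args) {
        System.out.println("   100 statements compile: " + compiles(buildSource(100)));
        // Fails with "code too large": the single method's bytecode exceeds
        // the 64 KB method size limit in the JVM class-file format.
        System.out.println("20 000 statements compile: " + compiles(buildSource(20_000)));
    }
}
```

Janino (which Flink uses at runtime) trips over the same class-file limit, which is why the error surfaces as a RuntimeException during job translation rather than at build time.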
So, is there any best practice for this?
Thanks,
Simon
Re: Best Flink SQL length proposal
Posted by JingsongLee <lz...@aliyun.com>.
Hi Simon,
Hope you can wrap them simply.
In our scenario there are also many jobs with that many columns;
the huge generated code not only leads to compile exceptions but also
prevents the code from being optimized by the JIT.
We are planning to introduce a Java code splitter (which analyzes the Java code and
splits it appropriately at compile time) to solve this problem
thoroughly in the Blink planner. Maybe it will land in release-1.10.
Best, JingsongLee
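The splitter idea above can be illustrated with a toy, Flink-free sketch: emit the same work as many small helper methods instead of one giant map() body, so no single method crosses the 64 KB line. Class and method names are illustrative; this is not Flink's actual splitter.

```java
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;
import java.net.URI;
import java.util.List;

public class SplitterDemo {

    /** n additions in a single method body: too big for the JVM when n is large. */
    static String monolithic(int n) {
        StringBuilder src = new StringBuilder(
            "public class Generated { public Object map(Object r) { long acc = 0; ");
        for (int i = 0; i < n; i++) src.append("acc += ").append(i).append("L; ");
        return src.append("return acc; } }").toString();
    }

    /** The same n additions, split into helper methods of `chunk` statements each. */
    static String split(int n, int chunk) {
        StringBuilder src = new StringBuilder("public class Generated { long acc = 0; ");
        int parts = 0;
        for (int start = 0; start < n; start += chunk) {
            src.append("void part").append(parts++).append("() { ");
            for (int i = start; i < Math.min(start + chunk, n); i++) {
                src.append("acc += ").append(i).append("L; ");
            }
            src.append("} ");
        }
        // map() itself stays tiny: it only dispatches to the helpers.
        src.append("public Object map(Object r) { ");
        for (int p = 0; p < parts; p++) src.append("part").append(p).append("(); ");
        return src.append("return acc; } }").toString();
    }

    /** Compiles the source in memory with the JDK compiler; true if accepted. */
    static boolean compiles(String source) {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        JavaFileObject file = new SimpleJavaFileObject(
                URI.create("string:///Generated.java"), JavaFileObject.Kind.SOURCE) {
            @Override public CharSequence getCharContent(boolean ignoreEncodingErrors) {
                return source;
            }
        };
        List<String> options = List.of("-d", System.getProperty("java.io.tmpdir"));
        return javac.getTask(null, null, null, options, null, List.of(file)).call();
    }

    public static void main(String[] args) {
        System.out.println("monolithic(20_000) compiles: " + compiles(monolithic(20_000)));
        System.out.println("split(20_000, 500) compiles: " + compiles(split(20_000, 500)));
    }
}
```

Splitting also helps the JIT point mentioned above: HotSpot refuses to compile very large methods, so many small helpers are both compilable and optimizable.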
------------------------------------------------------------------
From: Simon Su <ba...@163.com>
Send Time: Thursday, June 27, 2019, 11:22
To: JingsongLee <lz...@aliyun.com>
Cc: user <us...@flink.apache.org>
Subject: Re: Best Flink SQL length proposal
Re: Best Flink SQL length proposal
Posted by Simon Su <ba...@163.com>.
Hi Jingsong,
Thanks for your reply.
It seems that wrapping fields is a feasible way for me now. There is also an existing JIRA, FLINK-8921, that tries to improve this.
Thanks,
Simon
On 06/26/2019 19:21, JingsongLee <lz...@aliyun.com> wrote:
Re: Best Flink SQL length proposal
Posted by JingsongLee <lz...@aliyun.com>.
Hi Simon:
Does your code include the PR [1]?
If it does: try setting TableConfig.setMaxGeneratedCodeLength smaller (default 64000).
If it does not: can you wrap some fields into a nested Row field to reduce the field count?
[1] https://github.com/apache/flink/pull/5613
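For reference, the two suggestions might look like the following. This is a config-fragment sketch only, not runnable stand-alone: it assumes a flink-table dependency that includes the PR above, and the table and column names are hypothetical.

```java
// Lower the threshold at which the code generator starts splitting
// generated expressions (value is illustrative; default is 64000).
tableEnv.getConfig().setMaxGeneratedCodeLength(16000);

// Alternatively, shrink the top-level field count by grouping columns
// into nested ROW fields in the query itself, e.g.:
//   INSERT INTO tableA
//   SELECT ROW(a1, a2, ..., a100) AS g1,
//          ROW(b1, b2, ..., b100) AS g2
//   FROM sourceTable
```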