Posted to user-zh@flink.apache.org by 烟虫李彦卓 <12...@qq.com> on 2020/03/20 03:19:10 UTC

Flink SQL: Too few memory segments provided. Hash Join needs at least 33 memory segments.

Hi, All:

    While using Flink 1.10.0 SQL, I ran into the following problem:


Caused by: java.lang.IllegalArgumentException: Too few memory segments provided. Hash Join needs at least 33 memory segments.

 at org.apache.flink.runtime.operators.hash.MutableHashTable.<init>(MutableHashTable.java:401)

 at org.apache.flink.runtime.operators.hash.MutableHashTable.<init>(MutableHashTable.java:387)

 at org.apache.flink.runtime.operators.hash.HashJoinIteratorBase.getHashJoin(HashJoinIteratorBase.java:51)

 at org.apache.flink.runtime.operators.hash.NonReusingBuildSecondHashJoinIterator.<init>(NonReusingBuildSecondHashJoinIterator.java:89)

 at org.apache.flink.runtime.operators.JoinDriver.prepare(JoinDriver.java:194)

 at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:474)

 ... 4 common frames omitted

The job reads data from MySQL with JDBCInputFormat, runs aggregations with Flink SQL, and writes the results back to MySQL with JDBCOutputFormat.

The same code works fine on Flink 1.7.2 and is already running in production, but after switching to Flink 1.10.0 this error appeared. After some debugging, it turns out to occur when joining the individual aggregation result Tables into a single Table. That step involves 30 Tables and 29 left joins, and by experiment the error is thrown at the 10th left join.
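Schematically, the failing step looks like the following (table and column names here are hypothetical placeholders, not the actual job's):

```sql
-- 30 aggregation result tables combined via 29 left joins (schematic)
SELECT ...
FROM agg_1
LEFT JOIN agg_2 ON agg_1.id = agg_2.id
LEFT JOIN agg_3 ON agg_1.id = agg_3.id
-- ... and so on through agg_30; the error appears around the 10th join
```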

Looking forward to your reply. Thanks!

Re: Flink SQL: Too few memory segments provided. Hash Join needs at least 33 memory segments.

Posted by Jingsong Li <ji...@gmail.com>.
Hi,

- Judging from the stack trace, you are using the old planner; the Blink planner is recommended.
- If you want to fix this on the old planner, try increasing the TaskManager's managed memory.
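As a minimal sketch of the first suggestion (class name and the surrounding job setup are illustrative; source/sink registration is omitted), the Blink planner is selected in Flink 1.10 via EnvironmentSettings:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class BlinkPlannerSetup {
    public static void main(String[] args) {
        // Build settings that pick the Blink planner instead of the legacy one,
        // in batch mode (MySQL in -> aggregate -> MySQL out is a batch job).
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);
        // Register sources/sinks and run the SQL statements as before ...
    }
}
```

For the second suggestion, managed memory is configured in flink-conf.yaml, e.g. `taskmanager.memory.managed.fraction: 0.6` or an absolute `taskmanager.memory.managed.size` (the values here are illustrative, not a sizing recommendation).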

Best,
Jingsong Lee

On Fri, Mar 20, 2020 at 4:41 PM 烟虫李彦卓 <12...@qq.com> wrote:

> [quoted original message]


