Posted to user-zh@flink.apache.org by sunfulin <su...@163.com> on 2020/07/10 15:32:33 UTC

flink 1.11 local execution OOM issue

hi,
When I start a job locally from IDEA with Flink 1.11, parallelism 1, it throws the memory exception below. The strange part is that I have never explicitly configured any taskmanager memory options, so why does this happen?
Quite a few issues seem to come with upgrading from 1.10 to 1.11. I tried adding JVM arguments to increase the direct memory, but it made no difference. Could someone please take a look?


Exception in thread "main" java.lang.OutOfMemoryError: Could not allocate enough memory segments for NetworkBufferPool (required (Mb): 64, allocated (Mb): 63, missing (Mb): 1). Cause: Direct buffer memory. The direct out-of-memory error has occurred. This can mean two things: either job(s) require(s) a larger size of JVM direct memory or there is a direct memory leak. The direct memory can be allocated by user code or some of its dependencies. In this case 'taskmanager.memory.task.off-heap.size' configuration option should be increased. Flink framework and its dependencies also consume the direct memory, mostly for network communication. The most of network memory is managed by Flink and should not result in out-of-memory error. In certain special cases, in particular for jobs with high parallelism, the framework may require more direct memory which is not managed by Flink. In this case 'taskmanager.memory.framework.off-heap.size' configuration option should be increased. If the error persists then there is probably a direct memory leak in user code or some of its dependencies which has to be investigated and fixed. The task executor has to be shutdown...

Re: Re: flink 1.11 local execution OOM issue

Posted by sunfulin <su...@163.com>.
hi,
Thanks for the reply. After looking into it, the cause is that my SQL job has multiple sinks: calling TableEnvironment.executeSql once per sink makes Flink submit multiple jobs, and in local execution that apparently runs into this exception. After switching to TableEnvironment.createStatementSet to submit all the sinks as one job, as in the sketch below, the problem is gone. Thanks again.
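
A minimal sketch of what the change looks like, assuming a Blink-planner TableEnvironment on Flink 1.11; the tables are placeholders wired to the built-in datagen/print connectors rather than the actual job:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class MultiSinkJob {
    public static void main(String[] args) {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // Placeholder source and sinks; replace with the real DDL.
        tEnv.executeSql("CREATE TABLE src (id INT, name STRING) "
                + "WITH ('connector' = 'datagen', 'rows-per-second' = '1')");
        tEnv.executeSql("CREATE TABLE sink_a (id INT, name STRING) WITH ('connector' = 'print')");
        tEnv.executeSql("CREATE TABLE sink_b (id INT, name STRING) WITH ('connector' = 'print')");

        // Instead of calling executeSql(...) once per INSERT (which submits one job per sink),
        // collect all INSERTs into a StatementSet and submit them together as a single job.
        StatementSet set = tEnv.createStatementSet();
        set.addInsertSql("INSERT INTO sink_a SELECT id, name FROM src");
        set.addInsertSql("INSERT INTO sink_b SELECT id, name FROM src");
        set.execute();
    }
}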

Re: flink 1.11 local execution OOM issue

Posted by Xintong Song <to...@gmail.com>.
In local execution mode, Flink cannot actually control the JVM's Xmx, Xms, MaxDirectMemorySize and similar parameters; those are determined by your IDE settings.
Check whether -XX:MaxDirectMemorySize is configured in the IDEA run configuration, for example as below.
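
For example, VM options along these lines in the run configuration (the sizes here are only illustrative; pick values that fit your machine):

-Xmx2g -XX:MaxDirectMemorySize=512m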

Thank you~

Xintong Song

Re: flink 1.11 local execution OOM issue

Posted by Congxian Qiu <qc...@gmail.com>.
Hi

You can check whether this is related to the memory-configuration changes in the release notes [1]. For this specific error, you can try increasing the relevant memory as the message suggests (a rough sketch follows below).

[1]
https://flink.apache.org/news/2020/07/06/release-1.11.0.html#other-improvements
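
A rough sketch of raising the options named in the error for a job started locally from the IDE. The option keys come from the error message and the sizes are placeholders; note that in local execution the JVM flags themselves come from the IDE run configuration, so -XX:MaxDirectMemorySize may also need to be raised there:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalJobWithMoreOffHeap {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keys taken verbatim from the error message; values are placeholders to experiment with.
        conf.setString("taskmanager.memory.framework.off-heap.size", "256m");
        conf.setString("taskmanager.memory.task.off-heap.size", "256m");

        // Parallelism 1, matching the original report.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(1, conf);

        // Trivial pipeline so the sketch runs on its own; replace with the real job.
        env.fromElements(1, 2, 3).print();
        env.execute("local-job-with-more-off-heap");
    }
}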
Best,
Congxian
