Posted to dev@kylin.apache.org by 杨海乐 <ya...@letv.com> on 2015/11/11 03:36:36 UTC

Error : OutofMemoryError

The data volume is 500 GB across 15 machines; each Hadoop node's memory is set to 89 GB. The HBase cluster has 5 machines with 18 GB of memory each. The 500 GB of test data contains 5,163,328,507 rows: one fact table with 7 dimensions in total, and no lookup (dimension) tables configured.
Kylin memory-related configuration:

kylin.job.mapreduce.default.reduce.input.mb=8192  // I'm not sure what this parameter does, or what happens if it is set too small.

The error:
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#3
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.<init>(InMemoryMapOutput.java:63)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.unconditionalReserve(MergeManagerImpl.java:304)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:294)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:511)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)



Re: Error : OutofMemoryError

Posted by 杨海乐 <ya...@letv.com>.
You are right. I solved the problem by setting mapreduce.reduce.java.opts and
mapreduce.map.java.opts.
Thanks!
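
For reference, the two properties mentioned above are usually raised together with their container sizes, roughly like this (the values here are illustrative, not the poster's actual settings):

```properties
# mapred-site.xml (or per-job overrides) -- example values only.
# The -Xmx heap must stay below the corresponding container memory.mb,
# leaving headroom for non-heap JVM overhead.
mapreduce.map.memory.mb=4096
mapreduce.map.java.opts=-Xmx3276m
mapreduce.reduce.memory.mb=8192
mapreduce.reduce.java.opts=-Xmx6553m
```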




Re: Error : OutofMemoryError

Posted by Li Yang <li...@apache.org>.
The problem is an OOM during the reducer copy phase. Some general suggestions:

1) Increase the JVM heap size of the reducer.
2) Set a lower mapreduce.reduce.shuffle.input.buffer.percent in the MR job
config; the reducer will then use less memory and more disk for shuffling.
3) Increase the number of reducers by setting a smaller
kylin.job.mapreduce.default.reduce.input.mb in kylin.properties, e.g. 500.
Kylin estimates the shuffle size and allocates one reducer for every 500 MB
of shuffle data, so this controls how much data each reducer handles.

Try 3) and 1) first.
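
Suggestions 1) and 3) above amount to configuration changes along these lines (values are illustrative; the right numbers depend on the cluster):

```properties
# kylin.properties -- one reducer per ~500 MB of estimated shuffle data
kylin.job.mapreduce.default.reduce.input.mb=500

# MR job config -- larger reducer heap (suggestion 1), and optionally a
# smaller in-memory shuffle buffer fraction (suggestion 2)
mapreduce.reduce.java.opts=-Xmx6g
mapreduce.reduce.shuffle.input.buffer.percent=0.5
```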
