Posted to dev@ambari.apache.org by "Mahadev konar (JIRA)" <ji...@apache.org> on 2015/05/06 19:18:00 UTC

[jira] [Commented] (AMBARI-10963) Change default for hive conditional task size to 52428800

    [ https://issues.apache.org/jira/browse/AMBARI-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530959#comment-14530959 ] 

Mahadev konar commented on AMBARI-10963:
----------------------------------------

+1 

> Change default for hive conditional task size to 52428800
> ---------------------------------------------------------
>
>                 Key: AMBARI-10963
>                 URL: https://issues.apache.org/jira/browse/AMBARI-10963
>             Project: Ambari
>          Issue Type: Bug
>          Components: stacks
>    Affects Versions: 2.1.0
>            Reporter: Sumit Mohanty
>            Assignee: Sumit Mohanty
>             Fix For: 2.1.0
>
>         Attachments: AMBARI-10963.patch
>
>
> Hive query failure due to an OOM error in MR mode.
> Noticed the following error while running several join queries in MR mode:
> {noformat}
>  FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: GC overhead limit exceeded
> 	at org.apache.hadoop.hive.serde2.typeinfo.HiveDecimalUtils.enforcePrecisionScale(HiveDecimalUtils.java:59)
> 	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.enforcePrecisionScale(WritableHiveDecimalObjectInspector.java:105)
> 	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveWritableObject(WritableHiveDecimalObjectInspector.java:41)
> 	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveDecimalObjectInspector.getPrimitiveWritableObject(WritableHiveDecimalObjectInspector.java:26)
> 	at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:305)
> 	at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:340)
> 	at org.apache.hadoop.hive.ql.exec.persistence.MapJoinEagerRowContainer.read(MapJoinEagerRowContainer.java:129)
> 	at org.apache.hadoop.hive.ql.exec.persistence.MapJoinEagerRowContainer.read(MapJoinEagerRowContainer.java:122)
> 	at org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:79)
> 	at org.apache.hadoop.hive.ql.exec.mr.HashTableLoader.load(HashTableLoader.java:98)
> 	at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:190)
> 	at org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:216)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:176)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> 	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> {noformat}
> This is clearly a configuration issue: the failure occurs while loading the map-join hash table (HashTableLoader.load), and hive.auto.convert.join.noconditionaltask.size is ~1 GB (1000000000 bytes) while the mapper container size is only 768 MB.
> From mapred-site.xml
> {code}
>     <property>
>       <name>mapreduce.map.memory.mb</name>
>       <value>768</value>
>     </property>
>     <property>
>       <name>mapreduce.reduce.memory.mb</name>
>       <value>1536</value>
>     </property>
> {code}
> From hive-site.xml
> {code}
>     <property>
>       <name>hive.auto.convert.join.noconditionaltask.size</name>
>       <value>1000000000</value>
>     </property>
> {code}
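>
> As a quick sanity check (a minimal Python sketch, not part of the attached patch), the arithmetic behind the mismatch between the two excerpts above:
> {code}
> # Values copied from the config excerpts above.
> threshold_bytes = 1000000000        # hive.auto.convert.join.noconditionaltask.size
> map_container_mb = 768              # mapreduce.map.memory.mb
>
> # ~953.7 MB of small-table hash tables would have to fit inside a
> # 768 MB container (whose JVM heap is smaller still), so any map-side
> # join near the threshold is guaranteed to OOM.
> print(threshold_bytes / float(1024 * 1024))   # ~953.67 MB
>
> # The proposed default is 50 MB, which leaves comfortable headroom:
> print(52428800 / float(1024 * 1024))          # 50.0 MB
> {code}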
> Please modify hive-site.xml with 
> {code}
>     <property>
>       <name>hive.auto.convert.join.noconditionaltask.size</name>
>       <value>52428800</value>
>     </property>
> {code}
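>
> For clusters that are already deployed, a hypothetical helper along these lines could flag the bad combination up front (a sketch only, not part of this patch; the conf paths are typical HDP defaults, and the 1/3-of-container rule of thumb is an assumption):
> {code}
> import xml.etree.ElementTree as ET
>
> def read_prop(path, name):
>     """Return the value of a Hadoop-style <property> by name, or None."""
>     for prop in ET.parse(path).getroot().findall("property"):
>         if prop.findtext("name") == name:
>             return prop.findtext("value")
>     return None
>
> threshold = int(read_prop("/etc/hive/conf/hive-site.xml",
>                           "hive.auto.convert.join.noconditionaltask.size"))
> map_mb = int(read_prop("/etc/hadoop/conf/mapred-site.xml",
>                        "mapreduce.map.memory.mb"))
>
> # The map-join hash table must fit in the map task heap, which is
> # smaller than the container itself, so warn well below the limit.
> if threshold > map_mb * 1024 * 1024 // 3:
>     print("WARN: %d exceeds ~1/3 of the %d MB map container; "
>           "map-side joins may OOM" % (threshold, map_mb))
> {code}
> While the cluster-wide default is being rolled out, the value can also be overridden per session with Hive's set command (set hive.auto.convert.join.noconditionaltask.size=52428800;).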



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)