Posted to issues@hive.apache.org by "Thomas Mann (FiduciaGAD) (Jira)" <ji...@apache.org> on 2019/11/21 16:30:00 UTC

[jira] [Comment Edited] (HIVE-16220) Memory leak when creating a table using location and NameNode in HA

    [ https://issues.apache.org/jira/browse/HIVE-16220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979402#comment-16979402 ] 

Thomas Mann (FiduciaGAD) edited comment on HIVE-16220 at 11/21/19 4:29 PM:
---------------------------------------------------------------------------

Can confirm the same issue

for HDP 3.1.0 with Hive version 3.0.0.3.1.

Circumstances: a Sqoop job imports data from DB2 via HDFS/MapReduce and loads it into Hive.

Configuration: NameNode in HA

 

Memory Leak:

44,343 instances of "org.apache.hadoop.hive.conf.HiveConf", loaded by "sun.misc.Launcher$AppClassLoader @ 0x7fa7b62f5400", occupy 18,993,039,520 (96.13%) bytes. These instances are referenced from one instance of "java.util.concurrent.ConcurrentHashMap$Node[]", loaded by "<system class loader>".
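The heap dump above shows all the HiveConf copies being pinned by a single ConcurrentHashMap$Node[], i.e. one long-lived map. A minimal sketch of that leak pattern follows; all names (ConfLeakSketch, handleCreateTable, CACHE) are hypothetical illustrations, not actual Hive code:

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the leak pattern suggested by the heap dump: each
// request builds a fresh conf object, and a static map keeps a strong
// reference to it forever, so the GC can never reclaim the copies.
// All names here are hypothetical; this is not actual Hive code.
public class ConfLeakSketch {
    // stands in for the static cache whose ConcurrentHashMap$Node[]
    // appears as the single retainer in the heap dump
    static final ConcurrentHashMap<Object, byte[]> CACHE = new ConcurrentHashMap<>();

    static void handleCreateTable() {
        Object conf = new Object();       // a new conf copy per DDL execution
        CACHE.put(conf, new byte[1024]);  // cached under a unique key, never evicted
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleCreateTable();          // simulate 1000 CREATE TABLE ... LOCATION runs
        }
        System.out.println(CACHE.size()); // prints 1000: nothing was ever removed
    }
}
```

Evicting entries, or keying the cache so that equal configurations collide, would let the copies be collected; the report below suggests it is the HA code path that produces a distinct copy per call.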


was (Author: xcg2945):
can confirm same issue

for HDP 3.1.0 and Hive in Version 3.0.0.3.1 

> Memory leak when creating a table using location and NameNode in HA
> -------------------------------------------------------------------
>
>                 Key: HIVE-16220
>                 URL: https://issues.apache.org/jira/browse/HIVE-16220
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 1.2.1, 3.0.0
>         Environment: HDP-2.4.0.0
> HDP-3.1.0.0
>            Reporter: Angel Alvarez Pascua
>            Priority: Major
>
> The following simple DDL
> CREATE TABLE `test`(`field` varchar(1)) LOCATION 'hdfs://benderHA/apps/hive/warehouse/test'
> ends up generating a huge memory leak in the HiveServer2 service.
> After two weeks without a restart, the service stops suddenly because of OutOfMemory errors.
> This only happens when the NameNode is in HA; otherwise, strangely, nothing happens. If the LOCATION clause is not present, everything is also fine.
> It seems multiple instances of the Hadoop Configuration are created when we're in an HA environment:
> <AFTER ONE EXECUTIONS OF CREATE TABLE WITH LOCATION>
> 2.618 instances of "org.apache.hadoop.conf.Configuration", loaded by "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 350.263.816 (81,66%) bytes. These instances are referenced from one instance of "java.util.HashMap$Node[]", 
> loaded by "<system class loader>"
> <AFTER TWO EXECUTIONS OF CREATE TABLE WITH LOCATION>
> 5.216 instances of "org.apache.hadoop.conf.Configuration", loaded by "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 699.901.416 (87,32%) bytes. These instances are referenced from one instance of "java.util.HashMap$Node[]", 
> loaded by "<system class loader>"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)