Posted to issues@hive.apache.org by "Sahil Takiar (JIRA)" <ji...@apache.org> on 2018/09/04 20:10:00 UTC

[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler

    [ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603530#comment-16603530 ] 

Sahil Takiar commented on HIVE-17684:
-------------------------------------

[~misha@cloudera.com] sorry for the delay on this. I figured out why a bunch of the {{TestSparkCliDriver}} tests were failing and attached an updated patch with a fix.

As for the issues with {{auto_join25.q.out}} - it looks like there are two configs, {{hive.mapjoin.localtask.max.memory.usage}} and {{hive.mapjoin.followby.gby.localtask.max.memory.usage}}, which define how much memory the small table can consume before the memory exhaustion handler throws an error. These tests set a very low value for both configs and thus expect the memory exhaustion handler to be triggered.
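For reference, the check those configs drive is roughly the following (just a sketch: the config lookup, method shape, and error message are illustrative, not the exact Hive code):

{code:java}
// Sketch only. Config name and default are from the description below; everything else is illustrative.
void checkSmallTableMemory(org.apache.hadoop.conf.Configuration conf, double estimatedHeapFraction) {
  float threshold = conf.getFloat("hive.mapjoin.localtask.max.memory.usage", 0.90f);
  if (estimatedHeapFraction > threshold) {
    // The q-tests set the threshold to a tiny value, so this branch fires almost immediately.
    throw new MapJoinMemoryExhaustionError(
        "small table used " + estimatedHeapFraction + " of the heap, limit is " + threshold);
  }
}
{code}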

We should probably do something similar: introduce a new config that makes {{CRITICAL_GC_TIME_PERCENTAGE_PROD}} configurable. We can set it to a lower value in our tests to confirm that everything is working correctly.
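Something along these lines (purely a sketch: the property name below is made up and the final one would need to be registered in {{HiveConf}}):

{code:java}
// Hypothetical: read the critical GC-time threshold from the conf instead of hard-coding it,
// keeping CRITICAL_GC_TIME_PERCENTAGE_PROD as the production default.
// "hive.spark.critical.gc.time.percentage" is an illustrative name only.
private double getCriticalGcTimePercentage(org.apache.hadoop.conf.Configuration conf) {
  return conf.getDouble("hive.spark.critical.gc.time.percentage",
      CRITICAL_GC_TIME_PERCENTAGE_PROD);
}
{code}

The q-tests could then set that property close to zero so the error path gets exercised without actually filling up the heap.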

Let me know if you need more help getting this done.

> HoS memory issues with MapJoinMemoryExhaustionHandler
> -----------------------------------------------------
>
>                 Key: HIVE-17684
>                 URL: https://issues.apache.org/jira/browse/HIVE-17684
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Misha Dmitriev
>            Priority: Major
>         Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch
>
>
> We have seen a number of memory issues due to the {{HashSinkOperator}} use of the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect scenarios where the small table is taking up too much space in memory, in which case a {{MapJoinMemoryExhaustionError}} is thrown.
> The configs to control this logic are:
> {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90)
> {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55)
> The handler uses the {{MemoryMXBean}} and the following logic to estimate how much memory the {{HashMap}} is consuming: {{MemoryMXBean#getHeapMemoryUsage().getUsed() / MemoryMXBean#getHeapMemoryUsage().getMax()}}
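> A minimal sketch of that estimate (plain {{java.lang.management}} API, variable names illustrative):
> {code:java}
> java.lang.management.MemoryUsage heap =
>     java.lang.management.ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
> // getUsed() counts reachable *and* unreachable objects, which is what makes the check flaky.
> double usedFraction = (double) heap.getUsed() / heap.getMax();
> {code}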
> The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be inaccurate: it counts all reachable and unreachable memory on the heap, so it may include a lot of garbage data that the JVM simply hasn't taken the time to reclaim yet. This can lead to intermittent failures of this check even though a simple GC would have reclaimed enough space for the process to continue working.
> We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. In Hive-on-MR it probably made sense because every Hive task ran in a dedicated container, so a Hive task could assume it created most of the data on the heap. However, in Hive-on-Spark multiple Hive tasks can run in a single executor, each doing different things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)