Posted to issues@impala.apache.org by "Juan Yu (JIRA)" <ji...@apache.org> on 2017/05/31 23:13:04 UTC

[jira] [Closed] (IMPALA-5410) same query reports much higher memory usage in some scenarios

     [ https://issues.apache.org/jira/browse/IMPALA-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Juan Yu closed IMPALA-5410.
---------------------------
    Resolution: Invalid

Sorry for the noise. I just noticed that the profile lists all queries that failed due to the process memory limit being exceeded at that moment, not just the query itself.

Process: memory limit exceeded. Limit=201.73 GB Total=201.73 GB Peak=201.73 GB
  RequestPool=root.default: Total=189.29 GB Peak=189.29 GB
    Query(c74ce10ea42773c1:8f12f6e900000000): Total=189.22 GB Peak=189.22 GB
      Fragment c74ce10ea42773c1:8f12f6e900000101: Total=10.30 MB Peak=11.02 MB
        AGGREGATION_NODE (id=2): Total=8.00 KB Peak=8.00 KB
          Exprs: Total=4.00 KB Peak=4.00 KB
        AGGREGATION_NODE (id=4): Total=10.27 MB Peak=10.27 MB
          Exprs: Total=4.00 KB Peak=4.00 KB
        EXCHANGE_NODE (id=3): Total=0 Peak=0
        DataStreamRecvr: Total=0 Peak=0
        DataStreamSender (dst_id=5): Total=7.52 KB Peak=7.52 KB
        CodeGen: Total=6.79 KB Peak=750.50 KB
      Block Manager: Limit=161.39 GB Total=13.63 GB Peak=13.63 GB
      Fragment c74ce10ea42773c1:8f12f6e900000080: Total=189.21 GB Peak=189.21 GB
        AGGREGATION_NODE (id=1): Total=188.83 GB Peak=188.83 GB
          Exprs: Total=175.20 GB Peak=175.20 GB
        HDFS_SCAN_NODE (id=0): Total=385.53 MB Peak=601.23 MB
        DataStreamSender (dst_id=3): Total=660.12 KB Peak=660.12 KB
        CodeGen: Total=4.48 KB Peak=610.00 KB
    Query(8743be49f34a3cb9:8bb6ace100000000): Total=34.31 MB Peak=49.49 MB
      Fragment 8743be49f34a3cb9:8bb6ace1000000f7: Total=34.31 MB Peak=35.22 MB
        SORT_NODE (id=2): Total=24.00 MB Peak=24.00 MB
        AGGREGATION_NODE (id=4): Total=10.30 MB Peak=10.30 MB
          Exprs: Total=4.00 KB Peak=4.00 KB
        EXCHANGE_NODE (id=3): Total=0 Peak=0
        DataStreamRecvr: Total=0 Peak=0
        DataStreamSender (dst_id=5): Total=1.30 KB Peak=1.30 KB
        CodeGen: Total=3.53 KB Peak=938.50 KB
      Block Manager: Limit=161.39 GB Total=32.25 MB Peak=32.77 MB
    Query(2f47bf67bfc60f0d:5473b95700000000): Total=34.31 MB Peak=50.15 MB
      Fragment 2f47bf67bfc60f0d:5473b957000000f7: Total=34.31 MB Peak=35.22 MB
        SORT_NODE (id=2): Total=24.00 MB Peak=24.00 MB
        AGGREGATION_NODE (id=4): Total=10.30 MB Peak=10.30 MB
          Exprs: Total=4.00 KB Peak=4.00 KB
        EXCHANGE_NODE (id=3): Total=0 Peak=0
        DataStreamRecvr: Total=0 Peak=0
        DataStreamSender (dst_id=5): Total=1.30 KB Peak=1.30 KB
        CodeGen: Total=3.53 KB Peak=938.50 KB
      Block Manager: Limit=161.39 GB Total=32.25 MB Peak=32.77 MB
  Untracked Memory: Total=12.44 GB
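
For reference, a minimal parsing sketch (not part of the original comment; it assumes the dump above was saved to the attached profile-OOM.txt). It pulls the per-query Total figures out of a MemTracker breakdown like the one above, which makes it quick to see which query actually holds the memory when the process limit is hit.

# Hypothetical helper, for illustration only.
import re

def query_totals(dump_text):
    """Map each query id in a MemTracker dump to its reported Total."""
    totals = {}
    for line in dump_text.splitlines():
        m = re.match(r"\s*Query\(([0-9a-f:]+)\): Total=(\S+ [KMGT]?B)", line)
        if m:
            totals[m.group(1)] = m.group(2)
    return totals

with open("profile-OOM.txt") as f:  # attachment name taken from this issue
    for query_id, total in query_totals(f.read()).items():
        print(query_id, total)

# Applied to the dump above, this prints:
#   c74ce10ea42773c1:8f12f6e900000000 189.22 GB
#   8743be49f34a3cb9:8bb6ace100000000 34.31 MB
#   2f47bf67bfc60f0d:5473b95700000000 34.31 MB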


> same query reports much higher memory usage in some scenarios
> -------------------------------------------------------------
>
>                 Key: IMPALA-5410
>                 URL: https://issues.apache.org/jira/browse/IMPALA-5410
>             Project: IMPALA
>          Issue Type: Bug
>            Reporter: Juan Yu
>         Attachments: profile-failed.txt, profile-OOM.txt, profile-succeed.txt
>
>
> The same query, run repeatedly on the same cluster, reports much higher memory usage and fails with memory limit exceeded. Not sure if it indeed uses more memory or if the counter is wrong. At the time the query failed, another large query was running and had used all the memory.
> Successful one:
> Aggregate Peak Memory Usage: 4.2 GiB
> Failed one:
> Query(c74ce10ea42773c1:8f12f6e900000000): Total=189.22 GB Peak=189.22 GB
> Large query:
> Query(c74ce10ea42773c1:8f12f6e900000000): Total=189.22 GB Peak=189.22 GB
> Attached all three profiles.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)