Posted to issues@drill.apache.org by "Deneche A. Hakim (JIRA)" <ji...@apache.org> on 2015/10/13 21:33:05 UTC

[jira] [Assigned] (DRILL-3913) Possible memory leak during CTAS using 30 TB TPC-H dataset

     [ https://issues.apache.org/jira/browse/DRILL-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deneche A. Hakim reassigned DRILL-3913:
---------------------------------------

    Assignee: Deneche A. Hakim

> Possible memory leak during CTAS using 30 TB TPC-H dataset
> ----------------------------------------------------------
>
>                 Key: DRILL-3913
>                 URL: https://issues.apache.org/jira/browse/DRILL-3913
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Flow
>    Affects Versions: 1.2.0
>         Environment: 47 nodes configured with 32 GB Drill Direct memory
>            Reporter: Abhishek Girish
>            Assignee: Deneche A. Hakim
>            Priority: Critical
>             Fix For: 1.3.0
>
>         Attachments: create_table_sf30000.txt, drillbit.log.txt, drillbit_attempt3.log.txt, queryProfile_attempt2.json, query_profile.json, sqlline_verbose_error.txt, sysmemory.txt
>
>
> 8 CTAS queries were executed sequentially to write TPC-H text data into Parquet. After successfully writing a few tables, CTAS failed with an out-of-memory (OOM) error.
> Restarting the Drillbits cleared the problem, and re-running the pending CTAS queries completed successfully. This process had to be repeated twice in order to write all 8 tables. The source dataset was 30 TB in total.
> The queries are attached, along with the query profile for one of the failed CTAS statements. Logs indicate that the Drillbit ran out of Direct Memory.
> Can share more details as required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)