Posted to issues@drill.apache.org by "Paul Rogers (JIRA)" <ji...@apache.org> on 2017/02/24 17:50:44 UTC

[jira] [Commented] (DRILL-4253) Some functional tests are failing because sort limit is too low

    [ https://issues.apache.org/jira/browse/DRILL-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883184#comment-15883184 ] 

Paul Rogers commented on DRILL-4253:
------------------------------------

[~agirish] or [~rkins], please try these tests against the revised "managed" sort.

Note also that the amount of memory given to the sort depends (oddly) on the number of cores on the test machine. The default per-query memory of 2 GB is fine on a Mac with a few cores, but is too little on a machine with 20+ cores: 2 GB split across 20+ sort instances leaves each with only about 100 MB.

For now, the per-query memory parameter should be set so that, when divided by the core count, it still gives each sort sufficient memory to make good progress.
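For example, a test could raise the limit with a session option before running the query (4 GB shown here as an illustration; the right value depends on the node's core count):

```sql
-- Raise the per-query memory budget so each sort instance gets enough
-- headroom on many-core nodes. 4294967296 bytes = 4 GB; tune as needed.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;
```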

The managed sort does better than the original external sort in low memory, but performance will always be better with more memory.

> Some functional tests are failing because sort limit is too low
> ---------------------------------------------------------------
>
>                 Key: DRILL-4253
>                 URL: https://issues.apache.org/jira/browse/DRILL-4253
>             Project: Apache Drill
>          Issue Type: Test
>          Components: Tools, Build & Test
>    Affects Versions: 1.5.0
>         Environment: 4 nodes cluster, 32 cores each
>            Reporter: Deneche A. Hakim
>             Fix For: 1.10.0
>
>
> The following tests are running out of memory:
> {noformat}
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q174.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q171.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q168_DRILL-2046.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q162_DRILL-1985.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q165.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q177_DRILL-2046.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q159_DRILL-2046.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/large/q157_DRILL-1985.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/large/q175_DRILL-1985.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q160_DRILL-1985.q
> framework/resources/Functional/data-shapes/wide-columns/5000/1000rows/parquet/q163_DRILL-2046.q
> {noformat}
> With errors similar to the following:
> {noformat}
> java.sql.SQLException: SYSTEM ERROR: DrillRuntimeException: Failed to pre-allocate memory for SV. Existing recordCount*4 = 0, incoming batch recordCount*4 = 696
> {noformat}
> {noformat}
> Unable to allocate sv2 for 1000 records, and not enough batchGroups to spill.
> {noformat}
> Those queries operate on wide tables, and the sort's memory limit is too low when using the default value of {{planner.memory.max_query_memory_per_node}}.
> We should update those tests to set {{planner.memory.max_query_memory_per_node}} to a higher value (4 GB worked well for me).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)