Posted to issues@calcite.apache.org by "Vladimir Sitnikov (JIRA)" <ji...@apache.org> on 2018/09/21 09:01:00 UTC
[jira] [Commented] (CALCITE-760) Aggregate recommender blows up if row count estimate is too high
[ https://issues.apache.org/jira/browse/CALCITE-760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16623291#comment-16623291 ]
Vladimir Sitnikov commented on CALCITE-760:
-------------------------------------------
[~julianhyde], the OOM happens because org.pentaho.aggdes.algorithm.impl.MonteCarloLatticeImpl#chooseAggregate => org.pentaho.aggdes.algorithm.impl.MonteCarloLatticeImpl#costQuery => org.pentaho.aggdes.algorithm.impl.LatticeImpl#getParents tries an effectively unbounded number of candidate aggregates.
org.pentaho.aggdes.algorithm.Algorithm.ParameterEnum#aggregateLimit is basically ignored.
Should a priority queue be used there?
Should there be a limit on the number of attempts?
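To make the two suggestions concrete, here is a minimal, hypothetical sketch (not the actual aggdes code; the Candidate type and search method are invented for illustration) of how a bounded priority queue plus an attempt cap could keep the candidate search from growing without limit:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch: bound the candidate-aggregate search both by a
// hard attempt limit and by a fixed-size priority queue, instead of
// expanding parent aggregates without bound.
public class BoundedAggregateSearch {
    static final class Candidate {
        final String name;
        final double cost;
        Candidate(String name, double cost) {
            this.name = name;
            this.cost = cost;
        }
    }

    // Keep at most 'aggregateLimit' cheapest candidates and stop after
    // examining 'maxAttempts' candidates, whichever comes first.
    static List<Candidate> search(Iterable<Candidate> generator,
                                  int aggregateLimit, int maxAttempts) {
        // Max-heap by cost, so the most expensive retained candidate
        // is the one evicted when the queue overflows.
        PriorityQueue<Candidate> best = new PriorityQueue<>(
            Comparator.comparingDouble((Candidate c) -> c.cost).reversed());
        int attempts = 0;
        for (Candidate c : generator) {
            if (++attempts > maxAttempts) {
                break; // hard cap on total work
            }
            best.add(c);
            if (best.size() > aggregateLimit) {
                best.poll(); // evict the worst candidate seen so far
            }
        }
        List<Candidate> result = new ArrayList<>(best);
        result.sort(Comparator.comparingDouble(c -> c.cost));
        return result;
    }
}
```

With both bounds in place, memory use is O(aggregateLimit) and total work is O(maxAttempts log aggregateLimit), regardless of how badly the row-count estimate inflates the lattice.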
> Aggregate recommender blows up if row count estimate is too high
> ----------------------------------------------------------------
>
> Key: CALCITE-760
> URL: https://issues.apache.org/jira/browse/CALCITE-760
> Project: Calcite
> Issue Type: Bug
> Reporter: Julian Hyde
> Assignee: Julian Hyde
> Priority: Major
>
> If you run the aggregate recommendation algorithm with a rowCountEstimate value that is wrong and too large, the algorithm runs for a long time and eventually fails with "OutOfMemoryError: GC overhead limit exceeded".
> I have added LatticeTest.testLatticeWithBadRowCountEstimate as a test case.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)