Posted to dev@drill.apache.org by "Victoria Markman (JIRA)" <ji...@apache.org> on 2015/10/15 01:14:05 UTC
[jira] [Created] (DRILL-3936) We don't handle out of memory condition during build phase of hash join
Victoria Markman created DRILL-3936:
---------------------------------------
Summary: We don't handle out of memory condition during build phase of hash join
Key: DRILL-3936
URL: https://issues.apache.org/jira/browse/DRILL-3936
Project: Apache Drill
Issue Type: Bug
Components: Execution - Relational Operators
Reporter: Victoria Markman
It looks like we just fall through (see the excerpt from HashJoinBatch.java below):
{code:java}
  public void executeBuildPhase() throws SchemaChangeException, ClassTransformationException, IOException {
    // Setup the underlying hash table

    // skip first batch if count is zero, as it may be an empty schema batch
    if (right.getRecordCount() == 0) {
      for (final VectorWrapper<?> w : right) {
        w.clear();
      }
      rightUpstream = next(right);
    }

    boolean moreData = true;
    while (moreData) {
      switch (rightUpstream) {
      case OUT_OF_MEMORY:
      case NONE:
      case NOT_YET:
      case STOP:
        moreData = false;
        continue;
      ...
{code}
We don't handle it later either:
{code:java}
  public IterOutcome innerNext() {
    try {
      /* If we are here for the first time, execute the build phase of the
       * hash join and setup the run time generated class for the probe side
       */
      if (state == BatchState.FIRST) {
        // Build the hash table, using the build side record batches.
        executeBuildPhase();
        // IterOutcome next = next(HashJoinHelper.LEFT_INPUT, left);
        hashJoinProbe.setupHashJoinProbe(context, hyperContainer, left, left.getRecordCount(), this, hashTable,
            hjHelper, joinType);

        // Update the hash table related stats for the operator
        updateStats(this.hashTable);
      }
      ....
{code}
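To make the problem concrete, here is a minimal, self-contained sketch (it does not use Drill's actual classes; the IterOutcome enum and OutOfMemoryException below are hypothetical stand-ins). It contrasts the buggy behavior, where OUT_OF_MEMORY falls through with the end-of-data outcomes and the build quietly stops with a partial hash table, against one possible fix: surfacing OUT_OF_MEMORY as an error.

{code:java}
import java.util.Iterator;

public class BuildPhaseSketch {
  // Hypothetical stand-in for Drill's RecordBatch.IterOutcome enum.
  enum IterOutcome { OK_NEW_SCHEMA, OK, NONE, NOT_YET, STOP, OUT_OF_MEMORY }

  // Hypothetical unchecked exception for illustration only.
  static class OutOfMemoryException extends RuntimeException {
    OutOfMemoryException(String msg) { super(msg); }
  }

  /**
   * Drains build-side outcomes; returns the number of data batches consumed.
   * Returns normally on NONE/NOT_YET/STOP, but throws on OUT_OF_MEMORY
   * instead of treating it as a legitimate end of input.
   */
  static int executeBuildPhase(Iterator<IterOutcome> upstream) {
    int batches = 0;
    while (upstream.hasNext()) {
      switch (upstream.next()) {
      case OUT_OF_MEMORY:
        // The bug in the excerpt above: grouping this case with NONE/STOP
        // silently ends the build. Failing fast avoids a partial hash table.
        throw new OutOfMemoryException("ran out of memory during hash join build phase");
      case NONE:
      case NOT_YET:
      case STOP:
        return batches; // legitimate end of the build side
      default:
        batches++;      // OK / OK_NEW_SCHEMA: consume the batch
      }
    }
    return batches;
  }

  public static void main(String[] args) {
    java.util.List<IterOutcome> stream = java.util.Arrays.asList(
        IterOutcome.OK, IterOutcome.OK, IterOutcome.OUT_OF_MEMORY);
    try {
      executeBuildPhase(stream.iterator());
      System.out.println("build completed");
    } catch (OutOfMemoryException e) {
      System.out.println("caught: " + e.getMessage());
    }
  }
}
{code}

Whether the fix should throw, return a failure outcome to innerNext(), or trigger spilling is a design decision for the actual patch; the point is only that OUT_OF_MEMORY must take a different path than NONE/STOP.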
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)