Posted to dev@drill.apache.org by "Venki Korukanti (JIRA)" <ji...@apache.org> on 2014/05/21 22:38:38 UTC

[jira] [Created] (DRILL-800) Partitioner is dropping records that can't fit in the available space of ValueVectors in OutgoingRecordBatch

Venki Korukanti created DRILL-800:
-------------------------------------

             Summary: Partitioner is dropping records that can't fit in the available space of ValueVectors in OutgoingRecordBatch
                 Key: DRILL-800
                 URL: https://issues.apache.org/jira/browse/DRILL-800
             Project: Apache Drill
          Issue Type: Bug
            Reporter: Venki Korukanti
            Assignee: Venki Korukanti


The Partitioner code looks like:

{code}
public void partitionBatch(RecordBatch incoming) {
  for (int recordId = 0; recordId < incoming.getRecordCount(); ++recordId) {
    doEval(recordId, 0);
  }
}
{code}

Inside doEval:

{code}
public void doEval(int inIndex, int outIndex) {
  ...

  if (!((NullableBigIntVector) outgoingVectors[bucket][0]).copyFromSafe(inIndex, outgoingBatches[bucket].getRecordCount(), vv35)) {
    outgoingBatches[bucket].flush();
    return;
  }
  ...
  outgoingBatches[bucket].incRecordCount();
  outgoingBatches[bucket].flushIfNecessary();
}
{code}

If copyFromSafe returns false due to insufficient space, we flush the existing records in the outgoing batch and then move on to the next incoming record. The record that didn't fit is never retried against the freshly flushed batch, so it is silently dropped.
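A minimal sketch of the expected behavior, using a mock batch (the class names, capacity model, and partition loop below are illustrative, not Drill's actual generated code): after a failed copy, flush the outgoing batch and retry the same record instead of returning.

```java
import java.util.ArrayList;
import java.util.List;

public class RetryAfterFlushDemo {
  // Mock outgoing batch with a fixed capacity, standing in for the ValueVectors.
  static class OutgoingBatch {
    final int capacity;
    List<Integer> records = new ArrayList<>();
    List<List<Integer>> flushed = new ArrayList<>();
    OutgoingBatch(int capacity) { this.capacity = capacity; }

    // Mirrors copyFromSafe: returns false when there is no room left.
    boolean copyFromSafe(int record) {
      if (records.size() >= capacity) return false;
      records.add(record);
      return true;
    }
    void flush() {
      flushed.add(new ArrayList<>(records));
      records.clear();
    }
  }

  // The intended behavior: flush when the batch is full, then retry the SAME record.
  static void partition(OutgoingBatch batch, int[] incoming) {
    for (int record : incoming) {
      while (!batch.copyFromSafe(record)) {
        batch.flush();   // make room, then re-attempt the copy
      }
    }
    batch.flush();       // emit the remainder
  }

  public static void main(String[] args) {
    OutgoingBatch batch = new OutgoingBatch(2);
    partition(batch, new int[]{1, 2, 3, 4, 5});
    int total = 0;
    for (List<Integer> b : batch.flushed) total += b.size();
    System.out.println(total == 5 ? "no records dropped" : "records lost");
  }
}
```

With a capacity of 2 and five incoming records, the record that triggers each flush is copied into the next batch rather than discarded, so all five records reach the flushed output.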




--
This message was sent by Atlassian JIRA
(v6.2#6252)