Posted to issues@drill.apache.org by "Jacques Nadeau (JIRA)" <ji...@apache.org> on 2014/05/24 18:15:02 UTC

[jira] [Commented] (DRILL-800) Partitioner is dropping records that can't fit in the available space of ValueVectors in OutgoingRecordBatch

    [ https://issues.apache.org/jira/browse/DRILL-800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14008157#comment-14008157 ] 

Jacques Nadeau commented on DRILL-800:
--------------------------------------

can you rebase on latest master?

> Partitioner is dropping records that can't fit in the available space of ValueVectors in OutgoingRecordBatch
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: DRILL-800
>                 URL: https://issues.apache.org/jira/browse/DRILL-800
>             Project: Apache Drill
>          Issue Type: Bug
>            Reporter: Venki Korukanti
>            Assignee: Venki Korukanti
>         Attachments: DRILL-800-1.patch
>
>
> The generated Partitioner code looks like this:
> {code}
> public void partitionBatch(RecordBatch incoming) {
>   for (int recordId = 0; recordId < incoming.getRecordCount(); ++recordId) {
>     doEval(recordId, 0);
>   }
> }
> {code}
> In the generated doEval:
> {code}
> public void doEval(int inIndex, int outIndex) {
>    ....
>   // copyFromSafe returns false when there is not enough space left in the
>   // outgoing ValueVector to hold this record
>   if (!((NullableBigIntVector) outgoingVectors[(bucket)][ 0]).copyFromSafe((inIndex), outgoingBatches[(bucket)].getRecordCount(), vv35)) {
>     outgoingBatches[(bucket)].flush();
>     // BUG: returns without retrying the copy, so the record at inIndex is dropped
>     return ;
>   }
>   ....
>   outgoingBatches[(bucket)].incRecordCount();
>   outgoingBatches[(bucket)].flushIfNecessary();
> }
> {code}
> If copyFromSafe returns false due to insufficient space, we flush the records already accumulated in the outgoing batch and move on to the next record; the record that could not fit is silently dropped.
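> A possible fix (a minimal sketch against the generated snippet above; the actual patch may differ) is to retry the copy after flushing instead of returning, so the record that did not fit lands in the freshly emptied batch:
> {code}
> // Sketch: after flush() empties the outgoing batch, retry the copy so the
> // current record is written instead of dropped. If even an empty batch
> // cannot hold it, fail loudly rather than silently losing data.
> if (!((NullableBigIntVector) outgoingVectors[(bucket)][ 0]).copyFromSafe((inIndex), outgoingBatches[(bucket)].getRecordCount(), vv35)) {
>   outgoingBatches[(bucket)].flush();
>   if (!((NullableBigIntVector) outgoingVectors[(bucket)][ 0]).copyFromSafe((inIndex), outgoingBatches[(bucket)].getRecordCount(), vv35)) {
>     throw new RuntimeException("Record does not fit in an empty outgoing batch");
>   }
> }
> {code}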


