Posted to common-dev@hadoop.apache.org by "Jingkei Ly (JIRA)" <ji...@apache.org> on 2009/03/27 13:16:50 UTC

[jira] Commented: (HADOOP-5589) TupleWritable: Lift implicit limit on the number of values that can be stored

    [ https://issues.apache.org/jira/browse/HADOOP-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12689884#action_12689884 ] 

Jingkei Ly commented on HADOOP-5589:
------------------------------------

The tests below demonstrate the current 64-value limit in TupleWritable: testTupleBoundarySuccess (64 values, position 63 marked as written) passes, while testTupleBoundaryFailure (65 values, position 64 marked as written) fails:
{code}
// Placeholder value for positions that should not be reported as written.
Text emptyText = new Text("Should not be marked as written");

public void testTupleBoundarySuccess() throws Exception {
  // 64 values: position 63 is the last bit available, so this still works.
  Writable[] values = new Writable[64];
  Arrays.fill(values, emptyText);
  values[63] = new Text("Should be the only value marked as written");

  TupleWritable tuple = new TupleWritable(values);
  tuple.setWritten(63);

  for (int pos = 0; pos < tuple.size(); pos++) {
    boolean has = tuple.has(pos);
    if (pos == 63) {
      assertTrue(has);
    } else {
      assertFalse("Tuple position is incorrectly labelled as set: " + pos, has);
    }
  }
}

public void testTupleBoundaryFailure() throws Exception {
  // 65 values: position 64 no longer fits, and the assertions below fail.
  Writable[] values = new Writable[65];
  Arrays.fill(values, emptyText);
  values[64] = new Text("Should be the only value marked as written");

  TupleWritable tuple = new TupleWritable(values);
  tuple.setWritten(64);

  for (int pos = 0; pos < tuple.size(); pos++) {
    boolean has = tuple.has(pos);
    if (pos == 64) {
      assertTrue(has);
    } else {
      assertFalse("Tuple position is incorrectly labelled as set: " + pos, has);
    }
  }
}
{code}
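
To make the failure mode concrete, here is a minimal sketch (my own illustration, not the TupleWritable source) of what goes wrong if written positions are tracked by setting bit {{i}} of a long with {{1L << i}}: Java uses only the low 6 bits of a long shift distance, so position 64 silently wraps around to position 0.
{code}
// Illustration only (not the TupleWritable source): tracking "written"
// positions in a single long bitmask. Because Java masks long shift
// distances to 6 bits, 1L << 64 is the same as 1L << 0.
public class LongMaskDemo {
  public static void main(String[] args) {
    long written = 0L;

    written |= (1L << 63);                            // position 63: top bit, still fits
    System.out.println((written & (1L << 63)) != 0);  // true

    written = 0L;
    written |= (1L << 64);                            // position 64: wraps to bit 0
    System.out.println((written & (1L << 64)) != 0);  // true, but only because 1L << 64 == 1L
    System.out.println((written & (1L << 0)) != 0);   // true: position 0 now looks "written"
  }
}
{code}
Under that assumption, setWritten(64) really sets bit 0, so position 0 incorrectly reports as written, which is the kind of unexpected result the second test above exposes.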

> TupleWritable: Lift implicit limit on the number of values that can be stored
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-5589
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5589
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.21.0
>            Reporter: Jingkei Ly
>
> TupleWritable uses an instance field of the primitive type long, which I presume is so that it can quickly determine whether a position in its array of Writables has been written to (by using bit-shifting operations on the long field). The problem is that this implies a maximum of 64 values can be stored in a TupleWritable.
> An example of a use case where I think this would be a problem: if you had two MR jobs with over 64 reduce tasks and wanted to join their outputs with CompositeInputFormat, this would probably cause unexpected results under the current scheme.
> At the very least, the 64-value limit should be documented in TupleWritable.
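
Regarding the long field described in the quoted report: one possible direction for lifting the limit (just a rough sketch on my part, not a patch) is to track written positions with a java.util.BitSet, which grows as needed; the BitSet contents would of course also have to be serialized in write()/readFields().
{code}
import java.util.BitSet;

// Rough sketch only, not the proposed change to TupleWritable: a BitSet has
// no fixed width, so the number of trackable positions is not capped at 64.
public class BitSetWrittenSketch {
  private final BitSet written = new BitSet();

  public void setWritten(int i) {
    written.set(i);               // valid for i >= 64 as well
  }

  public void clearWritten(int i) {
    written.clear(i);
  }

  public boolean has(int i) {
    return written.get(i);
  }

  public static void main(String[] args) {
    BitSetWrittenSketch sketch = new BitSetWrittenSketch();
    sketch.setWritten(64);
    System.out.println(sketch.has(64));  // true
    System.out.println(sketch.has(0));   // false: no wrap-around
  }
}
{code}
The trade-off is slightly more expensive serialization than a single long, which is presumably why the bitmask was chosen in the first place.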

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.