Posted to common-dev@hadoop.apache.org by "jv ning (JIRA)" <ji...@apache.org> on 2009/06/09 18:47:07 UTC
[jira] Commented: (HADOOP-5589) TupleWritable: Lift implicit limit on the number of values that can be stored
[ https://issues.apache.org/jira/browse/HADOOP-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12717744#action_12717744 ]
jv ning commented on HADOOP-5589:
---------------------------------
How does the CompositeRecordReader change with this patch? I have been backporting to 18.2, and it looks like CompositeRecordReader also uses a long to hold this information.
> TupleWritable: Lift implicit limit on the number of values that can be stored
> -----------------------------------------------------------------------------
>
> Key: HADOOP-5589
> URL: https://issues.apache.org/jira/browse/HADOOP-5589
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Affects Versions: 0.19.1
> Reporter: Jingkei Ly
> Assignee: Jingkei Ly
> Fix For: 0.21.0
>
> Attachments: HADOOP-5589-1.patch, HADOOP-5589-2.patch, HADOOP-5589-3.patch, HADOOP-5589-4.patch, HADOOP-5589-4.patch
>
>
> TupleWritable uses an instance field of the primitive type long, which I presume is so that it can quickly determine whether a position in its array of Writables has been written to (by using bit-shifting operations on the long field). The problem is that this implies a maximum of 64 values that can be stored in a TupleWritable.
> An example of a use-case where I think this would be a problem: if you had two MR jobs with more than 64 reduce tasks and you wanted to join their outputs with CompositeInputFormat, this would probably cause unexpected results under the current scheme.
> At the very least, the 64-value limit should be documented in TupleWritable.
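To illustrate the limit described above, here is a minimal sketch (not the actual TupleWritable source) of the one-bit-per-slot tracking scheme the description attributes to it, assuming a plain long bitmask. In Java, the shift distance of `1L << i` is taken modulo 64, so position 64 silently aliases position 0 rather than failing loudly:

```java
public class TupleBitmaskDemo {
    // Hypothetical stand-in for TupleWritable's written-slot bitmask:
    // one bit per tuple position, so at most 64 positions are representable.
    private long written = 0L;

    void setWritten(int i) {
        written |= (1L << i);     // for i >= 64, Java masks the shift to i % 64
    }

    boolean has(int i) {
        return (written & (1L << i)) != 0;
    }

    public static void main(String[] args) {
        TupleBitmaskDemo t = new TupleBitmaskDemo();
        t.setWritten(63);
        System.out.println(t.has(63));   // true: slot 63 is the last valid one
        t.setWritten(64);                // 1L << 64 wraps around to 1L << 0
        System.out.println(t.has(0));    // true: slot 0 is falsely marked written
    }
}
```

This wrap-around is why joins over more than 64 sources would produce silently wrong results rather than an error, which is the "unexpected results" the description warns about.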
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.