Posted to mapreduce-dev@hadoop.apache.org by "Michael White (JIRA)" <ji...@apache.org> on 2011/05/27 05:37:47 UTC

[jira] [Created] (MAPREDUCE-2538) InputSampler.writePartitionFile() may write duplicate keys

InputSampler.writePartitionFile() may write duplicate keys
----------------------------------------------------------

                 Key: MAPREDUCE-2538
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2538
             Project: Hadoop Map/Reduce
          Issue Type: Bug
    Affects Versions: 0.20.2
         Environment: EMR.
            Reporter: Michael White
            Priority: Minor


InputSampler.writePartitionFile() outputs the same key multiple times when the input samples contain enough copies of a given key to span multiple partition boundaries.  There is logic in the code that appears intended to avoid this, but it is incorrect:

int last = -1;  // initialized just above the loop in the original source
for(int i = 1; i < numPartitions; ++i) {
  int k = Math.round(stepSize * i);
  while (last >= k && comparator.compare(samples[last], samples[k]) == 0) {
    ++k;
  }
  writer.append(samples[k], nullValue);
  last = k;
}

The while loop condition "last >= k" is always false: "last" holds the "k" from the previous iteration, which is always smaller than the current "k", so the sample comparison after the && never runs and duplicate keys are written unchecked.
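A minimal standalone reproduction of the behavior described above (String keys stand in for the real sampled keys, and the loop shape is copied from the 0.20.2 snippet; this is an illustration, not the actual Hadoop class):

```java
import java.util.Arrays;

public class DuplicateKeyDemo {
    // Mirrors the boundary-selection loop from writePartitionFile().
    static String[] computeBoundaries(String[] samples, int numPartitions) {
        float stepSize = samples.length / (float) numPartitions;
        int last = -1; // as initialized in the original method
        String[] boundaries = new String[numPartitions - 1];
        for (int i = 1; i < numPartitions; ++i) {
            int k = Math.round(stepSize * i);
            // "last" always holds the previous, smaller k, so this loop
            // body never executes and duplicates slip through:
            while (last >= k && samples[last].compareTo(samples[k]) == 0) {
                ++k;
            }
            boundaries[i - 1] = samples[k];
            last = k;
        }
        return boundaries;
    }

    public static void main(String[] args) {
        // A run of "b" long enough to span several partition boundaries.
        String[] samples = {"a", "b", "b", "b", "b", "b", "b", "c"};
        // All three boundaries come out as "b":
        System.out.println(Arrays.toString(computeBoundaries(samples, 4)));
    }
}
```

With numPartitions = 4 and stepSize = 2.0, the chosen indices are 2, 4, and 6, all of which hold "b", so the partition file gets the same key three times.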

It's not entirely clear what the correct fix is.  The current behavior is arguably correct mathematically, though the dead while loop could be removed for clarity.  If MAPREDUCE-1987 were fixed, this would be less of a problem (for me at least), since that is where the non-unique keys cause me trouble.

Alternatively, changing the while to:

if (last >= 0) {
  while (comparator.compare(samples[last], samples[k]) >= 0) {

or, optimized for skipping over many duplicates (but arguably less clear):

if (last >= 0) {
  while (last >= k || comparator.compare(samples[last], samples[k]) >= 0) {

would probably achieve what the original author intended.
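A standalone sketch of the loop with the first suggested guard applied (String keys and method names are stand-ins, and the bounds check on "k" is an added safeguard the snippets above omit; this is not a tested Hadoop patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class UniqueBoundaries {
    // Skips k forward past any sample that is not strictly greater than
    // the last boundary written, so emitted boundaries are unique.
    static String[] computeUniqueBoundaries(String[] samples, int numPartitions) {
        float stepSize = samples.length / (float) numPartitions;
        List<String> out = new ArrayList<>();
        int last = -1;
        for (int i = 1; i < numPartitions; ++i) {
            int k = Math.round(stepSize * i);
            if (last >= 0) {
                while (k < samples.length
                        && samples[last].compareTo(samples[k]) >= 0) {
                    ++k;
                }
            }
            if (k >= samples.length) {
                break; // ran out of distinct samples; fewer boundaries result
            }
            out.add(samples[k]);
            last = k;
        }
        return out.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] samples = {"a", "b", "b", "b", "b", "b", "b", "c"};
        // Boundaries are now distinct, though fewer than requested:
        System.out.println(Arrays.toString(computeUniqueBoundaries(samples, 4)));
    }
}
```

On the same input as before this yields the distinct boundaries "b" and "c" instead of "b" three times; note that uniqueness can leave you with fewer boundaries than numPartitions - 1 when samples run out.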

Perhaps the behavior could be selected by a parameter, e.g. "boolean unique".

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira