Posted to common-issues@hadoop.apache.org by "Suman Somasundar (JIRA)" <ji...@apache.org> on 2015/01/07 20:31:35 UTC

[jira] [Created] (HADOOP-11466) Newer Fast byte comparison is slower in SUN SPARC machines

Suman Somasundar created HADOOP-11466:
-----------------------------------------

             Summary: Newer Fast byte comparison is slower in SUN SPARC machines
                 Key: HADOOP-11466
                 URL: https://issues.apache.org/jira/browse/HADOOP-11466
             Project: Hadoop Common
          Issue Type: Improvement
          Components: io, performance, util
         Environment: Linux X86 and Solaris SPARC
            Reporter: Suman Somasundar
            Assignee: Suman Somasundar
            Priority: Minor


One difference between Hadoop 2.x and Hadoop 1.x is a utility that compares two byte arrays at a coarser 8-byte granularity instead of byte by byte. The discussion at HADOOP-7761 notes that this fast byte comparison is somewhat faster for longer arrays and somewhat slower for smaller ones (AVRO-939). To perform 8-byte reads at addresses not aligned to 8-byte boundaries, the patch uses Unsafe.getLong. The problem is that this call is extremely expensive on SPARC: the Studio compiler detects the unaligned pointer read and handles it in software. x86 supports unaligned reads in hardware, so there is no penalty for this call on x86.
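For illustration, a minimal sketch of the 8-byte-granularity comparison idea follows. This is not the actual Hadoop implementation (which reads words via sun.misc.Unsafe.getLong in native byte order, the call at issue here); it instead assembles each 8-byte word from plain byte reads, so the class name and helper are hypothetical, but the word-at-a-time unsigned comparison is the same technique:

```java
import java.util.Arrays;

public class FastByteComparator {
    // Assemble 8 bytes into a big-endian long starting at offset.
    // Caller must ensure off + 8 <= b.length.
    private static long readLongBE(byte[] b, int off) {
        long v = 0;
        for (int i = 0; i < 8; i++) {
            v = (v << 8) | (b[off + i] & 0xFFL);
        }
        return v;
    }

    // Lexicographic comparison of unsigned bytes, 8 bytes at a time,
    // falling back to byte-by-byte for the tail.
    public static int compare(byte[] a, byte[] b) {
        int minLen = Math.min(a.length, b.length);
        int wordEnd = minLen - (minLen % 8);
        for (int i = 0; i < wordEnd; i += 8) {
            long la = readLongBE(a, i);
            long lb = readLongBE(b, i);
            if (la != lb) {
                // Big-endian packing makes unsigned long order match
                // lexicographic byte order.
                return Long.compareUnsigned(la, lb) < 0 ? -1 : 1;
            }
        }
        for (int i = wordEnd; i < minLen; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) {
                return d;
            }
        }
        return a.length - b.length;
    }
}
```

The SPARC issue arises precisely because the real patch replaces the readLongBE-style byte assembly above with a single Unsafe.getLong on a possibly unaligned address, which is cheap on x86 but falls back to a software path on SPARC.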



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)