Posted to common-dev@hadoop.apache.org by Edward Nevill <ed...@linaro.org> on 2015/01/21 12:42:18 UTC

AARCH64 build broken

Hi,

Hadoop currently does not build on ARM AARCH64. I have raised a JIRA issue
with a patch.

https://issues.apache.org/jira/browse/HADOOP-11484

I have submitted the patch and it builds OK and passes all the core tests.

Many thanks,
Ed.

Re: AARCH64 build broken

Posted by Steve Loughran <st...@hortonworks.com>.
On 22 January 2015 at 19:34, Edward Nevill <ed...@linaro.org> wrote:

> Another question is whether we actually care about 32 bit platforms, or can
> they just all downgrade to C code. Does anyone actually build Hadoop on a
> 32 bit platform?
>



I think we can assume that pretty much everyone has switched to 64-bit
JVMs, though it was common in Hadoop 1.x for people to have 32-bit JVMs
for the JTs and DNs to save RAM in those processes.

There's some work someone has been doing on a 32-bit Windows Hadoop build
for client-side use.


Re: AARCH64 build broken

Posted by Colin McCabe <cm...@alumni.cmu.edu>.
Good find.  I filed HADOOP-11505 to fix the incorrect usage of
unoptimized code on x86 and the incorrect bswap on alternative
architectures.

Let's address the fmemcmp stuff in a separate jira.

best,
Colin


On Thu, Jan 22, 2015 at 11:34 AM, Edward Nevill
<ed...@linaro.org> wrote:
> On 21 January 2015 at 11:42, Edward Nevill <ed...@linaro.org> wrote:
>
>> Hi,
>>
>> Hadoop currently does not build on ARM AARCH64. I have raised a JIRA issue
>> with a patch.
>>
>> https://issues.apache.org/jira/browse/HADOOP-11484
>>
>> I have submitted the patch and it builds OK and passes all the core tests.
>>
>
> Hi Colin,
>
> Thanks for pushing this patch.  Steve Loughran raised the issue in the card
> that although this patch fixes the ARM issue it does nothing for other
> archs.
>
> I would be happy to prepare a patch which makes it downgrade to C code on
> other CPU families if this would be useful.
>
> The general format would be
>
> #ifdef __aarch64__
>    __asm__("ARM Asm")
> #elif defined(??X86??)
>    __asm__("X86 Asm")
> #else
>    C Implementation
> #endif
>
> My question is what to put for the defined(??X86??).
>
> According to the following page
>
> http://nadeausoftware.com/articles/2012/02/c_c_tip_how_detect_processor_type_using_compiler_predefined_macros
>
> the only way to fully detect all x86 variants, 32- and 64-bit, across
> gcc and Windows is to write
>
> #if defined(__x86_64__) || defined(_M_X64) || defined(__i386) ||
>     defined(_M_IX86)
>
> Interestingly the bswap64 inline function in primitives.h has the following
>
> #ifdef __X64
>   __asm__("rev ....");
> #else
>   C implementation
> #endif
>
> However, if I compile Hadoop on my 64-bit Red Hat Enterprise Linux system
> it actually compiles the C implementation (I have verified this by putting
> a #error at the start of the C implementation). This is because the correct
> macro to detect 64-bit x86 on gcc is __x86_64__. I had also thought that
> the macro for Windows was _M_X64, not __X64, but maybe __X64 works just as
> well on Windows? Perhaps someone with access to a Windows development
> platform could do some tests and tell us what macros actually work.
>
> Another question is whether we actually care about 32 bit platforms, or can
> they just all downgrade to C code. Does anyone actually build Hadoop on a
> 32 bit platform?
>
> Another thing to be aware of is that there are endian dependencies in
> primitives.h; for example, in fmemcmp(), just a bit further down, is the line
>
>     return (int64_t)bswap(*(uint32_t*)src) -
> (int64_t)bswap(*(uint32_t*)dest);
>
> This is little-endian dependent, so it will work on the likes of x86 and
> ARM but will fail on SPARC. Note, I haven't trawled looking for endian
> dependencies, but this was one I just spotted while looking at the aarch64
> non-compilation issue.
>
> All the best,
> Ed.

Re: AARCH64 build broken

Posted by Edward Nevill <ed...@linaro.org>.
On 21 January 2015 at 11:42, Edward Nevill <ed...@linaro.org> wrote:

> Hi,
>
> Hadoop currently does not build on ARM AARCH64. I have raised a JIRA issue
> with a patch.
>
> https://issues.apache.org/jira/browse/HADOOP-11484
>
> I have submitted the patch and it builds OK and passes all the core tests.
>

Hi Colin,

Thanks for pushing this patch.  Steve Loughran raised the issue in the card
that although this patch fixes the ARM issue it does nothing for other
archs.

I would be happy to prepare a patch which makes it downgrade to C code on
other CPU families if this would be useful.

The general format would be

#ifdef __aarch64__
   __asm__("ARM Asm")
#elif defined(??X86??)
   __asm__("X86 Asm")
#else
   C Implementation
#endif

My question is what to put for the defined(??X86??).

According to the following page

http://nadeausoftware.com/articles/2012/02/c_c_tip_how_detect_processor_type_using_compiler_predefined_macros

the only way to fully detect all x86 variants, 32- and 64-bit, across gcc
and Windows is to write

#if defined(__x86_64__) || defined(_M_X64) || defined(__i386) ||
    defined(_M_IX86)

Interestingly the bswap64 inline function in primitives.h has the following

#ifdef __X64
  __asm__("rev ....");
#else
  C implementation
#endif

However, if I compile Hadoop on my 64-bit Red Hat Enterprise Linux system
it actually compiles the C implementation (I have verified this by putting
a #error at the start of the C implementation). This is because the correct
macro to detect 64-bit x86 on gcc is __x86_64__. I had also thought that
the macro for Windows was _M_X64, not __X64, but maybe __X64 works just as
well on Windows? Perhaps someone with access to a Windows development
platform could do some tests and tell us what macros actually work.

Another question is whether we actually care about 32 bit platforms, or can
they just all downgrade to C code. Does anyone actually build Hadoop on a
32 bit platform?

Another thing to be aware of is that there are endian dependencies in
primitives.h; for example, in fmemcmp(), just a bit further down, is the line

    return (int64_t)bswap(*(uint32_t*)src) -
(int64_t)bswap(*(uint32_t*)dest);

This is little-endian dependent, so it will work on the likes of x86 and
ARM but will fail on SPARC. Note, I haven't trawled looking for endian
dependencies, but this was one I just spotted while looking at the aarch64
non-compilation issue.

All the best,
Ed.