Posted to common-issues@hadoop.apache.org by "Colin Patrick McCabe (JIRA)" <ji...@apache.org> on 2012/05/07 23:28:49 UTC

[jira] [Created] (HADOOP-8368) Use CMake rather than autotools to build native code

Colin Patrick McCabe created HADOOP-8368:
--------------------------------------------

             Summary: Use CMake rather than autotools to build native code
                 Key: HADOOP-8368
                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Colin Patrick McCabe
            Assignee: Colin Patrick McCabe
            Priority: Minor


It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.

Rationale:
1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing Cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.

CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
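To sketch what "no shell scripts" looks like in practice, here is a minimal, purely illustrative CMakeLists.txt fragment (the project and file names are hypothetical, not Hadoop's actual build files): platform differences are expressed declaratively, with no dependence on which shell /bin/sh happens to be.

```cmake
# Illustrative only -- names are hypothetical, not from the Hadoop tree.
cmake_minimum_required(VERSION 2.6)
project(native-example C)

# Platform selection is a declarative branch, not a generated sh script.
if(WIN32)
    set(PLATFORM_SRC windows_io.c)
else()
    set(PLATFORM_SRC posix_io.c)
endif()

add_library(example ${PLATFORM_SRC})
```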

2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.

CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
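As a hedged sketch of those "required libraries are installed" checks (zlib is just an example dependency here, not a claim about Hadoop's actual requirements), the equivalent of a few configure.ac macros looks like this in a CMakeLists.txt:

```cmake
# Illustrative sketch: header and library probes done by CMake's
# standard modules, replacing AC_CHECK_HEADER / AC_CHECK_LIB.
include(CheckIncludeFile)
include(CheckLibraryExists)

check_include_file("zlib.h" HAVE_ZLIB_H)
check_library_exists(z deflate "" HAVE_LIBZ)

if(NOT HAVE_ZLIB_H OR NOT HAVE_LIBZ)
    message(FATAL_ERROR "zlib headers and library are required")
endif()
```

The results land in the CMake cache, so re-runs of the configuration step are fast and reproducible.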

3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using openSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).

CMake allows you to specify, via cmake_minimum_required, the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
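The version pin described above is a single declaration at the top of the file (version number chosen for illustration):

```cmake
# Configuration aborts immediately, with a clear message, on any
# CMake older than the stated version -- instead of failing
# obscurely partway through the build.
cmake_minimum_required(VERSION 2.6)
```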

4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.

For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290874#comment-13290874 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

Opened HADOOP-8489 to deal with issues related to mixed 32/64-bit environments.  Hopefully this is fixed (still waiting on a Jenkins run of the patch).
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch
>
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289589#comment-13289589 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

Reverted r1346102 for branch-2.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>
>


[jira] [Resolved] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur resolved HADOOP-8368.
----------------------------------------

    Resolution: Fixed

I've reverted Nicholas' revert from trunk & branch-2; it applied cleanly in both branches.  I've done a clean native build in both branches without any issues.  Re-committed to trunk and branch-2.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch
>
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.020.trimmed.patch

New patch that excludes the hadooppipes and hadooputils code, since that will be done in another JIRA.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch
>
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.029.patch

Here's a fix for the 32/64 bit issues, taken from HADOOP-8489.

Please note that this will appear to succeed even if the build fails (HADOOP-8488).  So we'll need to look carefully at the generated Jenkins output to make sure things are working properly.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch
>
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.001.patch
    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch
>
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281400#comment-13281400 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12528699/HADOOP-8368.014.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1022//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1022//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and the a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum_required_version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
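
To make the rationale concrete, a minimal CMakeLists.txt along the lines described above might look like the following.  This is only a sketch; the library and source-file names here are hypothetical and are not taken from the actual HADOOP-8368 patches.

{code}
# Refuse to configure with an older CMake; this prevents the
# version-skew build failures described in point 3.
cmake_minimum_required(VERSION 2.6)
project(hadoop-native C)

# Probe for a required library; configuration stops with a clear
# message if zlib is absent, replacing the equivalent autoconf checks.
find_package(ZLIB REQUIRED)
include_directories(${ZLIB_INCLUDE_DIRS})

# Build a shared library from the native sources (hypothetical file).
add_library(hadoop SHARED src/org_apache_hadoop.c)
target_link_libraries(hadoop ${ZLIB_LIBRARIES})
{code}

Running {{cmake . && make}} then replaces the whole autoreconf/configure/make sequence, with no shell-script generation involved.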

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Thomas Graves (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283694#comment-13283694 ] 

Thomas Graves commented on HADOOP-8368:
---------------------------------------

The incompatibility that I would flag is removing the .a files.  Or I would suggest putting them back unless there is a reason to remove them.

I agree that I don't think the build environment change qualifies as an incompatible change.  It would be nice to document it, though - perhaps in the twikis and BUILDING.txt.  It is fairly obvious when the build fails because cmake is missing.
                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280413#comment-13280413 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

Just FYI, the 2 javadoc warnings are the gridmix thing again.

{code}
[WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/StressJobFactory.java:305: warning - @param argument "stats" is not a parameter name.
[WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/StressJobFactory.java:305: warning - @param argument "clusterStatus" is not a parameter name.
{code}

That wasn't introduced by this patch.
                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Eli Collins (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289730#comment-13289730 ] 

Eli Collins commented on HADOOP-8368:
-------------------------------------

bq. The tests are not running in Jenkins. You may check the recent builds. 

Which tests are you referring to? I updated the Hadoop-*-trunk jobs so they'd pass with this change.

bq. We should revert this since it never passes Jenkins

This patch passed Jenkins several times above.
                

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.026.trimmed.patch
                HADOOP-8368.026.rm.patch

* Remove a stray file that was present in version 25 but is no longer used.

* Update the svn rm set; all file removals are now included there.

I tested this on Ubuntu 12.04 and it worked.
                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Thomas Graves (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283696#comment-13283696 ] 

Thomas Graves commented on HADOOP-8368:
---------------------------------------

Oops, looks like I should have refreshed - ignore my comment about the incompatibility of removing the .a files, since Colin already addressed it.
                

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Allen Wittenauer (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-8368:
-------------------------------------

    Hadoop Flags: Incompatible change
    

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291169#comment-13291169 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk-Commit #2403 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2403/])
    svn merge -c -1347092 for reverting HADOOP-8368 again. (Revision 1347738)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347738
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required(), the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
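The features the description leans on — a minimum-version check, probing for required libraries, and building a shared native library — are compact in CMake. A minimal sketch, not Hadoop's actual build file; the project, target, library, and source names here are illustrative:

```cmake
# Refuse to configure under an older CMake than this file was written for.
cmake_minimum_required(VERSION 2.6)
project(example C)

# Probe for a required library, analogous to autoconf's AC_CHECK_LIB.
find_library(Z_LIBRARY NAMES z)
if(NOT Z_LIBRARY)
    message(FATAL_ERROR "zlib is required but was not found")
endif()

# Build a shared library from native sources.
add_library(example SHARED src/example.c)
target_link_libraries(example ${Z_LIBRARY})
```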

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291170#comment-13291170 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

Revert r1347092 for trunk and r1347094 for branch-2.

Please feel free to submit new patches.  I am happy to help check the report.  I won't withdraw my -1 until it passes Jenkins.
                

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment:     (was: HADOOP-8368.001.patch)
    

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290706#comment-13290706 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

The patch still does not seem to work.  There were some errors in [build #2612|https://builds.apache.org/job/PreCommit-HDFS-Build/2612/consoleText].
{noformat}
     [exec] make[2]: *** [target/usr/local/lib/libhdfs.so.0.0.0] Error 1
     [exec] make[1]: *** [CMakeFiles/hdfs.dir/all] Error 2
     [exec] make: *** [all] Error 2
     [exec] Linking C shared library target/usr/local/lib/libhdfs.so
     [exec] /usr/bin/cmake -E cmake_link_script CMakeFiles/hdfs.dir/link.txt --verbose=1
     [exec] /usr/bin/gcc  -fPIC  -m32 -g -Wall -O2 -D_GNU_SOURCE -D_REENTRANT -D_FILE_OFFSET_BITS=64  -shared -Wl,-soname,libhdfs.so.0.0.0 -o target/usr/local/lib/libhdfs.so.0.0.0 CMakeFiles/hdfs.dir/main/native/hdfs.c.o CMakeFiles/hdfs.dir/main/native/hdfsJniHelper.c.o /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server/libjvm.so -Wl,-rpath,/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server 
     [exec] make[2]: Leaving directory `/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native'
     [exec] make[1]: Leaving directory `/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native'
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 23.829s
[INFO] Finished at: Thu Jun 07 01:46:34 UTC 2012
[INFO] Final Memory: 29M/397M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (make) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 2 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[I{noformat}

Hi Alejandro, why not wait for the Jenkins report before committing this?
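For what it's worth, the link line in the log above combines a 32-bit build (gcc ... -m32) with the 64-bit JVM library (.../jre/lib/amd64/server/libjvm.so), which is one plausible cause of the libhdfs.so link failure. A hedged sketch of failing fast on that mismatch from a CMakeLists.txt; JAVA_JVM_LIBRARY is an assumed variable set by an earlier find_library call, not necessarily the name the Hadoop build uses:

```cmake
# Sketch: abort configuration when a 32-bit build is paired with a 64-bit JVM.
# JAVA_JVM_LIBRARY is assumed to have been set by find_library() earlier.
if(CMAKE_C_FLAGS MATCHES "-m32" AND JAVA_JVM_LIBRARY MATCHES "amd64")
    message(FATAL_ERROR
            "Cannot link a 32-bit libhdfs against 64-bit ${JAVA_JVM_LIBRARY}; "
            "point the build at a matching JVM or drop -m32.")
endif()
```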
                

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment:     (was: HADOOP-8368.trimmed.013.patch)
    

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Status: Patch Available  (was: Open)
    

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283131#comment-13283131 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12529666/HADOOP-8368.020.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1033//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1033//console

This message is automatically generated.
                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291661#comment-13291661 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12531379/HADOOP-8368.030.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +0 tests included.  The patch appears to be a documentation patch that doesn't require tests.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    -1 core tests.  The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

                  org.apache.hadoop.fs.viewfs.TestViewFsTrash

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1099//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1099//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing Cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is on other Linux distributions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do everything autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using openSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum required version of CMake (via cmake_minimum_required) that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backward compatibility between versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow, which adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
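As a sketch of the library-detection point above, an autotools-style AC_CHECK_HEADER/AC_CHECK_LIB pair maps onto CMake's check modules roughly as follows; the zlib requirement here is illustrative, not a claim about what the actual patch checks:

```cmake
include(CheckIncludeFile)
include(CheckLibraryExists)

# Look for the zlib header, and for deflate() in libz.
check_include_file("zlib.h" HAVE_ZLIB_H)
check_library_exists(z deflate "" HAVE_LIBZ)

# Fail at configure time, with a readable message, if either is missing.
if(NOT HAVE_ZLIB_H OR NOT HAVE_LIBZ)
  message(FATAL_ERROR "zlib is required to build the native code")
endif()
```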


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290084#comment-13290084 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk #1069 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1069/])
    svn merge -c -1345421 for reverting HADOOP-8368. (Revision 1346491)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1346491
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13285325#comment-13285325 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12530136/HADOOP-8368.025.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1053//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1053//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Eli Collins (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins updated HADOOP-8368:
--------------------------------

    Target Version/s: 2.0.1-alpha  (was: 2.0.0-alpha)
    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281223#comment-13281223 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

This patch allows you to build libhadoop.so and libhdfs.so without m4.  Check it out; it should provide all the same features in a much simpler way.

It is possible to run m4 from CMake (or any external program), but it should not be necessary.
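One hedged sketch of how m4 (or any external tool) could be run from CMake, were it ever needed, is execute_process at configure time; the file names below are hypothetical:

```cmake
# Run m4 on a (hypothetical) template, writing its stdout into the build tree.
execute_process(
  COMMAND m4 ${CMAKE_CURRENT_SOURCE_DIR}/template.c.m4
  OUTPUT_FILE ${CMAKE_CURRENT_BINARY_DIR}/generated.c
  RESULT_VARIABLE M4_RESULT)

# Surface a clear error if the external tool fails.
if(NOT M4_RESULT EQUAL 0)
  message(FATAL_ERROR "m4 failed with status ${M4_RESULT}")
endif()
```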
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13292045#comment-13292045 ] 

Alejandro Abdelnur commented on HADOOP-8368:
--------------------------------------------

Nicholas, are we good to go?
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288033#comment-13288033 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12530662/HADOOP-8368-b2.001.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    -1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1072//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>
>


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.020.rm.patch

new rm list
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch
>
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281346#comment-13281346 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12528682/HADOOP-8368.trimmed.013.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1021//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1021//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.trimmed.013.patch
>
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Thomas Graves (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283321#comment-13283321 ] 

Thomas Graves commented on HADOOP-8368:
---------------------------------------

Sorry for the delay, I couldn't post this since JIRA was down.

I'm on a RHEL 5 box - 64-bit. We build with both 32-bit and 64-bit Java because we want both 32-bit and 64-bit versions of the native libraries.  I'm currently using Java 1.6.0_22.

The Pipes stuff does now build.
However, when I now try to build with 32-bit Java, it gives the following error:

     [exec] /8368-test/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:68: warning: ‘userListHead’ may be used uninitialized in this function
     [exec] Building C object CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/util/NativeCrc32.c.o
     [exec] [100%] Building C object CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/util/bulk_crc32.c.o
     [exec] /8368-test/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c:44:8: warning: extra tokens at end of #endif directive
     [exec] Linking C shared library libhadoop.so
     [exec] /java_jdk/java/jre/lib/i386/client/libjvm.so: could not read symbols: File in wrong format


I also see that libhadoop.a went away.  I'm not positive whether any of our customers are using it, but it is an incompatibility.  Perhaps others have comments on that. 
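
The "File in wrong format" link error above is the typical symptom of 64-bit object files being linked against a 32-bit libjvm.so. One common way to force a matching 32-bit build with CMake is to pass -m32 through the compiler and linker flags; the option name and guard variable here are a hedged sketch of the general technique, not what the Hadoop patch actually does:

```cmake
# Hypothetical sketch: force a 32-bit build on a 64-bit host so the
# produced objects match a 32-bit libjvm.so. FORCE_32BIT is an
# illustrative cache variable, e.g. set with -DFORCE_32BIT=ON.
if(FORCE_32BIT)
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -m32")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -m32")
    set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -m32")
endif()
```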

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch
>
>


       

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.018.trimmed.patch

* don't search for JNI when compiling hadooppipes, hadooputils (we don't need it for those)

* don't use the -m flag, since we'll automatically determine the machine architecture when compiling
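
The first change above can be sketched roughly as follows. Target and source file names are illustrative, not the actual patch; the point is that CMake's FindJNI module only needs to run for the components that actually call into the JVM:

```cmake
cmake_minimum_required(VERSION 2.6)
project(hadoop-native C CXX)

# libhadoop wraps JNI calls, so it needs the JNI headers and libjvm.
find_package(JNI REQUIRED)
include_directories(${JNI_INCLUDE_DIRS})
add_library(hadoop SHARED src/NativeIO.c)        # hypothetical source
target_link_libraries(hadoop ${JAVA_JVM_LIBRARY})

# hadooppipes/hadooputils are plain C++ and never touch the JVM, so
# no JNI lookup is performed for them.
add_library(hadooputils STATIC utils/StringUtils.cc)  # hypothetical source
```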
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch
>
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290376#comment-13290376 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Common-trunk-Commit #2326 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2326/])
    svn merge -c -1346491 for re-committing HADOOP-8368. (tucu) (Revision 1347092)

     Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347092
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch
>
>


        

[jira] [Reopened] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE reopened HADOOP-8368:
--------------------------------------------


Since Jenkins builds are failing after this, I will revert the patch again.

-1 on the patch in order to prevent the commit-revert situation.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13285194#comment-13285194 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

bq. Any chance to get the -DskipTests working for native (by ant plugin magic)?

Can you file a separate JIRA for that?  The reason is that there may be people depending on the current behavior, so it's not a "no-brainer."  There may be objections or other requests.  Also I'd like to get HADOOP-8368 in as soon as possible so that it can unblock some other things.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291716#comment-13291716 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk #1071 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1071/])
    svn merge -c -1347092 for reverting HADOOP-8368 again. (Revision 1347738)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347738
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Thomas Graves (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13285072#comment-13285072 ] 

Thomas Graves commented on HADOOP-8368:
---------------------------------------

Sorry, that should say I am NOW able to build both 32 and 64 bit
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur updated HADOOP-8368:
---------------------------------------

    Fix Version/s: 3.0.0
    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290935#comment-13290935 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk #1070 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1070/])
    svn merge -c -1346491 for re-committing HADOOP-8368. (tucu) (Revision 1347092)

     Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347092
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  To even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do everything autotools can, such as checking that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken on openSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake lets you specify, via cmake_minimum_required, the minimum CMake version that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
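To make the version pinning and dependency checking above concrete, a minimal CMakeLists.txt might look like the sketch below (the target and file names are invented for illustration; this is not Hadoop's actual build file):

```cmake
# Refuse to configure under an older CMake; newer releases keep
# backward compatibility with the behavior of the stated version.
cmake_minimum_required(VERSION 2.6)
project(hadoop-native C)

# Dependency checking, the CMake analogue of autoconf's library checks.
find_package(ZLIB REQUIRED)

# Build a shared library from native sources.
add_library(example SHARED src/example.c)
target_link_libraries(example ${ZLIB_LIBRARIES})
```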

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13285099#comment-13285099 ] 

Alejandro Abdelnur commented on HADOOP-8368:
--------------------------------------------

Reviewing changes in the POMs:

* A bunch of cmake files are added at the src/ level of the common/hdfs/mapreduce modules and in hadoop-mapreduce-project/. These files should go into src/main/native of the corresponding module. In the case of hadoop-mapreduce-project/, the cmake file should go in the module where it is being used (not in this POM aggregator module).

* Are all native testcases run during the Maven test phase?

* In the hadoop-common POM the variable runas.home is set by default to EMPTY. The build seems to work; is this OK?

* Finally, a nice-to-have: in the current autoconf build, native testcases are not skipped when Maven is invoked with -DskipTests. Any chance of doing that skip with cmake?
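One plausible way to wire up such a skip with CMake (a sketch only; the flag name and test file are invented) is to guard the test targets behind an option that Maven can set when -DskipTests is given:

```cmake
# Hypothetical flag: Maven could pass -DSKIP_NATIVE_TESTS=ON through to
# cmake when the build is invoked with -DskipTests.
option(SKIP_NATIVE_TESTS "Skip building native test executables" OFF)

if(NOT SKIP_NATIVE_TESTS)
    # Test binaries are only declared (and thus built) when not skipping.
    add_executable(test_example src/test/native/test_example.c)
endif()
```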

                

        

[jira] [Reopened] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE reopened HADOOP-8368:
--------------------------------------------


Hi Eli, the tests are not running in Jenkins.  You may check the recent builds.  We should revert this since it never passes Jenkins.  The patch should be re-tested.
                

        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.024.trimmed.patch

* fold include files into CMakeLists.txt
                

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278271#comment-13278271 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527914/HADOOP-8368.008.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    -1 javac.  The applied patch generated 1976 javac compiler warnings (more than the trunk's current 1973 warnings).

    -1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1004//testReport/
Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/1004//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1004//console

This message is automatically generated.
                

        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.012.half.patch

* here is the patch without removed files
                

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270112#comment-13270112 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

bq. Is this meant for branch-1, branch-2/trunk or both?

It seems most reasonable to do this in branch-2.

bq. What do you have in mind for CMake/Maven, CMake/ant integration?

I think the easiest way to go is to use maven-antrun-plugin or possibly make-maven-plugin.  Basically, we just need to invoke the cmake program with an argument or two prior to running make.
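In a POM, that approach might look roughly like the fragment below (a sketch assuming maven-antrun-plugin; the directory layout and phase are illustrative, not the actual Hadoop configuration):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>compile-native</id>
      <phase>compile</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <target>
          <mkdir dir="${project.build.directory}/native"/>
          <!-- Generate Makefiles with cmake, then build with make. -->
          <exec executable="cmake" dir="${project.build.directory}/native"
                failonerror="true">
            <arg line="${basedir}/src/main/native"/>
          </exec>
          <exec executable="make" dir="${project.build.directory}/native"
                failonerror="true"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
```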

I did take a quick look at cmake-maven-project, a project to create a native Maven plugin for CMake.  However, it looks like it requires Java 7, so we won't be able to make use of it for a while.
                

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281989#comment-13281989 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

Hi Thomas,

You are right that it might make more sense to put the hadooppipes / hadooputils changes into MAPREDUCE-4267.

For now, I've left them in this patch.  I think I fixed the issue that prevented you from building earlier.  Can you give it a try and see if it works for you?

Also, are you using a 64-bit or 32-bit computer?  And what JVM version(s) do you have installed?
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and the a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum_required_version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
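As a rough sketch of the proposal above (not the actual Hadoop build file; the target and source names are hypothetical), a minimal CMakeLists.txt covering the two capabilities mentioned -- pinning a minimum CMake version and failing fast when a required library is missing -- might look like:

```cmake
# Illustrative sketch only; library, target, and source names are hypothetical.
cmake_minimum_required(VERSION 2.6)
project(hadoop-native C)

# Analogous to an autoconf AC_CHECK_LIB: fail early if zlib is absent.
find_library(ZLIB_LIBRARY NAMES z)
if(NOT ZLIB_LIBRARY)
    message(FATAL_ERROR "zlib is required to build the native code")
endif()

add_library(hadoop SHARED src/example.c)
target_link_libraries(hadoop ${ZLIB_LIBRARY})
```

Running `cmake .` followed by `make` would then replace the autoreconf/configure/make sequence, with no generated shell scripts involved.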

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Eli Collins (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins updated HADOOP-8368:
--------------------------------

    Status: Open  (was: Patch Available)
    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281142#comment-13281142 ] 

Arun C Murthy commented on HADOOP-8368:
---------------------------------------

There are some m4 tricks we rely on for libhadoop.so, are these viable in CMake?
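For reference, many autoconf/m4 idioms have direct CMake counterparts. A feature check along the lines of AC_CHECK_FUNC could be written as follows (a sketch only; the sync_file_range check is an illustrative example, not necessarily one of the checks libhadoop.so relies on):

```cmake
# Hypothetical CMake equivalent of an autoconf-style function check.
include(CheckFunctionExists)
check_function_exists(sync_file_range HAVE_SYNC_FILE_RANGE)
if(HAVE_SYNC_FILE_RANGE)
    # Expose the result to the C code, as AC_DEFINE would.
    add_definitions(-DHAVE_SYNC_FILE_RANGE)
endif()
```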
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278242#comment-13278242 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527910/HADOOP-8368.007.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    -1 javac.  The applied patch generated 1976 javac compiler warnings (more than the trunk's current 1973 warnings).

    -1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1003//testReport/
Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/1003//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1003//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281060#comment-13281060 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12528606/HADOOP-8368.012.half.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1018//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1018//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281843#comment-13281843 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12528761/HADOOP-8368.016.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1024//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1024//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290356#comment-13290356 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

Hi Nicholas,

It appears that Jenkins does not compile patches with -Pnative.  This is the reason why we originally didn't realize that cmake wasn't installed on the Yahoo! build machines.  I filed INFRA-4886 to address this.

I am running 
{code}
mvn clean package -Pnative -Pdist -Dtar -Dmaven.test.failure.ignore=true
{code}
on the patch right now.  I think this is pretty close to what the nightly build does, so if this passes, we can be fairly sure that the nightly will as well.

I'll also re-submit the patch through Jenkins as you requested.

-C.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289034#comment-13289034 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

We're fixing it as we speak.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289028#comment-13289028 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

Similar errors can be found in [build #1062|https://builds.apache.org/job/PreCommit-HADOOP-Build/1062/console]; see [this QA comment|https://issues.apache.org/jira/browse/HADOOP-8368?focusedCommentId=13286201&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13286201].  So the patch has never worked on Jenkins.
                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290746#comment-13290746 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

bq. Hi Alejandro, why not wait for the Jenkins report before committing this?

Well, for one thing, we know that test-patch gives a +1 even if the native build doesn't work.  There have been several JIRAs filed about it; I think one of them was even filed by you!  I did a build today with the same options as the Jenkins job and it succeeded, so I naturally assumed that the Yahoo! machines would behave similarly.

I do know exactly what the problem is: the build is picking up the 64-bit OpenJDK libjvm.so library, while Maven is being run with a different, 32-bit JVM.  It's just a simple configuration problem, and we'll resolve it shortly.
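The 32-bit/64-bit mismatch described above can be checked directly from a library's ELF header: the fifth byte (EI_CLASS) is 1 for a 32-bit object and 2 for a 64-bit one. A sketch of such a check (the libjvm.so path in the comment is illustrative, not a known-good path):

```shell
#!/bin/sh
# Report whether an ELF file is 32-bit or 64-bit by reading EI_CLASS,
# the 5th byte of the ELF header: 1 = 32-bit, 2 = 64-bit.
elf_class() {
    cls=$(od -An -j4 -N1 -tu1 "$1" | tr -d ' ')
    case "$cls" in
        1) echo "32-bit" ;;
        2) echo "64-bit" ;;
        *) echo "not ELF" ;;
    esac
}

# Example: check the shell binary itself; then compare against the JVM
# library (path below is illustrative -- adjust for your installation).
elf_class /bin/sh
# elf_class "$JAVA_HOME/jre/lib/amd64/server/libjvm.so"
```

If the library's class does not match the bitness of the JVM running Maven, loading it will fail exactly as seen on the Jenkins build machines.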
                

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.030.patch

Resubmitting.

* It looks like we forgot to install cmake on asf000 (I guess nobody expected the indexing to start at 0?)

* Removed the runAs code, since runAs no longer exists (see HADOOP-8450)
                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13293023#comment-13293023 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Common-trunk-Commit #2341 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2341/])
    HADOOP-8368. Amendment to add entry in CHANGES.txt (Revision 1348960)
HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1348957)

     Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348960
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348957
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279330#comment-13279330 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12528181/HADOOP-8368.010.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    -1 javac.  The applied patch generated 1976 javac compiler warnings (more than the trunk's current 1973 warnings).

    -1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1013//testReport/
Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/1013//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1013//console

This message is automatically generated.
                

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.008.patch

* fix misspelled 'DEFINE'
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and the a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum_required_version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
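For reference, the version pinning and library detection described in the rationale amount to a few lines of CMakeLists.txt.  This is only an illustrative sketch, under the assumption of a simple zlib-linked shared library; the project, target, and file names are hypothetical, not Hadoop's actual build files:

```cmake
# Illustrative sketch only -- not one of Hadoop's real CMakeLists.txt files.
cmake_minimum_required(VERSION 2.6)   # refuse to run under an older CMake
project(hadoop-native-sketch C)

# Fail early, with a readable message, if zlib is missing -- the CMake
# analogue of an AC_CHECK_LIB test in configure.ac.
find_package(ZLIB REQUIRED)

add_library(demo SHARED demo.c)       # demo.c is a placeholder source file
target_link_libraries(demo ${ZLIB_LIBRARIES})
```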

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289033#comment-13289033 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

Hi Colin, we have to fix it.  Otherwise, no test can be run on Jenkins.  We may need to revert the patch and wait for INFRA-4881 since the patch indeed depends on it.  Thoughts?
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.012.rm.patch

the svn rm part
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271904#comment-13271904 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526229/HADOOP-8368.001.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    -1 javac.  The applied patch generated 1937 javac compiler warnings (more than the trunk's current 1934 warnings).

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/972//testReport/
Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/972//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/972//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch
>


[jira] [Closed] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy closed HADOOP-8368.
---------------------------------

    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.2-alpha
>
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch, HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368-b2.003.rm.patch, HADOOP-8368-b2.003.trimmed.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287943#comment-13287943 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk #1098 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1098/])
    HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1345421)

     Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345421
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/Makefile.in
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h.in
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287840#comment-13287840 ] 

Alejandro Abdelnur commented on HADOOP-8368:
--------------------------------------------

Thanks Colin. I've just committed this to trunk. I'm getting some conflicts in branch2 (trying to do an svn merge and trying to apply the patch/script), would you please check and if necessary upload the corresponding patch/script for branch-2?

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Aaron T. Myers (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers updated HADOOP-8368:
-----------------------------------

     Target Version/s: 2.0.0
    Affects Version/s: 2.0.0
    

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Thomas Graves (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280954#comment-13280954 ] 

Thomas Graves commented on HADOOP-8368:
---------------------------------------

FYI, the javadoc warnings are fixed by MAPREDUCE-4269.
                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291435#comment-13291435 ] 

Alejandro Abdelnur commented on HADOOP-8368:
--------------------------------------------

As this is an issue in Jenkins, I cannot verify it locally. Are we good to go now? A re-re-revert, or a new patch/script? Nicholas, does it seem OK now?
                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Eli Collins (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289068#comment-13289068 ] 

Eli Collins commented on HADOOP-8368:
-------------------------------------

Hey Nicholas,

I've removed -Pnative from Hadoop-Common-trunk and Hadoop-Hdfs-trunk, which should get those passing again. I've asked someone from Yahoo! to install cmake on the Jenkins hosts (e.g. in /home/jenkins/tools with the other toolchain deps, or just on the host itself). Will try to close this out ASAP; it shouldn't be hard, we just need to install another dependency on the machines.

Hadoop-MapReduce-trunk is failing due to another issue, filed MAPREDUCE-4313 for that.

Thanks,
Eli
                

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291316#comment-13291316 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

Well, some good news first.  I finally managed to reproduce the libjvm.so problem we had in the earlier test runs.  It wasn't easy: I had to set up a virtual machine to get the right versions of all the software.  And the fix does resolve it.

Bad news second: the latest Jenkins run hit another "cmake not installed" problem.

{code}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:
run (make) on project hadoop-common: An Ant BuildException has occured: 
Execute failed: java.io.IOException: Cannot run program "cmake" (in directory
 "/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/native"): 
java.io.IOException: error=2, No such file or directory -> [Help 1]
{code}

I thought we had fixed this on the build machines...?
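A tiny shell sketch (a hypothetical helper, not part of the Hadoop build) of surfacing a missing build tool up front, rather than through Maven's opaque IOException above:

```shell
# Report whether each required build tool is on PATH before starting the build.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}

check_tool sh       # present on any POSIX system
check_tool cmake    # the dependency the Jenkins hosts were missing
```

Running such a check at the top of a CI job turns "error=2, No such file or directory" into an immediate, readable message naming the missing tool.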
                

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.012.patch

* rename some antrun executions to be more descriptive (and avoid warnings about executions with duplicate id values).

* remove an antrun stanza that just copied files around for automake; this is no longer needed.

* yarn-server-nodemanager/pom.xml: fix the test execution stanza.
                

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.028.trimmed.patch
                HADOOP-8368.028.rm.patch

for trunk
                

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur updated HADOOP-8368:
---------------------------------------

       Resolution: Fixed
    Fix Version/s:     (was: 3.0.0)
                   2.0.1-alpha
     Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
           Status: Resolved  (was: Patch Available)

Thanks Colin. Committed to trunk & branch-2.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum required version of CMake that a particular CMakeLists.txt will accept (via the cmake_minimum_required command).  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
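
The points above can be illustrated with a minimal CMakeLists.txt sketch.  This is not Hadoop's actual build file; the project name, source file, and library choice are hypothetical examples of the features the rationale mentions (version pinning, dependency checks).

```cmake
# Hypothetical minimal CMakeLists.txt illustrating the rationale above;
# names and paths are examples, not Hadoop's real build configuration.
cmake_minimum_required(VERSION 2.6)     # point 3: pin a minimum CMake version
project(hadoop-native C)

# "making sure that required libraries are installed":
# REQUIRED makes configuration fail early, with a clear message, if zlib is absent.
find_package(ZLIB REQUIRED)

include_directories(${ZLIB_INCLUDE_DIRS})
add_library(hadoopnative SHARED src/NativeIO.c)
target_link_libraries(hadoopnative ${ZLIB_LIBRARIES})
```

Unlike an autotools setup, this single declarative file replaces configure.ac, Makefile.am, and the generated configure script, and behaves the same on every platform CMake supports.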

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279136#comment-13279136 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12528132/HADOOP-8368.009.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    -1 javac.  The applied patch generated 1976 javac compiler warnings (more than the trunk's current 1973 warnings).

    -1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1010//testReport/
Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/1010//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1010//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Allen Wittenauer (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283682#comment-13283682 ] 

Allen Wittenauer commented on HADOOP-8368:
------------------------------------------

Other platforms now require cmake to be installed whereas before they didn't.  That's an incompatible change in my book.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289582#comment-13289582 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

Reverted r1345421.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287889#comment-13287889 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk #1064 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1064/])
    HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1345421)

     Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345421
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/Makefile.in
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h.in
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.016.trimmed.patch

* hadoop-mapreduce: build the native code using maven+CMake rather than ant+autoconf

* build the hadooputils and hadooppipes libraries as static libraries, because that is what we did in the past (for some reason).  We can change it later if it turns out we want normal shared libraries.

* fix a few things in hadoop-mapreduce/src/CMakeLists.txt.  We depend on OpenSSL in hadooppipes, so locate the library and link against it, among other fixes.
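
The two choices described above (static archives, plus locating and linking OpenSSL) could look roughly like this in CMake.  This is an illustrative sketch, not the actual patch; the source file names are made up.

```cmake
# Sketch of the patch notes above (hypothetical file names, not the real patch):
# build utils/pipes as static archives and make pipes link against OpenSSL.
cmake_minimum_required(VERSION 2.6)
project(hadoop-pipes C CXX)

find_package(OpenSSL REQUIRED)          # fail configuration early if libssl/libcrypto are missing
include_directories(${OPENSSL_INCLUDE_DIR})

add_library(hadooputils STATIC utils/StringUtils.cc)   # STATIC keyword: archive, not .so
add_library(hadooppipes STATIC pipes/HadoopPipes.cc)
target_link_libraries(hadooppipes hadooputils ${OPENSSL_LIBRARIES})
```

Switching STATIC to SHARED later is a one-word change, which is what makes deferring that decision cheap.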
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281713#comment-13281713 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

bq. are the trimmed patches the official ones or just meant for review and you actually want the renames to happen?

HADOOP-8368.015.trimmed.patch and HADOOP-8368.012.rm.patch are the official ones.  Forget the renames.

[snip build discussion]
The easiest way to build everything is ''mvn compile -Pnative''.

There are other options you can add to that command line if you want, like ''-DskipTests'' and ''-Dmaven.javadoc.skip=true''

I realize that the wiki tells you to invoke ant with veryclean before submitting a patch.  However, that's not a replacement for mvn clean, or for maven in general.  It's just used to clear the ivy cache.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch
>
>
> It would be good to use CMake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing Cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems, you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using openSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
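For readers unfamiliar with CMake, the version pinning and required-library checking described in the rationale look roughly like the following.  This is a minimal sketch, not code from the Hadoop tree; the target and source names are hypothetical.

```cmake
# Refuse to configure with a CMake older than the stated version;
# newer CMake releases keep backwards-compatible behavior for this file.
cmake_minimum_required(VERSION 2.6)
project(example_native C)

# Fail the configure step early if zlib is not installed, instead of
# failing later with an obscure compile or link error.
find_package(ZLIB REQUIRED)

# Build a shared library from a (hypothetical) source file and link it
# against the zlib that find_package located.
include_directories(${ZLIB_INCLUDE_DIRS})
add_library(example SHARED example.c)
target_link_libraries(example ${ZLIB_LIBRARIES})
```

Unlike an autotools setup, no generated shell scripts are involved: CMake parses this file directly on every supported platform, including Windows.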

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13292979#comment-13292979 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

Thanks Colin and Alejandro!  :)
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368-b2.003.rm.patch, HADOOP-8368-b2.003.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch
>


       

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Thomas Graves (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281808#comment-13281808 ] 

Thomas Graves commented on HADOOP-8368:
---------------------------------------

Thanks for the update, Colin. Sorry if I confused you, but I think the mavenization of pipes should be handled under MAPREDUCE-4267, because the code should move directories. I just want to make sure it still compiles in the old location.  If that means you remove all the cmake changes you made to it, that's fine with me. I'll make the changes to use cmake in MAPREDUCE-4267. I don't think many people build it right now anyway.

When I try to build with Maven (mvn clean install package -Pnative,dist -Dtar -DskipTests -Dmaven.javadoc.skip=true) with the latest patch, it fails on pipes:
     [exec] Scanning dependencies of target pipes_sort
     [exec] [ 57%] Building CXX object CMakeFiles/pipes_sort.dir/examples/pipes/impl/sort.cc.o
     [exec] /home/y/share/yjava_jdk/java/jre/lib/i386/client/libjvm.so: could not read symbols: File in wrong format
     [exec] collect2: ld returned 1 exit status
     [exec] make[2]: *** [pipes_sort] Error 1
     [exec] make[1]: *** [CMakeFiles/pipes_sort.dir/all] Error 2
     [exec] make: *** [all] Error 2
     [exec] Linking CXX executable pipes_sort
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289596#comment-13289596 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk-Commit #2397 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2397/])
    svn merge -c -1345421 for reverting HADOOP-8368. (Revision 1346491)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1346491
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.030.patch

Resubmitting, in hopes that Jenkins will run on it this time.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287843#comment-13287843 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #2331 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2331/])
    HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1345421)

     Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345421
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/Makefile.in
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h.in
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems, you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
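As an illustration of the version-pinning and library-detection points above, here is a minimal CMakeLists.txt sketch. The project name, target name, source file, and zlib dependency are illustrative assumptions, not the actual Hadoop build files:

```cmake
# Minimal sketch, not the real Hadoop build.
cmake_minimum_required(VERSION 2.6)   # refuse to configure under an older CMake
project(hadoop-native-demo C)

# Fail early, with a clear message, if a required library is missing.
find_package(ZLIB REQUIRED)
include_directories(${ZLIB_INCLUDE_DIRS})

# native_stub.c is a hypothetical stand-in for the real native sources.
add_library(hadoopnativedemo SHARED native_stub.c)
target_link_libraries(hadoopnativedemo ${ZLIB_LIBRARIES})
```

Because the accepted CMake version is pinned by cmake_minimum_required, the same CMakeLists.txt configures identically across distributions, which addresses the lib/lib64 skew described in point 3.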

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278205#comment-13278205 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527883/HADOOP-8368.006.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    -1 javac.  The applied patch generated 1976 javac compiler warnings (more than the trunk's current 1973 warnings).

    -1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1001//testReport/
Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/1001//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1001//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch
>

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Luke Lu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281321#comment-13281321 ] 

Luke Lu commented on HADOOP-8368:
---------------------------------

+1 for using cmake :)

But can you move the native source file renaming (from underscores to hyphens) into a separate patch? It would make the patch a lot smaller to review. It took me a while to notice that you're just renaming these source files.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch
>

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291308#comment-13291308 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12531292/HADOOP-8368.029.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +0 tests included.  The patch appears to be a documentation patch that doesn't require tests.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1095//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1095//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch
>

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291003#comment-13291003 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk #1103 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1103/])
    svn merge -c -1346491 for re-committing HADOOP-8368. (tucu) (Revision 1347092)

     Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347092
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch
>

        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368-b2.001.trimmed.patch
                HADOOP-8368-b2.001.rm.patch
                HADOOP-8368-b2.001.patch

Uploading the branch-2 version (the merge conflicts seem to have been trivial).
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>

        

[jira] [Comment Edited] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13285099#comment-13285099 ] 

Alejandro Abdelnur edited comment on HADOOP-8368 at 5/29/12 9:05 PM:
---------------------------------------------------------------------

The patch fails in the fuse-dfs pom.xml.

Reviewing changes in the POMs:

* A bunch of cmake files are added at the src/ level in the common/hdfs/mapreduce modules and in hadoop-mapreduce-project/. These files should go into src/main/native of the corresponding module. In the case of hadoop-mapreduce-project/, the cmake file should go in the module where it is being used (not in this POM aggregator module).

* Are all native testcases run during the Maven test phase?

* In the hadoop-common POM the variable runas.home is set by default to EMPTY. The build seems to work; is this OK?

* Finally, this is a nice-to-have: in the current autoconf build, native testcases are not skipped when Maven is invoked with -DskipTests. Any chance of doing that skip with cmake?
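One plausible way to get that skip with cmake (a sketch under assumed names; the SKIP_NATIVE_TESTS option and test_native target are hypothetical, not part of the actual patch) is to guard the native test targets behind a cache option that the Maven invocation could set when -DskipTests is given:

```cmake
# Hypothetical option: Maven would pass -DSKIP_NATIVE_TESTS=ON on the cmake
# command line whenever it is itself invoked with -DskipTests.
option(SKIP_NATIVE_TESTS "Do not build or register native test binaries" OFF)

if(NOT SKIP_NATIVE_TESTS)
    enable_testing()
    add_executable(test_native test_native.c)      # illustrative test source
    add_test(NAME test_native COMMAND test_native)
endif()
```

With this arrangement the test binaries are neither compiled nor registered with CTest when the option is ON, so a -DskipTests build does no native test work at all.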

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
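The version pinning and library checks described in the rationale fit in a very small CMakeLists.txt. The following is only an illustrative sketch (the command is spelled cmake_minimum_required, and the file and target names here are invented, not Hadoop's actual layout):

```cmake
cmake_minimum_required(VERSION 2.6)
project(hadoop-native C)

# Fails the configure step with a clear message if zlib is missing,
# instead of a cryptic m4/shell error later in the build.
find_package(ZLIB REQUIRED)

add_library(hadoop SHARED hadoop_io.c)
target_link_libraries(hadoop ${ZLIB_LIBRARIES})
```

This one file replaces the configure.ac/Makefile.am pair plus the generated configure script, and behaves the same on any platform with a CMake binary.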

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291439#comment-13291439 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

Hi Alejandro,

I think it would be best to wait for Jenkins to run this.  Although it will unconditionally return success, we can determine whether it actually succeeded by checking the logs manually.  I believe Nicholas will also remove his -1 at that point.

Also, I would encourage anyone reading this to review HADOOP-8488, which will fix the issue in the script that's been causing us so much grief.  It is a one-line change and not at all scary.

thanks,
C.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Andrew Purtell (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289999#comment-13289999 ] 

Andrew Purtell commented on HADOOP-8368:
----------------------------------------

The last comment on INFRA-4881 indicates that the Jenkins slaves for Hadoop are controlled internally by Yahoo? If so, it would seem cmake has not been installed yet.


                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283690#comment-13283690 ] 

Alejandro Abdelnur commented on HADOOP-8368:
--------------------------------------------

We are talking about build-environment requirement changes, from autoconf to cmake; this does not affect the end user. AFAIK we don't flag these kinds of things as incompatible changes. We didn't do it when introducing Maven or protoc.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13292080#comment-13292080 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12531440/HADOOP-8368.030.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +0 tests included.  The patch appears to be a documentation patch that doesn't require tests.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    -1 core tests.  The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

                  org.apache.hadoop.fs.viewfs.TestViewFsTrash

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1102//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1102//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13292110#comment-13292110 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

I have just checked the console log.  Everything looks good.  I withdraw my -1.  Thanks.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch

        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.025.trimmed.patch

* rebase on trunk
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch

        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur updated HADOOP-8368:
---------------------------------------

    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Thanks Colin. Committed (again :) ) to trunk and branch-2.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368-b2.003.rm.patch, HADOOP-8368-b2.003.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch


       

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290165#comment-13290165 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk #1102 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1102/])
    svn merge -c -1345421 for reverting HADOOP-8368. (Revision 1346491)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1346491
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.trimmed.013.patch

here's a version without the container-executor --> container_executor renaming.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.trimmed.013.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291214#comment-13291214 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #2349 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2349/])
    svn merge -c -1347092 for reverting HADOOP-8368 again. (Revision 1347738)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347738
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Eli Collins (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290222#comment-13290222 ] 

Eli Collins commented on HADOOP-8368:
-------------------------------------

I've installed cmake on the jenkins hosts. Tucu, mind re-committing the change to trunk and branch-2?
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289024#comment-13289024 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

Hi,

cmake does not seem to be working on jenkins:
[build #2587|https://builds.apache.org/job/PreCommit-HDFS-Build/2587/console]
{noformat}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (make) on project hadoop-hdfs:
 An Ant BuildException has occured: Execute failed: java.io.IOException:
 Cannot run program "cmake" (in directory "/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native"):
 java.io.IOException: error=2, No such file or directory -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{noformat}
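
The "No such file or directory" above simply means the cmake binary is missing from the build slave's PATH.  A hypothetical pre-flight check (not part of the Hadoop patch) could fail fast with a readable message before Maven ever runs:

```shell
# Hypothetical pre-flight check for a build slave -- not part of the actual
# Hadoop patch.  Reports a missing build tool (e.g. cmake) directly, instead
# of surfacing the opaque java.io.IOException shown above.
require_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found $1 at $(command -v "$1")"
  else
    echo "error: '$1' is not installed or not on the PATH" >&2
    return 1
  fi
}

# 'sh' exists on any POSIX system, so this demonstrates the success path;
# on the Jenkins slave in question, 'require_tool cmake' would have failed.
require_tool sh
```

Running such a check at the start of the Jenkins job would have pointed at the missing cmake install immediately.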
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing Cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using openSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
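
For context, the "make" goal in the Jenkins error earlier in this thread comes from maven-antrun-plugin shelling out to cmake.  A sketch of what such pom.xml wiring might look like follows; the phase, directory, and arguments here are illustrative assumptions, not copied from the actual Hadoop pom:

```xml
<!-- Illustrative sketch, not the actual Hadoop pom.xml. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>make</id>
      <phase>compile</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <target>
          <!-- Ant's <exec> task runs the cmake binary from the PATH;
               failonerror surfaces a nonzero exit as a build failure. -->
          <exec executable="cmake" dir="${project.build.directory}/native"
                failonerror="true">
            <arg line="${basedir}/src"/>
          </exec>
          <exec executable="make" dir="${project.build.directory}/native"
                failonerror="true"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Because <exec> resolves "cmake" from the PATH, a slave without cmake installed produces exactly the java.io.IOException ("error=2, No such file or directory") quoted above.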


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287839#comment-13287839 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk-Commit #2385 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2385/])
    HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1345421)

     Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345421
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/Makefile.in
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h.in
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281409#comment-13281409 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12528700/HADOOP-8368.015.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1023//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1023//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290304#comment-13290304 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

Please try resubmitting the patch before re-committing.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.014.trimmed.patch

Oops, you're right.  Here is a version that doesn't rename the container-executor directory.

The "svn rm" list remains the same as before.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291897#comment-13291897 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

The previous QA report looks good.  However, I am not yet able to read the console log since the page did not load.  Will retry.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277640#comment-13277640 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527803/HADOOP-8368.005.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    -1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1000//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287841#comment-13287841 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Common-trunk-Commit #2313 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2313/])
    HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1345421)

     Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345421
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/Makefile.in
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h.in
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.006.patch

* rebase on trunk
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280665#comment-13280665 ] 

Todd Lipcon commented on HADOOP-8368:
-------------------------------------

Hey Colin. Would it be possible to attach the patch as multiple files:
1) a shell script with "svn rm" commands for the removed files
2) a diff which only includes changed and added files

Hadoop QA won't know what to do with it, but it will be easier for folks to review, and is our usual method when reviewing changes that move or delete a lot of stuff.
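The two-part split Todd describes could be generated with something like the following. This is a sketch: the file names are illustrative, and since `svn` itself may not be available, the filtering is exercised here on a sample `svn status` listing rather than a live working copy.

```shell
# Sample 'svn status' output (illustrative file names).
status='D       src/main/native/Makefile.am
M       hadoop-common/pom.xml
A       src/CMakeLists.txt'

# 1) A shell script of "svn rm" commands for the deleted files.
printf '%s\n' "$status" | awk '$1 == "D" { print "svn rm " $2 }'

# 2) The list of added/modified files, which could then be passed
#    to 'svn diff' to produce the trimmed patch.
printf '%s\n' "$status" | awk '$1 == "A" || $1 == "M" { print $2 }'
```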
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13285171#comment-13285171 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

bq. A bunch of cmake files are added at src/ level in common/hdfs/mapreduce modules and in hadoop-mapreduce-project/. These files should go into src/main/native or the corresponding module. In the case of hadoop-mapreduce-project/ the cmake file should go in the module where it is being used (not in this POM aggregator module).

I guess I can get rid of the include files for now.  Currently none are being included from multiple places.  We can reconsider how to do this when we have multiple CMakeLists.txt files per project.

bq. Are all native testcases run during Maven test phase?

All the native testcases that used to be run during the maven test phase are still run.

We still have to wire up the fuse_dfs testcase, but there is a separate JIRA for that: HDFS-3250.  hdfs_test is another one that still needs to be wired up somehow (it's more of a system test, and requires an HDFS cluster), but I think that's out of scope for this JIRA.

bq. In the hadoop-common POM the variable runas.home is set by default to EMPTY. The build seems to work; is this OK?

Yes.  runAs is a tool that is not built by default.  The old build had similar behavior where you had to specify extra options to get runAs to build.

bq. Finally, this is nice to have. In the current autoconf build native testcases are not skipped if maven is invoked with -DskipTests, any chance to do that skip with cmake?

This patch preserves that same behavior.
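One way such a skip can be wired up in CMake is with a cache option that the Maven build sets when -DskipTests is passed. This is a hypothetical sketch, not necessarily how the committed patch does it; the option name and test paths are made up:

```cmake
# Hypothetical cache option; Maven could pass -DSKIP_NATIVE_TESTS=ON
# when the user invokes the build with -DskipTests.
option(SKIP_NATIVE_TESTS "Do not build or register native test binaries" OFF)

if(NOT SKIP_NATIVE_TESTS)
    enable_testing()
    add_executable(test_bulk_crc32 test/test_bulk_crc32.c)   # illustrative path
    add_test(NAME test_bulk_crc32 COMMAND test_bulk_crc32)
endif()
```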
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13293019#comment-13293019 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk-Commit #2414 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2414/])
    HADOOP-8368. Amendment to add entry in CHANGES.txt (Revision 1348960)
HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1348957)

     Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348960
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348957
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368-b2.003.rm.patch, HADOOP-8368-b2.003.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing Cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems, you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using openSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum CMake version that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
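As a sketch of the points above (version pinning and library checks), a minimal CMakeLists.txt might look like the following. This is illustrative only — the file name, source file, and zlib dependency are assumptions, not taken from the actual HADOOP-8368 patch:

```cmake
# Illustrative sketch only -- not the CMakeLists.txt from the HADOOP-8368 patch.
cmake_minimum_required(VERSION 2.6)          # refuse to configure under older CMake
project(hadoop-native C)

# Fail the configure step early if a required library is missing,
# analogous to an AC_CHECK_LIB test in configure.ac.
find_library(ZLIB_LIBRARY NAMES z)
if(NOT ZLIB_LIBRARY)
    message(FATAL_ERROR "zlib is required to build the native code")
endif()

add_library(hadoop SHARED src/example.c)     # hypothetical source file
target_link_libraries(hadoop ${ZLIB_LIBRARY})
```

The whole dependency check is three lines of CMake syntax, with no shell, m4, or sed involved.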

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

       

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289030#comment-13289030 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

bq. cmake seems not working on jenkins:

This is INFRA-4881.  The issue is that cmake is not installed on certain servers.
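Until the missing-cmake situation is resolved on those servers, a build wrapper can fail fast with a readable message rather than the opaque IOException Maven reports. This is a sketch, not part of the Hadoop build scripts:

```shell
# Sketch (not from the Hadoop build): fail fast with a clear message when a
# required build tool is missing, instead of an opaque exception deep in Maven.
require_tool() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "missing required build tool: $1" >&2
    return 1
  }
}

require_tool sh && echo "sh present"
require_tool cmake || echo "cmake missing on this host"
```

On a Jenkins slave without cmake, this prints a one-line diagnosis instead of a Maven stack trace.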
                

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283872#comment-13283872 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12529837/HADOOP-8368.023.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1039//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1039//console

This message is automatically generated.
                

        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.007.patch

* fix macro issue
                

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13286755#comment-13286755 ] 

Alejandro Abdelnur commented on HADOOP-8368:
--------------------------------------------

+1, tested on centos 5.5
                

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289756#comment-13289756 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

Builds failed and the tests could not be executed.  See [build #2593|https://builds.apache.org/job/PreCommit-HDFS-Build/2593/console] for example.  After the patch was reverted, the tests could be run; see [build #2594|https://builds.apache.org/job/PreCommit-HDFS-Build/2594/console].

This patch failed the build but got a false positive on a bug (HADOOP-8483) in test-patch.sh as mentioned previously.  The following is from the console output of [build #1062|https://builds.apache.org/job/PreCommit-HADOOP-Build/1062/console].
{noformat}
main:
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14.494s
[INFO] Finished at: Wed May 30 23:51:25 UTC 2012
[INFO] Final Memory: 21M/259M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (make) on project hadoop-common:
 An Ant BuildException has occured: Execute failed: java.io.IOException:
 Cannot run program "cmake" (in directory "/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/native"):
 java.io.IOException: error=2, No such file or directory -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[INFO] Build failures were ignored.
{noformat}
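The failure above comes from the maven-antrun-plugin shelling out to cmake during the build. A hedged sketch of what such a binding looks like follows; the element values are illustrative, not copied from Hadoop's pom.xml. Note that with failonerror="true" on the exec task, a missing cmake binary stops the Maven build immediately rather than being ignored:

```xml
<!-- Illustrative pom.xml fragment; not the actual Hadoop configuration. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>1.6</version>
  <executions>
    <execution>
      <id>make</id>
      <phase>compile</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <target>
          <mkdir dir="${project.build.directory}/native"/>
          <exec executable="cmake" dir="${project.build.directory}/native"
                failonerror="true">
            <arg line="${basedir}/src"/>
          </exec>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
```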

                

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290418#comment-13290418 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #2345 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2345/])
    svn merge -c -1346491 for re-committing HADOOP-8368. (tucu) (Revision 1347092)

     Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347092
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems, you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
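
[Editor's note: the version-pinning and dependency-checking points above can be sketched in a minimal CMakeLists.txt. The project, library, and source file names here are hypothetical illustrations, not Hadoop's actual build files.]

```cmake
# Refuse to configure with a CMake older than the pinned version;
# newer CMakes keep backward-compatible behavior for this file,
# which avoids the version-skew problems described in point 3.
cmake_minimum_required(VERSION 2.6)
project(native-example C)

# Fail at configure time, with a readable message, if a required
# library is missing -- the autotools-style check without m4 or shell.
find_package(ZLIB REQUIRED)

include_directories(${ZLIB_INCLUDE_DIRS})
add_library(example SHARED example.c)
target_link_libraries(example ${ZLIB_LIBRARIES})
```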

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Eli Collins (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291464#comment-13291464 ] 

Eli Collins commented on HADOOP-8368:
-------------------------------------

I've committed HADOOP-8488. Sorry I missed asf000; I was told 001-012 was the range of hosts.
                


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291968#comment-13291968 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

I've confirmed that cmake seems to be working great on the latest patch (version 30).

{code}
     [exec] /usr/bin/ranlib target/usr/local/lib/libhadoop.a
     [exec] make[2]: Leaving directory `/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/native'
     [exec] /usr/bin/cmake -E cmake_progress_report /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/native/CMakeFiles  14 15 16 17 18 19 20 21 22 23 24 25 26
     [exec] [100%] Built target hadoop_static
     [exec] make[1]: Leaving directory `/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/native'
     [exec] /usr/bin/cmake -E cmake_progress_start /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/native/CMakeFiles 0
{code}

I think we should be ready to submit.
                


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.015.trimmed.patch

Some fixes.

* Ensure that any place that included config.h before still does (CMake now generates a config.h that serves the same purposes).

* If snappy isn't installed, just skip compiling SnappyCompressor.c and SnappyDecompressor.c, rather than using an awkward "ifdef around the whole file" approach.

* NativeIO.c: shouldn't have to change anything in this file except removing the comment about autoconf :)
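
[Editor's note: a sketch of the conditional-compilation approach described above -- adding the Snappy sources to the build only when the library is found, rather than wrapping each file in an #ifdef. The variable names, file list, and the FindSnappy module are illustrative assumptions, not the patch's actual contents.]

```cmake
# Look for Snappy, but do not fail the configure step if it is absent.
# Assumes a FindSnappy.cmake module is available on CMAKE_MODULE_PATH.
find_package(Snappy)

set(NATIVE_SOURCES NativeIO.c NativeCrc32.c)

if(SNAPPY_FOUND)
    # Compile the Snappy codec files only when the library exists,
    # so the sources themselves need no whole-file #ifdef guards.
    list(APPEND NATIVE_SOURCES SnappyCompressor.c SnappyDecompressor.c)
endif()

add_library(hadoop SHARED ${NATIVE_SOURCES})
```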
                


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290378#comment-13290378 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk-Commit #2399 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2399/])
    svn merge -c -1346491 for re-committing HADOOP-8368. (tucu) (Revision 1347092)

     Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347092
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Thomas Graves (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281722#comment-13281722 ] 

Thomas Graves commented on HADOOP-8368:
---------------------------------------

So you do still have to use ant if you want to build the MRv1 stuff that hasn't been mavenized (anything in hadoop-mapreduce-project/src). In particular, I'm interested in pipes, as that still needs to be mavenized: MAPREDUCE-4267. Without your change, you use the command: ant -Dresolvers=internal -Dcompile.c++=true -Dcompile.native=true veryclean compile-c++-pipes. I would like to make sure that still builds after your change.
                


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Status: Patch Available  (was: Open)
    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch
>
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289041#comment-13289041 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

What is the plan on fixing it?  When do you expect it will be fixed?
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Roman Shaposhnik (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270072#comment-13270072 ] 

Roman Shaposhnik commented on HADOOP-8368:
------------------------------------------

Sounds reasonable, a few questions though:
  # is this meant for branch-1, branch-2/trunk or both?
  # what do you have in mind for CMake/Maven, CMake/ant integration?
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281063#comment-13281063 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12528611/HADOOP-8368.012.rm.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +0 tests included.  The patch appears to be a documentation patch that doesn't require tests.

    -1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1019//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch
>
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.010.patch
    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch
>
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291904#comment-13291904 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

Yeah, I also tried to load from https://builds.apache.org, but got a 503: Service Temporarily Unavailable.

After HADOOP-8488, I wouldn't expect this Jenkins run to be a false positive, but I would like to make sure.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch
>
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291766#comment-13291766 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk #1104 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1104/])
    svn merge -c -1347092 for reverting HADOOP-8368 again. (Revision 1347738)

     Result = FAILURE
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347738
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and the a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
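
To make the rationale above concrete, here is a minimal CMakeLists.txt sketch (a hypothetical illustration, not Hadoop's actual build file; the project name, source file, and zlib dependency are placeholders) showing the version pinning and required-library checks that replace AC_CHECK_LIB and friends:

```cmake
# Hypothetical sketch; names below are illustrative placeholders.
cmake_minimum_required(VERSION 2.6)
project(hadoop-native C)

# Fail early if a required library is missing (replaces AC_CHECK_LIB).
find_library(ZLIB_LIBRARY NAMES z)
if(NOT ZLIB_LIBRARY)
    message(FATAL_ERROR "Required library zlib not found")
endif()

# Build the shared native library from one placeholder source file.
add_library(hadoop SHARED src/org_apache_hadoop.c)
target_link_libraries(hadoop ${ZLIB_LIBRARY})
```

Unlike autotools, the same file drives generation of Makefiles on Unix or Visual Studio projects on Windows, with no shell involved.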

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13285178#comment-13285178 ] 

Alejandro Abdelnur commented on HADOOP-8368:
--------------------------------------------

>> Finally, this is nice to have. In the current autoconf build native testcases are not 
>> skipped if maven is invoked with -DskipTests, any chance to do that skip with cmake?
>
> This patch preserves that same behavior.

Any chance to get the -DskipTests working for native (by ant plugin magic)?
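
One plausible way to wire that up (a hypothetical pom.xml sketch, not Hadoop's actual build; the execution id, the ctest invocation, and a default skipTests=false property in the pom are all assumptions) is to feed Maven's standard skipTests property into the antrun plugin's skip parameter:

```xml
<!-- Hypothetical fragment: bind the native test run to -DskipTests.
     Assumes <skipTests>false</skipTests> is declared in <properties>. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>native-tests</id> <!-- assumed execution id -->
      <phase>test</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <!-- antrun's skip parameter; true when -DskipTests is passed -->
        <skip>${skipTests}</skip>
        <target>
          <!-- assumed native test runner invocation -->
          <exec executable="ctest" failonerror="true"
                dir="${project.build.directory}/native"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
```
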
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291171#comment-13291171 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Common-trunk-Commit #2330 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2330/])
    svn merge -c -1347092 for reverting HADOOP-8368 again. (Revision 1347738)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1347738
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368-b2.003.trimmed.patch
                HADOOP-8368-b2.003.rm.patch

Here's the branch-2 version of patch 30.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368-b2.003.rm.patch, HADOOP-8368-b2.003.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch


       

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Eli Collins (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291466#comment-13291466 ] 

Eli Collins commented on HADOOP-8368:
-------------------------------------

Forgot to mention that asf000 now has cmake installed. Now that 8488 is in, can we resubmit the patch so Jenkins will run and actually catch the issue if it still exists?
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13293068#comment-13293068 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #2361 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2361/])
    HADOOP-8368. Amendment to add entry in CHANGES.txt (Revision 1348960)
HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1348957)

     Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348960
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348957
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368-b2.003.rm.patch, HADOOP-8368-b2.003.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do everything autotools can, such as verifying that required libraries are installed.  There is also a Maven plugin for CMake.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using openSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
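For context, the points above can be made concrete with a minimal CMakeLists.txt sketch. The project name, source files, and library names below are illustrative only, not taken from the actual HADOOP-8368 patches:

```cmake
# Reject older CMake releases up front (see point #3 on version skew).
cmake_minimum_required(VERSION 2.6)
project(hadoop_native C)

# Probe for a required library, replacing an autoconf AC_CHECK_LIB test
# (CMake can verify that required libraries are installed, as noted above).
find_library(ZLIB_LIBRARY NAMES z)
if(NOT ZLIB_LIBRARY)
    message(FATAL_ERROR "zlib is required but was not found")
endif()

# Build the native shared library from its C sources.
add_library(hadoop SHARED src/ZlibCompressor.c src/ZlibDecompressor.c)
target_link_libraries(hadoop ${ZLIB_LIBRARY})
```

Unlike an autotools setup, this single declarative file replaces configure.ac, Makefile.am, and the generated shell scripts, and the same file works on Windows generators as well as Make.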

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

       

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291484#comment-13291484 ] 

Colin Patrick McCabe commented on HADOOP-8368:
----------------------------------------------

@Eli: Jenkins should do a build on version 30 soon.  It's been about 2 hours so hopefully that will happen soon :\
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.023.trimmed.patch

* fix 32-bit compile

* only generate one copy of each binary or library (get rid of make install step)

* make sure that dual shared / static library build works correctly in all cases
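For readers unfamiliar with the "dual shared / static library build" mentioned above, one common way to express it in CMake (2.8.8 or later) is to compile the sources once into an object library and emit both artifact types from the same objects. The target and source names here are hypothetical, not taken from the patch:

```cmake
# Compile the sources exactly once into position-independent objects.
add_library(native_objs OBJECT impl/main.c)
set_property(TARGET native_objs PROPERTY POSITION_INDEPENDENT_CODE ON)

# Produce both a shared library and a static archive from those objects,
# avoiding a second compilation pass.
add_library(native_shared SHARED $<TARGET_OBJECTS:native_objs>)
add_library(native_static STATIC $<TARGET_OBJECTS:native_objs>)
```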
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289639#comment-13289639 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk-Commit #2343 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2343/])
    svn merge -c -1345421 for reverting HADOOP-8368. (Revision 1346491)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1346491
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368-b2.002.trimmed.patch
                HADOOP-8368-b2.002.rm.patch

Looks like the deletion of runAs caused a merge conflict with the previous branch-2 patch I posted.  Updating...
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13293532#comment-13293532 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Hdfs-trunk #1074 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1074/])
    HADOOP-8368. Amendment to add entry in CHANGES.txt (Revision 1348960)
HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1348957)

     Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348960
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348957
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368-b2.003.rm.patch, HADOOP-8368-b2.003.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
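To illustrate the points above, here is a minimal CMakeLists.txt sketch (a hypothetical example, not the actual build file from the attached patches) showing how CMake pins a minimum version and checks for a required library, the role configure-time checks play under autotools:

```cmake
# Hypothetical sketch; not the CMakeLists.txt from the HADOOP-8368 patches.
cmake_minimum_required(VERSION 2.6)
project(hadoop-native C)

# Fail early, with a readable message, if a required library is missing.
find_library(ZLIB_LIBRARY NAMES z)
if(NOT ZLIB_LIBRARY)
    message(FATAL_ERROR "zlib is required to build the native code")
endif()

# Build a shared library and link against the library found above.
# "example.c" is a placeholder source file for this sketch.
add_library(example SHARED example.c)
target_link_libraries(example ${ZLIB_LIBRARY})
```

The declarative check here replaces an AC_CHECK_LIB probe in configure.ac, and the same file drives the build on Linux and Windows alike.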

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

       

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Thomas Graves (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13285071#comment-13285071 ] 

Thomas Graves commented on HADOOP-8368:
---------------------------------------

I am now able to build both the 32 and 64 bit versions, thanks Colin.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Allen Wittenauer (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-8368:
-------------------------------------

    Hadoop Flags: Incompatible change

Marking this as an incompatible change since it breaks the building of the native code on platforms where it currently worked out of the box.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.030.trimmed.patch
                HADOOP-8368.030.rm.patch

Posting the traditional 'svn rm' file plus the delta.

Jenkins can't run these directly, as noted before.  The content is the same as in HADOOP-8368.030.patch.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch
>


[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Hadoop Flags:   (was: Incompatible change)

This isn't intended to be an incompatible change.  I will look into the 32-bit JVM issue.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Thomas Graves (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281611#comment-13281611 ] 

Thomas Graves commented on HADOOP-8368:
---------------------------------------

A couple of questions:

Are the trimmed patches the official ones, or are they just meant for review, with the renames intended to actually happen?  I am applying them by first running HADOOP-8368.012.rm.patch and then HADOOP-8368.015.trimmed.patch.

What is the ant command you are using to build hadoop-mapreduce-project/src?  I'm guessing you probably only built pipes.

When I run:  ant  -Dresolvers=internal  -Dcompile.c++=true -Dcompile.native=true veryclean all-jars 

it errors with:

check-c++-configure:

create-c++-pipes-configure:
     [exec] autoreconf: `configure.ac' or `configure.in' is required

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289600#comment-13289600 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Common-trunk-Commit #2324 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2324/])
    svn merge -c -1345421 for reverting HADOOP-8368. (Revision 1346491)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1346491
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>


[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Luke Lu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281367#comment-13281367 ] 

Luke Lu commented on HADOOP-8368:
---------------------------------

The trimmed patch still has the directory renaming (container-executor -> container_executor).
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.trimmed.013.patch


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment:     (was: HADOOP-8368.001.patch)
    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283707#comment-13283707 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12529772/HADOOP-8368.021.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1037//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1037//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.001.patch
                HADOOP-8368.001.patch
    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.001.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13285203#comment-13285203 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12530104/HADOOP-8368.024.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    -1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1052//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Status: Patch Available  (was: Reopened)
    
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13293605#comment-13293605 ] 

Hudson commented on HADOOP-8368:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk #1107 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1107/])
    HADOOP-8368. Amendment to add entry in CHANGES.txt (Revision 1348960)
HADOOP-8368. Use CMake rather than autotools to build native code (ccccabe via tucu) (Revision 1348957)

     Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348960
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348957
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.autom4te.cfg
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/.deps/container-executor.Po
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/Makefile.am
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/configure.ac
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368-b2.003.rm.patch, HADOOP-8368-b2.003.trimmed.patch, HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch
>
>
> It would be good to use CMake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing Cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do everything autotools can, such as verifying that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using openSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake lets you specify, via cmake_minimum_required, the minimum CMake version that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.
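To illustrate how compactly CMake covers the points above (minimum version pinning, library checks, generated config headers), here is a sketch of a CMakeLists.txt.  The target, source, and template names are hypothetical, not taken from the actual Hadoop patches:

```cmake
cmake_minimum_required(VERSION 2.6)
project(hadoop-native C)

# Fail early if a required library is missing (the AC_CHECK_LIB role).
find_library(ZLIB_LIBRARY NAMES z)
if(NOT ZLIB_LIBRARY)
    message(FATAL_ERROR "zlib is required to build the native code")
endif()

# Generate config.h from a template instead of passing -D flags.
configure_file(config.h.cmake ${CMAKE_BINARY_DIR}/config.h)
include_directories(${CMAKE_BINARY_DIR})

add_library(hadoop SHARED main.c)
target_link_libraries(hadoop ${ZLIB_LIBRARY})
```

Because cmake_minimum_required records the oldest CMake version the file supports, a newer CMake will emulate that version's behavior rather than silently changing it.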

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

       

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.021.trimmed.patch

* Build both shared AND static versions of libhadoop and libhdfs (thanks for pointing this out, Thomas)

Still looking at 32-bit issues...
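Building both variants in CMake typically means declaring two targets over the same source list; a sketch with hypothetical names, not the actual patch contents:

```cmake
set(HDFS_SRCS hdfs.c jni_helper.c)  # placeholder source list

add_library(hdfs SHARED ${HDFS_SRCS})
add_library(hdfs_static STATIC ${HDFS_SRCS})

# Give the archive the same base name, so libhdfs.a sits beside libhdfs.so.
set_target_properties(hdfs_static PROPERTIES OUTPUT_NAME hdfs)
```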
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch
>


        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13286201#comment-13286201 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12530295/HADOOP-8368.026.trimmed.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1062//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1062//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.005.patch

* new patch which implements the transition for all subprojects
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch
>
>
> It would be good to use cmake rather than autotools to build the native (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on different operating systems.  It would be extremely difficult, and perhaps impossible, to use autotools under Windows.  Even if it were possible, it might require horrible workarounds like installing cygwin.  Even on Linux variants like Ubuntu 12.04, there are major build issues because /bin/sh is the Dash shell, rather than the Bash shell as it is in other Linux versions.  It is currently impossible to build the native code under Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via package "Autom4te..." are common error messages.  In order to even start debugging automake problems you need to learn shell, m4, sed, and the a bunch of other things.  With CMake, all you have to learn is the syntax of CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For example, the version installed under openSUSE defaults to putting libraries in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults to installing the same libraries under /usr/local/lib.  (This is why the FUSE build is currently broken when using OpenSUSE.)  This is another source of build failures and complexity.  If things go wrong, you will often get an error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum_required_version of CMake that a particular CMakeLists.txt will accept.  In addition, CMake maintains strict backwards compatibility between different versions.  This prevents build bugs due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to build time.
> For all these reasons, I think we should switch to CMake for compiling native (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279527#comment-13279527 ] 

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12528225/HADOOP-8368.012.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    -1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1014//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1014//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.patch
>


        

[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

Posted by "Colin Patrick McCabe (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-8368:
-----------------------------------------

    Attachment: HADOOP-8368.009.patch

* use cmake variables rather than environment variables

* use generated config.h files rather than passing -DFOO=bar type arguments on the command line.  Escaping on the command line is a pain, whereas the config.h files don't have that problem.

* get rid of STR(), STRINGIFY(), etc.  With generated config.h files, those stringification macros are no longer needed.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, HADOOP-8368.009.patch
>
