Posted to user@hadoop.apache.org by Jun Li <jl...@gmail.com> on 2013/06/21 10:35:03 UTC

how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version, hadoop-1.1.2, from the Apache
Hadoop web site. My machine is AMD-based, running Red Hat Enterprise
Linux 6.1 with kernel 2.6.32-131.0.15.el6.x86_64.

I ran TestNativeIO.java (at "test/org/apache/hadoop/io/nativeio/TestNativeIO.java"
in the distribution) to understand how NativeIO.posixFadviseIfPossible
behaves, and in particular to check whether "posix_fadvise" is enabled
or not. I am interested in this call because the readahead pool uses it
to cache data in the OS's buffer cache. The following is the test case
that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }
  }

However, when I stepped into the code and reached "NativeIO.java"
(package "org.apache.hadoop.io.nativeio"), I found this particular
method:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise" threw an "UnsupportedOperationException".

I traced further into the native library, and in "NativeIO.c" I found:

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags)
{
#ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available");
#else

...
}

I believe the exception is thrown because "HAVE_POSIX_FADVISE" was not
defined when the native library was compiled. I verified that the
native IO library itself loads properly, since I can successfully run
the other test cases in "TestNativeIO.java".

So my question is: should I re-compile "libhadoop" to get a version of
the shared library built with "HAVE_POSIX_FADVISE" defined? Or is
fadvise supposed to be enabled by default?

Thank you!

Piping to HDFS (from Linux or HDFS)

Posted by Sanjay Subramanian <Sa...@wizecommerce.com>.
Hi guys

While trying to get some test data and configurations done quickly, I realized one can do this, and I think it's super cool.

Processing an existing file on Linux/HDFS and piping it directly to HDFS

source = Linux  dest=HDFS
======================
File = sanjay.conf.template
We want to replace the date in one line of the file, 9999-99-99 ----> 1947-08-15:
DATE_STR=9999-99-99

cat sanjay.conf.template | sed 's/9999-99-99/1947-08-15/g' | hdfs dfs -put - /user/sanjay/sanjay.conf

source = HDFS  dest=HDFS
======================
hdfs dfs -cat  /user/nextag/sanjay.conf.template  | sed 's/9999-99-99/1947-08-15/g' | hdfs dfs -put - /user/sanjay/1947-08-15/nextag.conf


Thanks

sanjay

CONFIDENTIALITY NOTICE
======================
This email message and any attachments are for the exclusive use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message along with any attachments, from your computer system. If you are the intended recipient, please be advised that the content of this message is subject to access, review and disclosure by the sender's Email System Administrator.

RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
You're welcome.
Have fun with hadoop!

Regards,
Leo


From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Sunday, June 23, 2013 11:24 PM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,
Following your instruction on moving the sections in configure.ac, I was able to re-compile the code base, and "objdump" now returns the correct fadvise symbols you described in your earlier email. I then ran the TestNativeIO.java class, and the fadvise test case passes now.
Thank you very much for the help!

Regards,

Jun

On Sat, Jun 22, 2013 at 4:42 PM, Leo Leung <ll...@ddn.com> wrote:
Hi Jun,

  Try moving the section between:
dnl Check for headers needed by the native Group....
  [...everything in between, inclusive...]
AC_FUNC_STRERROR_R
to a higher place, inserting it before:

# Checks for libraries.
dnl Check for '-ldl'

Hope this helps :)


From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Saturday, June 22, 2013 3:26 AM

To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Thanks for the detailed instructions. I started from a clean copy of the downloaded hadoop-1.1.2 and issued the command: ant -Dcompile.native=true.

With the configure.ac that comes with the downloaded package, without any changes, the following is the compilation output:


     [exec] checking fcntl.h usability... yes
     [exec] checking fcntl.h presence... yes
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... no
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for memset... no
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... no
I have difficulty following the "+" and "-" patching sequence that you described in your earlier email. I have attached the "configure.ac" file that I copied out of the hadoop-1.1.2/src/native directory. Could you construct the configure.ac with the correct patching you described, using the attached file, and send it back to me? I will then use the patched one to build the native library.
Regards,

Jun

On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com> wrote:
Hi Jun:

Looks like it is not in the Apache Hadoop 1.x release binaries; this will probably have to be answered by the Apache Hadoop 1.x release team.

To build the native libs:


1)      Get the source code and install the dependencies needed to compile Hadoop.

2)      Set up the environment for it, such as JAVA_HOME, gcc installed, etc.

3)      Go to <hadoop-source dir> and run:  ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
If you get a "make distclean" error,
just cd into src/native and run  make distclean  (to clean up).

Example ant output showing the fadvise checks:
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range",
and you are sure you have posix_fadvise support (RHEL 6.x or CentOS 6.x),
you may need to move a section of src/native/configure.ac.
Try modifying configure.ac as follows:
--- src/native/configure.ac     (revision 1495498)
+++ src/native/configure.ac     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
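Put differently, this is a sketch of the intended ordering after the move (an illustrative fragment, not the complete configure.ac): the function checks for fadvise/sync_file_range run before the library checks:

```
dnl Function checks, moved up so they run before the library checks:
AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])

# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
```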


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction, here is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the GLIBC part is not there.
The libhadoop.so is the one I downloaded as part of the hadoop-1.1.2 tar.gz from the Apache Hadoop web site.
Could you let me know how to use the source directory in the downloaded package to re-compile libhadoop.so so that the fadvise call gets referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:
This looks like a compilation problem on the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following, in particular the *GLIBC* lines, that means the system where the shared lib was compiled did not have fadvise support:

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump comes from the binutils rpm (you can install it with yum if you don't have it).





RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
You're welcome.
Have fun with hadoop!

Regards,
Leo


From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Sunday, June 23, 2013 11:24 PM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,
Following your instruction on moving the sections in configure.ac<http://configure.ac>, I am able to re-compile the code base and I can find that the "objdump" returns the correct fadvise symbols as you described in your earlier email. I then ran the TestNativeIO.java class and I believe that the Fadvise test case gets passed now.
Thank you very much for the help!

Regards,

Jun

On Sat, Jun 22, 2013 at 4:42 PM, Leo Leung <ll...@ddn.com>> wrote:
Hi Jun,

  Try moving the section between:
dnl Check for headers needed by the native Group....
  [...include everything in between inclusive ...]
AC_FUNC_STRERROR_R
to higher place (insert before)

# Checks for libraries.
dnl Check for '-ldl'

Hope this helps :)


From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Saturday, June 22, 2013 3:26 AM

To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Thanks for the detailed instruction. I started from the clean downloaded hadoop1.1.12 copy and then I issue the command: ant -Dcompile.native=true.

And with the configure.ac<http://configure.ac> that comes with the downloaded package, without any changes, the following is the compilation output:


     [exec] checking fcntl.h usability... yes
     [exec] checking fcntl.h presence... yes
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... no
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for memset... no
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... no
I have difficulty to follow the "+" and "-" relate patching sequence that you stated in your email earlier. I have attached the "configure.ac<http://configure.ac>" file that I copied out of the hadoop1.1.12/src/native directory. Could you help me to construct the configure.ac<http://configure.ac> with the correct patching that you described, using the one that I attached in this email, and send back to me? Then I will use the patched one to build the native library.
Regards,

Jun

On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com>> wrote:
Hi Jun:

Looks like it is not in the apache hadoop 1.x release binaries,  this probably will have to be answered by the apache hadoop 1.x release team

To build the native libs:


1)      Get the source code and install the dependencies to compile hadoop

2)      You'll have to setup the env for it,  such as JAVA_HOME. Gcc installed etc.

3)      Got to <hadoop-source dir>     ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
If you get an error about  "make distclean" error
Just cd in src/native and run  make distclean (to cleanup )

Example of ant output that shows checking for fadvise
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range"
And you are sure you have posix_fdavise support (RHEL 6.x or CentOS 6.x)

you may need to move the section on src/native/configure.ac<http://configure.ac>
Try to modify the configure.ac<http://configure.ac>
--- src/native/configure.ac<http://configure.ac>     (revision 1495498)
+++ src/native/configure.ac<http://configure.ac>     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the part of "the GLIBC" is not there
The libhadoop.so is the one that I downloaded as part of the  tar.gz of hadoop.1.1.2 from the Hadoop Apache web site.
Could you let me know how I can use the source code directory in the downloaded Hadoop Apache package, to re-compile the libhadoop.so and to make sure that the fadvise call is able to get referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com>> wrote:
This looks like a compilation problem on the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following *the GLIBC* part  (that means the system where the share lib was compiled did not have it)

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump is from binutils- rpm  (you can use yum install to install it if you don't have it)


-----Original Message-----
From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Friday, June 21, 2013 1:35 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version from the Apache Hadoop web site, hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1. The detailed Linux kernel version is:
2.6.32-131.0.15.el6.x86_64

 I ran the TestNativeIO.java under the distribution directory of "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to understand how NativeIO.posixFadviseIfPossible behaves, in particular, to check whether "posix_fadvise" is turned on or not. I am interested in this call as it is used in Read-Ahead-Pool to cache data to the OS's buffer cache.  The following is the test case that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }

However, when I stepped into the code and reached "NativeIO.java"
under the package of "org.apache.hadoop.io.nativeio",  in the particular call below:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise"  threw the "UnsupportedOperationException"
exception.

I further traced to the native library, and in the code "NativeIO.c", I found

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags) { #ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available"); #else

...
}

I believe that the problem of throwing the exception is because "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO library is loaded properly in the Java code, as I can successfully run the other test cases in "TestNativeIO.java".

So my question is: should I re-compile the "libhadoop" in order to get the version of the shared library that can have "HAVE_POSIX_FADVISE"
turned on? Or by default, FADVISE is turned on already?

Thank you!




RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
You're welcome.
Have fun with hadoop!

Regards,
Leo


From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Sunday, June 23, 2013 11:24 PM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,
Following your instruction on moving the sections in configure.ac<http://configure.ac>, I am able to re-compile the code base and I can find that the "objdump" returns the correct fadvise symbols as you described in your earlier email. I then ran the TestNativeIO.java class and I believe that the Fadvise test case gets passed now.
Thank you very much for the help!

Regards,

Jun

On Sat, Jun 22, 2013 at 4:42 PM, Leo Leung <ll...@ddn.com>> wrote:
Hi Jun,

  Try moving the section between:
dnl Check for headers needed by the native Group....
  [...include everything in between inclusive ...]
AC_FUNC_STRERROR_R
to higher place (insert before)

# Checks for libraries.
dnl Check for '-ldl'

Hope this helps :)


From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Saturday, June 22, 2013 3:26 AM

To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Thanks for the detailed instruction. I started from the clean downloaded hadoop1.1.12 copy and then I issue the command: ant -Dcompile.native=true.

And with the configure.ac<http://configure.ac> that comes with the downloaded package, without any changes, the following is the compilation output:


     [exec] checking fcntl.h usability... yes
     [exec] checking fcntl.h presence... yes
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... no
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for memset... no
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... no
I have difficulty to follow the "+" and "-" relate patching sequence that you stated in your email earlier. I have attached the "configure.ac<http://configure.ac>" file that I copied out of the hadoop1.1.12/src/native directory. Could you help me to construct the configure.ac<http://configure.ac> with the correct patching that you described, using the one that I attached in this email, and send back to me? Then I will use the patched one to build the native library.
Regards,

Jun

On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com> wrote:
Hi Jun:

Looks like it is not in the Apache Hadoop 1.x release binaries; this probably will have to be answered by the Apache Hadoop 1.x release team.

To build the native libs:


1)      Get the source code and install the dependencies to compile Hadoop

2)      Set up the environment for it, such as JAVA_HOME, GCC installed, etc.

3)      Go to <hadoop-source dir> and run:     ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
If you get a "make distclean" error,
just cd into src/native and run  make distclean  (to clean up)

Example of ant output that shows checking for fadvise
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range",
and you are sure you have posix_fadvise support (RHEL 6.x or CentOS 6.x),

you may need to move the section in src/native/configure.ac.
Try to modify configure.ac as follows:
--- src/native/configure.ac     (revision 1495498)
+++ src/native/configure.ac     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
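
To make the intent of the moved block concrete: each AC_CHECK_FUNCS check, when it succeeds, defines HAVE_POSIX_FADVISE (or HAVE_SYNC_FILE_RANGE) in config.h, and NativeIO.c keys on that define. Here is a minimal sketch of the same gating pattern — simplified, with a hypothetical do_fadvise name, not the actual Hadoop source:

```c
/* Simplified sketch (not the actual NativeIO.c) of how the
 * HAVE_POSIX_FADVISE define produced by AC_CHECK_FUNCS gates
 * the native call.  do_fadvise is a hypothetical name. */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>

#ifdef HAVE_POSIX_FADVISE
/* configure found posix_fadvise: forward to the real call */
int do_fadvise(int fd, long offset, long len, int flags) {
    return posix_fadvise(fd, (off_t) offset, (off_t) len, flags);
}
#else
/* configure did not find it: signal "unsupported" (the real JNI
 * code throws UnsupportedOperationException at this point) */
int do_fadvise(int fd, long offset, long len, int flags) {
    (void) fd; (void) offset; (void) len; (void) flags;
    return -1;
}
#endif
```

When configure's check fails, only the #else branch is compiled, which is exactly why the JNI call ends up reporting "fadvise support not available" at runtime.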


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the "GLIBC" part is not there.
The libhadoop.so is the one that I downloaded as part of the tar.gz of hadoop-1.1.2 from the Apache Hadoop web site.
Could you let me know how I can use the source code directory in the downloaded Apache Hadoop package to re-compile libhadoop.so and make sure that the fadvise call can be referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:
This looks like a compilation problem on the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following — in particular the *GLIBC* part — that means the system where the shared lib was compiled did not have it:

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump is from the binutils rpm (you can use yum install to install it if you don't have it)


-----Original Message-----
From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 1:35 AM
To: user@hadoop.apache.org
Subject: how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version from the Apache Hadoop web site, hadoop-1.1.2. My machine is AMD-based, running Red Hat Enterprise Linux 6.1. The detailed Linux kernel version is:
2.6.32-131.0.15.el6.x86_64

I ran TestNativeIO.java, under the distribution directory "test/org/apache/hadoop/io/nativeio/TestNativeIO.java", and tried to understand how NativeIO.posixFadviseIfPossible behaves; in particular, to check whether "posix_fadvise" is turned on or not. I am interested in this call because it is used by the Read-Ahead Pool to cache data in the OS's buffer cache. The following is the test case that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }
  }

However, when I stepped into the code and reached "NativeIO.java"
under the package "org.apache.hadoop.io.nativeio", in particular the call below:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise" threw an "UnsupportedOperationException".

I further traced to the native library, and in the code "NativeIO.c", I found

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags) {
#ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available");
#else

...
#endif
}

I believe the exception is thrown because "HAVE_POSIX_FADVISE" is not defined. I made sure that the native IO library is loaded properly in the Java code, as I can successfully run the other test cases in "TestNativeIO.java".
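
Independent of the Hadoop build, a quick way to confirm that the kernel and glibc themselves support posix_fadvise — i.e. that only the HAVE_POSIX_FADVISE define is missing — is a small standalone probe; probe_fadvise here is a hypothetical helper, not part of the Hadoop source:

```c
/* Standalone probe (hypothetical helper, not from the Hadoop source)
 * to confirm the OS itself supports posix_fadvise.
 * Returns the posix_fadvise result (0 on success), or -1 if the
 * file cannot be opened. */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <unistd.h>

int probe_fadvise(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    /* Same advice the Hadoop test case passes for /dev/zero */
    int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    close(fd);
    return rc;
}
```

Compile it with, e.g., gcc -o probe probe.c and call it on /dev/zero (the same file the unit test uses); a return value of 0 indicates the platform supports the call, so the fix belongs in the native build configuration rather than the OS.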

So my question is: should I re-compile "libhadoop" in order to get a version of the shared library that has "HAVE_POSIX_FADVISE"
turned on? Or is FADVISE turned on by default already?

Thank you!




RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
You're welcome.
Have fun with hadoop!

Regards,
Leo


From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Sunday, June 23, 2013 11:24 PM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,
Following your instruction on moving the sections in configure.ac<http://configure.ac>, I am able to re-compile the code base and I can find that the "objdump" returns the correct fadvise symbols as you described in your earlier email. I then ran the TestNativeIO.java class and I believe that the Fadvise test case gets passed now.
Thank you very much for the help!

Regards,

Jun

On Sat, Jun 22, 2013 at 4:42 PM, Leo Leung <ll...@ddn.com>> wrote:
Hi Jun,

  Try moving the section between:
dnl Check for headers needed by the native Group....
  [...include everything in between inclusive ...]
AC_FUNC_STRERROR_R
to higher place (insert before)

# Checks for libraries.
dnl Check for '-ldl'

Hope this helps :)


From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Saturday, June 22, 2013 3:26 AM

To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Thanks for the detailed instruction. I started from the clean downloaded hadoop1.1.12 copy and then I issue the command: ant -Dcompile.native=true.

And with the configure.ac<http://configure.ac> that comes with the downloaded package, without any changes, the following is the compilation output:


     [exec] checking fcntl.h usability... yes
     [exec] checking fcntl.h presence... yes
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... no
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for memset... no
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... no
I have difficulty to follow the "+" and "-" relate patching sequence that you stated in your email earlier. I have attached the "configure.ac<http://configure.ac>" file that I copied out of the hadoop1.1.12/src/native directory. Could you help me to construct the configure.ac<http://configure.ac> with the correct patching that you described, using the one that I attached in this email, and send back to me? Then I will use the patched one to build the native library.
Regards,

Jun

On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com>> wrote:
Hi Jun:

Looks like it is not in the apache hadoop 1.x release binaries,  this probably will have to be answered by the apache hadoop 1.x release team

To build the native libs:


1)      Get the source code and install the dependencies to compile hadoop

2)      You'll have to setup the env for it,  such as JAVA_HOME. Gcc installed etc.

3)      Got to <hadoop-source dir>     ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
If you get an error about  "make distclean" error
Just cd in src/native and run  make distclean (to cleanup )

Example of ant output that shows checking for fadvise
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range"
And you are sure you have posix_fdavise support (RHEL 6.x or CentOS 6.x)

you may need to move the section on src/native/configure.ac<http://configure.ac>
Try to modify the configure.ac<http://configure.ac>
--- src/native/configure.ac<http://configure.ac>     (revision 1495498)
+++ src/native/configure.ac<http://configure.ac>     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the part of "the GLIBC" is not there
The libhadoop.so is the one that I downloaded as part of the  tar.gz of hadoop.1.1.2 from the Hadoop Apache web site.
Could you let me know how I can use the source code directory in the downloaded Hadoop Apache package, to re-compile the libhadoop.so and to make sure that the fadvise call is able to get referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com>> wrote:
This looks like a compilation problem on the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following *the GLIBC* part  (that means the system where the share lib was compiled did not have it)

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump is from binutils- rpm  (you can use yum install to install it if you don't have it)


-----Original Message-----
From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Friday, June 21, 2013 1:35 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version from the Apache Hadoop web site, hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1. The detailed Linux kernel version is:
2.6.32-131.0.15.el6.x86_64

 I ran the TestNativeIO.java under the distribution directory of "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to understand how NativeIO.posixFadviseIfPossible behaves, in particular, to check whether "posix_fadvise" is turned on or not. I am interested in this call as it is used in Read-Ahead-Pool to cache data to the OS's buffer cache.  The following is the test case that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }

However, when I stepped into the code and reached "NativeIO.java"
under the package of "org.apache.hadoop.io.nativeio",  in the particular call below:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise"  threw the "UnsupportedOperationException"
exception.

I further traced to the native library, and in the code "NativeIO.c", I found

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags) { #ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available"); #else

...
}

I believe that the problem of throwing the exception is because "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO library is loaded properly in the Java code, as I can successfully run the other test cases in "TestNativeIO.java".

So my question is: should I re-compile the "libhadoop" in order to get the version of the shared library that can have "HAVE_POSIX_FADVISE"
turned on? Or by default, FADVISE is turned on already?

Thank you!




Re: how to turn on NativeIO.posixFadviseIfPossible

Posted by Jun Li <jl...@gmail.com>.
Hi Leo,

Following your instruction on moving the sections in configure.ac, I am
able to re-compile the code base, and "objdump" now returns
the correct fadvise symbols as you described in your earlier email. I then
ran the TestNativeIO.java class, and the Fadvise test case
passes now.

Thank you very much for the help!

Regards,

Jun



Re: how to turn on NativeIO.posixFadviseIfPossible

Posted by Jun Li <jl...@gmail.com>.
Hi Leo,

Following your instruction on moving the sections in configure.ac, I am
able to re-compile the code base and I can find that the "objdump" returns
the correct fadvise symbols as you described in your earlier email. I then
ran the TestNativeIO.java class and I believe that the Fadvise test case
gets passed now.

Thank you very much for the help!

Regards,

Jun



On Sat, Jun 22, 2013 at 4:42 PM, Leo Leung <ll...@ddn.com> wrote:

>  Hi Jun,
>
>   Try moving the section between:
>
> dnl Check for headers needed by the native Group….
>   […include everything in between inclusive …]
> AC_FUNC_STRERROR_R
>
> to a higher place (insert before)
>
> # Checks for libraries.
> dnl Check for ‘-ldl’
>
> Hope this helps :)
>
> *From:* Jun Li [mailto:jltz922181@gmail.com]
> *Sent:* Saturday, June 22, 2013 3:26 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: how to turn on NativeIO.posixFadviseIfPossible
>
> Hi Leo,
>
> Thanks for the detailed instructions. I started from a clean downloaded
> hadoop-1.1.2 copy and then issued the command: ant -Dcompile.native=true.
>
> With the configure.ac that comes with the downloaded package, without
> any changes, the following is the compilation output:
>
>      [exec] checking fcntl.h usability... yes
>      [exec] checking fcntl.h presence... yes
>      [exec] checking for fcntl.h... yes
>      [exec] checking for stdlib.h... (cached) yes
>      [exec] checking for string.h... (cached) yes
>      [exec] checking for unistd.h... (cached) yes
>      [exec] checking for fcntl.h... (cached) yes
>      [exec] checking for posix_fadvise... no
>      [exec] checking for fcntl.h... (cached) yes
>      [exec] checking for sync_file_range... no
>      [exec] checking for an ANSI C-conforming const... yes
>      [exec] checking for memset... no
>      [exec] checking whether strerror_r is declared... yes
>      [exec] checking for strerror_r... no
>
> I have difficulty following the "+" and "-" patching sequence that you
> described in your earlier email. I have attached the "configure.ac" file
> that I copied out of the hadoop-1.1.2/src/native directory. Could you help
> me construct the configure.ac with the correct patching that you
> described, using the one attached to this email, and send it back to me?
> Then I will use the patched one to build the native library.
>
> Regards,
>
> Jun
>
> On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com> wrote:
>
> Hi Jun:
>
> It looks like it is not in the Apache Hadoop 1.x release binaries; this
> probably will have to be answered by the Apache Hadoop 1.x release team.
>
> To build the native libs:
>
> 1)      Get the source code and install the dependencies to compile hadoop
>
> 2)      You’ll have to set up the env for it, such as JAVA_HOME, gcc
> installed, etc.
>
> 3)      Go to <hadoop-source dir> and run:     ant [veryclean]
> compile-core-native  >compile.out
>
> Verify with
>
> [branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so |
> grep fadv
>
> 00000000000056a0 g     F .text  00000000000000a3
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
> 0000000000000000       F *UND*  0000000000000000
> posix_fadvise@@GLIBC_2.2.5
> 0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
> 00000000000056a0 g    DF .text  00000000000000a3  Base
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
>
> If you get a “make distclean” error,
> just cd into src/native and run  make distclean  (to clean up).
>
> Example of ant output that shows checking for fadvise:
>
>      [exec] checking for fcntl.h... yes
>      [exec] checking for stdlib.h... (cached) yes
>      [exec] checking for string.h... (cached) yes
>      [exec] checking for unistd.h... (cached) yes
>      [exec] checking for fcntl.h... (cached) yes
>      [exec] checking for posix_fadvise... yes
>      [exec] checking for fcntl.h... (cached) yes
>      [exec] checking for sync_file_range... yes
>
> If you see “no” for posix_fadvise or “sync_file_range”,
> and you are sure you have posix_fadvise support (RHEL 6.x or CentOS 6.x),
> you may need to move the section in src/native/configure.ac.
>
> Try to modify the configure.ac:
>
> --- src/native/configure.ac     (revision 1495498)
> +++ src/native/configure.ac     (working copy)
> @@ -47,6 +47,21 @@
>  AC_PROG_CC
>  AC_PROG_LIBTOOL
>
> +dnl check for posix_fadvise
> +AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
> +
> +dnl check for sync_file_range
> +AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
> +
> +# Checks for typedefs, structures, and compiler characteristics.
> +AC_C_CONST
> +
> +# Checks for library functions.
> +AC_CHECK_FUNCS([memset])
> +
> +# Check for nonstandard STRERROR_R
> +AC_FUNC_STRERROR_R
> +
>  # Checks for libraries.
>  dnl Check for '-ldl'
>  AC_CHECK_LIB([dl], [dlopen])
>
> @@ -104,21 +119,6 @@
>  dnl Check for headers needed by the native Group resolution implementation
>  AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [],
> AC_MSG_ERROR(Some system headers not found... please ensure their presence
> on your platform.))
>
> -dnl check for posix_fadvise
> -AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
> -
> -dnl check for sync_file_range
> -AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
> -
> -# Checks for typedefs, structures, and compiler characteristics.
> -AC_C_CONST
> -
> -# Checks for library functions.
> -AC_CHECK_FUNCS([memset])
> -
> -# Check for nonstandard STRERROR_R
> -AC_FUNC_STRERROR_R
> -
>  AC_CONFIG_FILES([Makefile])
>  AC_OUTPUT
>
> Hope this helps
>
> *From:* Jun Li [mailto:jltz922181@gmail.com]
> *Sent:* Friday, June 21, 2013 9:44 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: how to turn on NativeIO.posixFadviseIfPossible
>
> Hi Leo,
>
> Following your instructions, the following is what I got:
>
> [junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -i
> fadvise
> 0000000000004d70 g     F .text    000000000000006f
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
> 0000000000004d70 g    DF .text    000000000000006f  Base
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
>
> Apparently the "GLIBC" part is not there.
>
> The libhadoop.so is the one that I downloaded as part of the tar.gz of
> hadoop-1.1.2 from the Apache Hadoop web site.
>
> Could you let me know how I can use the source code directory in the
> downloaded Apache Hadoop package to re-compile libhadoop.so and to
> make sure that the fadvise call gets referenced?
>
> Regards,
>
> Jun
>
> On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:
>
> This looks like a compilation problem in the native hadoop libraries.
>
> Please locate the libhadoop.so library on your system and run
> [shell]  objdump -Tt libhadoop.so | grep -i fadvise
>
> If you don't see something like the following, in particular the *GLIBC*
> part, that means the system where the shared lib was compiled did not
> have it:
>
> 00000000000056a0 g     F .text  00000000000000a3
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
> 0000000000000000       F *UND*  0000000000000000
>  posix_fadvise@@GLIBC_2.2.5
> 0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
> 00000000000056a0 g    DF .text  00000000000000a3  Base
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
>
> Note: objdump is from the binutils rpm (you can use yum install to install
> it if you don't have it).
>
> -----Original Message-----
> From: Jun Li [mailto:jltz922181@gmail.com]
> Sent: Friday, June 21, 2013 1:35 AM
> To: user@hadoop.apache.org
> Subject: how to turn on NativeIO.posixFadviseIfPossible
>
> Hi,
>
> I downloaded the current stable version from the Apache Hadoop web site,
> hadoop-1.1.2. My machine is an AMD-based machine running Red Hat
> Enterprise Linux 6.1. The detailed Linux kernel version is:
> 2.6.32-131.0.15.el6.x86_64
>
> I ran TestNativeIO.java under the distribution directory
> "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to
> understand how NativeIO.posixFadviseIfPossible behaves, in particular to
> check whether "posix_fadvise" is turned on or not. I am interested in this
> call as it is used in the Read-Ahead Pool to cache data in the OS's buffer
> cache.  The following is the test case that I ran:
>
> @Test
>   public void testPosixFadvise() throws Exception {
>     FileInputStream fis = new FileInputStream("/dev/zero");
>     try {
>       NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
>                              NativeIO.POSIX_FADV_SEQUENTIAL);
>     } catch (NativeIOException noe) {
>       // we should just skip the unit test on machines where we don't
>       // have fadvise support
>       assumeTrue(false);
>     } finally {
>       fis.close();
>     }
>   }
>
> However, when I stepped into the code and reached "NativeIO.java"
> under the package "org.apache.hadoop.io.nativeio", in the particular
> call below:
>
> public static void posixFadviseIfPossible(
>       FileDescriptor fd, long offset, long len, int flags)
>       throws NativeIOException {
>
>     if (nativeLoaded && fadvisePossible) {
>       try {
>         posix_fadvise(fd, offset, len, flags);
>       } catch (UnsupportedOperationException uoe) {
>         fadvisePossible = false;
>       } catch (UnsatisfiedLinkError ule) {
>         fadvisePossible = false;
>       }
>     }
>   }
>
> the call to "posix_fadvise" threw the "UnsupportedOperationException".
>
> I further traced into the native library, and in "NativeIO.c" I found:
>
> JNIEXPORT void JNICALL
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
>   JNIEnv *env, jclass clazz,
>   jobject fd_object, jlong offset, jlong len, jint flags) {
> #ifndef HAVE_POSIX_FADVISE
>   THROW(env, "java/lang/UnsupportedOperationException",
>         "fadvise support not available");
> #else
> ...
> }
>
> I believe the exception is thrown because "HAVE_POSIX_FADVISE" is not
> defined.  I made sure that the native IO library is loaded properly in
> the Java code, as I can successfully run the other test cases in
> "TestNativeIO.java".
>
> So my question is: should I re-compile "libhadoop" in order to get a
> version of the shared library that has "HAVE_POSIX_FADVISE" turned on?
> Or is FADVISE turned on already by default?
>
> Thank you!

RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
Hi Jun,

  Try moving the section between:
dnl Check for headers needed by the native Group....
  [...include everything in between inclusive ...]
AC_FUNC_STRERROR_R

to a higher place (insert before)

# Checks for libraries.
dnl Check for '-ldl'

Hope this helps :)
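
(Editor's note: spelled out, the move puts the function checks ahead of the
library checks. The fragment below reconstructs how the affected stretch of
src/native/configure.ac should read after the edit, assembled from the diff
quoted later in the thread; it is not a verbatim copy of the shipped file.)

```m4
AC_PROG_CC
AC_PROG_LIBTOOL

dnl check for posix_fadvise
AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])

dnl check for sync_file_range
AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])

# Checks for typedefs, structures, and compiler characteristics.
AC_C_CONST

# Checks for library functions.
AC_CHECK_FUNCS([memset])

# Check for nonstandard STRERROR_R
AC_FUNC_STRERROR_R

# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
```

(After editing, rerun the build so autoconf regenerates configure; the
"checking for posix_fadvise" line should then report "yes".)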


From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Saturday, June 22, 2013 3:26 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Thanks for the detailed instructions. I started from a clean downloaded hadoop-1.1.2 copy and then issued the command: ant -Dcompile.native=true.

And with the configure.ac that comes with the downloaded package, without any changes, the following is the compilation output:


     [exec] checking fcntl.h usability... yes
     [exec] checking fcntl.h presence... yes
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... no
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for memset... no
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... no
I have difficulty following the "+" and "-" patching sequence that you stated in your earlier email. I have attached the "configure.ac" file that I copied out of the hadoop-1.1.2/src/native directory. Could you construct the configure.ac with the correct patching that you described, using the one attached to this email, and send it back to me? Then I will use the patched one to build the native library.
Regards,

Jun

On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com> wrote:
Hi Jun:

Looks like it is not in the Apache Hadoop 1.x release binaries; this probably will have to be answered by the Apache Hadoop 1.x release team.

To build the native libs:


1)      Get the source code and install the dependencies needed to compile Hadoop

2)      Set up the environment for it, such as JAVA_HOME, gcc installed, etc.

3)      Go to <hadoop-source dir> and run:  ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

If you get a "make distclean" error,
just cd into src/native and run  make distclean  (to clean up).

Example of ant output showing the fadvise check:
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range",
and you are sure you have posix_fadvise support (RHEL 6.x or CentOS 6.x),

you may need to move a section of src/native/configure.ac
Try to modify configure.ac as follows:
--- src/native/configure.ac     (revision 1495498)
+++ src/native/configure.ac     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the "GLIBC" part is not there.
The libhadoop.so is the one that I downloaded as part of the tar.gz of hadoop-1.1.2 from the Apache Hadoop web site.
Could you let me know how I can use the source directory in the downloaded Apache Hadoop package to re-compile libhadoop.so and make sure that the fadvise call gets referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:
This looks like a compilation problem on the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following, in particular the *GLIBC* part, that means the system where the shared lib was compiled did not have it:

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump is from the binutils rpm (you can use yum install to get it if you don't have it).


-----Original Message-----
From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 1:35 AM
To: user@hadoop.apache.org
Subject: how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version from the Apache Hadoop web site, hadoop-1.1.2. My machine is AMD-based, running Red Hat Enterprise Linux 6.1. The detailed Linux kernel version is:
2.6.32-131.0.15.el6.x86_64

I ran TestNativeIO.java under the distribution directory "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to understand how NativeIO.posixFadviseIfPossible behaves, in particular to check whether "posix_fadvise" is turned on or not. I am interested in this call because it is used by the Read-Ahead Pool to cache data in the OS's buffer cache. The following is the test case that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }
  }

However, when I stepped into the code I reached "NativeIO.java"
under the package "org.apache.hadoop.io.nativeio", in particular the call below:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise" threw an "UnsupportedOperationException".

I traced further into the native library, and in "NativeIO.c" I found:

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags) {
#ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available");
#else

...
}

I believe the exception is thrown because "HAVE_POSIX_FADVISE" was not defined at compile time. I made sure that the native I/O library is loaded properly in the Java code, as I can successfully run the other test cases in "TestNativeIO.java".

So my question is: should I re-compile "libhadoop" to get a shared library built with "HAVE_POSIX_FADVISE" defined? Or is fadvise turned on by default?

Thank you!



RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
Hi Jun,

  Try moving the section between:
dnl Check for headers needed by the native Group....
  [...include everything in between inclusive ...]
AC_FUNC_STRERROR_R

to higher place (insert before)

# Checks for libraries.
dnl Check for '-ldl'

Hope this helps :)


From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Saturday, June 22, 2013 3:26 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Thanks for the detailed instruction. I started from the clean downloaded hadoop1.1.12 copy and then I issue the command: ant -Dcompile.native=true.

And with the configure.ac<http://configure.ac> that comes with the downloaded package, without any changes, the following is the compilation output:


     [exec] checking fcntl.h usability... yes
     [exec] checking fcntl.h presence... yes
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... no
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for memset... no
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... no
I have difficulty to follow the "+" and "-" relate patching sequence that you stated in your email earlier. I have attached the "configure.ac<http://configure.ac>" file that I copied out of the hadoop1.1.12/src/native directory. Could you help me to construct the configure.ac<http://configure.ac> with the correct patching that you described, using the one that I attached in this email, and send back to me? Then I will use the patched one to build the native library.
Regards,

Jun

On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com>> wrote:
Hi Jun:

Looks like it is not in the apache hadoop 1.x release binaries,  this probably will have to be answered by the apache hadoop 1.x release team

To build the native libs:


1)      Get the source code and install the dependencies to compile hadoop

2)      You'll have to setup the env for it,  such as JAVA_HOME. Gcc installed etc.

3)      Got to <hadoop-source dir>     ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

If you get an error about  "make distclean" error
Just cd in src/native and run  make distclean (to cleanup )

Example of ant output that shows checking for fadvise
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range"
And you are sure you have posix_fdavise support (RHEL 6.x or CentOS 6.x)

you may need to move the section on src/native/configure.ac<http://configure.ac>
Try to modify the configure.ac<http://configure.ac>
--- src/native/configure.ac<http://configure.ac>     (revision 1495498)
+++ src/native/configure.ac<http://configure.ac>     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the part of "the GLIBC" is not there
The libhadoop.so is the one that I downloaded as part of the  tar.gz of hadoop.1.1.2 from the Hadoop Apache web site.
Could you let me know how I can use the source code directory in the downloaded Hadoop Apache package, to re-compile the libhadoop.so and to make sure that the fadvise call is able to get referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com>> wrote:
This looks like a compilation problem on the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following *the GLIBC* part  (that means the system where the share lib was compiled did not have it)

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump is from binutils- rpm  (you can use yum install to install it if you don't have it)


-----Original Message-----
From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Friday, June 21, 2013 1:35 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version from the Apache Hadoop web site, hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1. The detailed Linux kernel version is:
2.6.32-131.0.15.el6.x86_64

 I ran the TestNativeIO.java under the distribution directory of "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to understand how NativeIO.posixFadviseIfPossible behaves, in particular, to check whether "posix_fadvise" is turned on or not. I am interested in this call as it is used in Read-Ahead-Pool to cache data to the OS's buffer cache.  The following is the test case that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }

However, when I stepped into the code and reached "NativeIO.java"
under the package of "org.apache.hadoop.io.nativeio",  in the particular call below:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise"  threw the "UnsupportedOperationException"
exception.

I further traced to the native library, and in the code "NativeIO.c", I found

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags) { #ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available"); #else

...
}

I believe that the problem of throwing the exception is because "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO library is loaded properly in the Java code, as I can successfully run the other test cases in "TestNativeIO.java".

So my question is: should I re-compile the "libhadoop" in order to get the version of the shared library that can have "HAVE_POSIX_FADVISE"
turned on? Or by default, FADVISE is turned on already?

Thank you!



RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
Hi Jun,

  Try moving the section between:
dnl Check for headers needed by the native Group....
  [...include everything in between inclusive ...]
AC_FUNC_STRERROR_R

to higher place (insert before)

# Checks for libraries.
dnl Check for '-ldl'

Hope this helps :)


From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Saturday, June 22, 2013 3:26 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Thanks for the detailed instruction. I started from the clean downloaded hadoop1.1.12 copy and then I issue the command: ant -Dcompile.native=true.

And with the configure.ac<http://configure.ac> that comes with the downloaded package, without any changes, the following is the compilation output:


     [exec] checking fcntl.h usability... yes
     [exec] checking fcntl.h presence... yes
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... no
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for memset... no
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... no
I have difficulty to follow the "+" and "-" relate patching sequence that you stated in your email earlier. I have attached the "configure.ac<http://configure.ac>" file that I copied out of the hadoop1.1.12/src/native directory. Could you help me to construct the configure.ac<http://configure.ac> with the correct patching that you described, using the one that I attached in this email, and send back to me? Then I will use the patched one to build the native library.
Regards,

Jun

On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com>> wrote:
Hi Jun:

Looks like it is not in the apache hadoop 1.x release binaries,  this probably will have to be answered by the apache hadoop 1.x release team

To build the native libs:


1)      Get the source code and install the dependencies to compile hadoop

2)      You'll have to setup the env for it,  such as JAVA_HOME. Gcc installed etc.

3)      Got to <hadoop-source dir>     ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

If you get an error about  "make distclean" error
Just cd in src/native and run  make distclean (to cleanup )

Example of ant output that shows checking for fadvise
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range"
And you are sure you have posix_fdavise support (RHEL 6.x or CentOS 6.x)

you may need to move the section on src/native/configure.ac<http://configure.ac>
Try to modify the configure.ac<http://configure.ac>
--- src/native/configure.ac<http://configure.ac>     (revision 1495498)
+++ src/native/configure.ac<http://configure.ac>     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the part of "the GLIBC" is not there
The libhadoop.so is the one that I downloaded as part of the  tar.gz of hadoop.1.1.2 from the Hadoop Apache web site.
Could you let me know how I can use the source code directory in the downloaded Hadoop Apache package, to re-compile the libhadoop.so and to make sure that the fadvise call is able to get referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com>> wrote:
This looks like a compilation problem on the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following *the GLIBC* part  (that means the system where the share lib was compiled did not have it)

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump is from binutils- rpm  (you can use yum install to install it if you don't have it)


-----Original Message-----
From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Friday, June 21, 2013 1:35 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version from the Apache Hadoop web site, hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1. The detailed Linux kernel version is:
2.6.32-131.0.15.el6.x86_64

 I ran the TestNativeIO.java under the distribution directory of "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to understand how NativeIO.posixFadviseIfPossible behaves, in particular, to check whether "posix_fadvise" is turned on or not. I am interested in this call as it is used in Read-Ahead-Pool to cache data to the OS's buffer cache.  The following is the test case that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }

However, when I stepped into the code and reached "NativeIO.java"
under the package of "org.apache.hadoop.io.nativeio",  in the particular call below:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise"  threw the "UnsupportedOperationException"
exception.

I further traced to the native library, and in the code "NativeIO.c", I found

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags) { #ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available"); #else

...
}

I believe that the problem of throwing the exception is because "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO library is loaded properly in the Java code, as I can successfully run the other test cases in "TestNativeIO.java".

So my question is: should I re-compile the "libhadoop" in order to get the version of the shared library that can have "HAVE_POSIX_FADVISE"
turned on? Or by default, FADVISE is turned on already?

Thank you!



RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
Hi Jun,

  Try moving the section between:
dnl Check for headers needed by the native Group....
  [...include everything in between inclusive ...]
AC_FUNC_STRERROR_R

to higher place (insert before)

# Checks for libraries.
dnl Check for '-ldl'

Hope this helps :)


From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Saturday, June 22, 2013 3:26 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Thanks for the detailed instruction. I started from the clean downloaded hadoop1.1.12 copy and then I issue the command: ant -Dcompile.native=true.

And with the configure.ac<http://configure.ac> that comes with the downloaded package, without any changes, the following is the compilation output:


     [exec] checking fcntl.h usability... yes
     [exec] checking fcntl.h presence... yes
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... no
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for memset... no
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... no
I have difficulty to follow the "+" and "-" relate patching sequence that you stated in your email earlier. I have attached the "configure.ac<http://configure.ac>" file that I copied out of the hadoop1.1.12/src/native directory. Could you help me to construct the configure.ac<http://configure.ac> with the correct patching that you described, using the one that I attached in this email, and send back to me? Then I will use the patched one to build the native library.
Regards,

Jun

On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com>> wrote:
Hi Jun:

Looks like it is not in the apache hadoop 1.x release binaries,  this probably will have to be answered by the apache hadoop 1.x release team

To build the native libs:


1)      Get the source code and install the dependencies to compile hadoop

2)      You'll have to setup the env for it,  such as JAVA_HOME. Gcc installed etc.

3)      Got to <hadoop-source dir>     ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

If you get an error about  "make distclean" error
Just cd in src/native and run  make distclean (to cleanup )

Example of ant output that shows checking for fadvise
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range",
and you are sure you have posix_fadvise support (RHEL 6.x or CentOS 6.x),

you may need to move a section of src/native/configure.ac.
Try to modify configure.ac as follows:
--- src/native/configure.ac     (revision 1495498)
+++ src/native/configure.ac     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
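For readers who prefer the end state to the diff: applied by hand, the patch simply moves the block of function checks up, so the affected part of configure.ac should read roughly as follows (a sketch reconstructed from the diff above; surrounding lines may differ slightly in your copy):

```
AC_PROG_CC
AC_PROG_LIBTOOL

dnl check for posix_fadvise
AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])

dnl check for sync_file_range
AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])

# Checks for typedefs, structures, and compiler characteristics.
AC_C_CONST

# Checks for library functions.
AC_CHECK_FUNCS([memset])

# Check for nonstandard STRERROR_R
AC_FUNC_STRERROR_R

# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
```

The block that previously sat after the AC_CHECK_HEADERS line for fcntl.h/stdlib.h/string.h/unistd.h is removed from that position.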


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the GLIBC part is not there.
The libhadoop.so is the one from the hadoop-1.1.2 tar.gz that I downloaded from the Apache Hadoop web site.
Could you let me know how I can use the source code directory in the downloaded package to re-compile libhadoop.so and make sure that the fadvise call gets referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:
This looks like a compilation problem on the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following, in particular the GLIBC part, it means the system where the shared lib was compiled did not have fadvise support:

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump comes from the binutils RPM (you can install it with yum if you don't have it).


-----Original Message-----
From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 1:35 AM
To: user@hadoop.apache.org
Subject: how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version from the Apache Hadoop web site, hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1. The detailed Linux kernel version is:
2.6.32-131.0.15.el6.x86_64

 I ran the TestNativeIO.java under the distribution directory of "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to understand how NativeIO.posixFadviseIfPossible behaves, in particular, to check whether "posix_fadvise" is turned on or not. I am interested in this call as it is used in Read-Ahead-Pool to cache data to the OS's buffer cache.  The following is the test case that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }
  }

However, when I stepped into the code and reached "NativeIO.java"
under the package of "org.apache.hadoop.io.nativeio",  in the particular call below:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise"  threw the "UnsupportedOperationException"
exception.

I further traced to the native library, and in the code "NativeIO.c", I found

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags)
{
#ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available");
#else

...
#endif
}

I believe the exception is thrown because "HAVE_POSIX_FADVISE" was not defined when the native library was compiled. I made sure that the native IO library is loaded properly in the Java code, as I can successfully run the other test cases in "TestNativeIO.java".

So my question is: should I re-compile "libhadoop" to get a version of the shared library built with "HAVE_POSIX_FADVISE" defined? Or is fadvise turned on by default?

Thank you!



>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
>
> Note: objdump is from binutils- rpm  (you can use yum install to install
> it if you don't have it)****
>
>
>
> -----Original Message-----
> From: Jun Li [mailto:jltz922181@gmail.com]
> Sent: Friday, June 21, 2013 1:35 AM
> To: user@hadoop.apache.org
> Subject: how to turn on NativeIO.posixFadviseIfPossible
>
> Hi,
>
> I downloaded the current stable version from the Apache Hadoop web site,
> hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1.
> The detailed Linux kernel version is:
> 2.6.32-131.0.15.el6.x86_64
>
>  I ran the TestNativeIO.java under the distribution directory of
> "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to
> understand how NativeIO.posixFadviseIfPossible behaves, in particular, to
> check whether "posix_fadvise" is turned on or not. I am interested in this
> call as it is used in Read-Ahead-Pool to cache data to the OS's buffer
> cache.  The following is the test case that I ran:
>
> @Test
>   public void testPosixFadvise() throws Exception {
>     FileInputStream fis = new FileInputStream("/dev/zero");
>     try {
>       NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
>                              NativeIO.POSIX_FADV_SEQUENTIAL);
>     } catch (NativeIOException noe) {
>       // we should just skip the unit test on machines where we don't
>       // have fadvise support
>       assumeTrue(false);
>     } finally {
>       fis.close();
>     }
>
> However, when I stepped into the code and reached "NativeIO.java"
> under the package of "org.apache.hadoop.io.nativeio",  in the particular
> call below:
>
> public static void posixFadviseIfPossible(
>       FileDescriptor fd, long offset, long len, int flags)
>       throws NativeIOException {
>
>     if (nativeLoaded && fadvisePossible) {
>       try {
>         posix_fadvise(fd, offset, len, flags);
>       } catch (UnsupportedOperationException uoe) {
>         fadvisePossible = false;
>       } catch (UnsatisfiedLinkError ule) {
>         fadvisePossible = false;
>       }
>     }
>   }
>
> The call to "posix_fadvise"  threw the "UnsupportedOperationException"
> exception.
>
> I further traced to the native library, and in the code "NativeIO.c", I
> found
>
> JNIEXPORT void JNICALL
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
>   JNIEnv *env, jclass clazz,
>   jobject fd_object, jlong offset, jlong len, jint flags) { #ifndef
> HAVE_POSIX_FADVISE
>   THROW(env, "java/lang/UnsupportedOperationException",
>         "fadvise support not available"); #else
>
> ...
> }
>
> I believe that the problem of throwing the exception is because
> "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO
> library is loaded properly in the Java code, as I can successfully run the
> other test cases in "TestNativeIO.java".
>
> So my question is: should I re-compile the "libhadoop" in order to get the
> version of the shared library that can have "HAVE_POSIX_FADVISE"
> turned on? Or by default, FADVISE is turned on already?
>
> Thank you!****
>
> ** **
>

Re: how to turn on NativeIO.posixFadviseIfPossible

Posted by Jun Li <jl...@gmail.com>.
Hi Leo,

Thanks for the detailed instructions. I started from a clean downloaded
hadoop-1.1.2 copy and then issued the command: ant -Dcompile.native=true.

With the configure.ac that comes with the downloaded package, unchanged,
the following is the compilation output:


     [exec] checking fcntl.h usability... yes
     [exec] checking fcntl.h presence... yes
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... no
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for memset... no
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... no

I have difficulty following the "+" and "-" patching sequence that you
described in your earlier email. I have attached the configure.ac file
that I copied out of the hadoop-1.1.2/src/native directory. Could you help
me construct the configure.ac with the correct patch applied, using the one
attached to this email, and send it back to me? Then I will use the patched
one to build the native library.

Regards,

Jun



On Fri, Jun 21, 2013 at 12:35 PM, Leo Leung <ll...@ddn.com> wrote:


RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
Hi Jun:

Looks like fadvise support is not in the Apache Hadoop 1.x release binaries; this probably will have to be answered by the Apache Hadoop 1.x release team.

To build the native libs:


1)      Get the source code and install the dependencies needed to compile hadoop

2)      Set up the environment for it, such as JAVA_HOME, gcc installed, etc.

3)      Go to <hadoop-source dir> and run:     ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise


If you get a "make distclean" error,
just cd into src/native and run  make distclean  (to clean up)

Example of ant output that shows the checks for fadvise:
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range",
and you are sure you have posix_fadvise support (RHEL 6.x or CentOS 6.x),

you may need to move that section of src/native/configure.ac.
Try modifying configure.ac as follows:
--- src/native/configure.ac     (revision 1495498)
+++ src/native/configure.ac     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the GLIBC part is not there.
The libhadoop.so is the one I downloaded as part of the hadoop-1.1.2 tar.gz from the Apache Hadoop web site.
Could you let me know how I can use the source code directory in the downloaded Apache Hadoop package to re-compile libhadoop.so and make sure that the fadvise call gets referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:
This looks like a compilation problem with the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following, in particular the *GLIBC* part (that means the system where the shared lib was compiled did not have it):

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump is from the binutils rpm (you can use yum install to install it if you don't have it)


-----Original Message-----
From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 1:35 AM
To: user@hadoop.apache.org
Subject: how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version from the Apache Hadoop web site, hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1. The detailed Linux kernel version is:
2.6.32-131.0.15.el6.x86_64

 I ran the TestNativeIO.java under the distribution directory of "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to understand how NativeIO.posixFadviseIfPossible behaves, in particular, to check whether "posix_fadvise" is turned on or not. I am interested in this call as it is used in Read-Ahead-Pool to cache data to the OS's buffer cache.  The following is the test case that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }
  }

However, when I stepped into the code and reached "NativeIO.java"
under the package of "org.apache.hadoop.io.nativeio",  in the particular call below:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise"  threw the "UnsupportedOperationException"
exception.

I further traced to the native library, and in the code "NativeIO.c", I found

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags) {
#ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available");
#else

...
}

I believe that the problem of throwing the exception is because "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO library is loaded properly in the Java code, as I can successfully run the other test cases in "TestNativeIO.java".

So my question is: should I re-compile the "libhadoop" in order to get the version of the shared library that can have "HAVE_POSIX_FADVISE"
turned on? Or by default, FADVISE is turned on already?

Thank you!



RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
Hi Jun:

Looks like it is not in the apache hadoop 1.x release binaries,  this probably will have to be answered by the apache hadoop 1.x release team

To build the native libs:


1)      Get the source code and install the dependencies to compile hadoop

2)      You'll have to setup the env for it,  such as JAVA_HOME. Gcc installed etc.

3)      Got to <hadoop-source dir>     ant [veryclean] compile-core-native  >compile.out

Verify with
[branch-1]$ objdump -Tt ./build/native/Linux-amd64-64/lib/libhadoop.so | grep fadv
00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise


If you get an error about  "make distclean" error
Just cd in src/native and run  make distclean (to cleanup )

Example of ant output that shows checking for fadvise
     [exec] checking for fcntl.h... yes
     [exec] checking for stdlib.h... (cached) yes
     [exec] checking for string.h... (cached) yes
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for posix_fadvise... yes
     [exec] checking for fcntl.h... (cached) yes
     [exec] checking for sync_file_range... yes

If you see "no" for posix_fadvise or "sync_file_range",
and you are sure you have posix_fadvise support (RHEL 6.x or CentOS 6.x),

you may need to move the corresponding section of src/native/configure.ac.
Try modifying configure.ac as follows:
--- src/native/configure.ac     (revision 1495498)
+++ src/native/configure.ac     (working copy)
@@ -47,6 +47,21 @@
AC_PROG_CC
AC_PROG_LIBTOOL

+dnl check for posix_fadvise
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
+
+dnl check for sync_file_range
+AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
+
+# Checks for typedefs, structures, and compiler characteristics.
+AC_C_CONST
+
+# Checks for library functions.
+AC_CHECK_FUNCS([memset])
+
+# Check for nonstandard STRERROR_R
+AC_FUNC_STRERROR_R
+
# Checks for libraries.
dnl Check for '-ldl'
AC_CHECK_LIB([dl], [dlopen])
@@ -104,21 +119,6 @@
dnl Check for headers needed by the native Group resolution implementation
AC_CHECK_HEADERS([fcntl.h stdlib.h string.h unistd.h], [], AC_MSG_ERROR(Some system headers not found... please ensure their presence on your platform.))

-dnl check for posix_fadvise
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(posix_fadvise)])
-
-dnl check for sync_file_range
-AC_CHECK_HEADERS(fcntl.h, [AC_CHECK_FUNCS(sync_file_range)])
-
-# Checks for typedefs, structures, and compiler characteristics.
-AC_C_CONST
-
-# Checks for library functions.
-AC_CHECK_FUNCS([memset])
-
-# Check for nonstandard STRERROR_R
-AC_FUNC_STRERROR_R
-
AC_CONFIG_FILES([Makefile])
AC_OUTPUT


Hope this helps




From: Jun Li [mailto:jltz922181@gmail.com]
Sent: Friday, June 21, 2013 9:44 AM
To: user@hadoop.apache.org
Subject: Re: how to turn on NativeIO.posixFadviseIfPossible

Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I fadvise
0000000000004d70 g     F .text    000000000000006f              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
Apparently the "GLIBC" part is not there.
The libhadoop.so is the one that I downloaded as part of the tar.gz of hadoop-1.1.2 from the Apache Hadoop web site.
Could you let me know how I can use the source code directory in the downloaded Apache Hadoop package to re-compile libhadoop.so and make sure that the fadvise call gets referenced?
Regards,

Jun

On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com>> wrote:
This looks like a compilation problem on the native hadoop libraries.

Please locate the libhadoop.so library on your system and run
[shell]  objdump -Tt libhadoop.so | grep -I fadvise

If you don't see something like the following *the GLIBC* part  (that means the system where the shared lib was compiled did not have it)

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump is from binutils- rpm  (you can use yum install to install it if you don't have it)


-----Original Message-----
From: Jun Li [mailto:jltz922181@gmail.com<ma...@gmail.com>]
Sent: Friday, June 21, 2013 1:35 AM
To: user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: how to turn on NativeIO.posixFadviseIfPossible

Hi,

I downloaded the current stable version from the Apache Hadoop web site, hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1. The detailed Linux kernel version is:
2.6.32-131.0.15.el6.x86_64

 I ran the TestNativeIO.java under the distribution directory of "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to understand how NativeIO.posixFadviseIfPossible behaves, in particular, to check whether "posix_fadvise" is turned on or not. I am interested in this call as it is used in Read-Ahead-Pool to cache data to the OS's buffer cache.  The following is the test case that I ran:

@Test
  public void testPosixFadvise() throws Exception {
    FileInputStream fis = new FileInputStream("/dev/zero");
    try {
      NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
                             NativeIO.POSIX_FADV_SEQUENTIAL);
    } catch (NativeIOException noe) {
      // we should just skip the unit test on machines where we don't
      // have fadvise support
      assumeTrue(false);
    } finally {
      fis.close();
    }

However, when I stepped into the code and reached "NativeIO.java"
under the package of "org.apache.hadoop.io.nativeio",  in the particular call below:

public static void posixFadviseIfPossible(
      FileDescriptor fd, long offset, long len, int flags)
      throws NativeIOException {

    if (nativeLoaded && fadvisePossible) {
      try {
        posix_fadvise(fd, offset, len, flags);
      } catch (UnsupportedOperationException uoe) {
        fadvisePossible = false;
      } catch (UnsatisfiedLinkError ule) {
        fadvisePossible = false;
      }
    }
  }

The call to "posix_fadvise"  threw the "UnsupportedOperationException"
exception.

I further traced to the native library, and in the code "NativeIO.c", I found

JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
  JNIEnv *env, jclass clazz,
  jobject fd_object, jlong offset, jlong len, jint flags) { #ifndef HAVE_POSIX_FADVISE
  THROW(env, "java/lang/UnsupportedOperationException",
        "fadvise support not available"); #else

...
}

I believe that the problem of throwing the exception is because "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO library is loaded properly in the Java code, as I can successfully run the other test cases in "TestNativeIO.java".

So my question is: should I re-compile the "libhadoop" in order to get the version of the shared library that can have "HAVE_POSIX_FADVISE"
turned on? Or by default, FADVISE is turned on already?

Thank you!



Re: how to turn on NativeIO.posixFadviseIfPossible

Posted by Jun Li <jl...@gmail.com>.
Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I
fadvise
0000000000004d70 g     F .text    000000000000006f
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Apparently the "GLIBC" part is not there.

The libhadoop.so is the one that I downloaded as part of the tar.gz of
hadoop-1.1.2 from the Hadoop Apache web site.

Could you let me know how I can use the source code directory in the
downloaded Apache Hadoop package to re-compile libhadoop.so and make
sure that the fadvise call gets referenced?

Regards,

Jun



On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:

> This looks like a compilation problem on the native hadoop libraries.
>
> Please locate the libhadoop.so library on your system and run
> [shell]  objdump -Tt libhadoop.so | grep -I fadvise
>
> If you don't see something like the following *the GLIBC* part  (that
> means the system where the shared lib was compiled did not have it)
>
> 00000000000056a0 g     F .text  00000000000000a3
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
> 0000000000000000       F *UND*  0000000000000000
>  posix_fadvise@@GLIBC_2.2.5
> 0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
> 00000000000056a0 g    DF .text  00000000000000a3  Base
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
>
> Note: objdump is from binutils- rpm  (you can use yum install to install
> it if you don't have it)
>
>
> -----Original Message-----
> From: Jun Li [mailto:jltz922181@gmail.com]
> Sent: Friday, June 21, 2013 1:35 AM
> To: user@hadoop.apache.org
> Subject: how to turn on NativeIO.posixFadviseIfPossible
>
> Hi,
>
> I downloaded the current stable version from the Apache Hadoop web site,
> hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1.
> The detailed Linux kernel version is:
> 2.6.32-131.0.15.el6.x86_64
>
>  I ran the TestNativeIO.java under the distribution directory of
> "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to
> understand how NativeIO.posixFadviseIfPossible behaves, in particular, to
> check whether "posix_fadvise" is turned on or not. I am interested in this
> call as it is used in Read-Ahead-Pool to cache data to the OS's buffer
> cache.  The following is the test case that I ran:
>
> @Test
>   public void testPosixFadvise() throws Exception {
>     FileInputStream fis = new FileInputStream("/dev/zero");
>     try {
>       NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
>                              NativeIO.POSIX_FADV_SEQUENTIAL);
>     } catch (NativeIOException noe) {
>       // we should just skip the unit test on machines where we don't
>       // have fadvise support
>       assumeTrue(false);
>     } finally {
>       fis.close();
>     }
>
> However, when I stepped into the code and reached "NativeIO.java"
> under the package of "org.apache.hadoop.io.nativeio",  in the particular
> call below:
>
> public static void posixFadviseIfPossible(
>       FileDescriptor fd, long offset, long len, int flags)
>       throws NativeIOException {
>
>     if (nativeLoaded && fadvisePossible) {
>       try {
>         posix_fadvise(fd, offset, len, flags);
>       } catch (UnsupportedOperationException uoe) {
>         fadvisePossible = false;
>       } catch (UnsatisfiedLinkError ule) {
>         fadvisePossible = false;
>       }
>     }
>   }
>
> The call to "posix_fadvise"  threw the "UnsupportedOperationException"
> exception.
>
> I further traced to the native library, and in the code "NativeIO.c", I
> found
>
> JNIEXPORT void JNICALL
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
>   JNIEnv *env, jclass clazz,
>   jobject fd_object, jlong offset, jlong len, jint flags) {
> #ifndef HAVE_POSIX_FADVISE
>   THROW(env, "java/lang/UnsupportedOperationException",
>         "fadvise support not available");
> #else
>
> ...
> #endif
> }
>
> I believe that the problem of throwing the exception is because
> "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO
> library is loaded properly in the Java code, as I can successfully run the
> other test cases in "TestNativeIO.java".
>
> So my question is: should I re-compile the "libhadoop" in order to get the
> version of the shared library that can have "HAVE_POSIX_FADVISE"
> turned on? Or by default, FADVISE is turned on already?
>
> Thank you!
>

Re: how to turn on NativeIO.posixFadviseIfPossible

Posted by Jun Li <jl...@gmail.com>.
Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I
fadvise
0000000000004d70 g     F .text    000000000000006f
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Apparently the part of "the GLIBC" is not there

The libhadoop.so is the one that I downloaded as part of the  tar.gz of
hadoop.1.1.2 from the Hadoop Apache web site.

Could you let me know how I can use the source code directory in the
downloaded Hadoop Apache package, to re-compile the libhadoop.so and to
make sure that the fadvise call is able to get referenced?

Regards,

Jun



On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:

> This looks like a compilation problem on the native hadoop libraries.
>
> Please locate the libhadoop.so library on your system and run
> [shell]  objdump -Tt libhadoop.so | grep -I fadvise
>
> If you don't see something like the following *the GLIBC* part  (that
> means the system where the share lib was compiled did not have it)
>
> 00000000000056a0 g     F .text  00000000000000a3
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
> 0000000000000000       F *UND*  0000000000000000
>  posix_fadvise@@GLIBC_2.2.5
> 0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
> 00000000000056a0 g    DF .text  00000000000000a3  Base
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
>
> Note: objdump is from binutils- rpm  (you can use yum install to install
> it if you don't have it)
>
>
> -----Original Message-----
> From: Jun Li [mailto:jltz922181@gmail.com]
> Sent: Friday, June 21, 2013 1:35 AM
> To: user@hadoop.apache.org
> Subject: how to turn on NativeIO.posixFadviseIfPossible
>
> Hi,
>
> I downloaded the current stable version from the Apache Hadoop web site,
> hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1.
> The detailed Linux kernel version is:
> 2.6.32-131.0.15.el6.x86_64
>
>  I ran the TestNativeIO.java under the distribution directory of
> "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to
> understand how NativeIO.posixFadviseIfPossible behaves, in particular, to
> check whether "posix_fadvise" is turned on or not. I am interested in this
> call as it is used in Read-Ahead-Pool to cache data to the OS's buffer
> cache.  The following is the test case that I ran:
>
> @Test
>   public void testPosixFadvise() throws Exception {
>     FileInputStream fis = new FileInputStream("/dev/zero");
>     try {
>       NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
>                              NativeIO.POSIX_FADV_SEQUENTIAL);
>     } catch (NativeIOException noe) {
>       // we should just skip the unit test on machines where we don't
>       // have fadvise support
>       assumeTrue(false);
>     } finally {
>       fis.close();
>     }
>
> However, when I stepped into the code and reached "NativeIO.java"
> under the package of "org.apache.hadoop.io.nativeio",  in the particular
> call below:
>
> public static void posixFadviseIfPossible(
>       FileDescriptor fd, long offset, long len, int flags)
>       throws NativeIOException {
>
>     if (nativeLoaded && fadvisePossible) {
>       try {
>         posix_fadvise(fd, offset, len, flags);
>       } catch (UnsupportedOperationException uoe) {
>         fadvisePossible = false;
>       } catch (UnsatisfiedLinkError ule) {
>         fadvisePossible = false;
>       }
>     }
>   }
>
> The call to "posix_fadvise"  threw the "UnsupportedOperationException"
> exception.
>
> I further traced to the native library, and in the code "NativeIO.c", I
> found
>
> JNIEXPORT void JNICALL
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
>   JNIEnv *env, jclass clazz,
>   jobject fd_object, jlong offset, jlong len, jint flags) { #ifndef
> HAVE_POSIX_FADVISE
>   THROW(env, "java/lang/UnsupportedOperationException",
>         "fadvise support not available"); #else
>
> ...
> }
>
> I believe that the problem of throwing the exception is because
> "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO
> library is loaded properly in the Java code, as I can successfully run the
> other test cases in "TestNativeIO.java".
>
> So my question is: should I re-compile the "libhadoop" in order to get the
> version of the shared library that can have "HAVE_POSIX_FADVISE"
> turned on? Or by default, FADVISE is turned on already?
>
> Thank you!
>

Re: how to turn on NativeIO.posixFadviseIfPossible

Posted by Jun Li <jl...@gmail.com>.
Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I
fadvise
0000000000004d70 g     F .text    000000000000006f
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Apparently the part of "the GLIBC" is not there

The libhadoop.so is the one that I downloaded as part of the  tar.gz of
hadoop.1.1.2 from the Hadoop Apache web site.

Could you let me know how I can use the source code directory in the
downloaded Hadoop Apache package, to re-compile the libhadoop.so and to
make sure that the fadvise call is able to get referenced?

Regards,

Jun



On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:

> This looks like a compilation problem on the native hadoop libraries.
>
> Please locate the libhadoop.so library on your system and run
> [shell]  objdump -Tt libhadoop.so | grep -I fadvise
>
> If you don't see something like the following *the GLIBC* part  (that
> means the system where the share lib was compiled did not have it)
>
> 00000000000056a0 g     F .text  00000000000000a3
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
> 0000000000000000       F *UND*  0000000000000000
>  posix_fadvise@@GLIBC_2.2.5
> 0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
> 00000000000056a0 g    DF .text  00000000000000a3  Base
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
>
> Note: objdump is from binutils- rpm  (you can use yum install to install
> it if you don't have it)
>
>
> -----Original Message-----
> From: Jun Li [mailto:jltz922181@gmail.com]
> Sent: Friday, June 21, 2013 1:35 AM
> To: user@hadoop.apache.org
> Subject: how to turn on NativeIO.posixFadviseIfPossible
>
> Hi,
>
> I downloaded the current stable version from the Apache Hadoop web site,
> hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1.
> The detailed Linux kernel version is:
> 2.6.32-131.0.15.el6.x86_64
>
>  I ran the TestNativeIO.java under the distribution directory of
> "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to
> understand how NativeIO.posixFadviseIfPossible behaves, in particular, to
> check whether "posix_fadvise" is turned on or not. I am interested in this
> call as it is used in Read-Ahead-Pool to cache data to the OS's buffer
> cache.  The following is the test case that I ran:
>
> @Test
>   public void testPosixFadvise() throws Exception {
>     FileInputStream fis = new FileInputStream("/dev/zero");
>     try {
>       NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
>                              NativeIO.POSIX_FADV_SEQUENTIAL);
>     } catch (NativeIOException noe) {
>       // we should just skip the unit test on machines where we don't
>       // have fadvise support
>       assumeTrue(false);
>     } finally {
>       fis.close();
>     }
>
> However, when I stepped into the code and reached "NativeIO.java"
> under the package of "org.apache.hadoop.io.nativeio",  in the particular
> call below:
>
> public static void posixFadviseIfPossible(
>       FileDescriptor fd, long offset, long len, int flags)
>       throws NativeIOException {
>
>     if (nativeLoaded && fadvisePossible) {
>       try {
>         posix_fadvise(fd, offset, len, flags);
>       } catch (UnsupportedOperationException uoe) {
>         fadvisePossible = false;
>       } catch (UnsatisfiedLinkError ule) {
>         fadvisePossible = false;
>       }
>     }
>   }
>
> The call to "posix_fadvise"  threw the "UnsupportedOperationException"
> exception.
>
> I further traced to the native library, and in the code "NativeIO.c", I
> found
>
> JNIEXPORT void JNICALL
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
>   JNIEnv *env, jclass clazz,
>   jobject fd_object, jlong offset, jlong len, jint flags) { #ifndef
> HAVE_POSIX_FADVISE
>   THROW(env, "java/lang/UnsupportedOperationException",
>         "fadvise support not available"); #else
>
> ...
> }
>
> I believe that the problem of throwing the exception is because
> "HAVE_POSIX_FADVISE" is not defined.  I made sure that the native IO
> library is loaded properly in the Java code, as I can successfully run the
> other test cases in "TestNativeIO.java".
>
> So my question is: should I re-compile the "libhadoop" in order to get the
> version of the shared library that can have "HAVE_POSIX_FADVISE"
> turned on? Or by default, FADVISE is turned on already?
>
> Thank you!
>

Re: how to turn on NativeIO.posixFadviseIfPossible

Posted by Jun Li <jl...@gmail.com>.
Hi Leo,

Following your instruction,  the following is what I got:

[junli@mercoop-26 Linux-amd64-64]$ objdump -Tt libhadoop.so | grep -I
fadvise
0000000000004d70 g     F .text    000000000000006f
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000004d70 g    DF .text    000000000000006f  Base
Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Apparently the part of "the GLIBC" is not there

The libhadoop.so is the one that I downloaded as part of the  tar.gz of
hadoop.1.1.2 from the Hadoop Apache web site.

Could you let me know how I can use the source code directory in the
downloaded Hadoop Apache package, to re-compile the libhadoop.so and to
make sure that the fadvise call is able to get referenced?

Regards,

Jun



On Fri, Jun 21, 2013 at 9:19 AM, Leo Leung <ll...@ddn.com> wrote:

> This looks like a compilation problem on the native hadoop libraries.
>
> Please locate the libhadoop.so library on your system and run
> [shell]  objdump -Tt libhadoop.so | grep -I fadvise
>
> If you don't see something like the following *the GLIBC* part  (that
> means the system where the share lib was compiled did not have it)
>
> 00000000000056a0 g     F .text  00000000000000a3
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
> 0000000000000000       F *UND*  0000000000000000
>  posix_fadvise@@GLIBC_2.2.5
> 0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
> 00000000000056a0 g    DF .text  00000000000000a3  Base
>  Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
>
> Note: objdump is from binutils- rpm  (you can use yum install to install
> it if you don't have it)
>
>
> -----Original Message-----
> From: Jun Li [mailto:jltz922181@gmail.com]
> Sent: Friday, June 21, 2013 1:35 AM
> To: user@hadoop.apache.org
> Subject: how to turn on NativeIO.posixFadviseIfPossible
>
> Hi,
>
> I downloaded the current stable version from the Apache Hadoop web site,
> hadoop-1.1.2. My machine is an AMD-based machine and Redhat Enterprise 6.1.
> The detailed Linux kernel version is:
> 2.6.32-131.0.15.el6.x86_64
>
>  I ran the TestNativeIO.java under the distribution directory of
> "test/org/apache/hadoop/io/nativeio/TestNativeIO.java" and tried to
> understand how NativeIO.posixFadviseIfPossible behaves, in particular, to
> check whether "posix_fadvise" is turned on or not. I am interested in this
> call as it is used in Read-Ahead-Pool to cache data to the OS's buffer
> cache.  The following is the test case that I ran:
>
> @Test
>   public void testPosixFadvise() throws Exception {
>     FileInputStream fis = new FileInputStream("/dev/zero");
>     try {
>       NativeIO.posixFadviseIfPossible(fis.getFD(), 0, 0,
>                              NativeIO.POSIX_FADV_SEQUENTIAL);
>     } catch (NativeIOException noe) {
>       // we should just skip the unit test on machines where we don't
>       // have fadvise support
>       assumeTrue(false);
>     } finally {
>       fis.close();
>     }
>
> However, when I stepped into the code and reached "NativeIO.java"
> under the package of "org.apache.hadoop.io.nativeio",  in the particular
> call below:
>
> public static void posixFadviseIfPossible(
>       FileDescriptor fd, long offset, long len, int flags)
>       throws NativeIOException {
>
>     if (nativeLoaded && fadvisePossible) {
>       try {
>         posix_fadvise(fd, offset, len, flags);
>       } catch (UnsupportedOperationException uoe) {
>         fadvisePossible = false;
>       } catch (UnsatisfiedLinkError ule) {
>         fadvisePossible = false;
>       }
>     }
>   }
>
> The call to "posix_fadvise" threw an "UnsupportedOperationException".
>
> I further traced to the native library, and in the code "NativeIO.c", I
> found
>
> JNIEXPORT void JNICALL
> Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise(
>   JNIEnv *env, jclass clazz,
>   jobject fd_object, jlong offset, jlong len, jint flags)
> {
> #ifndef HAVE_POSIX_FADVISE
>   THROW(env, "java/lang/UnsupportedOperationException",
>         "fadvise support not available");
> #else
>
> ...
> }
>
> I believe the exception is thrown because "HAVE_POSIX_FADVISE" was not
> defined when the native library was compiled. I made sure that the native
> IO library is loaded properly in the Java code, as I can successfully run
> the other test cases in "TestNativeIO.java".
>
> So my question is: should I recompile "libhadoop" to get a version of the
> shared library with "HAVE_POSIX_FADVISE" defined, or is fadvise turned on
> by default?
>
> Thank you!
>

RE: how to turn on NativeIO.posixFadviseIfPossible

Posted by Leo Leung <ll...@ddn.com>.
This looks like a compilation problem with the native Hadoop libraries.

Please locate the libhadoop.so library on your system and run

[shell]  objdump -Tt libhadoop.so | grep -i fadvise

If you don't see something like the following (in particular the GLIBC lines), the system where the shared library was compiled did not have fadvise support:

00000000000056a0 g     F .text  00000000000000a3              Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise
0000000000000000       F *UND*  0000000000000000              posix_fadvise@@GLIBC_2.2.5
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.2.5 posix_fadvise
00000000000056a0 g    DF .text  00000000000000a3  Base        Java_org_apache_hadoop_io_nativeio_NativeIO_posix_1fadvise

Note: objdump is from the binutils rpm (you can install it with "yum install binutils" if you don't have it).
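[Editor's note: if you would rather run this check from Java, the objdump pipeline above can be driven with ProcessBuilder. This is a sketch under assumptions: FadviseCheck and the default library path are hypothetical, and objdump must be on the PATH.]

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;

// Hypothetical helper that automates the objdump check described above.
public class FadviseCheck {

    // Returns true if the shared library references a posix_fadvise symbol,
    // i.e. it was compiled with HAVE_POSIX_FADVISE defined.
    public static boolean hasFadviseSymbol(String libPath) throws Exception {
        if (!new File(libPath).exists()) {
            System.out.println("not found: " + libPath);
            return false;
        }
        Process p = new ProcessBuilder("objdump", "-T", libPath)
                .redirectErrorStream(true)
                .start();
        boolean found = false;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                // Any dynamic-symbol line mentioning fadvise is enough.
                if (line.contains("fadvise")) {
                    found = true;
                }
            }
        }
        p.waitFor();
        return found;
    }

    public static void main(String[] args) throws Exception {
        // Assumed location; pass the real path to your libhadoop.so instead.
        String path = args.length > 0
                ? args[0]
                : "/usr/lib/hadoop/lib/native/libhadoop.so";
        System.out.println("fadvise symbol present: " + hasFadviseSymbol(path));
    }
}
```

Point it at the same libhadoop.so your Hadoop process actually loads; a copy elsewhere on disk may have been built differently.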


