Posted to hdfs-dev@hadoop.apache.org by "Paula Logan (Jira)" <ji...@apache.org> on 2021/10/27 22:14:00 UTC

[jira] [Created] (HDFS-16288) Native Test Case #35 Fails in RHEL 8.4

Paula Logan created HDFS-16288:
----------------------------------

             Summary: Native Test Case #35 Fails in RHEL 8.4
                 Key: HDFS-16288
                 URL: https://issues.apache.org/jira/browse/HDFS-16288
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: libhdfs, libhdfs++, native, test
    Affects Versions: 3.3.1
         Environment: RHEL 8.4

 
            Reporter: Paula Logan


When running the following Maven command, Native Test Case #35 fails and test execution halts.

mvn test -Pnative,parallel-tests,yarn-ui -Dparallel-tests=true -Dtests=allNative -Drequire.bzip2=true -Drequire.fuse=true -Drequire.isal=true -Disal.prefix=/usr/local -Disal.lib=/usr/local/lib64 -Dbundle.isal=true -Drequire.openssl=true -Dopenssl.prefix=/usr -Dopenssl.include=/usr/include -Dopenssl.lib=/usr/lib64 -Dbundle.openssl=true -Drequire.pmdk=true -Dpmdk.lib=/usr/lib64 -Dbundle.pmdk=true -Drequire.snappy=true -Dsnappy.prefix=/usr -Dsnappy.include=/usr/include -Dsnappy.lib=/usr/lib64 -Dbundle.snappy=true -Drequire.valgrind=true -Dhbase.profile=2.0 -Drequire.zstd=true -Dzstd.prefix=/usr -Dzstd.include=/usr/include -Dzstd.lib=/usr/lib64 -Dbundle.zstd=true 

I added the following options to assist with debugging:

-Dnative_ctest_args="-V -VV --debug --stop-on-failure" -Dnative_cmake_args="-D_LIBHDFS_JNI_HELPER_DEBUGGING_ON_ -DLIBHDFSPP_C_API_ENABLE_DEBUG" -Droot.log.level=DEBUG

After much work tracing through the Test Case #35 functions in hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_threaded.c, I found the problem. It involves both the calls made by test_libhdfs_threaded.c and the functions in hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c.

test_libhdfs_threaded.c uses two file systems: hdfs (C) and hdfspp (C++).  It calls hdfsGetPathInfo in hdfs_shim.c, passing in a C++ FS, and the shim delegates to libhdfspp_hdfsGetPathInfo, which is correct.  Then test_libhdfs_threaded.c wants to know whether the file is encrypted, so it calls hdfsFileIsEncrypted in hdfs_shim.c with the C++ PathInfo structure it just received, and the shim delegates to libhdfs_hdfsFileIsEncrypted, which is not correct.

In hdfs_shim.c:

hdfsFileInfo *hdfsGetPathInfo(hdfsFS fs, const char* path) {
  return (hdfsFileInfo *)libhdfspp_hdfsGetPathInfo(fs->libhdfsppRep, path);
}

int hdfsFileIsEncrypted(hdfsFileInfo *hdfsFileInfo) {
  return libhdfs_hdfsFileIsEncrypted((libhdfs_hdfsFileInfo *) hdfsFileInfo);
}

The crux of the problem is the extendedHdfsFileInfo, which exists in the C structures but not in the C++ structures.

In RHEL 8.4, hdfsFileIsEncrypted (which uses a bitwise AND on the extended flags) returns 1 even though the file is not encrypted.
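
To show what I mean, here is the shape of that check boiled down to a stand-alone illustration (paraphrased from memory, not the actual hdfs.c source; the struct and flag names below are made up for the example).  libhdfs populates a flags word alongside the hdfsFileInfo it allocates and tests an "encrypted" bit; libhdfs++ never writes that data, so passing its struct to the libhdfs routine tests whatever happens to be in memory:

#include <stdio.h>
#include <string.h>

#define EXT_INFO_ENCRYPTED 0x1   /* stand-in for libhdfs's encrypted flag bit */

struct extendedInfoIllustration {
    int flags;                   /* only libhdfs ever fills this in */
};

static int isEncryptedIllustration(const struct extendedInfoIllustration *ext)
{
    /* Same shape as the real check: mask the flag and normalize to 0/1. */
    return !!(ext->flags & EXT_INFO_ENCRYPTED);
}

int main(void)
{
    struct extendedInfoIllustration fromLibhdfs = { 0 };  /* written by libhdfs */
    struct extendedInfoIllustration fromLibhdfspp;        /* never written */

    /* Deliberately fill with garbage to mimic uninitialized memory. */
    memset(&fromLibhdfspp, 0xFF, sizeof(fromLibhdfspp));

    printf("libhdfs struct   -> %d\n", isEncryptedIllustration(&fromLibhdfs));   /* 0 */
    printf("libhdfs++ struct -> %d\n", isEncryptedIllustration(&fromLibhdfspp)); /* 1 */
    return 0;
}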

I put in a quick fix to get past Test Case #35, but then found that it clashed with #40.

Just to get all 40 of the Native Test Cases to pass, so I could exercise the remaining test cases, I modified the following files, adding new functions appropriate for the FS type and the Path structures returned:  hdfs.h, libhdfs_wrapper_defines.h, libhdfs_wrapper_undefs.h, libhdfspp_wrapper.h, hdfs_shim.c, hdfs.c, test_libhdfs_threaded.c.

Your fix, I would think, would be to add the extendedHdfsFileInfo to the C++ code or change test_libhdfs_threaded.c to call hdfsFileIsEncrypted using a C Path structure.
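
As a rough sketch of the second option at the shim level (illustration only, not the change I actually made): the helper name hdfsPathIsEncrypted is hypothetical, and I'm assuming the fs->libhdfsRep member and the libhdfs_-prefixed wrappers that hdfs_shim.c already uses elsewhere.  The idea is to look the path up again through libhdfs so the struct really is a libhdfs_hdfsFileInfo before asking libhdfs about encryption:

/* Hypothetical shim helper: fetch a C-side hdfsFileInfo for the same path
 * and let libhdfs answer the encryption question against its own struct. */
int hdfsPathIsEncrypted(hdfsFS fs, const char *path) {
  libhdfs_hdfsFileInfo *info = libhdfs_hdfsGetPathInfo(fs->libhdfsRep, path);
  int encrypted;
  if (!info) {
    return -1;  /* lookup failed */
  }
  encrypted = libhdfs_hdfsFileIsEncrypted(info);
  libhdfs_hdfsFreeFileInfo(info, 1);
  return encrypted;
}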

I've communicated with a Hadoop Native developer who said he didn't have any problem in Ubuntu.  My guess would be that the memory happens to be zero-initialized on that system.  I'm not sure, as I haven't used Ubuntu.

Notes:

I followed the CentOS 8 procedures in the BUILDING.txt.

HDFS-9359 has a note that might be relevant to this problem.  "hdfs_shim.c is created to support testing part of libhdfs++ that have been implemented. Functions not implemented in libhdfs++ are delegated to libhdfs." 

hdfsFileIsEncrypted does care which FS is being used, since the Path structures returned are different sizes.



