Posted to yarn-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2020/02/01 01:47:13 UTC
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1398/
[Feb 1, 2020 12:13:02 AM] (weichiu) HDFS-7175. Client-side SocketTimeoutException during Fsck. Contributed
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/
[Feb 28, 2020 1:03:06 PM] (surendralilhore) HDFS-15199. NPE in BlockSender. Contributed by Ayush Saxena.
[Feb 29, 2020 1:02:41 AM] (github) HADOOP-16891. Upgrade jackson-databind to 2.9.10.3 (#1865)
[Feb 29, 2020 1:30:26 AM] (tasanuma) HDFS-15190. HttpFS: Add Support for Storage Policy Satisfier.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]). At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]). At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream. Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
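The FindBugs warnings above are all instances of well-known fixable patterns. The sketch below is illustrative only (the class and field names are hypothetical, not the actual hadoop-cos code): name the charset explicitly instead of relying on the platform default, return a defensive copy instead of the internal buffer, and close streams with try-with-resources.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class FindBugsFixes {
    // "Reliance on default encoding": new String(byte[]) uses the JVM's
    // platform charset, so results vary by locale; name the charset instead.
    static String digestToString(byte[] digest) {
        return new String(digest, StandardCharsets.UTF_8);
    }

    // "May expose internal representation": a getter that returns the
    // internal array lets callers mutate it; hand out a copy instead.
    private final byte[] buffer = new byte[]{1, 2, 3};
    byte[] getBuffer() {
        return Arrays.copyOf(buffer, buffer.length);
    }

    // "May fail to clean up java.io.InputStream": try-with-resources
    // guarantees close() runs on every path, including exceptions.
    static int countBytes(byte[] data) throws IOException {
        try (InputStream in = new ByteArrayInputStream(data)) {
            int n = 0;
            while (in.read() != -1) {
                n++;
            }
            return n;
        }
    }
}
```

The defensive-copy fix also resolves the redundant-nullcheck class of warnings indirectly: once ownership of buffers and streams is explicit, the spurious null checks FindBugs flags tend to disappear with them.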
Failed junit tests :
hadoop.metrics2.source.TestJvmMetrics
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
hadoop.hdfs.TestAclsEndToEnd
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.TestDecommissionWithStriped
hadoop.hdfs.TestDatanodeLayoutUpgrade
hadoop.hdfs.TestFileChecksumCompositeCrc
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.TestCrcCorruption
hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
hadoop.hdfs.tools.TestECAdmin
hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
hadoop.hdfs.TestReconstructStripedFile
hadoop.fs.contract.hdfs.TestHDFSContractCreate
hadoop.hdfs.server.namenode.TestAddStripedBlocks
hadoop.hdfs.TestDFSInputStreamBlockLocations
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.server.namenode.TestFSImage
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.server.namenode.TestFileContextXAttr
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail
hadoop.hdfs.TestReadStripedFileWithDNFailure
hadoop.hdfs.TestReplication
hadoop.hdfs.tools.TestDFSAdmin
hadoop.hdfs.TestDecommission
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.TestStripedFileAppend
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier
hadoop.hdfs.TestDFSRename
hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy
hadoop.hdfs.TestPread
hadoop.hdfs.TestErasureCodingMultipleRacks
hadoop.hdfs.server.namenode.ha.TestHAAppend
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.TestFileCreation
hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.server.datanode.TestBlockRecovery
hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData
hadoop.hdfs.server.namenode.TestEditLog
hadoop.fs.contract.hdfs.TestHDFSContractPathHandle
hadoop.hdfs.server.balancer.TestBalancerRPCDelay
hadoop.hdfs.TestEncryptionZonesWithKMS
hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.server.namenode.TestNamenodeRetryCache
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
hadoop.hdfs.TestDistributedFileSystemWithECFile
hadoop.hdfs.TestErasureCodeBenchmarkThroughput
hadoop.hdfs.TestReservedRawPaths
hadoop.hdfs.server.mover.TestMover
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.TestDatanodeDeath
hadoop.hdfs.tools.TestDFSAdminWithHA
hadoop.hdfs.TestErasureCodingPolicies
hadoop.hdfs.server.datanode.TestDataNodeReconfiguration
hadoop.hdfs.server.namenode.TestCacheDirectives
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestFileConcurrentReader
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.hdfs.server.namenode.ha.TestHASafeMode
hadoop.hdfs.server.namenode.TestReconstructStripedBlocks
hadoop.hdfs.TestEncryptedTransfer
hadoop.hdfs.TestWriteRead
hadoop.hdfs.TestDatanodeRegistration
hadoop.hdfs.TestWriteReadStripedFile
hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
hadoop.hdfs.server.federation.router.TestRouterQuota
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination
hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat
hadoop.fs.contract.router.web.TestRouterWebHDFSContractRename
hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerQueueACLs
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [180K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.7M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [204K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [100K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/
[Feb 28, 2020 1:03:06 PM] (surendralilhore) HDFS-15199. NPE in BlockSender. Contributed by Ayush Saxena.
[Feb 29, 2020 1:02:41 AM] (github) HADOOP-16891. Upgrade jackson-databind to 2.9.10.3 (#1865)
[Feb 29, 2020 1:30:26 AM] (tasanuma) HDFS-15190. HttpFS: Add Support for Storage Policy Satisfier.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.metrics2.source.TestJvmMetrics
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
hadoop.hdfs.TestAclsEndToEnd
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.TestDecommissionWithStriped
hadoop.hdfs.TestDatanodeLayoutUpgrade
hadoop.hdfs.TestFileChecksumCompositeCrc
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.TestCrcCorruption
hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
hadoop.hdfs.tools.TestECAdmin
hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
hadoop.hdfs.TestReconstructStripedFile
hadoop.fs.contract.hdfs.TestHDFSContractCreate
hadoop.hdfs.server.namenode.TestAddStripedBlocks
hadoop.hdfs.TestDFSInputStreamBlockLocations
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.server.namenode.TestFSImage
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.server.namenode.TestFileContextXAttr
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail
hadoop.hdfs.TestReadStripedFileWithDNFailure
hadoop.hdfs.TestReplication
hadoop.hdfs.tools.TestDFSAdmin
hadoop.hdfs.TestDecommission
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.TestStripedFileAppend
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier
hadoop.hdfs.TestDFSRename
hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy
hadoop.hdfs.TestPread
hadoop.hdfs.TestErasureCodingMultipleRacks
hadoop.hdfs.server.namenode.ha.TestHAAppend
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.TestFileCreation
hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.server.datanode.TestBlockRecovery
hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData
hadoop.hdfs.server.namenode.TestEditLog
hadoop.fs.contract.hdfs.TestHDFSContractPathHandle
hadoop.hdfs.server.balancer.TestBalancerRPCDelay
hadoop.hdfs.TestEncryptionZonesWithKMS
hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.server.namenode.TestNamenodeRetryCache
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
hadoop.hdfs.TestDistributedFileSystemWithECFile
hadoop.hdfs.TestErasureCodeBenchmarkThroughput
hadoop.hdfs.TestReservedRawPaths
hadoop.hdfs.server.mover.TestMover
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.TestDatanodeDeath
hadoop.hdfs.tools.TestDFSAdminWithHA
hadoop.hdfs.TestErasureCodingPolicies
hadoop.hdfs.server.datanode.TestDataNodeReconfiguration
hadoop.hdfs.server.namenode.TestCacheDirectives
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestFileConcurrentReader
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.hdfs.server.namenode.ha.TestHASafeMode
hadoop.hdfs.server.namenode.TestReconstructStripedBlocks
hadoop.hdfs.TestEncryptedTransfer
hadoop.hdfs.TestWriteRead
hadoop.hdfs.TestDatanodeRegistration
hadoop.hdfs.TestWriteReadStripedFile
hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
hadoop.hdfs.server.federation.router.TestRouterQuota
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination
hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat
hadoop.fs.contract.router.web.TestRouterWebHDFSContractRename
hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerQueueACLs
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [180K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.7M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [204K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [100K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/
[Feb 28, 2020 1:03:06 PM] (surendralilhore) HDFS-15199. NPE in BlockSender. Contributed by Ayush Saxena.
[Feb 29, 2020 1:02:41 AM] (github) HADOOP-16891. Upgrade jackson-databind to 2.9.10.3 (#1865)
[Feb 29, 2020 1:30:26 AM] (tasanuma) HDFS-15190. HttpFS: Add Support for Storage Policy Satisfier.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.metrics2.source.TestJvmMetrics
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
hadoop.hdfs.TestAclsEndToEnd
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.TestDecommissionWithStriped
hadoop.hdfs.TestDatanodeLayoutUpgrade
hadoop.hdfs.TestFileChecksumCompositeCrc
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.TestCrcCorruption
hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
hadoop.hdfs.tools.TestECAdmin
hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
hadoop.hdfs.TestReconstructStripedFile
hadoop.fs.contract.hdfs.TestHDFSContractCreate
hadoop.hdfs.server.namenode.TestAddStripedBlocks
hadoop.hdfs.TestDFSInputStreamBlockLocations
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.server.namenode.TestFSImage
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.server.namenode.TestFileContextXAttr
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail
hadoop.hdfs.TestReadStripedFileWithDNFailure
hadoop.hdfs.TestReplication
hadoop.hdfs.tools.TestDFSAdmin
hadoop.hdfs.TestDecommission
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.TestStripedFileAppend
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier
hadoop.hdfs.TestDFSRename
hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy
hadoop.hdfs.TestPread
hadoop.hdfs.TestErasureCodingMultipleRacks
hadoop.hdfs.server.namenode.ha.TestHAAppend
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.TestFileCreation
hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.server.datanode.TestBlockRecovery
hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData
hadoop.hdfs.server.namenode.TestEditLog
hadoop.fs.contract.hdfs.TestHDFSContractPathHandle
hadoop.hdfs.server.balancer.TestBalancerRPCDelay
hadoop.hdfs.TestEncryptionZonesWithKMS
hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.server.namenode.TestNamenodeRetryCache
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
hadoop.hdfs.TestDistributedFileSystemWithECFile
hadoop.hdfs.TestErasureCodeBenchmarkThroughput
hadoop.hdfs.TestReservedRawPaths
hadoop.hdfs.server.mover.TestMover
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.TestDatanodeDeath
hadoop.hdfs.tools.TestDFSAdminWithHA
hadoop.hdfs.TestErasureCodingPolicies
hadoop.hdfs.server.datanode.TestDataNodeReconfiguration
hadoop.hdfs.server.namenode.TestCacheDirectives
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestFileConcurrentReader
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.hdfs.server.namenode.ha.TestHASafeMode
hadoop.hdfs.server.namenode.TestReconstructStripedBlocks
hadoop.hdfs.TestEncryptedTransfer
hadoop.hdfs.TestWriteRead
hadoop.hdfs.TestDatanodeRegistration
hadoop.hdfs.TestWriteReadStripedFile
hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
hadoop.hdfs.server.federation.router.TestRouterQuota
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination
hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat
hadoop.fs.contract.router.web.TestRouterWebHDFSContractRename
hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerQueueACLs
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [180K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.7M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [204K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [100K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/
[Feb 28, 2020 1:03:06 PM] (surendralilhore) HDFS-15199. NPE in BlockSender. Contributed by Ayush Saxena.
[Feb 29, 2020 1:02:41 AM] (github) HADOOP-16891. Upgrade jackson-databind to 2.9.10.3 (#1865)
[Feb 29, 2020 1:30:26 AM] (tasanuma) HDFS-15190. HttpFS: Add Support for Storage Policy Satisfier.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]). At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]). At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream. Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged.
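The two recurring patterns FindBugs flags above — decoding bytes with `new String(byte[])` (which silently uses the platform default charset) and opening an `InputStream` without a guaranteed close on exception paths — have standard fixes. The sketch below is illustrative only (the class and method names are hypothetical, not the actual hadoop-cos code): name the charset explicitly via `StandardCharsets`, and wrap the stream in try-with-resources so the cleanup obligation is always discharged.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class CosnFixSketch {

    // Fix for "reliance on default encoding": always pass an explicit
    // charset instead of calling new String(byte[]).
    static String bytesToString(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // Fix for "may fail to clean up java.io.InputStream": try-with-resources
    // closes the stream on every path, including exceptions.
    static long countBytes(InputStream in) throws IOException {
        try (InputStream stream = in) {
            long n = 0;
            while (stream.read() != -1) {
                n++;
            }
            return n;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(bytesToString("abc".getBytes(StandardCharsets.UTF_8)));
        System.out.println(countBytes(new ByteArrayInputStream(new byte[5])));
    }
}
```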
Failed junit tests :
hadoop.metrics2.source.TestJvmMetrics
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
hadoop.hdfs.TestAclsEndToEnd
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.TestDecommissionWithStriped
hadoop.hdfs.TestDatanodeLayoutUpgrade
hadoop.hdfs.TestFileChecksumCompositeCrc
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.TestCrcCorruption
hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
hadoop.hdfs.tools.TestECAdmin
hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
hadoop.hdfs.TestReconstructStripedFile
hadoop.fs.contract.hdfs.TestHDFSContractCreate
hadoop.hdfs.server.namenode.TestAddStripedBlocks
hadoop.hdfs.TestDFSInputStreamBlockLocations
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.server.namenode.TestFSImage
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.server.namenode.TestFileContextXAttr
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail
hadoop.hdfs.TestReadStripedFileWithDNFailure
hadoop.hdfs.TestReplication
hadoop.hdfs.tools.TestDFSAdmin
hadoop.hdfs.TestDecommission
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.TestStripedFileAppend
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier
hadoop.hdfs.TestDFSRename
hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy
hadoop.hdfs.TestPread
hadoop.hdfs.TestErasureCodingMultipleRacks
hadoop.hdfs.server.namenode.ha.TestHAAppend
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.TestFileCreation
hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.server.datanode.TestBlockRecovery
hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData
hadoop.hdfs.server.namenode.TestEditLog
hadoop.fs.contract.hdfs.TestHDFSContractPathHandle
hadoop.hdfs.server.balancer.TestBalancerRPCDelay
hadoop.hdfs.TestEncryptionZonesWithKMS
hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.server.namenode.TestNamenodeRetryCache
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
hadoop.hdfs.TestDistributedFileSystemWithECFile
hadoop.hdfs.TestErasureCodeBenchmarkThroughput
hadoop.hdfs.TestReservedRawPaths
hadoop.hdfs.server.mover.TestMover
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.TestDatanodeDeath
hadoop.hdfs.tools.TestDFSAdminWithHA
hadoop.hdfs.TestErasureCodingPolicies
hadoop.hdfs.server.datanode.TestDataNodeReconfiguration
hadoop.hdfs.server.namenode.TestCacheDirectives
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestFileConcurrentReader
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.hdfs.server.namenode.ha.TestHASafeMode
hadoop.hdfs.server.namenode.TestReconstructStripedBlocks
hadoop.hdfs.TestEncryptedTransfer
hadoop.hdfs.TestWriteRead
hadoop.hdfs.TestDatanodeRegistration
hadoop.hdfs.TestWriteReadStripedFile
hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
hadoop.hdfs.server.federation.router.TestRouterQuota
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination
hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat
hadoop.fs.contract.router.web.TestRouterWebHDFSContractRename
hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerQueueACLs
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [180K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.7M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [204K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [100K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1424/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/
[Feb 27, 2020 8:38:42 AM] (surendralilhore) HDFS-15167. Block Report Interval shouldn't be reset apart from first
[Feb 27, 2020 3:48:14 PM] (github) HDFS-14668 Support Fuse with Users from multiple Security Realms (#1739)
[Feb 27, 2020 4:49:35 PM] (ayushsaxena) HDFS-15124. Crashing bugs in NameNode when using a valid configuration
[Feb 27, 2020 6:27:22 PM] (tmarq) HADOOP-16730: ABFS: Support for Shared Access Signatures (SAS).
[Feb 27, 2020 7:01:55 PM] (ayushsaxena) HDFS-15186. Erasure Coding: Decommission may generate the parity block's
[Feb 27, 2020 7:10:32 PM] (snemeth) YARN-10148. Add Unit test for queue ACL for both FS and CS. Contributed
[Feb 27, 2020 8:53:20 PM] (inigoiri) YARN-10155. TestDelegationTokenRenewer.testTokenThreadTimeout fails in
[Feb 27, 2020 9:18:30 PM] (inigoiri) YARN-10161. TestRouterWebServicesREST is corrupting STDOUT. Contributed
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime longer than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]). At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]). At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream. Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged.
Failed junit tests :
hadoop.metrics2.source.TestJvmMetrics
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
hadoop.hdfs.TestAclsEndToEnd
hadoop.hdfs.server.mover.TestStorageMover
hadoop.hdfs.TestDecommissionWithStriped
hadoop.hdfs.TestFileChecksumCompositeCrc
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.server.namenode.TestFSEditLogLoader
hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
hadoop.hdfs.tools.TestECAdmin
hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
hadoop.hdfs.tools.TestViewFSStoragePolicyCommands
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.TestMultiThreadedHflush
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail
hadoop.hdfs.TestReadStripedFileWithDNFailure
hadoop.hdfs.TestClientProtocolForPipelineRecovery
hadoop.hdfs.TestReplication
hadoop.hdfs.TestDFSShell
hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
hadoop.hdfs.server.namenode.TestAddBlock
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier
hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
hadoop.hdfs.TestErasureCodingMultipleRacks
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.server.datanode.TestBPOfferService
hadoop.hdfs.TestFileCreation
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.TestTrashWithEncryptionZones
hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
hadoop.cli.TestHDFSCLI
hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestDistributedFileSystemWithECFile
hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy
hadoop.hdfs.server.mover.TestMover
hadoop.hdfs.TestHdfsAdmin
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.TestErasureCodingPolicies
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.server.namenode.ha.TestHASafeMode
hadoop.hdfs.TestEncryptedTransfer
hadoop.hdfs.TestRollingUpgrade
hadoop.hdfs.TestWriteRead
hadoop.hdfs.TestWriteReadStripedFile
hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
hadoop.hdfs.server.federation.router.TestRouterQuota
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination
hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat
hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerQueueACLs
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.sls.appmaster.TestAMSimulator
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [188K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.6M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [152K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [100K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.sls.appmaster.TestAMSimulator
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [188K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.6M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [152K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [100K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1423/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/
[Feb 26, 2020 8:29:26 AM] (github) YARN-10156. Fix typo 'complaint' which means quite different in
[Feb 26, 2020 9:34:29 AM] (pjoseph) YARN-9593. Support Comma in the value of Scheduler Configuration
[Feb 26, 2020 3:33:29 PM] (kihwal) HDFS-15147. LazyPersistTestCase wait logic is flawed. Contributed by
[Feb 26, 2020 5:54:00 PM] (brahma) YARN-10141.Interceptor in FederationInterceptorREST doesnt update on RM
[Feb 26, 2020 8:52:24 PM] (ayushsaxena) HDFS-15120. Refresh BlockPlacementPolicy at runtime. Contributed by
[Feb 26, 2020 10:32:26 PM] (ayushsaxena) HDFS-15111. stopStandbyServices() should log which service state it is
[Feb 27, 2020 1:36:49 AM] (github) YARN-10152. Fix findbugs warnings in hadoop-yarn-applications-mawo-core
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.server.namenode.TestDecommissioningStatus
hadoop.hdfs.TestDeadNodeDetection
hadoop.hdfs.server.datanode.TestDataNodeLifeline
hadoop.hdfs.TestRollingUpgrade
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.sls.TestSLSRunner
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [372K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/
[Feb 26, 2020 8:29:26 AM] (github) YARN-10156. Fix typo 'complaint' which means quite different in
[Feb 26, 2020 9:34:29 AM] (pjoseph) YARN-9593. Support Comma in the value of Scheduler Configuration
[Feb 26, 2020 3:33:29 PM] (kihwal) HDFS-15147. LazyPersistTestCase wait logic is flawed. Contributed by
[Feb 26, 2020 5:54:00 PM] (brahma) YARN-10141.Interceptor in FederationInterceptorREST doesnt update on RM
[Feb 26, 2020 8:52:24 PM] (ayushsaxena) HDFS-15120. Refresh BlockPlacementPolicy at runtime. Contributed by
[Feb 26, 2020 10:32:26 PM] (ayushsaxena) HDFS-15111. stopStandbyServices() should log which service state it is
[Feb 27, 2020 1:36:49 AM] (github) YARN-10152. Fix findbugs warnings in hadoop-yarn-applications-mawo-core
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
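The "reliance on default encoding" and "may fail to clean up java.io.InputStream" warnings above describe two standard FindBugs patterns. A minimal sketch of the usual fixes (illustrative code only, not the actual CosNativeFileSystemStore; the class and method names here are hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class EncodingAndCleanupSketch {

    // Passing an explicit charset avoids the "reliance on default encoding"
    // warning: new String(byte[]) uses the platform default, which varies
    // from host to host.
    public static String decode(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    // try-with-resources closes the stream on every exit path, which
    // discharges the "obligation to clean up resource" that FindBugs tracks.
    public static int countBytes(InputStream in) throws IOException {
        try (InputStream s = in) {
            int n = 0;
            while (s.read() != -1) {
                n++;
            }
            return n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] raw = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(decode(raw));                               // hello
        System.out.println(countBytes(new ByteArrayInputStream(raw))); // 5
    }
}
```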
Failed junit tests :
hadoop.hdfs.server.namenode.TestDecommissioningStatus
hadoop.hdfs.TestDeadNodeDetection
hadoop.hdfs.server.datanode.TestDataNodeLifeline
hadoop.hdfs.TestRollingUpgrade
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.sls.TestSLSRunner
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [372K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1422/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/
[Feb 24, 2020 2:47:01 PM] (ayushsaxena) HDFS-15166. Remove redundant field fStream in ByteStringLog. Contributed
[Feb 24, 2020 3:08:04 PM] (ayushsaxena) HDFS-15187. CORRUPT replica mismatch between namenodes after failover.
[Feb 24, 2020 4:28:00 PM] (github) HADOOP-16859: ABFS: Add unbuffer support to ABFS connector.
[Feb 24, 2020 6:45:34 PM] (github) HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3
[Feb 24, 2020 8:45:49 PM] (snemeth) YARN-10157. FS-CS converter: initPropertyActions() is not called without
[Feb 24, 2020 8:54:07 PM] (snemeth) YARN-10135. FS-CS converter tool: issue warning on dynamic auto-create
[Feb 24, 2020 9:39:16 PM] (weichiu) HDFS-15174. Optimize ReplicaCachingGetSpaceUsed by reducing unnecessary
[Feb 25, 2020 2:08:13 AM] (tasanuma) HADOOP-16841. The description of
[Feb 25, 2020 4:47:52 AM] (github) YARN-10074. Update netty to 4.1.42Final in yarn-csi. Contributed by
[Feb 25, 2020 8:30:04 PM] (snemeth) YARN-10130. FS-CS converter: Do not allow output dir to be the same as
[Feb 25, 2020 8:48:16 PM] (snemeth) YARN-8767. TestStreamingStatus fails. Contributed by Andras Bokor
[Feb 25, 2020 9:28:50 PM] (weichiu) HDFS-14861. Reset LowRedundancyBlocks Iterator periodically. Contributed
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen shadedclient unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
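The two WorkerId warnings above point at the same defect: an equals() that neither checks for null nor verifies the argument's runtime type before casting. A minimal sketch of the pattern FindBugs expects (WorkerIdSketch is an illustrative stand-in, not the actual mawo WorkerId class):

```java
import java.util.Objects;

public class WorkerIdSketch {
    private final String id;

    public WorkerIdSketch(String id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        // instanceof is false for null, so this one guard covers both
        // warnings: a null argument and an argument of the wrong type.
        if (!(o instanceof WorkerIdSketch)) {
            return false;
        }
        return Objects.equals(id, ((WorkerIdSketch) o).id);
    }

    // equals() and hashCode() must be overridden together so equal
    // objects hash identically.
    @Override
    public int hashCode() {
        return Objects.hashCode(id);
    }
}
```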
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.server.namenode.TestFSEditLogLoader
hadoop.hdfs.tools.TestECAdmin
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.server.namenode.TestFileContextXAttr
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.tools.TestDFSAdmin
hadoop.hdfs.server.namenode.TestFsck
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
hadoop.hdfs.tools.TestDFSAdminWithHA
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-annotations.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-maven-plugins.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-minikdc.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-auth.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-auth-examples.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [524K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/
[Feb 24, 2020 2:47:01 PM] (ayushsaxena) HDFS-15166. Remove redundant field fStream in ByteStringLog. Contributed
[Feb 24, 2020 3:08:04 PM] (ayushsaxena) HDFS-15187. CORRUPT replica mismatch between namenodes after failover.
[Feb 24, 2020 4:28:00 PM] (github) HADOOP-16859: ABFS: Add unbuffer support to ABFS connector.
[Feb 24, 2020 6:45:34 PM] (github) HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3
[Feb 24, 2020 8:45:49 PM] (snemeth) YARN-10157. FS-CS converter: initPropertyActions() is not called without
[Feb 24, 2020 8:54:07 PM] (snemeth) YARN-10135. FS-CS converter tool: issue warning on dynamic auto-create
[Feb 24, 2020 9:39:16 PM] (weichiu) HDFS-15174. Optimize ReplicaCachingGetSpaceUsed by reducing unnecessary
[Feb 25, 2020 2:08:13 AM] (tasanuma) HADOOP-16841. The description of
[Feb 25, 2020 4:47:52 AM] (github) YARN-10074. Update netty to 4.1.42Final in yarn-csi. Contributed by
[Feb 25, 2020 8:30:04 PM] (snemeth) YARN-10130. FS-CS converter: Do not allow output dir to be the same as
[Feb 25, 2020 8:48:16 PM] (snemeth) YARN-8767. TestStreamingStatus fails. Contributed by Andras Bokor
[Feb 25, 2020 9:28:50 PM] (weichiu) HDFS-14861. Reset LowRedundancyBlocks Iterator periodically. Contributed
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen shadedclient unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.server.namenode.TestFSEditLogLoader
hadoop.hdfs.tools.TestECAdmin
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.server.namenode.TestFileContextXAttr
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.tools.TestDFSAdmin
hadoop.hdfs.server.namenode.TestFsck
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
hadoop.hdfs.tools.TestDFSAdminWithHA
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-compile-javac-root.txt [424K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-annotations.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-maven-plugins.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-minikdc.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-auth.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-auth-examples.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [524K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/
[Feb 23, 2020 8:55:39 AM] (ayushsaxena) HDFS-15041. Make MAX_LOCK_HOLD_MS and full queue size configurable.
[Feb 23, 2020 6:37:18 PM] (ayushsaxena) HDFS-15176. Enable GcTimePercentage Metric in NameNode's JvmMetrics.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.TestErasureCodingPolicyWithSnapshot
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.TestReadStripedFileWithDNFailure
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.TestGetBlocks
hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestErasureCodeBenchmarkThroughput
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestSetrepDecreasing
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.TestLeaseRecovery
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.applications.distributedshell.TestDSWithMultipleNodeManager
hadoop.mapreduce.v2.app.webapp.TestAMWebApp
hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobs
hadoop.mapreduce.v2.app.rm.TestRMCommunicator
hadoop.mapred.TestReduceFetchFromPartialMem
hadoop.yarn.sls.TestReservationSystemInvariants
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [452K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [92K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [52K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [120K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [92K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [92K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [52K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [120K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [92K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1420/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/
[Feb 22, 2020 1:53:37 AM] (shv) HDFS-14731. [FGL] Remove redundant locking on NameNode. Contributed by
[Feb 22, 2020 8:57:26 AM] (ayushsaxena) HDFS-15182. TestBlockManager#testOneOfTwoRacksDecommissioned() fail in
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
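Two of the recurring findings above have standard fixes: the default-encoding warning means new String(byte[]) picks up the platform charset, so an explicit charset should be passed; the WorkerId warnings mean equals() must tolerate null and non-WorkerId arguments. The sketch below illustrates both under assumed, simplified class shapes — it is not the hadoop-cos or MaWo source.

```java
import java.nio.charset.StandardCharsets;
import java.util.Objects;

// Illustrative sketch of the two FindBugs patterns and their usual fixes.
public class FindbugsFixSketch {
    // DM_DEFAULT_ENCODING fix: name the charset instead of relying on the
    // platform default, so behavior is identical on every JVM.
    static String decode(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    // Null-safe equals: instanceof rejects both null and wrong-typed
    // arguments before any cast, addressing both WorkerId findings.
    static final class WorkerId {
        private final String id;
        WorkerId(String id) { this.id = id; }

        @Override
        public boolean equals(Object o) {
            if (this == o) {
                return true;
            }
            if (!(o instanceof WorkerId)) {  // false for null as well
                return false;
            }
            return Objects.equals(id, ((WorkerId) o).id);
        }

        @Override
        public int hashCode() {
            return Objects.hashCode(id);
        }
    }
}
```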
Failed junit tests :
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [392K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/
[Feb 22, 2020 1:53:37 AM] (shv) HDFS-14731. [FGL] Remove redundant locking on NameNode. Contributed by
[Feb 22, 2020 8:57:26 AM] (ayushsaxena) HDFS-15182. TestBlockManager#testOneOfTwoRacksDecommissioned() fail in
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [392K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/
[Feb 22, 2020 1:53:37 AM] (shv) HDFS-14731. [FGL] Remove redundant locking on NameNode. Contributed by
[Feb 22, 2020 8:57:26 AM] (ayushsaxena) HDFS-15182. TestBlockManager#testOneOfTwoRacksDecommissioned() fail in
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [392K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1419/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/
[Feb 21, 2020 2:56:07 AM] (github) HDFS-15052. WebHDFS getTrashRoot leads to OOM due to FileSystem object
[Feb 21, 2020 3:22:16 AM] (github) HADOOP-16869. Upgrade findbugs-maven-plugin to 3.0.5 to fix mvn
[Feb 21, 2020 11:13:38 AM] (github) HADOOP-16706. ITestClientUrlScheme fails for accounts which don't
[Feb 21, 2020 1:44:46 PM] (stevel) HADOOP-16711.
[Feb 21, 2020 6:51:14 PM] (shv) HDFS-15185. StartupProgress reports edits segments until the entire
[Feb 22, 2020 12:36:30 AM] (inigoiri) HDFS-15172. Remove unnecessary deadNodeDetectInterval in
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method. At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument. At WorkerId.java:[lines 114-115]
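The two WorkerId warnings above point at the same equals(Object) method: it neither verifies the argument's runtime type nor handles null. A minimal sketch of the usual fix follows; the single "id" field and constructor are illustrative assumptions, not the actual mawo WorkerId source.

```java
import java.util.Objects;

// Sketch of a WorkerId with a null-safe, type-checked equals.
// The "id" field is hypothetical; the real mawo class has its own fields.
public class WorkerId {
    private final String id;

    public WorkerId(String id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;                  // same instance
        }
        if (!(other instanceof WorkerId)) {
            return false;                 // covers both null and wrong type
        }
        return Objects.equals(id, ((WorkerId) other).id);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(id);      // keep hashCode consistent with equals
    }
}
```

The instanceof check is the conventional way to satisfy both FindBugs patterns at once, since `null instanceof WorkerId` is always false.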
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]). At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]). At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream. Obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged.
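The last three hadoop-cos warnings above follow two well-known patterns: `new String(byte[])` should name its charset explicitly instead of relying on the platform default, and a stream created inside a method should be closed on all paths, which try-with-resources guarantees. A hedged sketch, assuming hypothetical method names (this is not the actual hadoop-cos code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class EncodingAndCleanupSketch {

    // Fix for "reliance on default encoding": pass the charset explicitly
    // so the result does not depend on the JVM's platform default.
    static String bytesToString(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    // Fix for "may fail to clean up java.io.InputStream": try-with-resources
    // closes the stream even if the work inside the block throws.
    static void uploadPart(byte[] data) throws IOException {
        try (InputStream in = new ByteArrayInputStream(data)) {
            // ... hand "in" to the upload client here (placeholder read) ...
            in.read();
        } // the stream is closed here on every path, normal or exceptional
    }
}
```

Without try-with-resources, an exception thrown between stream creation and close() leaks the stream, which is exactly the "obligation is not discharged" FindBugs complaint.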
Failed junit tests :
hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.mapreduce.v2.TestMRJobsWithProfiler
hadoop.mapreduce.lib.join.TestJoinDatamerge
hadoop.yarn.sls.appmaster.TestAMSimulator
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [296K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [100K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1418/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/
[Feb 20, 2020 2:27:15 PM] (snemeth) YARN-10143. YARN-10101 broke Yarn logs CLI. Contributed by Adam Antal
[Feb 20, 2020 3:04:06 PM] (pjoseph) YARN-10119. Option to reset AM failure count for YARN Service
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
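The two WorkerId warnings above describe an equals(Object) that neither checks for null nor verifies the argument's type. A minimal sketch of the usual fix follows; the class and field names here are hypothetical stand-ins, not the actual WorkerId implementation:

```java
import java.util.Objects;

// Illustrative sketch of a null-safe, type-checked equals; the real
// WorkerId class has different fields.
public class WorkerIdSketch {
    private final String hostname;

    public WorkerIdSketch(String hostname) {
        this.hostname = hostname;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;              // reflexive fast path
        }
        if (!(obj instanceof WorkerIdSketch)) {
            return false;             // instanceof also rejects null
        }
        WorkerIdSketch other = (WorkerIdSketch) obj;
        return Objects.equals(hostname, other.hostname);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(hostname);  // keep consistent with equals
    }
}
```

The single instanceof test discharges both warnings at once: it returns false for a null argument and for an argument of an unrelated type, so the cast below it can never throw.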
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream. Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
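The hadoop-cos warnings above name three recurring patterns: calling new String(byte[]) with the platform-default charset, returning an InputStream that may never be closed, and handing out a reference to an internal buffer. A minimal sketch of each fix, with illustrative method names rather than the actual CosNativeFileSystemStore code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Illustrative fixes for the three FindBugs patterns; names are
// stand-ins, not the real hadoop-cos methods.
public class CosWarningFixes {

    // "Reliance on default encoding": name the charset explicitly
    // instead of letting new String(byte[]) pick the platform default.
    static String bytesToString(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // "May fail to clean up java.io.InputStream": try-with-resources
    // guarantees close() runs even if reading throws mid-stream.
    static long consume(InputStream source) throws IOException {
        long total = 0;
        try (InputStream in = source) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
        }
        return total;
    }

    // "May expose internal representation": return a defensive copy
    // so callers cannot mutate the internal buffer.
    private final byte[] buffer = {1, 2, 3};

    byte[] getBuffer() {
        return Arrays.copyOf(buffer, buffer.length);
    }
}
```

The defensive copy costs an allocation per call, which is why FindBugs flags it as "may" expose: whether the copy is worth it depends on whether the buffer outlives the call and whether callers are trusted.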
Failed junit tests :
hadoop.hdfs.server.namenode.TestFSEditLogLoader
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.server.namenode.ha.TestHAAppend
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestDFSStripedInputStream
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [444K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1417/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/
[Feb 19, 2020 2:50:59 AM] (ayushsaxena) HDFS-13739. Add option to disable rack local write preference.
[Feb 19, 2020 5:47:22 AM] (sunilg) YARN-10139. ValidateAndGetSchedulerConfiguration API fails when cluster
[Feb 19, 2020 5:53:08 AM] (weichiu) HADOOP-16868. ipc.Server readAndProcess threw NullPointerException.
[Feb 19, 2020 2:54:25 PM] (snemeth) YARN-10147. FPGA plugin can't find the localized aocx file. Contributed
[Feb 19, 2020 7:33:58 PM] (inigoiri) HDFS-15165. In Du missed calling getAttributesProvider. Contributed by
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
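The two WorkerId warnings above describe the same defect: an equals() that casts its argument without first checking for null or verifying the runtime type. A minimal sketch of the pattern FindBugs expects — using a stand-in WorkerId class, not the actual MaWo implementation:

```java
public class WorkerId {
    private final String id;

    public WorkerId(String id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        // instanceof is false for null, so this single check covers both
        // FindBugs complaints: the missing null check and the unguarded cast.
        if (!(obj instanceof WorkerId)) {
            return false;
        }
        WorkerId other = (WorkerId) obj;
        return id.equals(other.id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }
}
```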
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
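Two of the hadoop-cos warnings above are common FindBugs patterns with standard remedies: `new String(byte[])` uses the platform default charset (pass one explicitly), and an InputStream created inside a method must be closed on every exit path (try-with-resources). A generic illustration, not the actual CosN code:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class CosnWarningFixes {
    // "Reliance on default encoding": an explicit charset makes the
    // byte-to-string conversion deterministic across platforms.
    static String decode(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    // "Obligation to clean up resource ... is not discharged":
    // try-with-resources closes the stream even if read() throws.
    static int countBytes(byte[] raw) throws IOException {
        try (InputStream in = new ByteArrayInputStream(raw)) {
            int n = 0;
            while (in.read() != -1) {
                n++;
            }
            return n;
        }
    }
}
```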
Failed junit tests :
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.TestFileChecksumCompositeCrc
hadoop.hdfs.TestDFSStorageStateRecovery
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.TestPread
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.TestEncryptionZonesWithKMS
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestDistributedFileSystemWithECFile
hadoop.hdfs.TestErasureCodeBenchmarkThroughput
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.sls.TestSLSRunner
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [512K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [92K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [16K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1416/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/
[Feb 18, 2020 5:50:11 PM] (aagarwal) HADOOP-16833. InstrumentedLock should log lock queue time. Contributed
[Feb 19, 2020 12:50:37 AM] (github) YARN-8374. Upgrade objenesis to 2.6 (#1798)
-1 overall
The following subsystems voted -1:
asflicense compile findbugs mvninstall mvnsite pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.mapreduce.v2.app.rm.TestRMCommunicator
hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
hadoop.mapreduce.v2.app.TestStagingCleanup
hadoop.yarn.sls.TestReservationSystemInvariants
mvninstall:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-mvninstall-root.txt [1.2M]
compile:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-compile-root.txt [20K]
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-compile-root.txt [20K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-compile-root.txt [20K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out//testptch/patchprocess/maven-patch-checkstyle-root.txt []
mvnsite:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-mvnsite-root.txt [4.0K]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [92K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [108K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt [4.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-api.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-catalog_hadoop-yarn-applications-catalog-webapp.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-site.txt [4.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1415/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/
[Feb 17, 2020 6:55:10 AM] (github) HDFS-15173. RBF: Delete repeated configuration
[Feb 17, 2020 7:13:33 PM] (ayushsaxena) HADOOP-13666. Supporting rack exclusion in countNumOfAvailableNodes in
[Feb 17, 2020 10:06:34 PM] (stevel) HADOOP-15961. S3A committers: make sure there's regular progress()
[Feb 17, 2020 10:14:39 PM] (github) HADOOP-16759. FileSystem Javadocs to list what breaks on API changes
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
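The "reliance on default encoding" findings above are triggered by new String(byte[]) with no charset argument, whose result depends on the JVM's platform default. A minimal sketch of the portable form; the helper class and method names are illustrative, not the hadoop-cos source:

```java
import java.nio.charset.StandardCharsets;

final class Encodings {
    // new String(bytes) uses the JVM default charset, so the same bytes can
    // decode differently across platforms; an explicit charset is deterministic.
    static String decodeAscii(byte[] bytes) {
        return new String(bytes, StandardCharsets.US_ASCII);
    }
}
```

Passing StandardCharsets.US_ASCII (or UTF_8, as appropriate for the data) makes the conversion explicit and clears the FindBugs DM_DEFAULT_ENCODING class of warnings.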
Failed junit tests :
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [296K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [44K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/
[Feb 17, 2020 6:55:10 AM] (github) HDFS-15173. RBF: Delete repeated configuration
[Feb 17, 2020 7:13:33 PM] (ayushsaxena) HADOOP-13666. Supporting rack exclusion in countNumOfAvailableNodes in
[Feb 17, 2020 10:06:34 PM] (stevel) HADOOP-15961. S3A committers: make sure there's regular progress()
[Feb 17, 2020 10:14:39 PM] (github) HADOOP-16759. FileSystem Javadocs to list what breaks on API changes
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [296K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [44K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/
[Feb 16, 2020 6:45:03 AM] (surendralilhore) HDFS-15135. EC : ArrayIndexOutOfBoundsException in
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestMultipleNNPortQOP
hadoop.hdfs.TestDeadNodeDetection
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
hadoop.hdfs.TestRollingUpgrade
hadoop.hdfs.server.namenode.TestNameNodeMXBean
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.service.TestYarnNativeServices
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [372K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt [40K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/
[Feb 16, 2020 6:45:03 AM] (surendralilhore) HDFS-15135. EC : ArrayIndexOutOfBoundsException in
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
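[Editor's note] The two WorkerId warnings above describe the same defect: an equals() that casts its argument without first checking for null or a foreign type. A minimal sketch of the shape FindBugs expects — using a hypothetical, simplified WorkerId, not the actual MaWo source:

```java
import java.util.Objects;

// Hypothetical, simplified WorkerId; the real class in
// hadoop-yarn-applications-mawo has more fields. Only the
// null/type guard pattern is the point here.
class WorkerId {
    private final String hostname;

    WorkerId(String hostname) {
        this.hostname = hostname;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;
        }
        // instanceof is false for null, so this single check addresses
        // both warnings: the unchecked cast and the missing null check.
        if (!(other instanceof WorkerId)) {
            return false;
        }
        WorkerId that = (WorkerId) other;
        return Objects.equals(this.hostname, that.hostname);
    }

    @Override
    public int hashCode() {
        return Objects.hash(hostname);
    }
}
```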
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream At CosNativeFileSystemStore.java:[line 252]; obligation to clean up resource created is not discharged
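[Editor's note] The "reliance on default encoding" warnings flag new String(byte[]), which decodes with the JVM's platform default charset and so can yield different strings for the same bytes on different hosts. A minimal sketch of the fix — the helper names are illustrative, not the actual CosNativeFileSystemStore code:

```java
import java.nio.charset.StandardCharsets;

// Illustrative helpers, not hadoop-cos source: contrast the flagged
// pattern with the charset-explicit form FindBugs prefers.
class EncodingExample {
    // Flagged pattern: result depends on the platform default charset.
    static String decodeDefault(byte[] bytes) {
        return new String(bytes);
    }

    // Preferred: explicit charset, identical output on every JVM.
    static String decodeUtf8(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```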
Failed junit tests :
hadoop.hdfs.TestMultipleNNPortQOP
hadoop.hdfs.TestDeadNodeDetection
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
hadoop.hdfs.TestRollingUpgrade
hadoop.hdfs.server.namenode.TestNameNodeMXBean
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.service.TestYarnNativeServices
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [372K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt [40K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1413/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/
No changes
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream At CosNativeFileSystemStore.java:[line 252]; obligation to clean up resource created is not discharged
Failed junit tests :
hadoop.hdfs.TestMultipleNNPortQOP
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [752K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [20K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [44K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [96K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/
No changes
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestMultipleNNPortQOP
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [752K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [20K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [44K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [96K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/
No changes
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method. At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument. At WorkerId.java:[lines 114-115]
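The two WorkerId warnings above describe the same root cause: an equals method that neither checks for null nor verifies the argument's type before casting. A minimal sketch of the null-safe pattern FindBugs expects follows; the field layout here is an assumption for illustration, not the actual mawo WorkerId class.

```java
import java.util.Objects;

// Hypothetical stand-in for the mawo WorkerId; only the equals/hashCode
// pattern is the point, the real class has different fields.
public final class WorkerId {
    private final String hostname;

    public WorkerId(String hostname) {
        this.hostname = hostname;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;              // same reference, trivially equal
        }
        if (!(obj instanceof WorkerId)) {
            return false;             // instanceof is false for null too,
        }                             // so this covers the null check
        WorkerId other = (WorkerId) obj;
        return Objects.equals(hostname, other.hostname);
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals, as the contract requires.
        return Objects.hash(hostname);
    }
}
```

With this shape, `equals(null)` and `equals("some string")` both return false instead of throwing a ClassCastException or NullPointerException.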
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]). At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]). At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged.
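Two of the hadoop-cos warnings have standard fixes: pass an explicit charset to the String constructor instead of relying on the platform default, and wrap the upload stream in try-with-resources so it is always closed. The sketch below shows both patterns; the method names and logic are illustrative assumptions, not the actual CosNativeFileSystemStore code.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: shows the explicit-charset and
// try-with-resources patterns the FindBugs warnings ask for.
public class EncodingAndCleanupSketch {

    static String bytesToString(byte[] data) {
        // new String(byte[]) alone uses the platform default encoding,
        // which FindBugs flags as non-portable; name the charset instead.
        return new String(data, StandardCharsets.UTF_8);
    }

    static long consumePart(byte[] data) {
        long read = 0;
        // try-with-resources closes the stream even if an exception is
        // thrown mid-read, discharging the cleanup obligation.
        try (InputStream in = new ByteArrayInputStream(data)) {
            while (in.read() != -1) {
                read++;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return read;
    }
}
```

The same try-with-resources shape applies to any stream handed to an upload call: open it in the resource clause rather than inline in the method argument list, so a failure in the call cannot leak it.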
Failed junit tests :
hadoop.hdfs.TestMultipleNNPortQOP
hadoop.hdfs.TestReconstructStripedFile
hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor
hadoop.hdfs.TestFileAppend4
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy
hadoop.hdfs.server.namenode.TestReencryptionWithKMS
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [752K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [20K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [44K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [96K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1412/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/
[Feb 14, 2020 6:20:28 AM] (aajisaka) HADOOP-16850. Support getting thread info from thread group for
[Feb 14, 2020 11:20:29 AM] (brahma) YARN-10136. [Router] : Application metrics are hardcode as N/A in UI.
[Feb 14, 2020 4:37:24 PM] (ayushsaxena) HDFS-15164. Fix TestDelegationTokensWithHA. Contributed by Ayush Saxena.
-1 overall
The following subsystems voted -1:
asflicense compile findbugs mvnsite pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method. At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument. At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]). At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]). At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged.
Failed junit tests :
hadoop.hdfs.TestMaintenanceState
hadoop.hdfs.TestDeadNodeDetection
hadoop.yarn.applications.distributedshell.TestDistributedShell
compile:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-compile-root.txt [564K]
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-compile-root.txt [564K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-compile-root.txt [564K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-checkstyle-root.txt [16M]
mvnsite:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-mvnsite-root.txt [288K]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-javadoc-javadoc-root.txt [980K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [364K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/
[Feb 14, 2020 6:20:28 AM] (aajisaka) HADOOP-16850. Support getting thread info from thread group for
[Feb 14, 2020 11:20:29 AM] (brahma) YARN-10136. [Router] : Application metrics are hardcode as N/A in UI.
[Feb 14, 2020 4:37:24 PM] (ayushsaxena) HDFS-15164. Fix TestDelegationTokensWithHA. Contributed by Ayush Saxena.
-1 overall
The following subsystems voted -1:
asflicense compile findbugs mvnsite pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
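The two WorkerId warnings above describe the classic unsafe equals() pattern: casting the argument without checking its runtime type or guarding against null. A minimal sketch of the null-safe, type-checked version FindBugs expects (WorkerIdSketch and its hostname field are hypothetical stand-ins, not the actual mawo class):

```java
// Hypothetical stand-in for the mawo WorkerId class, showing the
// equals() contract FindBugs checks for.
public class WorkerIdSketch {
    private final String hostname;

    public WorkerIdSketch(String hostname) {
        this.hostname = hostname;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        // instanceof is false for null, so this one guard covers both
        // the null-argument and wrong-type cases before the cast.
        if (!(obj instanceof WorkerIdSketch)) {
            return false;
        }
        WorkerIdSketch other = (WorkerIdSketch) obj;
        return hostname.equals(other.hostname);
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals.
        return hostname.hashCode();
    }

    public static void main(String[] args) {
        WorkerIdSketch a = new WorkerIdSketch("node-1");
        System.out.println(a.equals(new WorkerIdSketch("node-1"))); // true
        System.out.println(a.equals(null));                         // false
        System.out.println(a.equals("node-1"));                     // false
    }
}
```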
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
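The hadoop-cos warnings above boil down to two standard fixes: pass an explicit charset instead of relying on the platform default in new String(byte[]), and close the InputStream with try-with-resources so the cleanup obligation is discharged on every path. A sketch on hypothetical helper methods (not the actual CosNativeFileSystemStore code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical helpers illustrating the two FindBugs fixes.
public class CosWarningsSketch {

    // Fix for "reliance on default encoding": name the charset
    // explicitly instead of using new String(byte[]).
    static String decode(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // Fix for "may fail to clean up java.io.InputStream":
    // try-with-resources closes the stream even if read() throws.
    static int countBytes(InputStream in) throws IOException {
        try (InputStream stream = in) {
            int total = 0;
            while (stream.read() != -1) {
                total++;
            }
            return total;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(decode("md5".getBytes(StandardCharsets.UTF_8)));
        System.out.println(countBytes(new ByteArrayInputStream(new byte[5])));
    }
}
```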
Failed junit tests :
hadoop.hdfs.TestMaintenanceState
hadoop.hdfs.TestDeadNodeDetection
hadoop.yarn.applications.distributedshell.TestDistributedShell
compile:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-compile-root.txt [564K]
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-compile-root.txt [564K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-compile-root.txt [564K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-checkstyle-root.txt [16M]
mvnsite:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-mvnsite-root.txt [288K]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/diff-javadoc-javadoc-root.txt [980K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [364K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1411/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/
[Feb 13, 2020 4:09:13 AM] (pjoseph) YARN-9521. Handle FileSystem close in ApiServiceClient
[Feb 13, 2020 11:08:54 AM] (snemeth) YARN-10029. Add option to UIv2 to get container logs from the new JHS
[Feb 13, 2020 11:27:41 AM] (surendralilhore) HDFS-15086. Block scheduled counter never get decremet if the block got
[Feb 13, 2020 3:31:35 PM] (snemeth) YARN-10137. UIv2 build is broken in trunk. Contributed by Adam Antal
[Feb 13, 2020 7:09:49 PM] (stevel) HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestDecommissionWithStriped
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.server.namenode.TestFSEditLogLoader
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.TestLeaseRecovery
hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [428K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [600K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [44K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/
[Feb 13, 2020 4:09:13 AM] (pjoseph) YARN-9521. Handle FileSystem close in ApiServiceClient
[Feb 13, 2020 11:08:54 AM] (snemeth) YARN-10029. Add option to UIv2 to get container logs from the new JHS
[Feb 13, 2020 11:27:41 AM] (surendralilhore) HDFS-15086. Block scheduled counter never get decremet if the block got
[Feb 13, 2020 3:31:35 PM] (snemeth) YARN-10137. UIv2 build is broken in trunk. Contributed by Adam Antal
[Feb 13, 2020 7:09:49 PM] (stevel) HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestDecommissionWithStriped
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.server.namenode.TestFSEditLogLoader
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.TestLeaseRecovery
hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [428K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [600K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [44K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/
[Feb 13, 2020 4:09:13 AM] (pjoseph) YARN-9521. Handle FileSystem close in ApiServiceClient
[Feb 13, 2020 11:08:54 AM] (snemeth) YARN-10029. Add option to UIv2 to get container logs from the new JHS
[Feb 13, 2020 11:27:41 AM] (surendralilhore) HDFS-15086. Block scheduled counter never get decremet if the block got
[Feb 13, 2020 3:31:35 PM] (snemeth) YARN-10137. UIv2 build is broken in trunk. Contributed by Adam Antal
[Feb 13, 2020 7:09:49 PM] (stevel) HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.TestDecommissionWithStriped
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.server.namenode.TestFSEditLogLoader
hadoop.hdfs.TestDecommissionWithBackoffMonitor
hadoop.hdfs.TestErasureCodingExerciseAPIs
hadoop.hdfs.server.namenode.TestQuotaByStorageType
hadoop.hdfs.TestFileChecksum
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.hdfs.TestDFSStripedOutputStream
hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
hadoop.hdfs.TestDFSStripedInputStream
hadoop.hdfs.TestLeaseRecovery
hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [428K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [600K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [44K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1410/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1409/
[Feb 12, 2020 5:06:23 AM] (aajisaka) HADOOP-16849. start-build-env.sh behaves incorrectly when username is
[Feb 12, 2020 12:17:33 PM] (github) HADOOP-16856. cmake is missing in the CentOS 8 section of BUILDING.txt.
[Feb 12, 2020 2:11:04 PM] (ayushsaxena) HDFS-15127. RBF: Do not allow writes when a subcluster is unavailable
[Feb 12, 2020 2:53:33 PM] (snemeth) MAPREDUCE-7263. Remove obsolete validateTargetPath() from
[Feb 12, 2020 2:59:35 PM] (ayushsaxena) HDFS-15161. When evictableMmapped or evictable size is zero, do not
[Feb 13, 2020 1:06:07 AM] (github) HDFS-13989. RBF: Add FSCK to the Router (#1832)
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/
[Feb 11, 2020 11:22:07 AM] (github) HADOOP-16847. Test can fail if HashSet iterates in a different order.
[Feb 11, 2020 11:51:45 AM] (github) HADOOP-16851. Removed unused import in Configuration
[Feb 11, 2020 4:00:15 PM] (weichiu) HDFS-15150. Introduce read write lock to Datanode. Contributed Stephen
[Feb 11, 2020 4:31:58 PM] (pjoseph) YARN-10127. Remove setting App Ordering Policy to ParentQueue in
[Feb 11, 2020 6:40:00 PM] (kihwal) HDFS-14758. Make lease hard limit configurable and reduce the default.
-1 overall
The following subsystems voted -1:
findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
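The two WorkerId warnings above describe the same defect: an equals() that casts its argument without first checking for null or for the argument's runtime type. A minimal sketch of the null- and type-safe pattern FindBugs expects is below; WorkerIdExample and its fields are illustrative stand-ins, not the actual mawo WorkerId code.

```java
import java.util.Objects;

public class WorkerIdExample {
    private final String hostname;
    private final String workerId;

    public WorkerIdExample(String hostname, String workerId) {
        this.hostname = hostname;
        this.workerId = workerId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;                      // reflexive fast path
        }
        if (!(o instanceof WorkerIdExample)) {
            return false;                     // instanceof is false for null, so this also rejects null
        }
        WorkerIdExample other = (WorkerIdExample) o;
        return Objects.equals(hostname, other.hostname)
                && Objects.equals(workerId, other.workerId);
    }

    @Override
    public int hashCode() {
        // keep hashCode consistent with equals, per the Object contract
        return Objects.hash(hostname, workerId);
    }

    public static void main(String[] args) {
        WorkerIdExample a = new WorkerIdExample("host1", "w1");
        if (!a.equals(new WorkerIdExample("host1", "w1"))
                || a.equals(null)
                || a.equals("host1")) {
            throw new AssertionError("equals contract violated");
        }
        System.out.println("ok");
    }
}
```

The instanceof guard discharges both warnings at once, since `null instanceof T` is always false.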
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
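The hadoop-cos warnings above fall into two fixable classes: `new String(byte[])` using the platform default charset, and an InputStream whose close obligation is not discharged on all paths. A hedged sketch of both remedies follows; the method names and data here are illustrative, not the actual CosNativeFileSystemStore code.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class CosStoreSketch {

    // "Reliance on default encoding": name the charset explicitly
    // instead of letting new String(byte[]) pick the platform default.
    static String bytesToString(byte[] data) {
        return new String(data, StandardCharsets.UTF_8);
    }

    // "Obligation to clean up resource ... is not discharged": open the
    // stream in try-with-resources so close() runs on every path,
    // including when read() throws.
    static long countBytes(byte[] data) throws IOException {
        try (InputStream in = new ByteArrayInputStream(data)) {
            long n = 0;
            while (in.read() != -1) {
                n++;
            }
            return n;
        } // in.close() is guaranteed here
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = "abc123".getBytes(StandardCharsets.UTF_8);
        if (!bytesToString(bytes).equals("abc123")) {
            throw new AssertionError("charset round-trip failed");
        }
        if (countBytes(bytes) != 6) {
            throw new AssertionError("unexpected byte count");
        }
        System.out.println("ok");
    }
}
```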
Failed junit tests :
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.server.datanode.TestDataNodeLifeline
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.mapreduce.split.TestJobSplitWriterWithEC
hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
hadoop.mapreduce.v2.app.TestFetchFailure
hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobs
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [300K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [288K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt [4.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [80K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt [68K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-api.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-catalog_hadoop-yarn-applications-catalog-webapp.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-catalog_hadoop-yarn-applications-catalog-docker.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-site.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs-plugins.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-uploader.txt [0]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/
[Feb 11, 2020 11:22:07 AM] (github) HADOOP-16847. Test can fail if HashSet iterates in a different order.
[Feb 11, 2020 11:51:45 AM] (github) HADOOP-16851. Removed unused import in Configuration
[Feb 11, 2020 4:00:15 PM] (weichiu) HDFS-15150. Introduce read write lock to Datanode. Contributed Stephen
[Feb 11, 2020 4:31:58 PM] (pjoseph) YARN-10127. Remove setting App Ordering Policy to ParentQueue in
[Feb 11, 2020 6:40:00 PM] (kihwal) HDFS-14758. Make lease hard limit configurable and reduce the default.
-1 overall
The following subsystems voted -1:
findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
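Two of the hadoop-cos warnings above have standard fixes: pass an explicit charset to new String(byte[]) instead of relying on the platform default, and release the InputStream with try-with-resources so the cleanup obligation is discharged on every path. A minimal sketch under those assumptions (the class and method names below are illustrative, not the actual hadoop-cos code):

```java
// Hypothetical sketch of the two fixes FindBugs asks for in hadoop-cos:
// explicit charset for byte-to-String conversion, and try-with-resources
// to guarantee the InputStream is closed even when an exception is thrown.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class EncodingAndCleanupExample {

    // Explicit charset: the same bytes decode identically on every platform,
    // unlike new String(bytes), which uses the JVM's default encoding.
    static String decodeBytes(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // try-with-resources closes the stream on both normal and exceptional
    // exit, discharging the "obligation to clean up resource" warning.
    static int countBytes(InputStream in) throws IOException {
        try (InputStream stream = in) {
            int count = 0;
            while (stream.read() != -1) {
                count++;
            }
            return count;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "abc".getBytes(StandardCharsets.UTF_8);
        System.out.println(decodeBytes(data));                          // abc
        System.out.println(countBytes(new ByteArrayInputStream(data))); // 3
    }
}
```

The redundant-nullcheck and exposed-internal-representation warnings are simpler edits (drop the dead check; return a defensive copy of the buffer) and are not sketched here.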
Failed junit tests :
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.server.datanode.TestDataNodeLifeline
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.mapreduce.split.TestJobSplitWriterWithEC
hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
hadoop.mapreduce.v2.app.TestFetchFailure
hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobs
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [300K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [288K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt [4.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [80K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt [68K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-api.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-catalog_hadoop-yarn-applications-catalog-webapp.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-catalog_hadoop-yarn-applications-catalog-docker.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-site.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs-plugins.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-uploader.txt [0]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/
[Feb 11, 2020 11:22:07 AM] (github) HADOOP-16847. Test can fail if HashSet iterates in a different order.
[Feb 11, 2020 11:51:45 AM] (github) HADOOP-16851. Removed unused import in Configuration
[Feb 11, 2020 4:00:15 PM] (weichiu) HDFS-15150. Introduce read write lock to Datanode. Contributed Stephen
[Feb 11, 2020 4:31:58 PM] (pjoseph) YARN-10127. Remove setting App Ordering Policy to ParentQueue in
[Feb 11, 2020 6:40:00 PM] (kihwal) HDFS-14758. Make lease hard limit configurable and reduce the default.
-1 overall
The following subsystems voted -1:
findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.server.balancer.TestBalancer
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
hadoop.hdfs.server.datanode.TestDataNodeLifeline
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.mapreduce.split.TestJobSplitWriterWithEC
hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
hadoop.mapreduce.v2.app.TestFetchFailure
hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobs
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [300K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [288K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt [4.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [80K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt [68K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-api.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-catalog_hadoop-yarn-applications-catalog-webapp.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-catalog_hadoop-yarn-applications-catalog-docker.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-site.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs-plugins.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [0]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1408/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-uploader.txt [0]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/
[Feb 10, 2020 4:13:11 AM] (iwasakims) HADOOP-16739. Fix native build failure of hadoop-pipes on CentOS 8.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [292K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/
[Feb 10, 2020 4:13:11 AM] (iwasakims) HADOOP-16739. Fix native build failure of hadoop-pipes on CentOS 8.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [292K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/
[Feb 10, 2020 4:13:11 AM] (iwasakims) HADOOP-16739. Fix native build failure of hadoop-pipes on CentOS 8.
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
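The hadoop-cos warnings boil down to two standard fixes: pass an explicit Charset instead of relying on the platform default in new String(byte[]), and close any opened InputStream with try-with-resources so the cleanup "obligation" is always discharged. The sketch below illustrates both patterns with hypothetical helper names, not the actual CosNativeFileSystemStore code:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the two fixes FindBugs suggests for hadoop-cos.
public class CosStoreSketch {

    // Instead of new String(bytes) (platform-default encoding, which can
    // differ between JVMs), name the charset explicitly.
    static String toUtf8String(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // An InputStream opened inside the method is closed even if reading
    // throws, which is what "may fail to clean up java.io.InputStream"
    // is complaining about.
    static long countBytes(byte[] data) throws IOException {
        try (InputStream in = new ByteArrayInputStream(data)) {
            long n = 0;
            while (in.read() != -1) {
                n++;
            }
            return n;
        } // in.close() runs here on both normal and exceptional exit
    }
}
```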
Failed junit tests :
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.yarn.applications.distributedshell.TestDistributedShell
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/diff-javadoc-javadoc-root.txt [976K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [292K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1407/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1406/
[Feb 9, 2020 1:44:18 PM] (ayushsaxena) YARN-9624. Use switch case for ProtoUtils#convertFromProtoFormat
[Feb 9, 2020 3:44:53 PM] (sunilg) YARN-10109. Allow stop and convert from leaf to parent queue in a single
[Feb 9, 2020 6:02:22 PM] (ayushsaxena) HDFS-15158. The number of failed volumes mismatch with volumeFailures of
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1405/
[Feb 8, 2020 5:03:57 AM] (ayushsaxena) HDFS-15115. Namenode crash caused by NPE in BlockPlacementPolicyDefault
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1404/
[Feb 6, 2020 11:25:06 AM] (snemeth) YARN-10101. Support listing of aggregated logs for containers belonging
[Feb 6, 2020 2:13:25 PM] (github) HADOOP-16832. S3Guard testing doc: Add required parameters for S3Guard
[Feb 6, 2020 6:41:06 PM] (tmarq) HADOOP-16845: Disable
[Feb 6, 2020 6:48:00 PM] (tmarq) HADOOP-16825: ITestAzureBlobFileSystemCheckAccess failing. Contributed
[Feb 7, 2020 9:21:24 AM] (github) HADOOP-16596. [pb-upgrade] Use shaded protobuf classes from
[Feb 7, 2020 10:30:06 AM] (aajisaka) HADOOP-16834. Replace com.sun.istack.Nullable with
[Feb 7, 2020 10:32:10 AM] (github) Bump checkstyle from 8.26 to 8.29 (#1828)
[Feb 7, 2020 7:47:59 PM] (ayushsaxena) HDFS-15136. LOG flooding in secure mode when Cookies are not set in
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/
[Feb 5, 2020 1:44:05 AM] (jhung) YARN-10116. Expose diagnostics in RMAppManager summary
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use the clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.ipc.TestRPC
hadoop.hdfs.server.datanode.TestDataNodeTcpNoDelay
hadoop.hdfs.TestAclsEndToEnd
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot
hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithInProgressTailing
hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail
hadoop.hdfs.server.datanode.checker.TestDatasetVolumeChecker
hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots
hadoop.hdfs.server.namenode.ha.TestHASafeMode
hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown
hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.sls.TestSLSRunner
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/diff-javadoc-javadoc-root.txt [752K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [160K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [660K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt [56K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [16K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [84K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1403/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/
[Feb 4, 2020 6:22:35 PM] (kihwal) HDFS-12491. Support wildcard in CLASSPATH for libhdfs. Contributed by
[Feb 4, 2020 8:12:35 PM] (cliang) HDFS-15148. dfs.namenode.send.qop.enabled should not apply to primary NN
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use the clone method. At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument. At WorkerId.java:[lines 114-115]
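For context on the two WorkerId warnings above: FindBugs flags equals() implementations that cast the argument without a type check and that can throw on null. A minimal sketch of the canonical pattern that satisfies both checks follows; the class body here is illustrative only, not the actual Hadoop mawo source.

```java
// Hypothetical minimal WorkerId illustrating the equals() contract FindBugs
// enforces. The real class in hadoop-yarn-applications-mawo-core differs.
public final class WorkerId {
    private final String id;

    public WorkerId(String id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        // instanceof is false for null, so this one test resolves both
        // warnings: no unchecked cast, no missing null check.
        if (!(obj instanceof WorkerId)) {
            return false;
        }
        WorkerId other = (WorkerId) obj;
        return id.equals(other.id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }
}
```

The same guard also covers arguments of unrelated types, which the flagged code would have cast blindly.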
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). At BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]). At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]). At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged.
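Two of the hadoop-cos warnings above are common FindBugs patterns: new String(byte[]) silently uses the platform default charset, and a stream opened outside try-with-resources may leak on an exception path. A sketch of both fixes, under the assumption that UTF-8 is the intended charset (the actual CosNativeFileSystemStore code may expect a different one); the class and method names here are illustrative, not the Hadoop source:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical examples of the two fix patterns FindBugs suggests.
public class EncodingAndCleanup {
    // "Found reliance on default encoding": new String(bytes) varies by
    // platform; naming the charset explicitly makes the result deterministic.
    static String decode(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // "may fail to clean up java.io.InputStream": try-with-resources closes
    // the stream on every exit path, discharging the cleanup obligation.
    static long countBytes(byte[] data) throws IOException {
        long n = 0;
        try (InputStream in = new ByteArrayInputStream(data)) {
            while (in.read() != -1) {
                n++;
            }
        }
        return n;
    }
}
```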
Failed junit tests :
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.hdfs.TestReconstructStripedFile
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown
hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.sls.appmaster.TestAMSimulator
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-javadoc-javadoc-root.txt [752K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [292K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt [56K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [16K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [84K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/
[Feb 4, 2020 6:22:35 PM] (kihwal) HDFS-12491. Support wildcard in CLASSPATH for libhdfs. Contributed by
[Feb 4, 2020 8:12:35 PM] (cliang) HDFS-15148. dfs.namenode.send.qop.enabled should not apply to primary NN
-1 overall
The following subsystems voted -1:
asflicense findbugs pathlen unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
XML :
Parsing Error(s):
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]
FindBugs :
module:hadoop-cloud-storage-project/hadoop-cos
Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at BufferPool.java:[line 66]
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:fail to clean up java.io.InputStream Obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
Failed junit tests :
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
hadoop.hdfs.TestReconstructStripedFile
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown
hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
hadoop.yarn.applications.distributedshell.TestDistributedShell
hadoop.yarn.sls.appmaster.TestAMSimulator
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-compile-cc-root.txt [8.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-compile-javac-root.txt [428K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-checkstyle-root.txt [16M]
pathlen:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/pathlen.txt [12K]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-patch-shellcheck.txt [16K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/whitespace-eol.txt [9.9M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/whitespace-tabs.txt [1.1M]
xml:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/xml.txt [20K]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html [12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/diff-javadoc-javadoc-root.txt [752K]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [292K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt [56K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [16K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [84K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
asflicense:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1402/artifact/out/patch-asflicense-problems.txt [4.0K]
Powered by Apache Yetus 0.8.0 http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1401/
No changes
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1400/
No changes
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
Posted by Apache Jenkins Server <je...@builds.apache.org>.
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1399/
No changes