Posted to hdfs-dev@hadoop.apache.org by "tomscut (Jira)" <ji...@apache.org> on 2022/03/15 03:06:00 UTC

[jira] [Created] (HDFS-16506) Unit tests failed because of OutOfMemoryError

tomscut created HDFS-16506:
------------------------------

             Summary: Unit tests failed because of OutOfMemoryError
                 Key: HDFS-16506
                 URL: https://issues.apache.org/jira/browse/HDFS-16506
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: tomscut


Unit tests failed with {{java.lang.OutOfMemoryError: unable to create new native thread}}. This variant of OOM typically means the build host hit its per-user thread/process limit or ran out of native memory while spawning threads, rather than exhausting the Java heap.

An example: [OutOfMemoryError|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4009/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt]
{code:java}
[ERROR] Tests run: 32, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 95.727 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped
[ERROR] testGetBlockInfo[4: ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5]](org.apache.hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped)  Time elapsed: 15.831 s  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:717)
	at io.netty.util.concurrent.ThreadPerTaskExecutor.execute(ThreadPerTaskExecutor.java:32)
	at io.netty.util.internal.ThreadExecutorMap$1.execute(ThreadExecutorMap.java:57)
	at io.netty.util.concurrent.SingleThreadEventExecutor.doStartThread(SingleThreadEventExecutor.java:975)
	at io.netty.util.concurrent.SingleThreadEventExecutor.ensureThreadStarted(SingleThreadEventExecutor.java:958)
	at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:660)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:163)
	at io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:70)
	at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:346)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2348)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNode(MiniDFSCluster.java:2166)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:2156)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:2135)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:2109)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:2102)
	at org.apache.hadoop.hdfs.MiniDFSCluster.close(MiniDFSCluster.java:3479)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped.testGetBlockInfo(TestBlockInfoStriped.java:257)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748) {code}
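As a diagnostic aid, a minimal sketch (not part of the failing test; the class name is hypothetical): logging JVM thread counts via the standard {{ThreadMXBean}} before and after heavy test phases can show whether a test run is leaking threads and creeping toward the OS limit.
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Hypothetical probe class; prints live/peak/total-started thread counts
// for the current JVM so a leak shows up as steadily growing numbers.
public class ThreadCountProbe {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        System.out.println("live threads:   " + mx.getThreadCount());
        System.out.println("peak threads:   " + mx.getPeakThreadCount());
        System.out.println("total started:  " + mx.getTotalStartedThreadCount());
    }
}
{code}
Comparing these counts across test classes (together with the host's {{ulimit -u}} setting) can help confirm whether the failure is thread exhaustion on the CI node rather than a heap-sizing problem.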



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org