Posted to dev@hbase.apache.org by "Michael Stack (Jira)" <ji...@apache.org> on 2020/04/14 14:39:00 UTC

[jira] [Reopened] (HBASE-24072) Nightlies reporting OutOfMemoryError: unable to create new native thread

     [ https://issues.apache.org/jira/browse/HBASE-24072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack reopened HBASE-24072:
-----------------------------------

Reopening. Just after I closed this because we hadn't seen the failure in tests for a while, branch-2.3 hit it again last night.

The ulimit -a output shows the host had 30000 as its ulimit -u. The checkout also included HBASE-24126 "Up the container nproc uplimit from 10000 to 12500 (#1504)".
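
For reference, a minimal sketch (illustrative only; this is not the attached print_ulimit.patch, and the class name is made up) of how a test-side check could log the limits the forked JVM actually sees inside the container:
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative only: on Linux, /proc/self/limits includes the "Max processes"
// row, which is what ulimit -u reports for the current process.
public class PrintLimits {
  public static void main(String[] args) throws IOException {
    Files.readAllLines(Paths.get("/proc/self/limits")).forEach(System.out::println);
  }
}
{code}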

Reopening to see why branch-2.3 hit this and to figure out whether the general problem is still with us.

> Nightlies reporting OutOfMemoryError: unable to create new native thread
> ------------------------------------------------------------------------
>
>                 Key: HBASE-24072
>                 URL: https://issues.apache.org/jira/browse/HBASE-24072
>             Project: HBase
>          Issue Type: Task
>          Components: test
>            Reporter: Michael Stack
>            Assignee: Michael Stack
>            Priority: Major
>             Fix For: 3.0.0, 2.3.0
>
>         Attachments: 0001-HBASE-24072-Nightlies-reporting-OutOfMemoryError-una.patch, print_ulimit.patch
>
>
> Seeing this kind of thing in nightly...
> {code}
> java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
> 	at org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper.beforeClass(TestMultithreadedTableMapper.java:83)
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
> 	at org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper.beforeClass(TestMultithreadedTableMapper.java:83)
> {code}
> Chatting with Nick and Huaxiang and doing the math, we are likely oversubscribing our docker container. It is capped at 20G (the hosts have 48G). Fork count is 0.5C, so on a 16-CPU machine that is 8 forks at 2.8G each (our current forked JVM size), i.e. ~22.4G; add the 4G for maven itself and we could be over the top.
> Play with lowering the fork size (in an earlier study we didn't seem to need this much RAM when running a fat, long test). Let me also take the -Xms off the mvn allocation to see if that helps.
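
For context on what the stack trace above actually signals: "unable to create new native thread" is thrown when the operating system refuses to create another thread (for example when the per-user process limit, ulimit -u / nproc, is exhausted), not when the Java heap fills up. A minimal standalone demo, illustrative only (class name made up), that reproduces the same OutOfMemoryError by starting threads until the OS says no:
{code}
public class NativeThreadLimitDemo {
  public static void main(String[] args) {
    final int cap = 100_000; // safety cap so the demo terminates on hosts with high limits
    int started = 0;
    try {
      for (; started < cap; started++) {
        Thread t = new Thread(() -> {
          try {
            Thread.sleep(Long.MAX_VALUE); // keep the thread alive so it keeps counting against the limit
          } catch (InterruptedException ignored) {
          }
        });
        t.setDaemon(true); // let the JVM exit even though the threads never finish
        t.start();
      }
      System.out.println("Reached the cap of " + cap + " threads without failing");
    } catch (OutOfMemoryError e) {
      System.out.println("OS refused a new thread after " + started + " threads: " + e.getMessage());
    }
  }
}
{code}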



--
This message was sent by Atlassian Jira
(v8.3.4#803005)