Posted to issues@hbase.apache.org by "Andrew Kyle Purtell (Jira)" <ji...@apache.org> on 2021/12/11 21:58:00 UTC

[jira] [Updated] (HBASE-26563) TestCacheOnWrite and TestCompactingToCellFlatMapMemStore use too many file descriptors

     [ https://issues.apache.org/jira/browse/HBASE-26563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Kyle Purtell updated HBASE-26563:
----------------------------------------
    Description: 
This is somewhat test environment specific, but it is an indication that TestCacheOnWrite could stand to go on a resource diet. Likewise for TestCompactingToCellFlatMapMemStore.

Apache Maven 3.8.3 (ff8e977a158738155dc465c6a97ffaf31982d739)
Java version: 1.8.0_312, vendor: Azul Systems, Inc., runtime: /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre
OS name: "mac os x", version: "12.0.1", arch: "aarch64", family: "mac"

Even after raising the soft limit with 'ulimit -S -n 122880' (from the default of ~2k up to the current value of sysctl kern.maxfilesperproc), this is representative output from TestCacheOnWrite:

[ERROR] org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testCachingDataBlocksThresholdDuringCompaction[35]  Time elapsed: 2.113 s  <<< ERROR!
java.lang.IllegalStateException: failed to create a child event loop
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:49)
	at org.apache.hbase.thirdparty.io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:86)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:81)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:68)
	at org.apache.hadoop.hbase.wal.NettyAsyncFSWALConfigHelper.getEventLoopConfig(NettyAsyncFSWALConfigHelper.java:64)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.doInit(AsyncFSWALProvider.java:80)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.init(AbstractFSWALProvider.java:125)
	at org.apache.hadoop.hbase.wal.SyncReplicationWALProvider.init(SyncReplicationWALProvider.java:115)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:207)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:178)

Caused by: java.io.IOException: Too many open files
	at java.base/sun.nio.ch.IOUtil.makePipe(Native Method)
	at java.base/sun.nio.ch.KQueueSelectorImpl.<init>(KQueueSelectorImpl.java:86)
	at java.base/sun.nio.ch.KQueueSelectorProvider.openSelector(KQueueSelectorProvider.java:35)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:173)
	... 55 more
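The "Too many open files" cause above can be cross-checked from the shell. A minimal sketch for macOS, where PID is a placeholder for the surefire fork's process id (not a value taken from this report):

```shell
# Soft and hard per-process descriptor limits inherited by new processes.
ulimit -Sn
ulimit -Hn

# Kernel per-process ceiling (macOS only); the hard limit cannot exceed this.
sysctl kern.maxfilesperproc

# Count descriptors currently open in the test JVM.
# PID is a placeholder for the surefire fork's process id.
lsof -p "$PID" | wc -l
```

Each netty NioEventLoop opens a selector plus a wakeup pipe (visible in the KQueueSelectorImpl/makePipe frames above), so repeatedly constructing event loop groups per test parameterization multiplies descriptor usage quickly.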


  was:
This is somewhat test environment specific but an indication that TestCacheOnWrite could stand to go on a resource diet. 

Apache Maven 3.8.3 (ff8e977a158738155dc465c6a97ffaf31982d739)
Java version: 1.8.0_312, vendor: Azul Systems, Inc., runtime: /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre
OS name: "mac os x", version: "12.0.1", arch: "aarch64", family: "mac"

Even after 'ulimit -S -n 122880' (increasing from default of ~2k to current value of sysctl kern.maxfilesperproc)...

[ERROR] org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testCachingDataBlocksThresholdDuringCompaction[35]  Time elapsed: 2.113 s  <<< ERROR!
java.lang.IllegalStateException: failed to create a child event loop
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:49)
	at org.apache.hbase.thirdparty.io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:86)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:81)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:68)
	at org.apache.hadoop.hbase.wal.NettyAsyncFSWALConfigHelper.getEventLoopConfig(NettyAsyncFSWALConfigHelper.java:64)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.doInit(AsyncFSWALProvider.java:80)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.init(AbstractFSWALProvider.java:125)
	at org.apache.hadoop.hbase.wal.SyncReplicationWALProvider.init(SyncReplicationWALProvider.java:115)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:207)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:178)

Caused by: java.io.IOException: Too many open files
	at java.base/sun.nio.ch.IOUtil.makePipe(Native Method)
	at java.base/sun.nio.ch.KQueueSelectorImpl.<init>(KQueueSelectorImpl.java:86)
	at java.base/sun.nio.ch.KQueueSelectorProvider.openSelector(KQueueSelectorProvider.java:35)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:173)
	... 55 more



> TestCacheOnWrite and TestCompactingToCellFlatMapMemStore use too many file descriptors
> --------------------------------------------------------------------------------------
>
>                 Key: HBASE-26563
>                 URL: https://issues.apache.org/jira/browse/HBASE-26563
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.5.0
>            Reporter: Andrew Kyle Purtell
>            Priority: Trivial
>             Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.9, 2.6.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)