Posted to dev@hbase.apache.org by "Andrew Kyle Purtell (Jira)" <ji...@apache.org> on 2022/05/03 00:30:00 UTC

[jira] [Resolved] (HBASE-26563) TestCacheOnWrite and TestCompactingToCellFlatMapMemStore use too many file descriptors

     [ https://issues.apache.org/jira/browse/HBASE-26563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Kyle Purtell resolved HBASE-26563.
-----------------------------------------
    Resolution: Cannot Reproduce

Does not reproduce now that I've switched to a Linux aarch64 VM. The resource diet proposed below might still be worth doing some day. Reopen if you like.

> TestCacheOnWrite and TestCompactingToCellFlatMapMemStore use too many file descriptors
> --------------------------------------------------------------------------------------
>
>                 Key: HBASE-26563
>                 URL: https://issues.apache.org/jira/browse/HBASE-26563
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.5.0
>            Reporter: Andrew Kyle Purtell
>            Priority: Trivial
>
> This is somewhat test-environment specific, but it is an indication that TestCacheOnWrite could stand to go on a resource diet; likewise TestCompactingToCellFlatMapMemStore. (A diagnostic sketch and one possible trim follow the quoted report below.)
> Apache Maven 3.8.3 (ff8e977a158738155dc465c6a97ffaf31982d739)
> Java version: 1.8.0_312, vendor: Azul Systems, Inc., runtime: /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/jre
> OS name: "mac os x", version: "12.0.1", arch: "aarch64", family: "mac"
> Representative output from TestCacheOnWrite:
> [ERROR] org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite.testCachingDataBlocksThresholdDuringCompaction[35]  Time elapsed: 2.113 s  <<< ERROR!
> java.lang.IllegalStateException: failed to create a child event loop
> 	at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88)
> 	at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60)
> 	at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:49)
> 	at org.apache.hbase.thirdparty.io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
> 	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:86)
> 	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:81)
> 	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:68)
> 	at org.apache.hadoop.hbase.wal.NettyAsyncFSWALConfigHelper.getEventLoopConfig(NettyAsyncFSWALConfigHelper.java:64)
> 	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.doInit(AsyncFSWALProvider.java:80)
> 	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.init(AbstractFSWALProvider.java:125)
> 	at org.apache.hadoop.hbase.wal.SyncReplicationWALProvider.init(SyncReplicationWALProvider.java:115)
> 	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:207)
> 	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:178)
> Caused by: java.io.IOException: Too many open files
> 	at java.base/sun.nio.ch.IOUtil.makePipe(Native Method)
> 	at java.base/sun.nio.ch.KQueueSelectorImpl.<init>(KQueueSelectorImpl.java:86)
> 	at java.base/sun.nio.ch.KQueueSelectorProvider.openSelector(KQueueSelectorProvider.java:35)
> 	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:173)
> 	... 55 more
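
The makePipe frame above is the tell: each NioEventLoop opens a kqueue selector backed by a pipe, so every fresh NioEventLoopGroup costs a handful of descriptors, and one WALFactory per test parameter multiplies that quickly against the small default per-process limit on macOS (typically 256). A quick diagnostic sketch for watching descriptor pressure from inside a test JVM, assuming HotSpot on a Unix-like OS (UnixOperatingSystemMXBean is a com.sun.management extension, not part of the portable API):

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdUsage {
      public static void main(String[] args) {
        // On HotSpot for Unix-like systems (including macOS) the OS MXBean
        // also implements the com.sun.management extension with FD counters.
        UnixOperatingSystemMXBean os =
            (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("open fds: " + os.getOpenFileDescriptorCount()
            + " / max: " + os.getMaxFileDescriptorCount());
      }
    }

Printing those counters before and after each WALFactory is constructed would show whether descriptors accumulate across the parameterized runs or spike all at once.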
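As for the diet itself, the trace shows getEventLoopConfig falling back to a brand-new NioEventLoopGroup when none has been registered, so one plausible trim is to register a single shared group up front for every WALFactory the test creates. A minimal sketch, assuming the setEventLoopConfig counterpart of the getEventLoopConfig call above; the class name, helper method, and one-thread sizing are illustrative, not a verified fix:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.wal.NettyAsyncFSWALConfigHelper;
    import org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup;
    import org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup;
    import org.apache.hbase.thirdparty.io.netty.channel.socket.nio.NioSocketChannel;

    public class SharedWalEventLoop {  // hypothetical test helper
      // One small group shared by every WALFactory in the test JVM, instead of
      // a new NioEventLoopGroup (and its selector pipes) per factory instance.
      private static final EventLoopGroup GROUP = new NioEventLoopGroup(1);

      public static Configuration withSharedGroup(Configuration conf) {
        // Register the shared group so getEventLoopConfig finds it and skips
        // constructing its own group.
        NettyAsyncFSWALConfigHelper.setEventLoopConfig(conf, GROUP, NioSocketChannel.class);
        return conf;
      }
    }

Shutting GROUP down in an after-class hook would keep the JVM exit clean if a test suite adopted something like this.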



--
This message was sent by Atlassian Jira
(v8.20.7#820007)