Posted to user@hbase.apache.org by 陈加俊 <cj...@gmail.com> on 2011/01/11 09:59:14 UTC

java.net.SocketException: Too many open files

I set the env as follows:

$ ulimit -n
65535

 $ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63943
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63943
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The RS logs are as follows. Why?

2010-12-29 06:09:10,738 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Attempt=3118
java.net.SocketException: Too many open files
        at sun.nio.ch.Net.socket0(Native Method)
        at sun.nio.ch.Net.socket(Net.java:97)
        at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
        at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
        at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
        at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:304)
        at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:844)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:716)
        at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:333)
        at $Proxy0.regionServerReport(Unknown Source)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:481)
        at java.lang.Thread.run(Thread.java:619)
2010-12-29 06:09:10,765 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Attempt=3119
java.net.SocketException: Too many open files
        at sun.nio.ch.Net.socket0(Native Method)
        at sun.nio.ch.Net.socket(Net.java:97)
        at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
        at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
        at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
        at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:304)
        at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:844)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:716)
        at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:333)
        at $Proxy0.regionServerReport(Unknown Source)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:481)
        at java.lang.Thread.run(Thread.java:619)
2010-12-29 06:09:10,793 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: Unhandled exception. Aborting...
java.lang.NullPointerException
        at org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure(Client.java:351)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:313)
        at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
        at org.apache.hadoop.ipc.Client.call(Client.java:720)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy1.getFileInfo(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:619)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:453)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
        at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:115)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:902)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:554)
        at java.lang.Thread.run(Thread.java:619)

My cluster is Hadoop 0.20.2 + HBase 0.20.6, with 24 RS+DN nodes.

Re: java.net.SocketException: Too many open files

Posted by Stack <st...@duboce.net>.
2011/1/15 明珠刘 <re...@gmail.com>:
> What does `netstat` look like?
>

Are you asking about the netstat command?  To learn about it, type
'man netstat'.  Or are you asking something else?
St.Ack

Re: java.net.SocketException: Too many open files

Posted by 明珠刘 <re...@gmail.com>.
What does `netstat` look like?
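For instance, a summary of socket states, plus the open-descriptor count of the regionserver process, would help. A sketch, assuming a Linux box where pgrep can find the RegionServer JVM by its main class name:

$ netstat -an | awk '/^tcp/ {print $6}' | sort | uniq -c | sort -rn
$ ls /proc/$(pgrep -f HRegionServer)/fd | wc -l

The first command counts TCP connections per state (a large pile of CLOSE_WAIT or ESTABLISHED entries points at leaked or hoarded sockets); the second counts every descriptor the process currently holds, to compare against ulimit -n.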



2011/1/11 陈加俊 <cj...@gmail.com>

> I set the env as follows:
>
> $ ulimit -n
> 65535
> [...]
>
> My cluster is Hadoop 0.20.2 + HBase 0.20.6, with 24 RS+DN nodes.
>

Re: java.net.SocketException: Too many open files

Posted by Alex Baranau <al...@gmail.com>.
Make sure you've set the limit for the correct user. Also see the "File
descriptor limits" section of this post:
http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore
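A concrete sketch: the limit has to apply to whatever user actually launches the daemons. Assuming that user is named "hadoop" (substitute your own), on a stock Linux setup with pam_limits enabled you would add to /etc/security/limits.conf:

hadoop  soft  nofile  65535
hadoop  hard  nofile  65535

and then verify from a fresh login shell for that user:

$ su - hadoop -c 'ulimit -n'
65535

Note the setting only takes effect for sessions started after the change, so the daemons must be restarted from such a session.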

Alex Baranau
----
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - Hadoop - HBase

On Tue, Jan 11, 2011 at 10:59 AM, 陈加俊 <cj...@gmail.com> wrote:

> I set the env as follows:
>
> $ ulimit -n
> 65535
> [...]
>
> My cluster is Hadoop 0.20.2 + HBase 0.20.6, with 24 RS+DN nodes.
>

Re: java.net.SocketException: Too many open files

Posted by Stack <st...@duboce.net>.
What Alex says. You can see what ulimit the user running HBase actually
gets by looking in the log; it's the first thing printed. Grep for
ulimit.
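Something like this, assuming the default layout where logs land under $HBASE_HOME/logs (adjust the path to your install):

$ grep -i ulimit $HBASE_HOME/logs/hbase-*-regionserver-*.log | head -1

If the value printed there is smaller than the 65535 you set, the limit was raised for the wrong user, or the daemon was started before the change took effect.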
St.Ack

On Tue, Jan 11, 2011 at 12:59 AM, 陈加俊 <cj...@gmail.com> wrote:
> I set the env as follows:
>
> $ ulimit -n
> 65535
> [...]
>
> My cluster is Hadoop 0.20.2 + HBase 0.20.6, with 24 RS+DN nodes.
>