Posted to hdfs-dev@hadoop.apache.org by "wei.he (JIRA)" <ji...@apache.org> on 2016/01/19 14:08:39 UTC

[jira] [Resolved] (HDFS-9473) Accessing the standby NameNode is slow

     [ https://issues.apache.org/jira/browse/HDFS-9473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wei.he resolved HDFS-9473.
--------------------------
    Resolution: Fixed

> Accessing the standby NameNode is slow
> --------------------------------------
>
>                 Key: HDFS-9473
>                 URL: https://issues.apache.org/jira/browse/HDFS-9473
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: wei.he
>
> Accessing the standby NameNode is slow
> We have a Hadoop cluster with 200 nodes and use HA NameNodes:
> nn1 hadoop109  active
> nn2 hadoop110  standby
> After we fail over the NameNodes:
> hadoop110 (nn2) is active
> hadoop109 (nn1) is standby
> When we access hdfs://hadoop109:8020 (hdfs dfs -ls hdfs://hadoop109:8020), the response is sometimes fast and sometimes slow (a state-check sketch follows the log below).
> I tuned the RPC handler parameters dfs.namenode.handler.count and dfs.namenode.service.handler.count, raising the value from 105 to 150 (>= 20*ln(datanodes); a worked sizing calculation follows the log below).
> But the problem still occurred.
> Has anyone seen the same problem and could offer some suggestions?
> Take a look at the debug log:
> 15/11/27 19:37:46 DEBUG util.Shell: setsid exited with exit code 0
> 15/11/27 19:37:46 DEBUG conf.Configuration: parsing URL jar:file:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0.jar!/core-default.xml
> 15/11/27 19:37:46 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@720c653f
> 15/11/27 19:37:46 DEBUG conf.Configuration: parsing URL file:/usr/local/hadoop-2.4.0/etc/hadoop/core-site.xml
> 15/11/27 19:37:46 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@7038ce7b
> 15/11/27 19:37:47 DEBUG security.Groups:  Creating new Groups object
> 15/11/27 19:37:47 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
> 15/11/27 19:37:47 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
> 15/11/27 19:37:47 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
> 15/11/27 19:37:47 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> 15/11/27 19:37:47 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> 15/11/27 19:37:47 DEBUG security.UserGroupInformation: hadoop login
> 15/11/27 19:37:47 DEBUG security.UserGroupInformation: hadoop login commit
> 15/11/27 19:37:47 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hdfs
> 15/11/27 19:37:47 DEBUG security.UserGroupInformation: UGI loginUser:hdfs (auth:SIMPLE)
> 15/11/27 19:37:47 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
> 15/11/27 19:37:47 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = true
> 15/11/27 19:37:47 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
> 15/11/27 19:37:47 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
> 15/11/27 19:37:47 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
> 15/11/27 19:37:48 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@739820e5
> 15/11/27 19:37:48 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@42988fee
> 15/11/27 19:37:49 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$1@7116778b: starting with interruptCheckPeriodMs = 60000
> 15/11/27 19:37:49 DEBUG hdfs.BlockReaderLocal: The short-circuit local reads feature is enabled.
> 15/11/27 19:37:49 DEBUG ipc.Client: The ping interval is 60000 ms.
> 15/11/27 19:37:49 DEBUG ipc.Client: Connecting to hadoop109:8020
> 15/11/27 19:37:49 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs: starting, having connections 1
> 15/11/27 19:37:49 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs sending #0
>  .........#  the response below arrives about 3.5 minutes later  #...........
> 15/11/27 19:41:16 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs got value #0
> ......................
> ls: Operation category READ is not supported in state standby
> 15/11/27 19:41:16 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@42988fee
> 15/11/27 19:41:16 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@42988fee
> 15/11/27 19:41:16 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@42988fee
> 15/11/27 19:41:16 DEBUG ipc.Client: Stopping client
> 15/11/27 19:41:16 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs: closed
> 15/11/27 19:41:16 DEBUG ipc.Client: IPC Client (320922331) connection to hadoop109:8020 from hdfs: stopped, remaining connections 0
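
The timestamps above (19:37:49 to 19:41:16) and the final "Operation category READ is not supported in state standby" error show that the client did reach hadoop109 while it was standby and waited several minutes before the standby rejected the call. Below is a minimal sketch of how to confirm each NameNode's HA state and how to address the cluster through its logical nameservice instead of a fixed host; "mycluster" is a placeholder nameservice, and nn1/nn2 are assumed to be the dfs.ha.namenodes IDs for this cluster, so substitute the values from hdfs-site.xml.

    # Query the HA state of each NameNode by its configured ID
    hdfs haadmin -getServiceState nn1   # hadoop109, expected: standby after the failover
    hdfs haadmin -getServiceState nn2   # hadoop110, expected: active after the failover

    # List through the logical nameservice rather than a fixed host, so the
    # client's failover proxy provider selects the active NameNode itself.
    # "mycluster" is a placeholder for the dfs.nameservices value of this cluster.
    hdfs dfs -ls hdfs://mycluster/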
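For reference, the 20*ln(datanodes) rule of thumb cited in the report works out to roughly 20 * ln(200) = 20 * 5.3 ≈ 106 for a 200-node cluster, so the original value of 105 already matched the heuristic and 150 comfortably exceeds it. A small sketch for double-checking what the configuration actually contains (hdfs getconf reads the local configuration files, so run it on the NameNode host to see what that daemon will load):

    # Print the configured handler counts from the local Hadoop configuration
    hdfs getconf -confKey dfs.namenode.handler.count
    hdfs getconf -confKey dfs.namenode.service.handler.count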



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)