Posted to hdfs-issues@hadoop.apache.org by "ruiliang (Jira)" <ji...@apache.org> on 2022/09/30 06:21:00 UTC

[jira] [Updated] (HDFS-16788) could only be written to 2 of the 3 required nodes for RS-3-2-1024k. There are 50 datanode(s) running and no node(s) are excluded in this operation

     [ https://issues.apache.org/jira/browse/HDFS-16788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ruiliang updated HDFS-16788:
----------------------------
    Affects Version/s: 3.1.0

> could only be written to 2 of the 3 required nodes for RS-3-2-1024k. There are 50 datanode(s) running and no node(s) are excluded in this operation
> ---------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16788
>                 URL: https://issues.apache.org/jira/browse/HDFS-16788
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.1.0
>            Reporter: ruiliang
>            Priority: Major
>         Attachments: image-2022-09-30-14-14-29-963.png, image-2022-09-30-14-14-44-164.png
>
>
> !image-2022-09-30-14-14-44-164.png!
> ||Configured Capacity:|3.02 PB|
> ||Configured Remote Capacity:|0 B|
> ||DFS Used:|1.39 PB (45.96%)|
> ||Non DFS Used:|0 B|
> ||DFS Remaining:|1.62 PB (53.67%)|
> ||Block Pool Used:|1.39 PB (45.96%)|
> ||DataNodes usages% (Min/Median/Max/stdDev):|8.20% / 32.44% / 98.85% / 37.30%|
> ||[Live Nodes|http://fs-hiido-yycluster06-yynn1.hiido.host.yydevops.com:50070/dfshealth.html#tab-datanode]|50 (Decommissioned: 0, In Maintenance: 0)|
> I have been running the balancer in the background to even out data across the datanodes, but my distcp jobs still fail while it runs. The balancer command:
> {code:java}
> hdfs balancer -Ddfs.datanode.balance.max.concurrent.moves=300 -Ddfs.balancer.moverThreads=1200 -Ddfs.datanode.balance.bandwidthPerSec=1073741824 -fs hdfs://yycluster06 -threshold 50{code}
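> Note that -threshold is the allowed deviation, in percentage points, from the cluster's average utilization (45.96% here), so -threshold 50 treats every node below roughly 96% used as already balanced and moves almost nothing. A lower threshold, combined with the -source option (HDFS-8826) to drain only the overfull nodes, may converge faster. A minimal sketch; the hostnames are placeholders:
> {code:java}
> # Lower the threshold and move blocks only off the nearly full datanodes
> hdfs balancer -fs hdfs://yycluster06 \
>   -threshold 10 \
>   -source dn-full-01.example.com,dn-full-02.example.com{code}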
> {code:java}
> // Error thrown by the distcp map task while the NameNode allocates a new EC block:
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hive_warehouse/warehouse_old_snapshots/credit/.distcp.tmp.attempt_1663830633337_314191_m_000008_2 could only be written to 2 of the 3 required nodes for RS-3-2-1024k. There are 50 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2128)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:286)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2706)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
>         at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1443)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1353)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>         at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:510)
>         at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>         at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1078)
>         at org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:479)
>         at org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:525)
>         at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>         at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
>         at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>         at java.io.DataOutputStream.write(DataOutputStream.java:107)
>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:290)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:193)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:123)
>         at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>         at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) {code}
> New blocks are still being placed on datanodes whose disks are almost full. Is there any way to keep distcp from writing to nearly full datanodes? Rebalancing the data is very slow, which is holding up our work.
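> One possible mitigation, as an assumption to verify rather than a confirmed fix: HDFS ships an AvailableSpaceBlockPlacementPolicy (HDFS-8131) that biases new block placement toward datanodes with more free space. For replicated files it is enabled on the NameNode roughly as below. This file is written with RS-3-2-1024k erasure coding, though, and striped blocks use a separate policy key (dfs.block.placement.ec.classname); a space-aware striped policy (AvailableSpaceRackFaultTolerantBlockPlacementPolicy, HDFS-15288) only exists in releases newer than 3.1.0.
> {code:xml}
> <!-- hdfs-site.xml on the NameNode; a sketch only, verify the keys against your release -->
> <property>
>   <name>dfs.block.replicator.classname</name>
>   <value>org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy</value>
> </property>
> <property>
>   <!-- probability of preferring the less-used node; 0.5 = no preference, default 0.6 -->
>   <name>dfs.namenode.available-space-block-placement-policy.balanced-space-preference-fraction</name>
>   <value>0.8</value>
> </property>{code}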


