Posted to hdfs-dev@hadoop.apache.org by "Mukul Kumar Singh (JIRA)" <ji...@apache.org> on 2019/02/11 15:02:00 UTC
[jira] [Created] (HDDS-1083) Improve error code when SCM fails to allocate block
Mukul Kumar Singh created HDDS-1083:
---------------------------------------
Summary: Improve error code when SCM fails to allocate block
Key: HDDS-1083
URL: https://issues.apache.org/jira/browse/HDDS-1083
Project: Hadoop Distributed Data Store
Issue Type: Bug
Components: SCM
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh
Fix For: 0.4.0
The following error, KEY_ALLOCATION_ERROR, doesn't include any information about the number of replicas requested. The error also doesn't indicate whether the failure occurred because no pipelines were found or because there wasn't enough space on the datanodes to create the containers.
{code}
2019-02-11 14:56:12 ERROR RandomKeyGenerator:621 - Exception while adding key: key-0-91322 in bucket: org.apache.hadoop.ozone.client.OzoneBucket@24ef95df of volume: org.apache.hadoop.ozone.client.OzoneVolume@6473c338.
java.io.IOException: Create key failed, error:KEY_ALLOCATION_ERROR
at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.openKey(OzoneManagerProtocolClientSideTranslatorPB.java:692)
at org.apache.hadoop.ozone.client.rpc.RpcClient.createKey(RpcClient.java:571)
at org.apache.hadoop.ozone.client.OzoneBucket.createKey(OzoneBucket.java:274)
at org.apache.hadoop.ozone.freon.RandomKeyGenerator$OfflineProcessor.run(RandomKeyGenerator.java:596)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
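As a rough illustration of the requested improvement, the single KEY_ALLOCATION_ERROR could be split into cause-specific codes, with the error message carrying the replication parameters. The sketch below is hypothetical; the enum values, class, and method names are illustrative and not the actual Ozone API:

```java
// Hypothetical sketch of a more descriptive block-allocation error.
// None of these names come from the Ozone codebase; they only show the idea
// of distinguishing "no pipeline found" from "not enough datanode space"
// and including the replication details in the message.
public class BlockAllocationError {

    // Illustrative result codes for the two failure causes mentioned above
    enum ResultCode {
        NO_AVAILABLE_PIPELINE,
        INSUFFICIENT_DATANODE_SPACE
    }

    // Builds an error message that names the cause and the requested
    // replication type and factor, instead of a bare KEY_ALLOCATION_ERROR.
    static String describe(ResultCode code, String replicationType,
                           int replicationFactor) {
        return String.format(
            "Block allocation failed (%s): replicationType=%s, factor=%d",
            code, replicationType, replicationFactor);
    }

    public static void main(String[] args) {
        System.out.println(
            describe(ResultCode.NO_AVAILABLE_PIPELINE, "RATIS", 3));
        System.out.println(
            describe(ResultCode.INSUFFICIENT_DATANODE_SPACE, "RATIS", 3));
    }
}
```

With messages like these, the client-side log above would immediately show both the replica count and whether the problem was pipeline availability or datanode capacity.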
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org