Posted to hdfs-dev@hadoop.apache.org by "Wei-Chiu Chuang (Jira)" <ji...@apache.org> on 2019/08/24 21:21:00 UTC
[jira] [Created] (HDFS-14774) Improve RouterWebhdfsMethods#chooseDatanode() error handling
Wei-Chiu Chuang created HDFS-14774:
--------------------------------------
Summary: Improve RouterWebhdfsMethods#chooseDatanode() error handling
Key: HDFS-14774
URL: https://issues.apache.org/jira/browse/HDFS-14774
Project: Hadoop HDFS
Issue Type: Improvement
Reporter: Wei-Chiu Chuang
HDFS-13972 added the following code:
{code}
    try {
      dns = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
    } catch (IOException e) {
      LOG.error("Cannot get the datanodes from the RPC server", e);
    } finally {
      // Reset ugi to remote user for remaining operations.
      RouterRpcServer.resetCurrentUser();
    }

    HashSet<Node> excludes = new HashSet<Node>();
    if (excludeDatanodes != null) {
      Collection<String> collection =
          getTrimmedStringCollection(excludeDatanodes);
      for (DatanodeInfo dn : dns) {
        if (collection.contains(dn.getName())) {
          excludes.add(dn);
        }
      }
    }
{code}
If {{rpcServer.getDatanodeReport()}} throws an exception, {{dns}} stays null, and the subsequent for loop then dereferences it and throws a {{NullPointerException}}. Logging the error and falling through doesn't look like the best way to handle the exception. Should the Router retry upon exception? Does it perform retry automatically under the hood?
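One possible direction (a sketch only, not a proposed patch; {{DatanodeSource}} and {{buildExcludes}} are hypothetical stand-ins for the Router RPC call and the exclude-set logic above) is to fail fast by rethrowing with context, so callers never see a null datanode list:

```java
import java.io.IOException;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ChooseDatanodeSketch {

  /** Hypothetical stand-in for RouterRpcServer#getDatanodeReport(LIVE). */
  interface DatanodeSource {
    List<String> getLiveDatanodes() throws IOException;
  }

  /**
   * Fail fast: rethrow with context instead of swallowing the exception,
   * so the exclude-building loop never iterates over a null list.
   */
  static Set<String> buildExcludes(DatanodeSource source,
                                   Collection<String> excludeNames)
      throws IOException {
    List<String> dns;
    try {
      dns = source.getLiveDatanodes();
    } catch (IOException e) {
      // Propagate rather than logging and continuing with dns == null.
      throw new IOException("Cannot get the datanodes from the RPC server", e);
    }
    Set<String> excludes = new HashSet<>();
    if (excludeNames != null) {
      for (String dn : dns) {
        if (excludeNames.contains(dn)) {
          excludes.add(dn);
        }
      }
    }
    return excludes;
  }
}
```

Whether rethrowing is right depends on whether the Router's RPC client already retries under the hood; if it does, propagating the final failure is still safer than a silent null.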
[~crh] [~brahmareddy]
--
This message was sent by Atlassian Jira
(v8.3.2#803003)