Posted to hdfs-dev@hadoop.apache.org by "Shashikant Banerjee (JIRA)" <ji...@apache.org> on 2019/01/23 10:26:00 UTC
[jira] [Created] (HDDS-996) Incorrect data length gets updated in OM by client in case it hits exception in multiple successive block writes
Shashikant Banerjee created HDDS-996:
----------------------------------------
Summary: Incorrect data length gets updated in OM by client in case it hits exception in multiple successive block writes
Key: HDDS-996
URL: https://issues.apache.org/jira/browse/HDDS-996
Project: Hadoop Distributed Data Store
Issue Type: Improvement
Components: Ozone Client
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
Fix For: 0.4.0
In the retry path, the length of data that needs to be written to the next block should always be calculated from the data actually residing in the buffer list, rather than from the length allocated to the current stream entry. Because the client currently uses the stream entry length, an incorrect key length gets updated in OM when multiple successive exceptions occur during key writes.
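A minimal sketch of the distinction described above, assuming a simplified model of the client's write path. The class and method names (RetryLengthSketch, dataToRewrite, allocatedLength) are hypothetical illustrations, not the actual Ozone client code:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: in the retry path, derive the length to replay on
// the next block from the buffered data itself, not from the length that
// was allocated to the current stream entry (which can be stale after
// successive block-write failures).
public class RetryLengthSketch {

    // Sum of the data actually residing in the buffer list, i.e. chunks
    // written by the caller but not yet acknowledged.
    static long dataToRewrite(List<byte[]> bufferList) {
        long total = 0;
        for (byte[] chunk : bufferList) {
            total += chunk.length;
        }
        return total;
    }

    public static void main(String[] args) {
        // Two buffered chunks (4 + 8 bytes) survive the failed block write.
        List<byte[]> bufferList = Arrays.asList(new byte[4], new byte[8]);

        // Stale per-entry bookkeeping: the entry was allocated more space
        // than the data that actually needs to be replayed.
        long allocatedLength = 256;

        long replayLength = dataToRewrite(bufferList);
        System.out.println("allocated=" + allocatedLength
            + " buffered=" + replayLength);
    }
}
```

Using the buffered total (12 bytes here) instead of the allocated length keeps the key length reported to OM consistent with the bytes actually written, even after several consecutive failures.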
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org