Posted to hdfs-dev@hadoop.apache.org by "Aravindan Vijayan (JIRA)" <ji...@apache.org> on 2019/05/02 17:05:00 UTC
[jira] [Created] (HDDS-1485) Ozone writes fail when single threaded client writes 100MB files repeatedly.
Aravindan Vijayan created HDDS-1485:
---------------------------------------
Summary: Ozone writes fail when single threaded client writes 100MB files repeatedly.
Key: HDDS-1485
URL: https://issues.apache.org/jira/browse/HDDS-1485
Project: Hadoop Distributed Data Store
Issue Type: Bug
Reporter: Aravindan Vijayan
*Environment*
26-node physical cluster; all datanodes are up and running.
A single-threaded client writing 1600 x 100MB files using the FsStress utility
(https://github.com/arp7/FsPerfTest) fails with the following error.
{code}
19/05/02 09:58:49 ERROR storage.BlockOutputStream: Unexpected Storage Container Exception:
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 424 does not exist
at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:539)
at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:616)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
This looks like corruption of the container metadata on the datanode: the client is directed to write a chunk to container 424, but the datanode no longer recognizes that container ID.
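For illustration, the failure mode can be sketched as follows. This is a hypothetical stand-in, not the actual Ozone datanode code: the real check lives in ContainerProtocolCalls.validateContainerResponse, which inspects the protobuf response from the datanode. The sketch only shows the shape of the behavior seen in the stack trace above, where a write against a container ID the datanode no longer tracks fails hard instead of succeeding.

```java
import java.util.HashSet;
import java.util.Set;

public class ContainerLookupSketch {
    // Hypothetical stand-in for the datanode's set of known container IDs.
    private final Set<Long> knownContainers = new HashSet<>();

    public void addContainer(long id) {
        knownContainers.add(id);
    }

    // Mirrors the observed behavior: a chunk write against a container the
    // datanode does not track is rejected with a "does not exist" error,
    // which the client surfaces as a StorageContainerException.
    public void validateWrite(long containerId) {
        if (!knownContainers.contains(containerId)) {
            throw new IllegalStateException(
                "ContainerID " + containerId + " does not exist");
        }
    }

    public static void main(String[] args) {
        ContainerLookupSketch dn = new ContainerLookupSketch();
        dn.addContainer(423);
        dn.validateWrite(423); // known container: write proceeds
        try {
            dn.validateWrite(424); // the failing case from the log above
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If the container metadata were merely stale on the client side, a retry against fresh pipeline information would succeed; the fact that repeated single-threaded writes keep hitting this suggests the datanode-side container record itself is missing or corrupt.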
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org