Posted to issues@ozone.apache.org by "Bharat Viswanadham (Jira)" <ji...@apache.org> on 2020/04/07 04:44:00 UTC

[jira] [Updated] (HDDS-3217) Datanode startup is slow due to iterating container DB 2-3 times

     [ https://issues.apache.org/jira/browse/HDDS-3217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated HDDS-3217:
-------------------------------------
    Status: Patch Available  (was: In Progress)

> Datanode startup is slow due to iterating container DB 2-3 times
> ----------------------------------------------------------------
>
>                 Key: HDDS-3217
>                 URL: https://issues.apache.org/jira/browse/HDDS-3217
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Critical
>              Labels: billiontest, pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> During Datanode startup, we iterate over each container's entire DB twice:
> 1. To compute the total block length (bytes used).
> 2. To count the pending-deletion keys.
> For open containers, step 1 is performed again, making a third full pass.
> *Code Snippet:*
> *ContainerReader.java:*
> *For setting Bytes Used:*
> {code:java}
>       List<Map.Entry<byte[], byte[]>> liveKeys = metadata.getStore()
>           .getRangeKVs(null, Integer.MAX_VALUE,
>               MetadataKeyFilters.getNormalKeyFilter());
>       bytesUsed = liveKeys.parallelStream().mapToLong(e -> {
>         BlockData blockData;
>         try {
>           blockData = BlockUtils.getBlockData(e.getValue());
>           return blockData.getSize();
>         } catch (IOException ex) {
>           return 0L;
>         }
>       }).sum();
>       kvContainerData.setBytesUsed(bytesUsed);
> {code}
> *For setting the pending deletion key count*
> {code:java}
>           MetadataKeyFilters.KeyPrefixFilter filter =
>               new MetadataKeyFilters.KeyPrefixFilter()
>                   .addFilter(OzoneConsts.DELETING_KEY_PREFIX);
>           int numPendingDeletionBlocks =
>               containerDB.getStore().getSequentialRangeKVs(null,
>                   Integer.MAX_VALUE, filter)
>                   .size();
>           kvContainerData.incrPendingDeletionBlocks(numPendingDeletionBlocks);
> {code}
> *For open Containers*
> {code:java}
>           if (kvContainer.getContainerState()
>               == ContainerProtos.ContainerDataProto.State.OPEN) {
>             // commitSpace for Open Containers relies on usedBytes
>             initializeUsedBytes(kvContainer);
>           }
> {code}
> *Jstack of DN during startup*
> {code:java}
> "Thread-8" #34 prio=5 os_prio=0 tid=0x00007f5df5070000 nid=0x8ee runnable [0x00007f4d840f3000]
>    java.lang.Thread.State: RUNNABLE
>         at org.rocksdb.RocksIterator.next0(Native Method)
>         at org.rocksdb.AbstractRocksIterator.next(AbstractRocksIterator.java:70)
>         at org.apache.hadoop.hdds.utils.RocksDBStore.getRangeKVs(RocksDBStore.java:195)
>         at org.apache.hadoop.hdds.utils.RocksDBStore.getRangeKVs(RocksDBStore.java:155)
>         at org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:158)
>         at org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.verifyAndFixupContainerData(ContainerReader.java:191)
>         at org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.verifyContainerFile(ContainerReader.java:168)
>         at org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.readVolume(ContainerReader.java:146)
>         at org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.run(ContainerReader.java:101)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
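One way the passes above could collapse into a single scan is sketched below. This is illustrative only: a plain `Map` stands in for the container RocksDB, the `long` value stands in for the block size that `BlockUtils.getBlockData` would decode, and `SinglePassScan` / `scan` are hypothetical names, not the actual Ozone API. The real fix would have to use the store's iterator and the existing `MetadataKeyFilters`.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch: compute "bytes used" and the pending-deletion
 * key count in one pass over the entries, instead of issuing two
 * separate getRangeKVs scans over the whole DB.
 */
public class SinglePassScan {

  // Stand-in for OzoneConsts.DELETING_KEY_PREFIX.
  static final String DELETING_KEY_PREFIX = "#deleting#";

  /** Returns {bytesUsed, pendingDeletionBlocks} from a single scan. */
  static long[] scan(Map<String, Long> db) {
    long bytesUsed = 0L;
    long pendingDeletionBlocks = 0L;
    for (Map.Entry<String, Long> e : db.entrySet()) {
      if (e.getKey().startsWith(DELETING_KEY_PREFIX)) {
        // Would have matched the KeyPrefixFilter in the second pass.
        pendingDeletionBlocks++;
      } else {
        // Would have been blockData.getSize() in the first pass.
        bytesUsed += e.getValue();
      }
    }
    return new long[] {bytesUsed, pendingDeletionBlocks};
  }

  public static void main(String[] args) {
    Map<String, Long> db = new LinkedHashMap<>();
    db.put("block1", 100L);
    db.put("block2", 200L);
    db.put(DELETING_KEY_PREFIX + "block3", 300L);
    long[] r = scan(db);
    System.out.println(r[0] + " " + r[1]); // prints "300 1"
  }
}
```

Whether one combined pass is acceptable depends on the real key filters (e.g. which prefixed keys the normal-key filter excludes), so this only shows the single-scan shape, not the exact accounting.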



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org