Posted to commits@couchdb.apache.org by va...@apache.org on 2023/05/02 15:50:52 UTC

[couchdb] branch main updated: Clarify encoding length in performance.rst

This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/couchdb.git


The following commit(s) were added to refs/heads/main by this push:
     new 2430728f5 Clarify encoding length in performance.rst
2430728f5 is described below

commit 2430728f57e51fb0b6fc41dfdc410ec80ced69b6
Author: Ruben Laguna <ru...@gmail.com>
AuthorDate: Tue May 2 17:23:25 2023 +0200

    Clarify encoding length in performance.rst
    
    The original text said that something that takes 16 hex digits can be represented with just 4 digits (in a hypothetical base62 encoding).
    
    I believe that was a typo: 16 hex digits encode an 8-byte (64-bit) sequence, and since each base64 digit carries 6 bits, that sequence needs ceil(64/6) = 11 digits in base64 (without padding).
---
 src/docs/src/maintenance/performance.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/docs/src/maintenance/performance.rst b/src/docs/src/maintenance/performance.rst
index 63d25caa6..c0290c3c0 100644
--- a/src/docs/src/maintenance/performance.rst
+++ b/src/docs/src/maintenance/performance.rst
@@ -248,8 +248,8 @@ go from 21GB to 4GB with 10 million documents (the raw JSON text when from
 Inserting with sequential (and at least sorted) ids is faster than random ids.
 Consequently you should consider generating ids yourself, allocating them
 sequentially and using an encoding scheme that consumes fewer bytes.
-For example, something that takes 16 hex digits to represent can be done in
-4 base 62 digits (10 numerals, 26 lower case, 26 upper case).
+For example, 8 bytes will take 16 hex digits to represent, and those same
+8 bytes can be encoded in only 11 digits/chars in base64url (no padding).
 
 Views
 =====
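
For reference, here is a minimal Python sketch (not part of the commit; standard library only, names are illustrative) that checks the length arithmetic from the commit message: 8 random bytes take 16 hex digits but only 11 base64url characters once the trailing '=' padding is stripped.

    import base64
    import os

    raw = os.urandom(8)   # 8 random bytes, e.g. material for a document id

    # base16 (hex) representation: 2 digits per byte
    hex_id = raw.hex()

    # base64url representation with the trailing '=' padding removed
    b64_id = base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

    print(len(hex_id), hex_id)   # 16 characters
    print(len(b64_id), b64_id)   # 11 characters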