Posted to notifications@apisix.apache.org by GitBox <gi...@apache.org> on 2022/01/26 10:25:40 UTC

[GitHub] [apisix] bisakhmondal commented on a change in pull request #6203: feat(batchprocessor): support partial consumption of batch entries

bisakhmondal commented on a change in pull request #6203:
URL: https://github.com/apache/apisix/pull/6203#discussion_r792496732



##########
File path: apisix/utils/batch-processor.lua
##########
@@ -63,17 +63,36 @@ local function set_metrics(self, count)
 end
 
 
+local function slice_batch(batch, n)
+    local slice = {}
+    for i = n or 1, #batch, 1 do
+        slice[#slice+1] = batch[i]

Review comment:
       Oh, I see. I thought the `#` (length) operator was an O(1) operation, but it seems it gets recomputed at O(log n) on each invocation.
   Nice nitpicking :+1: 
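
The `#` cost concern above can be addressed by hoisting the length computation out of the loop header, so it is evaluated once rather than on every iteration. A minimal sketch (the extra `idx` counter also avoids recomputing `#slice` on each append):

```lua
-- Sketch: compute the batch length once instead of re-evaluating
-- the `#` operator (O(log n) on Lua tables) every iteration.
local function slice_batch(batch, n)
    local slice = {}
    local len = #batch          -- computed once, outside the loop
    local idx = 1
    for i = n or 1, len do
        slice[idx] = batch[i]
        idx = idx + 1
    end
    return slice
end
```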

##########
File path: apisix/utils/batch-processor.lua
##########
@@ -63,17 +63,36 @@ local function set_metrics(self, count)
 end
 
 
+local function slice_batch(batch, n)
+    local slice = {}
+    for i = n or 1, #batch, 1 do
+        slice[#slice+1] = batch[i]
+    end
+    return slice
+end
+
+
 function execute_func(premature, self, batch)
     if premature then
         return
     end
 
-    local ok, err = self.func(batch.entries, self.batch_max_size)
+    --- In case of "err" and a valid "n" batch processor considers, all n-1 entries have been

Review comment:
       Yes, definitely
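
To illustrate the semantics being discussed: on a partial failure, entries before index `n` are treated as consumed and only the tail is kept for retry. This is a hedged sketch, not the PR's actual implementation; the `ok, err, n` return signature of `self.func` and the omission of the timer-based retry are assumptions for illustration:

```lua
-- Assumed helper from the diff above: returns batch[n..#batch].
local function slice_batch(batch, n)
    local slice = {}
    local len = #batch
    local idx = 1
    for i = n or 1, len do
        slice[idx] = batch[i]
        idx = idx + 1
    end
    return slice
end

-- Sketch: if self.func reports an error together with a valid
-- index n, entries 1..n-1 are considered consumed, so only the
-- unprocessed tail is retained for the next retry attempt.
function execute_func(premature, self, batch)
    if premature then
        return
    end

    local ok, err, n = self.func(batch.entries, self.batch_max_size)
    if not ok and n then
        batch.entries = slice_batch(batch.entries, n)
    end
    -- retry scheduling (e.g. via a timer) omitted in this sketch
end
```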




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org