Posted to commits@doris.apache.org by "Mryange (via GitHub)" <gi...@apache.org> on 2024/04/10 10:11:48 UTC

[PR] [refine](pipelineX) refine code in agg node [doris]

Mryange opened a new pull request, #33491:
URL: https://github.com/apache/doris/pull/33491

   ## Proposed changes
   
   Issue Number: close #xxx
   
   <!--Describe your changes.-->
   
   ## Further comments
   
   If this is a relatively large or complex change, kick off the discussion at [dev@doris.apache.org](mailto:dev@doris.apache.org) by explaining why you chose the solution you did and what alternatives you considered, etc...
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "doris-robot (via GitHub)" <gi...@apache.org>.
doris-robot commented on PR #33491:
URL: https://github.com/apache/doris/pull/33491#issuecomment-2048949788

   TeamCity BE UT (backend unit test) coverage result:
    Function Coverage: 35.64% (8904/24985) 
    Line Coverage: 27.41% (73100/266686)
    Region Coverage: 26.51% (37800/142584)
    Branch Coverage: 23.27% (19269/82814)
    Coverage Report: http://coverage.selectdb-in.cc/coverage/0c55192970f49a78afe3e35da5c26b2781a79a7d_0c55192970f49a78afe3e35da5c26b2781a79a7d/report/index.html




Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "doris-robot (via GitHub)" <gi...@apache.org>.
doris-robot commented on PR #33491:
URL: https://github.com/apache/doris/pull/33491#issuecomment-2047127170

   Thank you for your contribution to Apache Doris.
   Not sure what should be done next? See [How to process your PR](https://cwiki.apache.org/confluence/display/DORIS/How+to+process+your+PR).
   
   Since 2024-03-18, the documentation has been moved to [doris-website](https://github.com/apache/doris-website).
   See [Doris Document](https://cwiki.apache.org/confluence/display/DORIS/Doris+Document).




Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "Mryange (via GitHub)" <gi...@apache.org>.
Mryange commented on PR #33491:
URL: https://github.com/apache/doris/pull/33491#issuecomment-2047131988

    run buildall




Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on code in PR #33491:
URL: https://github.com/apache/doris/pull/33491#discussion_r1559612678


##########
be/src/pipeline/exec/aggregation_sink_operator_helper.h:
##########
@@ -0,0 +1,274 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#pragma once
+
+#include <cstdint>
+#include <type_traits>
+
+#include "operator.h"
+#include "pipeline/exec/aggregation_operator_helper.h"
+#include "pipeline/pipeline_x/dependency.h"
+#include "pipeline/pipeline_x/operator.h"
+#include "runtime/block_spill_manager.h"
+#include "runtime/exec_env.h"
+#include "vec/exec/vaggregation_node.h"
+
+namespace doris {
+
+namespace pipeline {
+template <typename Derived, typename OperatorX>
+class AggSinkLocalStateHelper :  public AggLocalStateHelper<Derived, OperatorX> {
+public:
+    using AggLocalStateHelper<Derived, OperatorX>::_get_hash_table_size;
+    using AggLocalStateHelper<Derived, OperatorX>::_find_in_hash_table;
+    using AggLocalStateHelper<Derived, OperatorX>::_emplace_into_hash_table;
+    using AggLocalStateHelper<Derived, OperatorX>::_shared_state;
+    using AggLocalStateHelper<Derived, OperatorX>::_derived;
+    using AggLocalStateHelper<Derived, OperatorX>::_operator;
+    using AggLocalStateHelper<Derived, OperatorX>::_create_agg_status;
+    using AggLocalStateHelper<Derived, OperatorX>::_destroy_agg_status;
+    AggSinkLocalStateHelper(Derived * derived) : AggLocalStateHelper<Derived, OperatorX>(derived){}
+    ~AggSinkLocalStateHelper() = default;
+    Status _execute_without_key(vectorized::Block* block) {
+        DCHECK(_shared_state()->agg_data->without_key != nullptr);
+        SCOPED_TIMER(_derived()->_build_timer);
+        for (int i = 0; i < _shared_state()->aggregate_evaluators.size(); ++i) {
+            RETURN_IF_ERROR(_shared_state()->aggregate_evaluators[i]->execute_single_add(
+                    block,
+                    _shared_state()->agg_data->without_key +
+                            _operator().offsets_of_aggregate_states[i],
+                    _shared_state()->agg_arena_pool.get()));
+        }
+        return Status::OK();
+    }
+    Status _execute_with_serialized_key(vectorized::Block* block) {
+        if (_derived()->_reach_limit) {
+            return _execute_with_serialized_key_helper<true>(block);
+        } else {
+            return _execute_with_serialized_key_helper<false>(block);
+        }
+    }
+    template <bool limit>
+    Status _execute_with_serialized_key_helper(vectorized::Block* block) {
+        SCOPED_TIMER(_derived()->_build_timer);
+        auto& _probe_expr_ctxs = _shared_state()->probe_expr_ctxs;
+        auto& _places = _derived()->_places;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        auto& agg_arena_pool = _shared_state()->agg_arena_pool;
+        DCHECK(!_probe_expr_ctxs.empty());
+
+        size_t key_size = _probe_expr_ctxs.size();
+        vectorized::ColumnRawPtrs key_columns(key_size);
+        {
+            SCOPED_TIMER(_derived()->_expr_timer);
+            for (size_t i = 0; i < key_size; ++i) {
+                int result_column_id = -1;
+                RETURN_IF_ERROR(_probe_expr_ctxs[i]->execute(block, &result_column_id));
+                block->get_by_position(result_column_id).column =
+                        block->get_by_position(result_column_id)
+                                .column->convert_to_full_column_if_const();
+                key_columns[i] = block->get_by_position(result_column_id).column.get();
+            }
+        }
+
+        int rows = block->rows();
+        if (_places.size() < rows) {
+            _places.resize(rows);
+        }
+
+        if constexpr (limit) {
+            _find_in_hash_table(_places.data(), key_columns, rows);
+
+            for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
+                        block, _operator().offsets_of_aggregate_states[i], _places.data(),
+                        agg_arena_pool.get()));
+            }
+        } else {
+            _emplace_into_hash_table(_places.data(), key_columns, rows);
+
+            for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
+                        block, _operator().offsets_of_aggregate_states[i], _places.data(),
+                        agg_arena_pool.get()));
+            }
+
+            if (_derived()->_should_limit_output) {
+                _derived()->_reach_limit = _get_hash_table_size() >= _operator()._limit;
+                if (_derived()->_reach_limit && _operator()._can_short_circuit) {
+                    _derived()->_dependency->set_ready_to_read();
+                    return Status::Error<ErrorCode::END_OF_FILE>("");
+                }
+            }
+        }
+
+        return Status::OK();
+    }
+    // We should call this function only at 1st phase.
+    // 1st phase: is_merge=true, only have one SlotRef.
+    // 2nd phase: is_merge=false, maybe have multiple exprs.
+    int _get_slot_column_id(const vectorized::AggFnEvaluator* evaluator) {
+        auto ctxs = evaluator->input_exprs_ctxs();
+        CHECK(ctxs.size() == 1 && ctxs[0]->root()->is_slot_ref())
+                << "input_exprs_ctxs is invalid, input_exprs_ctx[0]="
+                << ctxs[0]->root()->debug_string();
+        return ((vectorized::VSlotRef*)ctxs[0]->root().get())->column_id();
+    }
+
+    Status _merge_without_key(vectorized::Block* block) {
+        SCOPED_TIMER(_derived()->_merge_timer);
+        auto& agg_data = _shared_state()->agg_data;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        auto& agg_arena_pool = _shared_state()->agg_arena_pool;
+        DCHECK(agg_data->without_key != nullptr);
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            if (aggregate_evaluators[i]->is_merge()) {
+                int col_id = _get_slot_column_id(aggregate_evaluators[i]);
+                auto column = block->get_by_position(col_id).column;
+                if (column->is_nullable()) {
+                    column = ((vectorized::ColumnNullable*)column.get())->get_nested_column_ptr();
+                }
+
+                SCOPED_TIMER(_derived()->_deserialize_data_timer);
+                aggregate_evaluators[i]->function()->deserialize_and_merge_from_column(
+                        agg_data->without_key + _operator().offsets_of_aggregate_states[i], *column,
+                        agg_arena_pool.get());
+            } else {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_single_add(
+                        block, agg_data->without_key + _operator().offsets_of_aggregate_states[i],
+                        agg_arena_pool.get()));
+            }
+        }
+        return Status::OK();
+    }
+
+    Status _merge_with_serialized_key(vectorized::Block* block) {
+        if (_derived()->_reach_limit) {
+            return _merge_with_serialized_key_helper<true, false>(block);
+        } else {
+            return _merge_with_serialized_key_helper<false, false>(block);
+        }
+    }
+
+    template <bool limit, bool for_spill>
+    Status _merge_with_serialized_key_helper(vectorized::Block* block) {

Review Comment:
   warning: function '_merge_with_serialized_key_helper' exceeds recommended size/complexity thresholds [readability-function-size]
   ```cpp
       Status _merge_with_serialized_key_helper(vectorized::Block* block) {
              ^
   ```
   <details>
   <summary>Additional context</summary>
   
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:168:** 101 lines including whitespace and comments (threshold 80)
   ```cpp
       Status _merge_with_serialized_key_helper(vectorized::Block* block) {
              ^
   ```
   
   </details>
   



##########
be/src/pipeline/exec/aggregation_sink_operator_helper.h:
##########
[Quoted hunk identical to the @@ -0,0 +1,274 @@ hunk shown in the previous comment.]

Review Comment:
   warning: function '_merge_with_serialized_key_helper' has cognitive complexity of 68 (threshold 50) [readability-function-cognitive-complexity]
   ```cpp
       Status _merge_with_serialized_key_helper(vectorized::Block* block) {
              ^
   ```
   <details>
   <summary>Additional context</summary>
   
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:179:** +1, including nesting penalty of 0, nesting level increased to 1
   ```cpp
           for (size_t i = 0; i < key_size; ++i) {
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:180:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               if constexpr (for_spill) {
               ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:182:** +1, nesting level increased to 2
   ```cpp
               } else {
                 ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:184:** +3, including nesting penalty of 2, nesting level increased to 3
   ```cpp
                   RETURN_IF_ERROR(probe_expr_ctxs[i]->execute(block, &result_column_id));
                   ^
   ```
   **be/src/common/status.h:541:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
       do {                                \
       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:184:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                   RETURN_IF_ERROR(probe_expr_ctxs[i]->execute(block, &result_column_id));
                   ^
   ```
   **be/src/common/status.h:543:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
           if (UNLIKELY(!_status_.ok())) { \
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:191:** +1, including nesting penalty of 0, nesting level increased to 1
   ```cpp
           if (_places.size() < rows) {
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:195:** +1, including nesting penalty of 0, nesting level increased to 1
   ```cpp
           if constexpr (limit) {
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:198:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               for (int i = 0; i < aggregate_evaluators.size(); ++i) {
               ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:199:** +3, including nesting penalty of 2, nesting level increased to 3
   ```cpp
                   if (aggregate_evaluators[i]->is_merge()) {
                   ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:202:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (column->is_nullable()) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:208:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (_deserialize_buffer.size() < buffer_size) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:220:** +1, nesting level increased to 3
   ```cpp
                   } else {
                     ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:221:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
                       ^
   ```
   **be/src/common/status.h:541:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
       do {                                \
       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:221:** +5, including nesting penalty of 4, nesting level increased to 5
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
                       ^
   ```
   **be/src/common/status.h:543:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
           if (UNLIKELY(!_status_.ok())) { \
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:226:** +1, nesting level increased to 1
   ```cpp
           } else {
             ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:229:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               for (int i = 0; i < aggregate_evaluators.size(); ++i) {
               ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:230:** +3, including nesting penalty of 2, nesting level increased to 3
   ```cpp
                   if (aggregate_evaluators[i]->is_merge() || for_spill) {
                   ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:230:** +1
   ```cpp
                   if (aggregate_evaluators[i]->is_merge() || for_spill) {
                                                           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:232:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if constexpr (for_spill) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:234:** +1, nesting level increased to 4
   ```cpp
                       } else {
                         ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:238:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (column->is_nullable()) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:244:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (_deserialize_buffer.size() < buffer_size) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:256:** +1, nesting level increased to 3
   ```cpp
                   } else {
                     ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:257:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
                       ^
   ```
   **be/src/common/status.h:541:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
       do {                                \
       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:257:** +5, including nesting penalty of 4, nesting level increased to 5
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
                       ^
   ```
   **be/src/common/status.h:543:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
           if (UNLIKELY(!_status_.ok())) { \
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:263:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               if (_derived()->_should_limit_output) {
               ^
   ```
   
   </details>
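
For context on this check: the cognitive-complexity score grows with every additional nesting level, so the usual remedy is to pull deeply nested branches into small named helpers rather than to shorten individual lines. Below is a minimal, hypothetical C++ sketch of that refactoring pattern; the `Status`, `Evaluator`, `merge_one_evaluator`, and `merge_block` names are illustrative stand-ins and are not types or functions from the Doris code base.

```cpp
#include <vector>

// Illustrative stand-ins only; these are not types from the Doris code base.
struct Status {
    bool ok = true;
    static Status OK() { return {}; }
};

struct Evaluator {
    bool is_merge = false;
};

// Extracting the per-evaluator branching into a named helper keeps the
// caller below at a single nesting level, which is what lowers the
// cognitive-complexity score.
Status merge_one_evaluator(const Evaluator& eval) {
    if (eval.is_merge) {
        // deserialize-and-merge path
        return Status::OK();
    }
    // plain execute/add path
    return Status::OK();
}

Status merge_block(const std::vector<Evaluator>& evals) {
    for (const auto& eval : evals) {
        Status s = merge_one_evaluator(eval);
        if (!s.ok) {
            return s;
        }
    }
    return Status::OK();
}

int main() {
    std::vector<Evaluator> evals(2);
    return merge_block(evals).ok ? 0 : 1;
}
```

Each extracted branch also becomes a function that can be named, reused, and tested on its own, which is the point of the readability checks triggered above.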
   



##########
be/src/pipeline/exec/aggregation_source_operator_helper.h:
##########
@@ -0,0 +1,45 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+#pragma once
+
+#include <stdint.h>

Review Comment:
   warning: inclusion of deprecated C++ header 'stdint.h'; consider using 'cstdint' instead [modernize-deprecated-headers]
   
   ```suggestion
   #include <cstdint>
   ```
   





Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on code in PR #33491:
URL: https://github.com/apache/doris/pull/33491#discussion_r1560436890


##########
be/src/pipeline/exec/aggregation_sink_operator.h:
##########
@@ -138,6 +123,12 @@ class AggSinkLocalState : public PipelineXSinkLocalState<AggSharedState> {
     vectorized::Arena* _agg_arena_pool = nullptr;
 
     std::unique_ptr<ExecutorBase> _executor = nullptr;
+    struct MemoryRecord {
+        MemoryRecord() : used_in_arena(0), used_in_state(0) {}
+        int64_t used_in_arena;

Review Comment:
   warning: use default member initializer for 'used_in_arena' [modernize-use-default-member-init]
   
   be/src/pipeline/exec/aggregation_sink_operator.h:126:
   ```diff
   -         MemoryRecord() : used_in_arena(0), used_in_state(0) {}
   -         int64_t used_in_arena;
   +         MemoryRecord() : , used_in_state(0) {}
   +         int64_t used_in_arena{0};
   ```
   



##########
be/src/pipeline/exec/aggregation_sink_operator.h:
##########
@@ -138,6 +123,12 @@
     vectorized::Arena* _agg_arena_pool = nullptr;
 
     std::unique_ptr<ExecutorBase> _executor = nullptr;
+    struct MemoryRecord {
+        MemoryRecord() : used_in_arena(0), used_in_state(0) {}
+        int64_t used_in_arena;
+        int64_t used_in_state;

Review Comment:
   warning: use default member initializer for 'used_in_state' [modernize-use-default-member-init]
   
   be/src/pipeline/exec/aggregation_sink_operator.h:126:
   ```diff
   -         MemoryRecord() : used_in_arena(0), used_in_state(0) {}
   +         MemoryRecord() : used_in_arena(0), {}
   ```
   
   ```suggestion
           int64_t used_in_state{0};
   ```
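
Read literally, the two per-field diffs above do not splice together (the first one leaves a dangling comma in the initializer list). If both modernize-use-default-member-init hints are applied at once, the struct would presumably end up as the sketch below; this is an assumed combined result, not code taken from the PR.

```cpp
#include <cstdint>

// Both members get default member initializers, so the hand-written
// constructor can be dropped entirely.
struct MemoryRecord {
    int64_t used_in_arena = 0;
    int64_t used_in_state = 0;
};
```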
   



##########
be/src/pipeline/exec/aggregation_sink_operator_helper.h:
##########
@@ -0,0 +1,310 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#pragma once
+
+#include <cstdint>
+#include <type_traits>
+
+#include "operator.h"
+#include "pipeline/exec/aggregation_operator_helper.h"
+#include "pipeline/pipeline_x/dependency.h"
+#include "pipeline/pipeline_x/operator.h"
+#include "runtime/block_spill_manager.h"
+#include "runtime/exec_env.h"
+#include "vec/exec/vaggregation_node.h"
+
+namespace doris {
+
+namespace pipeline {
+template <typename Derived, typename OperatorX>
+class AggSinkLocalStateHelper : public AggLocalStateHelper<Derived, OperatorX> {
+public:
+    AggSinkLocalStateHelper(Derived* derived) : AggLocalStateHelper<Derived, OperatorX>(derived) {}
+    ~AggSinkLocalStateHelper() = default;
+    using AggLocalStateHelper<Derived, OperatorX>::_get_hash_table_size;
+    using AggLocalStateHelper<Derived, OperatorX>::_find_in_hash_table;
+    using AggLocalStateHelper<Derived, OperatorX>::_emplace_into_hash_table;
+    using AggLocalStateHelper<Derived, OperatorX>::_shared_state;
+    using AggLocalStateHelper<Derived, OperatorX>::_derived;
+    using AggLocalStateHelper<Derived, OperatorX>::_operator;
+    using AggLocalStateHelper<Derived, OperatorX>::_create_agg_status;
+    using AggLocalStateHelper<Derived, OperatorX>::_destroy_agg_status;
+    using AggLocalStateHelper<Derived, OperatorX>::_init_hash_method;
+
+    Status _execute_without_key(vectorized::Block* block) {
+        DCHECK(_shared_state()->agg_data->without_key != nullptr);
+        SCOPED_TIMER(_derived()->_build_timer);
+        for (int i = 0; i < _shared_state()->aggregate_evaluators.size(); ++i) {
+            RETURN_IF_ERROR(_shared_state()->aggregate_evaluators[i]->execute_single_add(
+                    block,
+                    _shared_state()->agg_data->without_key +
+                            _operator().offsets_of_aggregate_states[i],
+                    _shared_state()->agg_arena_pool.get()));
+        }
+        return Status::OK();
+    }
+    Status _execute_with_serialized_key(vectorized::Block* block) {
+        if (_derived()->_reach_limit) {
+            return _execute_with_serialized_key_helper<true>(block);
+        } else {
+            return _execute_with_serialized_key_helper<false>(block);
+        }
+    }
+    template <bool limit>
+    Status _execute_with_serialized_key_helper(vectorized::Block* block) {
+        SCOPED_TIMER(_derived()->_build_timer);
+        auto& _probe_expr_ctxs = _shared_state()->probe_expr_ctxs;
+        auto& _places = _derived()->_places;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        auto& agg_arena_pool = _shared_state()->agg_arena_pool;
+        DCHECK(!_probe_expr_ctxs.empty());
+
+        size_t key_size = _probe_expr_ctxs.size();
+        vectorized::ColumnRawPtrs key_columns(key_size);
+        {
+            SCOPED_TIMER(_derived()->_expr_timer);
+            for (size_t i = 0; i < key_size; ++i) {
+                int result_column_id = -1;
+                RETURN_IF_ERROR(_probe_expr_ctxs[i]->execute(block, &result_column_id));
+                block->get_by_position(result_column_id).column =
+                        block->get_by_position(result_column_id)
+                                .column->convert_to_full_column_if_const();
+                key_columns[i] = block->get_by_position(result_column_id).column.get();
+            }
+        }
+
+        int rows = block->rows();
+        if (_places.size() < rows) {
+            _places.resize(rows);
+        }
+
+        if constexpr (limit) {
+            _find_in_hash_table(_places.data(), key_columns, rows);
+
+            for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
+                        block, _operator().offsets_of_aggregate_states[i], _places.data(),
+                        agg_arena_pool.get()));
+            }
+        } else {
+            _emplace_into_hash_table(_places.data(), key_columns, rows);
+
+            for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
+                        block, _operator().offsets_of_aggregate_states[i], _places.data(),
+                        agg_arena_pool.get()));
+            }
+
+            if (_derived()->_should_limit_output) {
+                _derived()->_reach_limit = _get_hash_table_size() >= _operator()._limit;
+                if (_derived()->_reach_limit && _operator()._can_short_circuit) {
+                    _derived()->_dependency->set_ready_to_read();
+                    return Status::Error<ErrorCode::END_OF_FILE>("");
+                }
+            }
+        }
+
+        return Status::OK();
+    }
+    // We should call this function only at 1st phase.
+    // 1st phase: is_merge=true, only have one SlotRef.
+    // 2nd phase: is_merge=false, maybe have multiple exprs.
+    int _get_slot_column_id(const vectorized::AggFnEvaluator* evaluator) {
+        auto ctxs = evaluator->input_exprs_ctxs();
+        CHECK(ctxs.size() == 1 && ctxs[0]->root()->is_slot_ref())
+                << "input_exprs_ctxs is invalid, input_exprs_ctx[0]="
+                << ctxs[0]->root()->debug_string();
+        return ((vectorized::VSlotRef*)ctxs[0]->root().get())->column_id();
+    }
+
+    Status _merge_without_key(vectorized::Block* block) {
+        SCOPED_TIMER(_derived()->_merge_timer);
+        auto& agg_data = _shared_state()->agg_data;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        auto& agg_arena_pool = _shared_state()->agg_arena_pool;
+        DCHECK(agg_data->without_key != nullptr);
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            if (aggregate_evaluators[i]->is_merge()) {
+                int col_id = _get_slot_column_id(aggregate_evaluators[i]);
+                auto column = block->get_by_position(col_id).column;
+                if (column->is_nullable()) {
+                    column = ((vectorized::ColumnNullable*)column.get())->get_nested_column_ptr();
+                }
+
+                SCOPED_TIMER(_derived()->_deserialize_data_timer);
+                aggregate_evaluators[i]->function()->deserialize_and_merge_from_column(
+                        agg_data->without_key + _operator().offsets_of_aggregate_states[i], *column,
+                        agg_arena_pool.get());
+            } else {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_single_add(
+                        block, agg_data->without_key + _operator().offsets_of_aggregate_states[i],
+                        agg_arena_pool.get()));
+            }
+        }
+        return Status::OK();
+    }
+
+    Status _merge_with_serialized_key(vectorized::Block* block) {
+        if (_derived()->_reach_limit) {
+            return _merge_with_serialized_key_helper<true, false>(block);
+        } else {
+            return _merge_with_serialized_key_helper<false, false>(block);
+        }
+    }
+
+    template <bool limit, bool for_spill>
+    Status _merge_with_serialized_key_helper(vectorized::Block* block) {

Review Comment:
   warning: function '_merge_with_serialized_key_helper' exceeds recommended size/complexity thresholds [readability-function-size]
   ```cpp
       Status _merge_with_serialized_key_helper(vectorized::Block* block) {
              ^
   ```
   <details>
   <summary>Additional context</summary>
   
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:170:** 101 lines including whitespace and comments (threshold 80)
   ```cpp
       Status _merge_with_serialized_key_helper(vectorized::Block* block) {
              ^
   ```
   
   </details>
   



##########
be/src/pipeline/exec/aggregation_sink_operator_helper.h:
##########
[Quoted hunk identical to the @@ -0,0 +1,310 @@ hunk shown in the previous comment.]

Review Comment:
   warning: function '_merge_with_serialized_key_helper' has cognitive complexity of 68 (threshold 50) [readability-function-cognitive-complexity]
   ```cpp
       Status _merge_with_serialized_key_helper(vectorized::Block* block) {
              ^
   ```
   <details>
   <summary>Additional context</summary>
   
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:181:** +1, including nesting penalty of 0, nesting level increased to 1
   ```cpp
           for (size_t i = 0; i < key_size; ++i) {
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:182:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               if constexpr (for_spill) {
               ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:184:** +1, nesting level increased to 2
   ```cpp
               } else {
                 ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:186:** +3, including nesting penalty of 2, nesting level increased to 3
   ```cpp
                   RETURN_IF_ERROR(probe_expr_ctxs[i]->execute(block, &result_column_id));
                   ^
   ```
   **be/src/common/status.h:541:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
       do {                                \
       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:186:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                   RETURN_IF_ERROR(probe_expr_ctxs[i]->execute(block, &result_column_id));
                   ^
   ```
   **be/src/common/status.h:543:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
           if (UNLIKELY(!_status_.ok())) { \
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:193:** +1, including nesting penalty of 0, nesting level increased to 1
   ```cpp
           if (_places.size() < rows) {
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:197:** +1, including nesting penalty of 0, nesting level increased to 1
   ```cpp
           if constexpr (limit) {
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:200:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               for (int i = 0; i < aggregate_evaluators.size(); ++i) {
               ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:201:** +3, including nesting penalty of 2, nesting level increased to 3
   ```cpp
                   if (aggregate_evaluators[i]->is_merge()) {
                   ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:204:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (column->is_nullable()) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:210:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (_deserialize_buffer.size() < buffer_size) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:222:** +1, nesting level increased to 3
   ```cpp
                   } else {
                     ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:223:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
                       ^
   ```
   **be/src/common/status.h:541:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
       do {                                \
       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:223:** +5, including nesting penalty of 4, nesting level increased to 5
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
                       ^
   ```
   **be/src/common/status.h:543:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
           if (UNLIKELY(!_status_.ok())) { \
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:228:** +1, nesting level increased to 1
   ```cpp
           } else {
             ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:231:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               for (int i = 0; i < aggregate_evaluators.size(); ++i) {
               ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:232:** +3, including nesting penalty of 2, nesting level increased to 3
   ```cpp
                   if (aggregate_evaluators[i]->is_merge() || for_spill) {
                   ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:232:** +1
   ```cpp
                   if (aggregate_evaluators[i]->is_merge() || for_spill) {
                                                           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:234:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if constexpr (for_spill) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:236:** +1, nesting level increased to 4
   ```cpp
                       } else {
                         ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:240:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (column->is_nullable()) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:246:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (_deserialize_buffer.size() < buffer_size) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:258:** +1, nesting level increased to 3
   ```cpp
                   } else {
                     ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:259:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
                       ^
   ```
   **be/src/common/status.h:541:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
       do {                                \
       ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:259:** +5, including nesting penalty of 4, nesting level increased to 5
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
                       ^
   ```
   **be/src/common/status.h:543:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
           if (UNLIKELY(!_status_.ok())) { \
           ^
   ```
   **be/src/pipeline/exec/aggregation_sink_operator_helper.h:265:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               if (_derived()->_should_limit_output) {
               ^
   ```
   
   </details>
   



##########
be/src/pipeline/exec/aggregation_source_operator_helper.h:
##########
@@ -0,0 +1,372 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+#pragma once
+
+#include <stdint.h>
+
+#include "common/status.h"
+#include "operator.h"
+#include "pipeline/exec/aggregation_operator_helper.h"
+#include "pipeline/pipeline_x/operator.h"
+#include "vec/exec/vaggregation_node.h"
+namespace doris {
+
+namespace pipeline {
+template <typename Derived, typename OperatorX>
+class AggSourceLocalStateHelper : public AggLocalStateHelper<Derived, OperatorX> {
+public:
+    ~AggSourceLocalStateHelper() = default;
+    AggSourceLocalStateHelper(Derived* derived)
+            : AggLocalStateHelper<Derived, OperatorX>(derived) {}
+    using AggLocalStateHelper<Derived, OperatorX>::_get_hash_table_size;
+    using AggLocalStateHelper<Derived, OperatorX>::_find_in_hash_table;
+    using AggLocalStateHelper<Derived, OperatorX>::_emplace_into_hash_table;
+    using AggLocalStateHelper<Derived, OperatorX>::_shared_state;
+    using AggLocalStateHelper<Derived, OperatorX>::_derived;
+    using AggLocalStateHelper<Derived, OperatorX>::_operator;
+    using AggLocalStateHelper<Derived, OperatorX>::_create_agg_status;
+    using AggLocalStateHelper<Derived, OperatorX>::_destroy_agg_status;
+    using AggLocalStateHelper<Derived, OperatorX>::_init_hash_method;
+
+    Status _get_without_key_result(RuntimeState* state, vectorized::Block* block, bool* eos) {
+        auto& agg_data = _shared_state()->agg_data;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        DCHECK(agg_data->without_key != nullptr);
+        block->clear();
+
+        auto& p = _operator();
+        *block = vectorized::VectorizedUtils::create_empty_columnswithtypename(p.row_descriptor());
+        int agg_size = aggregate_evaluators.size();
+
+        vectorized::MutableColumns columns(agg_size);
+        std::vector<vectorized::DataTypePtr> data_types(agg_size);
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            data_types[i] = aggregate_evaluators[i]->function()->get_return_type();
+            columns[i] = data_types[i]->create_column();
+        }
+
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            auto* column = columns[i].get();
+            aggregate_evaluators[i]->insert_result_info(
+                    agg_data->without_key + _shared_state()->offsets_of_aggregate_states[i],
+                    column);
+        }
+
+        const auto& block_schema = block->get_columns_with_type_and_name();
+        DCHECK_EQ(block_schema.size(), columns.size());
+        for (int i = 0; i < block_schema.size(); ++i) {
+            const auto column_type = block_schema[i].type;
+            if (!column_type->equals(*data_types[i])) {
+                if (!vectorized::is_array(remove_nullable(column_type))) {
+                    if (!column_type->is_nullable() || data_types[i]->is_nullable() ||
+                        !remove_nullable(column_type)->equals(*data_types[i])) {
+                        return Status::InternalError(
+                                "node id = {}, column_type not match data_types, column_type={}, "
+                                "data_types={}",
+                                p.node_id(), column_type->get_name(), data_types[i]->get_name());
+                    }
+                }
+
+                if (column_type->is_nullable() && !data_types[i]->is_nullable()) {
+                    vectorized::ColumnPtr ptr = std::move(columns[i]);
+                    // except for `count`, aggregate functions should produce NULL for an empty set,
+                    // so check whether the child returned any rows
+                    ptr = make_nullable(ptr, _shared_state()->input_num_rows == 0);
+                    columns[i] = ptr->assume_mutable();
+                }
+            }
+        }
+
+        block->set_columns(std::move(columns));
+        *eos = true;
+        return Status::OK();
+    }
+
+    Status _get_with_serialized_key_result(RuntimeState* state, vectorized::Block* block,
+                                           bool* eos) {
+        auto& p = _operator();
+        auto& _values = _derived()->_values;
+        auto& aggregate_data_container = _shared_state()->aggregate_data_container;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        auto& agg_data = _shared_state()->agg_data;
+        // non-nullable columns (ids in `_make_nullable_keys`) will be converted to nullable.
+        bool mem_reuse = p._make_nullable_keys.empty() && block->mem_reuse();
+
+        auto columns_with_schema =
+                vectorized::VectorizedUtils::create_columns_with_type_and_name(p.row_descriptor());
+        int key_size = _shared_state()->probe_expr_ctxs.size();
+
+        vectorized::MutableColumns key_columns;
+        for (int i = 0; i < key_size; ++i) {
+            if (!mem_reuse) {
+                key_columns.emplace_back(columns_with_schema[i].type->create_column());
+            } else {
+                key_columns.emplace_back(std::move(*block->get_by_position(i).column).mutate());
+            }
+        }
+        vectorized::MutableColumns value_columns;
+        for (int i = key_size; i < columns_with_schema.size(); ++i) {
+            if (!mem_reuse) {
+                value_columns.emplace_back(columns_with_schema[i].type->create_column());
+            } else {
+                value_columns.emplace_back(std::move(*block->get_by_position(i).column).mutate());
+            }
+        }
+
+        SCOPED_TIMER(_derived()->_get_results_timer);
+        std::visit(
+                [&](auto&& agg_method) -> void {
+                    auto& data = *agg_method.hash_table;
+                    agg_method.init_iterator();
+                    const auto size = std::min(data.size(), size_t(state->batch_size()));
+                    using KeyType = std::decay_t<decltype(agg_method.iterator->get_first())>;
+                    std::vector<KeyType> keys(size);
+                    if (_values.size() < size) {
+                        _values.resize(size);
+                    }
+
+                    size_t num_rows = 0;
+                    aggregate_data_container->init_once();
+                    auto& iter = aggregate_data_container->iterator;
+
+                    {
+                        SCOPED_TIMER(_derived()->_hash_table_iterate_timer);
+                        while (iter != aggregate_data_container->end() &&
+                               num_rows < state->batch_size()) {
+                            keys[num_rows] = iter.template get_key<KeyType>();
+                            _values[num_rows] = iter.get_aggregate_data();
+                            ++iter;
+                            ++num_rows;
+                        }
+                    }
+
+                    {
+                        SCOPED_TIMER(_derived()->_insert_keys_to_column_timer);
+                        agg_method.insert_keys_into_columns(keys, key_columns, num_rows);
+                    }
+
+                    for (size_t i = 0; i < aggregate_evaluators.size(); ++i) {
+                        aggregate_evaluators[i]->insert_result_info_vec(
+                                _values, _shared_state()->offsets_of_aggregate_states[i],
+                                value_columns[i].get(), num_rows);
+                    }
+
+                    if (iter == aggregate_data_container->end()) {
+                        if (agg_method.hash_table->has_null_key_data()) {
+                            // only a group by with a single key supports wrapping a null key,
+                            // so the null key / value needs additional handling here
+                            DCHECK(key_columns.size() == 1);
+                            DCHECK(key_columns[0]->is_nullable());
+                            if (key_columns[0]->size() < state->batch_size()) {
+                                key_columns[0]->insert_data(nullptr, 0);
+                                auto mapped = agg_method.hash_table->template get_null_key_data<
+                                        vectorized::AggregateDataPtr>();
+                                for (size_t i = 0; i < aggregate_evaluators.size(); ++i) {
+                                    aggregate_evaluators[i]->insert_result_info(
+                                            mapped +
+                                                    _shared_state()->offsets_of_aggregate_states[i],
+                                            value_columns[i].get());
+                                }
+                                *eos = true;
+                            }
+                        } else {
+                            *eos = true;
+                        }
+                    }
+                },
+                agg_data->method_variant);
+
+        if (!mem_reuse) {
+            *block = columns_with_schema;
+            vectorized::MutableColumns columns(block->columns());
+            for (int i = 0; i < block->columns(); ++i) {
+                if (i < key_size) {
+                    columns[i] = std::move(key_columns[i]);
+                } else {
+                    columns[i] = std::move(value_columns[i - key_size]);
+                }
+            }
+            block->set_columns(std::move(columns));
+        }
+
+        return Status::OK();
+    }
+
+    Status _serialize_without_key(RuntimeState* state, vectorized::Block* block, bool* eos) {
+        // 1. `child(0)->rows_returned() == 0` means no data came from the child:
+        //    a second-level aggregation node should return a NULL result, while
+        //    a first-level aggregation node sets `eos = true` and returns directly
+        auto& agg_data = _shared_state()->agg_data;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        SCOPED_TIMER(_derived()->_serialize_result_timer);
+        if (UNLIKELY(_shared_state()->input_num_rows == 0)) {
+            *eos = true;
+            return Status::OK();
+        }
+        block->clear();
+
+        DCHECK(agg_data->without_key != nullptr);
+        int agg_size = aggregate_evaluators.size();
+
+        vectorized::MutableColumns value_columns(agg_size);
+        std::vector<vectorized::DataTypePtr> data_types(agg_size);
+        // will serialize data to string column
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            data_types[i] = aggregate_evaluators[i]->function()->get_serialized_type();
+            value_columns[i] = aggregate_evaluators[i]->function()->create_serialize_column();
+        }
+
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            aggregate_evaluators[i]->function()->serialize_without_key_to_column(
+                    agg_data->without_key + _shared_state()->offsets_of_aggregate_states[i],
+                    *value_columns[i]);
+        }
+
+        {
+            vectorized::ColumnsWithTypeAndName data_with_schema;
+            for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+                vectorized::ColumnWithTypeAndName column_with_schema = {nullptr, data_types[i], ""};
+                data_with_schema.push_back(std::move(column_with_schema));
+            }
+            *block = vectorized::Block(data_with_schema);
+        }
+
+        block->set_columns(std::move(value_columns));
+        *eos = true;
+        return Status::OK();
+    }
+
+    Status _serialize_with_serialized_key_result(RuntimeState* state, vectorized::Block* block,

Review Comment:
   warning: function '_serialize_with_serialized_key_result' exceeds recommended size/complexity thresholds [readability-function-size]
   ```cpp
       Status _serialize_with_serialized_key_result(RuntimeState* state, vectorized::Block* block,
              ^
   ```
   <details>
   <summary>Additional context</summary>
   
   **be/src/pipeline/exec/aggregation_source_operator_helper.h:252:** 114 lines including whitespace and comments (threshold 80)
   ```cpp
       Status _serialize_with_serialized_key_result(RuntimeState* state, vectorized::Block* block,
              ^
   ```
   
   </details>
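
One common way to bring such a function back under the 80-line threshold is to split it into a column-preparation stage and a row-emission stage. A rough, hypothetical sketch of that shape (the class, names, and types below are invented for illustration and are not part of this PR):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical stand-ins: the real function works on Doris MutableColumns and
// the aggregation hash table, not on these simplified types.
using Column = std::vector<std::string>;
using Columns = std::vector<Column>;

class SerializedKeyResultBuilder {
public:
    // The oversized function did everything inline; splitting it into two
    // stages keeps each piece comfortably under the line threshold.
    Columns build(std::size_t num_keys, std::size_t num_values, std::size_t num_rows) const {
        Columns out = prepare_columns(num_keys + num_values);
        emit_rows(out, num_rows);
        return out;
    }

private:
    // Stage 1: allocate one output column per key / value slot.
    static Columns prepare_columns(std::size_t total) { return Columns(total); }

    // Stage 2: append the serialized rows (placeholder payload here).
    static void emit_rows(Columns& out, std::size_t num_rows) {
        for (auto& col : out) {
            col.assign(num_rows, "serialized");
        }
    }
};
```

Whether such a split is worthwhile here depends on how much local state the two stages would need to share.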
   



##########
be/src/pipeline/exec/aggregation_source_operator_helper.h:
##########
@@ -0,0 +1,372 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+#pragma once
+
+#include <stdint.h>
+
+#include "common/status.h"
+#include "operator.h"
+#include "pipeline/exec/aggregation_operator_helper.h"
+#include "pipeline/pipeline_x/operator.h"
+#include "vec/exec/vaggregation_node.h"
+namespace doris {
+
+namespace pipeline {
+template <typename Derived, typename OperatorX>
+class AggSourceLocalStateHelper : public AggLocalStateHelper<Derived, OperatorX> {
+public:
+    ~AggSourceLocalStateHelper() = default;
+    AggSourceLocalStateHelper(Derived* derived)
+            : AggLocalStateHelper<Derived, OperatorX>(derived) {}
+    using AggLocalStateHelper<Derived, OperatorX>::_get_hash_table_size;
+    using AggLocalStateHelper<Derived, OperatorX>::_find_in_hash_table;
+    using AggLocalStateHelper<Derived, OperatorX>::_emplace_into_hash_table;
+    using AggLocalStateHelper<Derived, OperatorX>::_shared_state;
+    using AggLocalStateHelper<Derived, OperatorX>::_derived;
+    using AggLocalStateHelper<Derived, OperatorX>::_operator;
+    using AggLocalStateHelper<Derived, OperatorX>::_create_agg_status;
+    using AggLocalStateHelper<Derived, OperatorX>::_destroy_agg_status;
+    using AggLocalStateHelper<Derived, OperatorX>::_init_hash_method;
+
+    Status _get_without_key_result(RuntimeState* state, vectorized::Block* block, bool* eos) {
+        auto& agg_data = _shared_state()->agg_data;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        DCHECK(agg_data->without_key != nullptr);
+        block->clear();
+
+        auto& p = _operator();
+        *block = vectorized::VectorizedUtils::create_empty_columnswithtypename(p.row_descriptor());
+        int agg_size = aggregate_evaluators.size();
+
+        vectorized::MutableColumns columns(agg_size);
+        std::vector<vectorized::DataTypePtr> data_types(agg_size);
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            data_types[i] = aggregate_evaluators[i]->function()->get_return_type();
+            columns[i] = data_types[i]->create_column();
+        }
+
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            auto* column = columns[i].get();
+            aggregate_evaluators[i]->insert_result_info(
+                    agg_data->without_key + _shared_state()->offsets_of_aggregate_states[i],
+                    column);
+        }
+
+        const auto& block_schema = block->get_columns_with_type_and_name();
+        DCHECK_EQ(block_schema.size(), columns.size());
+        for (int i = 0; i < block_schema.size(); ++i) {
+            const auto column_type = block_schema[i].type;
+            if (!column_type->equals(*data_types[i])) {
+                if (!vectorized::is_array(remove_nullable(column_type))) {
+                    if (!column_type->is_nullable() || data_types[i]->is_nullable() ||
+                        !remove_nullable(column_type)->equals(*data_types[i])) {
+                        return Status::InternalError(
+                                "node id = {}, column_type not match data_types, column_type={}, "
+                                "data_types={}",
+                                p.node_id(), column_type->get_name(), data_types[i]->get_name());
+                    }
+                }
+
+                if (column_type->is_nullable() && !data_types[i]->is_nullable()) {
+                    vectorized::ColumnPtr ptr = std::move(columns[i]);
+                    // except for `count`, aggregate functions should produce NULL for an empty set,
+                    // so check whether the child returned any rows
+                    ptr = make_nullable(ptr, _shared_state()->input_num_rows == 0);
+                    columns[i] = ptr->assume_mutable();
+                }
+            }
+        }
+
+        block->set_columns(std::move(columns));
+        *eos = true;
+        return Status::OK();
+    }
+
+    Status _get_with_serialized_key_result(RuntimeState* state, vectorized::Block* block,

Review Comment:
   warning: function '_get_with_serialized_key_result' exceeds recommended size/complexity thresholds [readability-function-size]
   ```cpp
       Status _get_with_serialized_key_result(RuntimeState* state, vectorized::Block* block,
              ^
   ```
   <details>
   <summary>Additional context</summary>
   
   **be/src/pipeline/exec/aggregation_source_operator_helper.h:98:** 107 lines including whitespace and comments (threshold 80)
   ```cpp
       Status _get_with_serialized_key_result(RuntimeState* state, vectorized::Block* block,
              ^
   ```
   
   </details>
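
Part of the length comes from the two near-identical loops that build `key_columns` and `value_columns` over adjacent slices of the output schema. A simplified, standalone sketch (hypothetical `Column` / `make_columns` stand-ins, not the Doris `MutableColumns` API) of folding them into one helper:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical stand-ins for the block's columns; not the Doris MutableColumns API.
struct Column {};
using MutableColumns = std::vector<std::unique_ptr<Column>>;

// One helper can replace the two near-identical loops that build key_columns
// ([0, key_size)) and value_columns ([key_size, total)) from the output schema.
MutableColumns make_columns(std::size_t begin, std::size_t end, bool mem_reuse,
                            MutableColumns& reusable) {
    MutableColumns cols;
    for (std::size_t i = begin; i < end; ++i) {
        if (mem_reuse) {
            cols.emplace_back(std::move(reusable[i]));      // take over the block's column
        } else {
            cols.emplace_back(std::make_unique<Column>());  // create a fresh column
        }
    }
    return cols;
}

int main() {
    MutableColumns existing(5);
    for (auto& c : existing) {
        c = std::make_unique<Column>();
    }
    const std::size_t key_size = 2;
    auto key_columns = make_columns(0, key_size, /*mem_reuse=*/true, existing);
    auto value_columns = make_columns(key_size, existing.size(), /*mem_reuse=*/false, existing);
    return (key_columns.size() == 2 && value_columns.size() == 3) ? 0 : 1;
}
```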
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on code in PR #33491:
URL: https://github.com/apache/doris/pull/33491#discussion_r1559197396


##########
be/src/pipeline/exec/aggregation_operator_helper.h:
##########
@@ -0,0 +1,382 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#pragma once
+
+#include <cstdint>
+#include <type_traits>
+
+#include "operator.h"
+#include "pipeline/pipeline_x/dependency.h"
+#include "pipeline/pipeline_x/operator.h"
+#include "runtime/block_spill_manager.h"
+#include "runtime/exec_env.h"
+#include "vec/exec/vaggregation_node.h"
+
+namespace doris {
+class ExecNode;
+
+namespace pipeline {
+template <typename Derived, typename OperatorX>
+class AggSinkLocalStateHelper {
+public:
+    ~AggSinkLocalStateHelper() = default;
+    using LocalState = Derived;
+    void _find_in_hash_table(vectorized::AggregateDataPtr* places,
+                             vectorized::ColumnRawPtrs& key_columns, size_t num_rows) {
+        std::visit(
+                [&](auto&& agg_method) -> void {
+                    using HashMethodType = std::decay_t<decltype(agg_method)>;
+                    using AggState = typename HashMethodType::State;
+                    AggState state(key_columns);
+                    agg_method.init_serialized_keys(key_columns, num_rows);
+
+                    /// For all rows.
+                    for (size_t i = 0; i < num_rows; ++i) {
+                        auto find_result = agg_method.find(state, i);
+
+                        if (find_result.is_found()) {
+                            places[i] = find_result.get_mapped();
+                        } else {
+                            places[i] = nullptr;
+                        }
+                    }
+                },
+                _shared_state()->agg_data->method_variant);
+    }
+    Status _create_agg_status(vectorized::AggregateDataPtr data) {
+        auto& shared_state = *_shared_state();
+        for (int i = 0; i < shared_state.aggregate_evaluators.size(); ++i) {
+            try {
+                shared_state.aggregate_evaluators[i]->create(
+                        data + shared_state.offsets_of_aggregate_states[i]);
+            } catch (...) {
+                for (int j = 0; j < i; ++j) {
+                    shared_state.aggregate_evaluators[j]->destroy(
+                            data + shared_state.offsets_of_aggregate_states[j]);
+                }
+                throw;
+            }
+        }
+        return Status::OK();
+    }
+    void _emplace_into_hash_table(vectorized::AggregateDataPtr* places,
+                                  vectorized::ColumnRawPtrs& key_columns, size_t num_rows) {
+        std::visit(
+                [&](auto&& agg_method) -> void {
+                    SCOPED_TIMER(_hash_table_compute_timer);
+                    using HashMethodType = std::decay_t<decltype(agg_method)>;
+                    using AggState = typename HashMethodType::State;
+                    AggState state(key_columns);
+                    agg_method.init_serialized_keys(key_columns, num_rows);
+
+                    auto creator = [this](const auto& ctor, auto& key, auto& origin) {
+                        HashMethodType::try_presis_key_and_origin(key, origin,
+                                                                  *_shared_state()->agg_arena_pool);
+                        auto mapped =
+                                _shared_state()->aggregate_data_container->append_data(origin);
+                        auto st = _create_agg_status(mapped);
+                        if (!st) {
+                            throw Exception(st.code(), st.to_string());
+                        }
+                        ctor(key, mapped);
+                    };
+
+                    auto creator_for_null_key = [&](auto& mapped) {
+                        mapped = _shared_state()->agg_arena_pool->aligned_alloc(
+                                _operator()._total_size_of_aggregate_states,
+                                _operator()._align_aggregate_states);
+                        auto st = _create_agg_status(mapped);
+                        if (!st) {
+                            throw Exception(st.code(), st.to_string());
+                        }
+                    };
+
+                    SCOPED_TIMER(_hash_table_emplace_timer);
+                    for (size_t i = 0; i < num_rows; ++i) {
+                        places[i] =
+                                agg_method.lazy_emplace(state, i, creator, creator_for_null_key);
+                    }
+
+                    COUNTER_UPDATE(_hash_table_input_counter, num_rows);
+                },
+                _shared_state()->agg_data->method_variant);
+    }
+
+    Status _execute_without_key(vectorized::Block* block) {
+        DCHECK(_shared_state()->agg_data->without_key != nullptr);
+        SCOPED_TIMER(_build_timer);
+        for (int i = 0; i < _shared_state()->aggregate_evaluators.size(); ++i) {
+            RETURN_IF_ERROR(_shared_state()->aggregate_evaluators[i]->execute_single_add(
+                    block,
+                    _shared_state()->agg_data->without_key +
+                            _operator().offsets_of_aggregate_states[i],
+                    _shared_state()->agg_arena_pool.get()));
+        }
+        return Status::OK();
+    }
+    Status _execute_with_serialized_key(vectorized::Block* block) {
+        if (_derived()->_reach_limit) {
+            return _execute_with_serialized_key_helper<true>(block);
+        } else {
+            return _execute_with_serialized_key_helper<false>(block);
+        }
+    }
+    template <bool limit>
+    Status _execute_with_serialized_key_helper(vectorized::Block* block) {
+        SCOPED_TIMER(_build_timer);
+        auto& _probe_expr_ctxs = _shared_state()->probe_expr_ctxs;
+        auto& _places = _derived()->_places;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        auto& agg_arena_pool = _shared_state()->agg_arena_pool;
+        DCHECK(!_probe_expr_ctxs.empty());
+
+        size_t key_size = _probe_expr_ctxs.size();
+        vectorized::ColumnRawPtrs key_columns(key_size);
+        {
+            SCOPED_TIMER(_expr_timer);
+            for (size_t i = 0; i < key_size; ++i) {
+                int result_column_id = -1;
+                RETURN_IF_ERROR(_probe_expr_ctxs[i]->execute(block, &result_column_id));
+                block->get_by_position(result_column_id).column =
+                        block->get_by_position(result_column_id)
+                                .column->convert_to_full_column_if_const();
+                key_columns[i] = block->get_by_position(result_column_id).column.get();
+            }
+        }
+
+        int rows = block->rows();
+        if (_places.size() < rows) {
+            _places.resize(rows);
+        }
+
+        if constexpr (limit) {
+            _find_in_hash_table(_places.data(), key_columns, rows);
+
+            for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
+                        block, _operator().offsets_of_aggregate_states[i], _places.data(),
+                        agg_arena_pool.get()));
+            }
+        } else {
+            _emplace_into_hash_table(_places.data(), key_columns, rows);
+
+            for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
+                        block, _operator().offsets_of_aggregate_states[i], _places.data(),
+                        agg_arena_pool.get()));
+            }
+
+            if (_derived()->_should_limit_output) {
+                _derived()->_reach_limit = _get_hash_table_size() >= _operator()._limit;
+                if (_derived()->_reach_limit && _operator()._can_short_circuit) {
+                    _derived()->_dependency->set_ready_to_read();
+                    return Status::Error<ErrorCode::END_OF_FILE>("");
+                }
+            }
+        }
+
+        return Status::OK();
+    }
+    // We should call this function only in the 1st phase.
+    // 1st phase: is_merge=true, there is only one SlotRef.
+    // 2nd phase: is_merge=false, there may be multiple exprs.
+    int _get_slot_column_id(const vectorized::AggFnEvaluator* evaluator) {
+        auto ctxs = evaluator->input_exprs_ctxs();
+        CHECK(ctxs.size() == 1 && ctxs[0]->root()->is_slot_ref())
+                << "input_exprs_ctxs is invalid, input_exprs_ctx[0]="
+                << ctxs[0]->root()->debug_string();
+        return ((vectorized::VSlotRef*)ctxs[0]->root().get())->column_id();
+    }
+
+    Status _merge_without_key(vectorized::Block* block) {
+        SCOPED_TIMER(_derived()->_merge_timer);
+        auto& agg_data = _shared_state()->agg_data;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        auto& agg_arena_pool = _shared_state()->agg_arena_pool;
+        DCHECK(agg_data->without_key != nullptr);
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            if (aggregate_evaluators[i]->is_merge()) {
+                int col_id = _get_slot_column_id(aggregate_evaluators[i]);
+                auto column = block->get_by_position(col_id).column;
+                if (column->is_nullable()) {
+                    column = ((vectorized::ColumnNullable*)column.get())->get_nested_column_ptr();
+                }
+
+                SCOPED_TIMER(_derived()->_deserialize_data_timer);
+                aggregate_evaluators[i]->function()->deserialize_and_merge_from_column(
+                        agg_data->without_key + _operator().offsets_of_aggregate_states[i], *column,
+                        agg_arena_pool.get());
+            } else {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_single_add(
+                        block, agg_data->without_key + _operator().offsets_of_aggregate_states[i],
+                        agg_arena_pool.get()));
+            }
+        }
+        return Status::OK();
+    }
+
+    Status _merge_with_serialized_key(vectorized::Block* block) {
+        if (_derived()->_reach_limit) {
+            return _merge_with_serialized_key_helper<true, false>(block);
+        } else {
+            return _merge_with_serialized_key_helper<false, false>(block);
+        }
+    }
+
+    template <bool limit, bool for_spill>
+    Status _merge_with_serialized_key_helper(vectorized::Block* block) {

Review Comment:
   warning: function '_merge_with_serialized_key_helper' exceeds recommended size/complexity thresholds [readability-function-size]
   ```cpp
       Status _merge_with_serialized_key_helper(vectorized::Block* block) {
              ^
   ```
   <details>
   <summary>Additional context</summary>
   
   **be/src/pipeline/exec/aggregation_operator_helper.h:241:** 101 lines including whitespace and comments (threshold 80)
   ```cpp
       Status _merge_with_serialized_key_helper(vectorized::Block* block) {
              ^
   ```
   
   </details>
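
Because the helper is instantiated for every `limit` / `for_spill` combination, each instantiation carries the full body. A hedged sketch (all names invented, not Doris code) of keeping the templated entry point thin and delegating the divergent pieces to small `if constexpr` helpers:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: push the limit / for_spill divergence into small
// helpers so each of the four instantiations of the entry point stays short.
struct Block {
    std::vector<int> raw;    // columns already materialized (spill path)
    std::vector<int> exprs;  // expressions still to evaluate (normal path)
};

template <bool for_spill>
std::vector<int> key_columns(const Block& block) {
    if constexpr (for_spill) {
        return block.raw;    // spilled data: keys are already materialized
    } else {
        std::vector<int> out(block.exprs.size());
        // stand-in for "evaluate the probe exprs against the block"
        std::transform(block.exprs.begin(), block.exprs.end(), out.begin(),
                       [](int v) { return v * 2; });
        return out;
    }
}

template <bool limit>
std::size_t rows_to_merge(std::size_t rows, std::size_t hash_table_size, std::size_t cap) {
    if constexpr (limit) {
        // limited path: only merge rows that still fit under the cap
        return hash_table_size >= cap ? std::size_t(0) : std::min(rows, cap - hash_table_size);
    } else {
        return rows;
    }
}

template <bool limit, bool for_spill>
std::size_t merge_with_serialized_key(const Block& block, std::size_t hash_table_size,
                                      std::size_t cap) {
    const auto keys = key_columns<for_spill>(block);
    return rows_to_merge<limit>(keys.size(), hash_table_size, cap);
}
```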
   



##########
be/src/pipeline/exec/aggregation_operator_helper.h:
##########
@@ -0,0 +1,382 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#pragma once
+
+#include <cstdint>
+#include <type_traits>
+
+#include "operator.h"
+#include "pipeline/pipeline_x/dependency.h"
+#include "pipeline/pipeline_x/operator.h"
+#include "runtime/block_spill_manager.h"
+#include "runtime/exec_env.h"
+#include "vec/exec/vaggregation_node.h"
+
+namespace doris {
+class ExecNode;
+
+namespace pipeline {
+template <typename Derived, typename OperatorX>
+class AggSinkLocalStateHelper {
+public:
+    ~AggSinkLocalStateHelper() = default;
+    using LocalState = Derived;
+    void _find_in_hash_table(vectorized::AggregateDataPtr* places,
+                             vectorized::ColumnRawPtrs& key_columns, size_t num_rows) {
+        std::visit(
+                [&](auto&& agg_method) -> void {
+                    using HashMethodType = std::decay_t<decltype(agg_method)>;
+                    using AggState = typename HashMethodType::State;
+                    AggState state(key_columns);
+                    agg_method.init_serialized_keys(key_columns, num_rows);
+
+                    /// For all rows.
+                    for (size_t i = 0; i < num_rows; ++i) {
+                        auto find_result = agg_method.find(state, i);
+
+                        if (find_result.is_found()) {
+                            places[i] = find_result.get_mapped();
+                        } else {
+                            places[i] = nullptr;
+                        }
+                    }
+                },
+                _shared_state()->agg_data->method_variant);
+    }
+    Status _create_agg_status(vectorized::AggregateDataPtr data) {
+        auto& shared_state = *_shared_state();
+        for (int i = 0; i < shared_state.aggregate_evaluators.size(); ++i) {
+            try {
+                shared_state.aggregate_evaluators[i]->create(
+                        data + shared_state.offsets_of_aggregate_states[i]);
+            } catch (...) {
+                for (int j = 0; j < i; ++j) {
+                    shared_state.aggregate_evaluators[j]->destroy(
+                            data + shared_state.offsets_of_aggregate_states[j]);
+                }
+                throw;
+            }
+        }
+        return Status::OK();
+    }
+    void _emplace_into_hash_table(vectorized::AggregateDataPtr* places,
+                                  vectorized::ColumnRawPtrs& key_columns, size_t num_rows) {
+        std::visit(
+                [&](auto&& agg_method) -> void {
+                    SCOPED_TIMER(_hash_table_compute_timer);
+                    using HashMethodType = std::decay_t<decltype(agg_method)>;
+                    using AggState = typename HashMethodType::State;
+                    AggState state(key_columns);
+                    agg_method.init_serialized_keys(key_columns, num_rows);
+
+                    auto creator = [this](const auto& ctor, auto& key, auto& origin) {
+                        HashMethodType::try_presis_key_and_origin(key, origin,
+                                                                  *_shared_state()->agg_arena_pool);
+                        auto mapped =
+                                _shared_state()->aggregate_data_container->append_data(origin);
+                        auto st = _create_agg_status(mapped);
+                        if (!st) {
+                            throw Exception(st.code(), st.to_string());
+                        }
+                        ctor(key, mapped);
+                    };
+
+                    auto creator_for_null_key = [&](auto& mapped) {
+                        mapped = _shared_state()->agg_arena_pool->aligned_alloc(
+                                _operator()._total_size_of_aggregate_states,
+                                _operator()._align_aggregate_states);
+                        auto st = _create_agg_status(mapped);
+                        if (!st) {
+                            throw Exception(st.code(), st.to_string());
+                        }
+                    };
+
+                    SCOPED_TIMER(_hash_table_emplace_timer);
+                    for (size_t i = 0; i < num_rows; ++i) {
+                        places[i] =
+                                agg_method.lazy_emplace(state, i, creator, creator_for_null_key);
+                    }
+
+                    COUNTER_UPDATE(_hash_table_input_counter, num_rows);
+                },
+                _shared_state()->agg_data->method_variant);
+    }
+
+    Status _execute_without_key(vectorized::Block* block) {
+        DCHECK(_shared_state()->agg_data->without_key != nullptr);
+        SCOPED_TIMER(_build_timer);
+        for (int i = 0; i < _shared_state()->aggregate_evaluators.size(); ++i) {
+            RETURN_IF_ERROR(_shared_state()->aggregate_evaluators[i]->execute_single_add(
+                    block,
+                    _shared_state()->agg_data->without_key +
+                            _operator().offsets_of_aggregate_states[i],
+                    _shared_state()->agg_arena_pool.get()));
+        }
+        return Status::OK();
+    }
+    Status _execute_with_serialized_key(vectorized::Block* block) {
+        if (_derived()->_reach_limit) {
+            return _execute_with_serialized_key_helper<true>(block);
+        } else {
+            return _execute_with_serialized_key_helper<false>(block);
+        }
+    }
+    template <bool limit>
+    Status _execute_with_serialized_key_helper(vectorized::Block* block) {
+        SCOPED_TIMER(_build_timer);
+        auto& _probe_expr_ctxs = _shared_state()->probe_expr_ctxs;
+        auto& _places = _derived()->_places;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        auto& agg_arena_pool = _shared_state()->agg_arena_pool;
+        DCHECK(!_probe_expr_ctxs.empty());
+
+        size_t key_size = _probe_expr_ctxs.size();
+        vectorized::ColumnRawPtrs key_columns(key_size);
+        {
+            SCOPED_TIMER(_expr_timer);
+            for (size_t i = 0; i < key_size; ++i) {
+                int result_column_id = -1;
+                RETURN_IF_ERROR(_probe_expr_ctxs[i]->execute(block, &result_column_id));
+                block->get_by_position(result_column_id).column =
+                        block->get_by_position(result_column_id)
+                                .column->convert_to_full_column_if_const();
+                key_columns[i] = block->get_by_position(result_column_id).column.get();
+            }
+        }
+
+        int rows = block->rows();
+        if (_places.size() < rows) {
+            _places.resize(rows);
+        }
+
+        if constexpr (limit) {
+            _find_in_hash_table(_places.data(), key_columns, rows);
+
+            for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
+                        block, _operator().offsets_of_aggregate_states[i], _places.data(),
+                        agg_arena_pool.get()));
+            }
+        } else {
+            _emplace_into_hash_table(_places.data(), key_columns, rows);
+
+            for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
+                        block, _operator().offsets_of_aggregate_states[i], _places.data(),
+                        agg_arena_pool.get()));
+            }
+
+            if (_derived()->_should_limit_output) {
+                _derived()->_reach_limit = _get_hash_table_size() >= _operator()._limit;
+                if (_derived()->_reach_limit && _operator()._can_short_circuit) {
+                    _derived()->_dependency->set_ready_to_read();
+                    return Status::Error<ErrorCode::END_OF_FILE>("");
+                }
+            }
+        }
+
+        return Status::OK();
+    }
+    // We should call this function only in the 1st phase.
+    // 1st phase: is_merge=true, there is only one SlotRef.
+    // 2nd phase: is_merge=false, there may be multiple exprs.
+    int _get_slot_column_id(const vectorized::AggFnEvaluator* evaluator) {
+        auto ctxs = evaluator->input_exprs_ctxs();
+        CHECK(ctxs.size() == 1 && ctxs[0]->root()->is_slot_ref())
+                << "input_exprs_ctxs is invalid, input_exprs_ctx[0]="
+                << ctxs[0]->root()->debug_string();
+        return ((vectorized::VSlotRef*)ctxs[0]->root().get())->column_id();
+    }
+
+    Status _merge_without_key(vectorized::Block* block) {
+        SCOPED_TIMER(_derived()->_merge_timer);
+        auto& agg_data = _shared_state()->agg_data;
+        auto& aggregate_evaluators = _shared_state()->aggregate_evaluators;
+        auto& agg_arena_pool = _shared_state()->agg_arena_pool;
+        DCHECK(agg_data->without_key != nullptr);
+        for (int i = 0; i < aggregate_evaluators.size(); ++i) {
+            if (aggregate_evaluators[i]->is_merge()) {
+                int col_id = _get_slot_column_id(aggregate_evaluators[i]);
+                auto column = block->get_by_position(col_id).column;
+                if (column->is_nullable()) {
+                    column = ((vectorized::ColumnNullable*)column.get())->get_nested_column_ptr();
+                }
+
+                SCOPED_TIMER(_derived()->_deserialize_data_timer);
+                aggregate_evaluators[i]->function()->deserialize_and_merge_from_column(
+                        agg_data->without_key + _operator().offsets_of_aggregate_states[i], *column,
+                        agg_arena_pool.get());
+            } else {
+                RETURN_IF_ERROR(aggregate_evaluators[i]->execute_single_add(
+                        block, agg_data->without_key + _operator().offsets_of_aggregate_states[i],
+                        agg_arena_pool.get()));
+            }
+        }
+        return Status::OK();
+    }
+
+    Status _merge_with_serialized_key(vectorized::Block* block) {
+        if (_derived()->_reach_limit) {
+            return _merge_with_serialized_key_helper<true, false>(block);
+        } else {
+            return _merge_with_serialized_key_helper<false, false>(block);
+        }
+    }
+
+    template <bool limit, bool for_spill>
+    Status _merge_with_serialized_key_helper(vectorized::Block* block) {

Review Comment:
   warning: function '_merge_with_serialized_key_helper' has cognitive complexity of 68 (threshold 50) [readability-function-cognitive-complexity]
   ```cpp
       Status _merge_with_serialized_key_helper(vectorized::Block* block) {
              ^
   ```
   <details>
   <summary>Additional context</summary>
   
   **be/src/pipeline/exec/aggregation_operator_helper.h:252:** +1, including nesting penalty of 0, nesting level increased to 1
   ```cpp
           for (size_t i = 0; i < key_size; ++i) {
           ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:253:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               if constexpr (for_spill) {
               ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:255:** +1, nesting level increased to 2
   ```cpp
               } else {
                 ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:257:** +3, including nesting penalty of 2, nesting level increased to 3
   ```cpp
                   RETURN_IF_ERROR(probe_expr_ctxs[i]->execute(block, &result_column_id));
                   ^
   ```
   **be/src/common/status.h:541:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
       do {                                \
       ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:257:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                   RETURN_IF_ERROR(probe_expr_ctxs[i]->execute(block, &result_column_id));
                   ^
   ```
   **be/src/common/status.h:543:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
           if (UNLIKELY(!_status_.ok())) { \
           ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:264:** +1, including nesting penalty of 0, nesting level increased to 1
   ```cpp
           if (_places.size() < rows) {
           ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:268:** +1, including nesting penalty of 0, nesting level increased to 1
   ```cpp
           if constexpr (limit) {
           ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:271:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               for (int i = 0; i < aggregate_evaluators.size(); ++i) {
               ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:272:** +3, including nesting penalty of 2, nesting level increased to 3
   ```cpp
                   if (aggregate_evaluators[i]->is_merge()) {
                   ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:275:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (column->is_nullable()) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:281:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (_deserialize_buffer.size() < buffer_size) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:293:** +1, nesting level increased to 3
   ```cpp
                   } else {
                     ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:294:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
                       ^
   ```
   **be/src/common/status.h:541:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
       do {                                \
       ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:294:** +5, including nesting penalty of 4, nesting level increased to 5
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add_selected(
                       ^
   ```
   **be/src/common/status.h:543:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
           if (UNLIKELY(!_status_.ok())) { \
           ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:299:** +1, nesting level increased to 1
   ```cpp
           } else {
             ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:302:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               for (int i = 0; i < aggregate_evaluators.size(); ++i) {
               ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:303:** +3, including nesting penalty of 2, nesting level increased to 3
   ```cpp
                   if (aggregate_evaluators[i]->is_merge() || for_spill) {
                   ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:303:** +1
   ```cpp
                   if (aggregate_evaluators[i]->is_merge() || for_spill) {
                                                           ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:305:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if constexpr (for_spill) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:307:** +1, nesting level increased to 4
   ```cpp
                       } else {
                         ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:311:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (column->is_nullable()) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:317:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       if (_deserialize_buffer.size() < buffer_size) {
                       ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:329:** +1, nesting level increased to 3
   ```cpp
                   } else {
                     ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:330:** +4, including nesting penalty of 3, nesting level increased to 4
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
                       ^
   ```
   **be/src/common/status.h:541:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
       do {                                \
       ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:330:** +5, including nesting penalty of 4, nesting level increased to 5
   ```cpp
                       RETURN_IF_ERROR(aggregate_evaluators[i]->execute_batch_add(
                       ^
   ```
   **be/src/common/status.h:543:** expanded from macro 'RETURN_IF_ERROR'
   ```cpp
           if (UNLIKELY(!_status_.ok())) { \
           ^
   ```
   **be/src/pipeline/exec/aggregation_operator_helper.h:336:** +2, including nesting penalty of 1, nesting level increased to 2
   ```cpp
               if (_derived()->_should_limit_output) {
               ^
   ```
   
   </details>
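
Most of the increments listed above come from the if/else ladder nested inside the evaluator loop, where each branch pays for its depth. A toy before/after sketch (not Doris code; the `+N` comments roughly follow the scoring rules shown in the breakdown) of flattening that ladder with early `continue`s:

```cpp
#include <vector>

// Toy illustration only: same behavior, lower cognitive-complexity score.
struct Evaluator {
    bool is_merge = false;
    bool nullable = false;
};

// Before-style shape: every extra `if` sits one level deeper than the last.
int score_heavy(const std::vector<Evaluator>& evals) {
    int merged = 0;
    for (const auto& e : evals) {           // +1
        if (e.is_merge) {                   // +2 (includes nesting penalty of 1)
            if (e.nullable) {               // +3 (includes nesting penalty of 2)
                ++merged;
            } else {                        // +1
                merged += 2;
            }
        } else {                            // +1
            merged += 3;
        }
    }
    return merged;
}

// After-style shape: early continues keep every branch at one level.
int score_light(const std::vector<Evaluator>& evals) {
    int merged = 0;
    for (const auto& e : evals) {           // +1
        if (!e.is_merge) {                  // +2
            merged += 3;
            continue;
        }
        if (!e.nullable) {                  // +2
            merged += 2;
            continue;
        }
        ++merged;
    }
    return merged;
}
```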
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "Mryange (via GitHub)" <gi...@apache.org>.
Mryange commented on PR #33491:
URL: https://github.com/apache/doris/pull/33491#issuecomment-2048925221

   run buildall


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "Mryange (via GitHub)" <gi...@apache.org>.
Mryange commented on PR #33491:
URL: https://github.com/apache/doris/pull/33491#issuecomment-2047780268

   run buildall


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "Mryange (via GitHub)" <gi...@apache.org>.
Mryange commented on PR #33491:
URL: https://github.com/apache/doris/pull/33491#issuecomment-2047127248

   run buildall


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org


Re: [PR] [refine](pipelineX) refine code in agg node [doris]

Posted by "doris-robot (via GitHub)" <gi...@apache.org>.
doris-robot commented on PR #33491:
URL: https://github.com/apache/doris/pull/33491#issuecomment-2047839080

   TeamCity be ut coverage result:
    Function Coverage: 35.64% (8911/25001) 
    Line Coverage: 27.36% (73137/267274)
    Region Coverage: 26.48% (37821/142817)
    Branch Coverage: 23.22% (19271/82982)
    Coverage Report: http://coverage.selectdb-in.cc/coverage/87d7d2ded2f5a02f615276d8558f9a033530fb78_87d7d2ded2f5a02f615276d8558f9a033530fb78/report/index.html


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org