Posted to issues@camel.apache.org by "Samuel Padou (Jira)" <ji...@apache.org> on 2021/11/24 13:44:00 UTC

[jira] [Commented] (CAMEL-17038) camel-core - EIPs with thread pools vs reactive engine

    [ https://issues.apache.org/jira/browse/CAMEL-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17448613#comment-17448613 ] 

Samuel Padou commented on CAMEL-17038:
--------------------------------------

Regarding the proposed solution, I'm not sure that dropping the caller-runs capability for EIP thread pools is the best approach. In the context of CAMEL-16829 I specifically rely on the caller-runs policy so that I can use a small thread pool without a queue, as a performance optimisation of the splitter processing, without completely blocking the incoming route when the pool is full.
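
For illustration, a minimal sketch of that kind of setup, assuming a plain Java DSL route (the route, endpoints and pool sizes here are made up for the example; only the splitter DSL and the JDK ThreadPoolExecutor API are real):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.apache.camel.builder.RouteBuilder;

public class SplitterRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Small pool with no queue; when it is saturated, the default
        // CallerRunsPolicy makes the calling (route) thread run the task
        ExecutorService splitterPool = new ThreadPoolExecutor(
                2, 2, 60, TimeUnit.SECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        from("direct:input")
            .split(body()).parallelProcessing().executorService(splitterPool)
                .to("log:line")
            .end();
    }
}
{code}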

Not sure if it helps to fix the issue globally, but for my use case I have a workaround that allows me to use the caller-runs policy without blocking the route. I've replaced the default caller-runs implementation with one that schedules the task synchronously on the reactive executor instead of running it directly. This way the task still runs synchronously in the current thread, but it integrates with the reactive context and avoids blocking it. It works well in my case, but I'm not sure of all the implications of doing this in other contexts; in particular, the synchronous schedule will also run everything already enqueued in the reactive context, which may cause unexpected side effects in some cases.

Here is a snippet of the rejected-execution implementation that I use instead of the default caller-runs one:
{code:java}
@Override
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
  if (!executor.isShutdown()) {
    // Instead of running the task directly (default CallerRunsPolicy behaviour),
    // hand it to the reactive executor so it still runs synchronously on the
    // current thread, but within the reactive engine's context.
    camelContext.adapt(ExtendedCamelContext.class).getReactiveExecutor().scheduleSync(r);
  }
}
{code}
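
For completeness, a sketch of how that snippet could be wrapped into a full RejectedExecutionHandler (the class name ReactiveCallerRunsPolicy is made up for the example):
{code:java}
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import org.apache.camel.CamelContext;
import org.apache.camel.ExtendedCamelContext;

// Hypothetical wrapper around the snippet above
public class ReactiveCallerRunsPolicy implements RejectedExecutionHandler {
    private final CamelContext camelContext;

    public ReactiveCallerRunsPolicy(CamelContext camelContext) {
        this.camelContext = camelContext;
    }

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (!executor.isShutdown()) {
            camelContext.adapt(ExtendedCamelContext.class).getReactiveExecutor().scheduleSync(r);
        }
    }
}
{code}
An instance of that handler can then be passed to the ThreadPoolExecutor used by the splitter in place of ThreadPoolExecutor.CallerRunsPolicy.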

> camel-core - EIPs with thread pools vs reactive engine
> ------------------------------------------------------
>
>                 Key: CAMEL-17038
>                 URL: https://issues.apache.org/jira/browse/CAMEL-17038
>             Project: Camel
>          Issue Type: Improvement
>          Components: camel-core
>            Reporter: Claus Ibsen
>            Assignee: Claus Ibsen
>            Priority: Major
>             Fix For: 3.15.0
>
>
> EIPs that support thread pools for parallel processing, such as splitter, wire-tap etc., use a JDK thread pool. The default-sized thread pool in Camel has a backlog of 1000 slots, so the pool has capacity to process tasks as they come.
> And in case a pool is full, then by default it is allowed to steal the caller thread to run the task (caller runs).
> However this model now has some flaws:
> a) The EIPs are meant to go parallel, yet the task is executed on the current thread via caller runs (blocking)
> b) The other rejection policies (discard, discard oldest, abort) will cause problems, as the task has an exchange callback that should be called to continue that in-flight exchange
> For (a) this adds complexity and we have bugs such as CAMEL-16829.
> For EIPs we should then consider not allowing custom rejection policies, and only have a default behaviour: if a task is rejected, the exchange fails. Or we can add a strategy that will block (with timeout) until a slot is free.
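
For illustration, one way the blocking-with-timeout strategy mentioned above could look is a rejection handler that re-offers the task to the pool's queue with a timeout and fails otherwise. This is a rough sketch only, not an actual Camel implementation; the class name and timeout are made up:
{code:java}
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a "block with timeout until a slot is free" rejection policy
public class BlockWithTimeoutPolicy implements RejectedExecutionHandler {
    private final long timeoutMillis;

    public BlockWithTimeoutPolicy(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            // Wait up to the timeout for a free slot in the pool's queue
            if (!executor.getQueue().offer(r, timeoutMillis, TimeUnit.MILLISECONDS)) {
                throw new RejectedExecutionException("No free slot after " + timeoutMillis + " ms");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("Interrupted while waiting for a free slot", e);
        }
    }
}
{code}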



--
This message was sent by Atlassian Jira
(v8.20.1#820001)