Posted to user@predictionio.apache.org by Chris Wewerka <ch...@gmail.com> on 2018/10/11 11:40:14 UTC

Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Hi,

we are currently using Prediction IO and the Universal Recommender in a
small production environment to test them out. The system is running well
with our currently low request volume, but we are experiencing occasional
latency spikes. Our biggest to-do is to split HBase and ES out to separate
machines (or to switch to ES for event storage as well and move ES out to
separate machines). So I think that running everything on one machine is
probably the cause of these spikes, although we don't see much CPU usage
or IO blocking on this machine.

That said, I had a look at the source code of Prediction IO and UR and
found something I wanted to ask about here:

The LEvents trait allows async calls, e.g. futureFind returns a Scala
Future.

But looking at concrete implementations like HBLEvents and ESLEvents, I
saw that blocking calls/drivers are used even where async variants are
available (for ES 5+, for example, performRequestAsync could be used
instead of performRequest).

These blocking calls are then "futurized" by using the standard Scala
ExecutionContext. I will come back to this later.
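
To illustrate what I mean: the blocking call could be bridged to a Scala
Future via the async variant. A minimal sketch against the ES 5 low-level
REST client (searchAsync and the endpoint string are made up for the
example; this is not the actual ESLEvents code):

  import org.apache.http.entity.ContentType
  import org.apache.http.nio.entity.NStringEntity
  import org.elasticsearch.client.{Response, ResponseListener, RestClient}
  import scala.concurrent.{Future, Promise}

  def searchAsync(client: RestClient, index: String, body: String): Future[Response] = {
    val promise = Promise[Response]()
    val entity = new NStringEntity(body, ContentType.APPLICATION_JSON)
    client.performRequestAsync(
      "POST",
      s"/$index/_search",
      java.util.Collections.emptyMap[String, String](),
      entity,
      new ResponseListener {
        // called on the client's IO threads; no thread parks in Await
        override def onSuccess(response: Response): Unit = promise.success(response)
        override def onFailure(exception: Exception): Unit = promise.failure(exception)
      })
    promise.future
  }

The Future completes when the listener fires, so no worker thread has to
sit idle while the request is in flight.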


Also taking a look at the interface for predict algorithms:

def predictBase(bm: Any, q: Q): P
def predict(model: NullModel, query: Query): PredictedResult

I am wondering why it is not like

def predictBase(bm: Any, q: Q): Future[P]
def predict(model: NullModel, query: Query): Future[PredictedResult]

to allow for async, non-blocking algorithm implementations.
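
An algorithm that fetches its data via futureFind could then stay
asynchronous end to end. A hypothetical sketch (eventsDb, appId,
query.user and scoreItems are invented for the example, not actual
URAlgorithm code):

  import scala.concurrent.{ExecutionContext, Future}

  def predict(model: NullModel, query: Query)
             (implicit ec: ExecutionContext): Future[PredictedResult] =
    eventsDb.futureFind(appId = appId, entityType = Some("user"), entityId = Some(query.user))
      .map { events =>
        // pure CPU work; no thread is parked while the storage call runs
        scoreItems(events, query)
      }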

In LEventStore, for example, the above leads to

Await.result(eventsDb.futureFind(...), ...)

again with a standard import of
scala.concurrent.ExecutionContext.Implicits.global, so that algorithms
like URAlgorithm that cannot deal with async/Futures in their methods
can simply call the synchronous code.
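
Schematically, the bridge looks like this (my paraphrase, not the exact
LEventStore code; eventsDb, appId, userId and the timeout are
placeholders):

  import scala.concurrent.Await
  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.concurrent.duration._

  // the calling thread parks here for the whole IO round trip,
  // occupying one global-EC worker per in-flight request
  val events = Await.result(
    eventsDb.futureFind(appId = appId, entityType = Some("user"), entityId = Some(userId)),
    10.seconds)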

Having a look at the ServerActor for the QueryServer, I see that it is
implemented using spray, and that in the queries.json route detach() is
used to "futurize" synchronous calls inside the route, such as the
synchronous algo.predictBase() call.

Here again the standard
scala.concurrent.ExecutionContext.Implicits.global is used to make it
async via its internal thread pool.

Looking at the doc of scala.concurrent.ExecutionContext.Implicits.global:

"The default ExecutionContext implementation is backed by a work-stealing
thread pool. By default, the thread pool uses a target number of worker
threads equal to the number of available processors (see
https://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#availableProcessors--)."

Note especially the part "...target number of worker threads equal to the
number of available processors".

I wonder whether that may be a problem: our machine has 8 processors, so
only 8 threads are available to do all the work described above, and
these few threads may be blocked by IO/network calls.
What do you think? Did I make a mistake somewhere or misunderstand
something?
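
To make the concern concrete, a toy example (the sleep stands in for a
blocking storage/network call):

  import scala.concurrent.Future
  import scala.concurrent.ExecutionContext.Implicits.global

  val cores = Runtime.getRuntime.availableProcessors() // 8 on our machine
  (1 to cores).foreach { _ =>
    Future { Thread.sleep(5000) } // blocks one global-EC worker for 5s
  }
  // anything submitted to the global EC now waits until a worker frees up,
  // because the pool does not grow unless the blocking section is wrapped
  // in scala.concurrent.blocking { ... }
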
I have thought about forking and trying to support full async, at least
for ES, and would contribute that as a PR. What do you think?

Cheers
Chris

Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Naoki Takezoe <ta...@gmail.com>.
I asked other committers to review my pull request. Please wait a moment
until it is merged; I don't expect it to take long.


Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Chris Wewerka <ch...@gmail.com>.
Hi Naoki,

nice, OK, I will base my work on your PR as well, so we don't need to do
the same work twice. Will your PR get merged into develop soon?

Cheers
Chris


Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Naoki Takezoe <ta...@gmail.com>.
Hi Chris,

Oh, great. My plan was only to add async methods to LEventStore. If
you can work on the other parts, I will update my pull request to just
describe the default global ExecutionContext, and wait for your
work.

-- 
Naoki Takezoe

Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Chris Wewerka <ch...@gmail.com>.
Hi Naoki,

thanks, that looks good. Will you continue with the other stores /
storage types, and also introduce async methods to
Algo.predict/predictBase and then the QueryServer? Just asking because I
started looking around the QueryServer/algorithm area yesterday.

Cheers
Chris


Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Naoki Takezoe <ta...@gmail.com>.
Hi Chris,

Does this pull request work for you?
https://github.com/apache/predictionio/pull/482

-- 
Naoki Takezoe

Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Naoki Takezoe <ta...@gmail.com>.
I think the point is that LEventStore doesn't have asynchronous
methods. We should add methods that return a Future to LEventStore and
modify the current blocking methods to take an ExecutionContext. I have
created a JIRA ticket for that:
https://jira.apache.org/jira/browse/PIO-182
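
A sketch of the shape I have in mind (the names, parameters and default
timeout are illustrative, not a final API):

  import org.apache.predictionio.data.storage.Event
  import scala.concurrent.{Await, ExecutionContext, Future}
  import scala.concurrent.duration._

  trait AsyncLEventStoreSketch {
    // new: non-blocking variant returning a Future
    def futureFindByEntity(appName: String, entityType: String, entityId: String)
                          (implicit ec: ExecutionContext): Future[Iterator[Event]]

    // existing blocking variant, now delegating and taking the EC explicitly
    def findByEntity(appName: String, entityType: String, entityId: String,
                     timeout: Duration = 10.seconds)
                    (implicit ec: ExecutionContext): Iterator[Event] =
      Await.result(futureFindByEntity(appName, entityType, entityId), timeout)
  }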

It also makes sense to cover this in the documentation. At the very
least, if we keep the existing blocking methods, we should document that
LEventStore uses the default global ExecutionContext, and how to
configure it.


--
Naoki Takezoe

Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Chris Wewerka <ch...@gmail.com>.
Hi,

here are the PRs for Prediction IO and UR.
https://github.com/apache/predictionio/pull/495
https://github.com/actionml/universal-recommender/pull/62

I tried to leverage the already-present async interface of Elasticsearch,
and to wrap the HBase and JDBC calls in blocking constructs, which tell
the standard Scala ExecutionContext to use a separate thread for them.
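
The pattern for the blocking stores is essentially this (a sketch using
the standard HBase client API; scanAsync is made up for illustration):

  import org.apache.hadoop.hbase.client.{ResultScanner, Scan, Table}
  import scala.concurrent.{blocking, ExecutionContext, Future}

  def scanAsync(table: Table, scan: Scan)
               (implicit ec: ExecutionContext): Future[ResultScanner] =
    Future {
      blocking {
        // blocking { } tells the fork-join-based global EC that this
        // thread is about to block, so it can spawn a compensating worker
        table.getScanner(scan)
      }
    }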

Let me know what you think,
Chris


Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Chris Wewerka <ch...@gmail.com>.
Hi Donald,

thanks for your answer and the hint to base my work on Naoki's Akka HTTP
branch. I saw the PR and already had the same idea, as it does not make
sense to build on the old spray code. I worked with spray a couple of
years ago, and even back then it had full support for Scala Futures and
fully async programming. If I find the time, I will start with a fork
based on Naoki's Akka HTTP branch.

Please have a look at my second mail as well, as the use of the bounded
"standard" Scala ExecutionContext has a dramatic impact on how the
machine's resources are leveraged. On our small all-in-one machine we
didn't see much CPU load until yesterday, when I set the mentioned
parameters to allow much higher thread counts in the standard Scala
ExecutionContext. We have verified this in our small production
environment and it has a huge impact. In effect, the Query Server acted
like a dam, not letting enough requests into the system to use all of its
resources. You might consider adding this to the documentation until I
hopefully come up with a PR for a fully async engine.

Cheers
Chris


Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Donald Szeto <do...@apache.org>.
Hi Chris,

It is indeed a good idea to create asynchronous versions of the engine
server! Naoki has recently completed the migration from spray to Akka
HTTP, so you may want to base your work off that instead. Let us know if
we can help in any way.

I do not recall the exact reason anymore, but the engine server was
created almost 5 years ago, and I don't remember whether spray could take
futures natively as responses the way Akka HTTP can now. Nowadays there
shouldn't be any reason not to provide asynchronous flavors of these APIs.

Regards,
Donald


Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Naoki Takezoe <ta...@gmail.com>.
Hi Chris,

I think LEventStore's current blocking methods should take an
ExecutionContext as an implicit parameter, and Future versions of the
methods should be provided. I don't know why they aren't. Does anyone
know the reason for the current LEventStore API?

For the moment, you can consider using LEvents directly to access the
Future versions of the methods as a workaround.
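
For example, given an LEvents handle (a sketch; the filter values are
made up, and how you obtain the handle depends on your setup):

  import org.apache.predictionio.data.storage.{Event, LEvents}
  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.concurrent.Future

  def recentUserEvents(eventsDb: LEvents, appId: Int, userId: String): Future[Iterator[Event]] =
    eventsDb.futureFind(
      appId = appId,
      entityType = Some("user"),
      entityId = Some(userId))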


--
Naoki Takezoe

Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by Chris Wewerka <ch...@gmail.com>.
Thanks George, good to hear that!

Today I ran a test, raising the limit on the maximum allowed threads in
the "standard"

scala.concurrent.ExecutionContext.Implicits.global

I did this before calling "pio deploy" by adding

export JAVA_OPTS="$JAVA_OPTS
-Dscala.concurrent.context.numThreads=1000
-Dscala.concurrent.context.maxThreads=1000"

Now we see much more CPU usage by Elasticsearch. So it seems that the
QueryServer, by using the standard thread pool bounded to the number of
available processors, acted like a dam.

By setting the above values we now have something like a traditional Java
EE or Spring application, which blocks threads on synchronous calls and
creates new threads when there is demand (requests) for them.

So this is far from being a good solution. Going full async/reactive is
still the way to go in my opinion.
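
Until then, a somewhat safer intermediate step than raising the global
limits would be a dedicated, explicitly sized pool just for the blocking
calls (a sketch; the pool size of 200 is arbitrary):

  import java.util.concurrent.Executors
  import scala.concurrent.{ExecutionContext, Future}

  // isolates blocking IO so it cannot starve the CPU-bound global pool
  val blockingIoEc: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(200))

  val result = Future {
    // a blocking storage call would go here (placeholder)
    Thread.sleep(100)
  }(blockingIoEc)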

Cheers
Chris


Re: Async (Non Blocking) Paradigm in Prediction IO and Universal Recommender

Posted by George Yarish <gy...@griddynamics.com>.
Hi Chris,

I'm not a contributor to PredictionIO, but I want to mention that we are
also quite interested in these changes at my company.
We often develop custom PIO engines, and it doesn't look right to me to
use Await.result with a non-blocking API.
I totally agree with your point.
Thanks for the question!

George