Posted to user@ignite.apache.org by rajivgandhi <ra...@gmail.com> on 2018/01/20 20:25:40 UTC

getAll Latency

Dear Ignite Community,
We have been using Ignite for close to a year now, and in production for a
month. We use Ignite as a caching layer between the application (hosted in
AWS) and DynamoDB.

Below are the latency comparisons with DynamoDB:
Ignite:
get: 800 microseconds
getAll (10 items): 8 milliseconds

DynamoDb:
get: 4 milliseconds
getAll: 8 milliseconds

As you can see, there is a significant improvement for get operations;
however, the gain for getAll is not much.

Looking at Dynatrace, it seems Ignite's getAll makes several calls to the
GatewayProtectedCacheProxy.get API. I am not familiar with the Ignite source
code, but it looks as if getAll is just a wrapper around sequential get calls,
so the performance is obviously linear in n, the number of items requested.

Can this not be optimized with parallel calls to the relevant nodes?
You could use non-blocking IO in combination with lightweight threads
(fibers, e.g. the Quasar library).
That way the performance would not be linear in n.
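
To illustrate the idea, here is a minimal sketch using Ignite's public async
API instead of fibers (assuming Ignite 2.x, where IgniteCache.getAsync returns
an IgniteFuture; the key/value types and method names here are placeholders):

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.lang.IgniteFuture;

public class ParallelGetSketch {
    // Fan out one async get per key, then collect. Total wait is bounded by
    // the slowest node rather than the sum of all round trips.
    static Map<String, String> parallelGet(IgniteCache<String, String> cache,
                                           Collection<String> keys) {
        Map<String, IgniteFuture<String>> futs = new HashMap<>();

        // Start all lookups without blocking.
        for (String key : keys)
            futs.put(key, cache.getAsync(key));

        // Collect the results.
        Map<String, String> res = new HashMap<>();
        for (Map.Entry<String, IgniteFuture<String>> e : futs.entrySet())
            res.put(e.getKey(), e.getValue().get());

        return res;
    }
}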

thanks,
Rajeev

RE: getAll Latency

Posted by rajivgandhi <ra...@gmail.com>.
Hi Stan,
Please find the information below.
Num Nodes in Production: 7-9
Cache Mode: Partitioned, off heap
Atomicity Mode: Transactional
Concurrency Mode: Optimistic
Isolation Mode: Serializable
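
In code form, the setup is roughly the following (a sketch; the cache name and
key/value types are placeholders, and note that in Ignite the concurrency and
isolation modes are passed per transaction rather than set on the cache):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class CacheSetupSketch {
    static IgniteCache<String, byte[]> createCache(Ignite ignite) {
        // Partitioned, transactional cache; in Ignite 2.x entries are kept
        // off-heap in page memory by default.
        CacheConfiguration<String, byte[]> ccfg = new CacheConfiguration<>("appCache");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        return ignite.getOrCreateCache(ccfg);
    }

    static void runInTx(Ignite ignite, Runnable cacheOps) {
        // Optimistic concurrency with serializable isolation is chosen
        // when the transaction is started.
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.OPTIMISTIC,
                TransactionIsolation.SERIALIZABLE)) {
            cacheOps.run();
            tx.commit();
        }
    }
}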

Please find the stack trace attached (this is from the staging environment).
Unfortunately, it is in Dynatrace XML format. I have also attached a
picture.

Code:
We have wrapped the Ignite framework with the DynamoDBMapper API. From
Ignite's point of view, these are basically calls to the getAll API in
IgniteCache.
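
For illustration, the wrapper boils down to something like this (a
hypothetical sketch; the class and method names are made up, and only
IgniteCache.getAll is the real API):

import java.util.Map;
import java.util.Set;
import org.apache.ignite.IgniteCache;

// Hypothetical shape of our wrapper: batch loads issued through the
// DynamoDBMapper-style API end up as a single IgniteCache.getAll call.
public class CacheLayer {
    private final IgniteCache<String, byte[]> cache;

    public CacheLayer(IgniteCache<String, byte[]> cache) {
        this.cache = cache;
    }

    public Map<String, byte[]> batchLoad(Set<String> keys) {
        return cache.getAll(keys);
    }
}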

thanks!

stack.txt
<http://apache-ignite-users.70518.x6.nabble.com/file/t1265/stack.txt>  
stack_-_relevant_nodes.txt
<http://apache-ignite-users.70518.x6.nabble.com/file/t1265/stack_-_relevant_nodes.txt>  
Capture.JPG
<http://apache-ignite-users.70518.x6.nabble.com/file/t1265/Capture.JPG>  





RE: getAll Latency

Posted by rajivgandhi <ra...@gmail.com>.
Sorry, please delete this thread. I just reviewed our code and the bug is in
our code.

thank you.




RE: getAll Latency

Posted by Stanislav Lukyanov <st...@gmail.com>.
Hi Rajeev,

Generally, `getAll` should behave in a more optimized manner, not simply delegate to `get`.
If you have a stack trace, could you please share it? It would help to pinpoint the code path
that needs to be looked at.
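
For reference, the difference between the two code paths looks roughly like
this (a sketch, not verified against your setup; the keys and types are
placeholders, and the per-node batching describes getAll's intended behavior):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.ignite.IgniteCache;

public class GetAllVsGet {
    static void compare(IgniteCache<String, String> cache) {
        Set<String> keys = new HashSet<>(Arrays.asList("k1", "k2", "k3"));

        // Expected path: getAll maps keys to their primary nodes and sends
        // one batched request per involved node.
        Map<String, String> batched = cache.getAll(keys);

        // Path the trace seems to show: n sequential round trips, one per key.
        for (String key : keys)
            cache.get(key);
    }
}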
Also, can you show some code and tell us a bit about your configuration? First of all, how many nodes do you use?

Thanks,
Stan



Re: getAll Latency

Posted by rajivgandhi <ra...@gmail.com>.
Bump.


