Posted to user@ignite.apache.org by Thomas Kramer <do...@gmx.de> on 2023/03/21 09:00:51 UTC

Large data transfers with Ignite or Kafka?

I'm considering Ignite for an on-demand-scalable, microservice-oriented
architecture. I'd use the in-memory cache for data shared across the
microservices. I might also use Ignite compute for distributed tasks,
though I believe the MOA philosophy would rather recommend REST for this.
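
For the shared-data part, I'm picturing something along these lines (just
a rough sketch with the Ignite thin client; the address, cache name and
keys are placeholders I made up):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class SharedCacheSketch {
    public static void main(String[] args) {
        // Placeholder address of one reachable Ignite server node.
        ClientConfiguration cfg = new ClientConfiguration()
                .setAddresses("127.0.0.1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // Cache shared by all microservices; name and keys are made up.
            ClientCache<String, String> shared =
                    client.getOrCreateCache("sharedConfig");
            shared.put("feature.flags", "{\"newCheckout\": true}");
            System.out.println(shared.get("feature.flags"));
        }
    }
}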

My question, though, is about large data transfers between the
microservices. In addition to the smaller amounts of data shared in the
caches across all microservices, I need to constantly send large data
blocks (50 MB to 500 MB) between the microservices, typically from one
sender to one receiver. There is no need to persist these on disk.

Would Ignite be fast and efficient for this? Into what chunk size should
the data be split? Or would I be better off using Kafka alongside Ignite
to transfer the large data blocks? Or should I go even more low-level,
with something like ZeroMQ?
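
To make the chunking question concrete, this is roughly what I imagine on
the sender side (purely illustrative; the cache, the key scheme and the
1 MiB chunk size are placeholder choices, and the chunk size is exactly
what I'm unsure about):

import java.util.Arrays;
import org.apache.ignite.client.ClientCache;

public class ChunkedSendSketch {
    // Arbitrary guess; finding the right value is part of my question.
    static final int CHUNK_SIZE = 1024 * 1024;

    static void sendBlock(ClientCache<String, byte[]> cache,
                          String transferId, byte[] block) {
        int chunkCount = (block.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        for (int i = 0; i < chunkCount; i++) {
            int from = i * CHUNK_SIZE;
            int to = Math.min(from + CHUNK_SIZE, block.length);
            // Keys like "transfer-42/0", "transfer-42/1", ...
            cache.put(transferId + "/" + i,
                    Arrays.copyOfRange(block, from, to));
        }
        // The receiver would read the chunks back in order, reassemble
        // the block, and then both sides would remove the entries again.
    }
}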

Thanks for comments and suggestions.



Re: Large data transfers with Ignite or Kafka?

Posted by Thomas Kramer <do...@gmx.de>.
I'd be happy to provide more information as requested; I just wasn't
aware of which metrics would be the important factors. Anyway, it sounds
like you feel this goes beyond what people can suggest from their own
experience or related ideas.

Of course I have already experimented with the tools I mentioned in my
original post, and with others as well. From what I've read, Kafka is not
designed for data that large. In various discussions it was suggested to
put the large data into an object store and use Kafka only for messages
that reference the object keys. This is the solution I'm currently
testing, and it feels like the most reliable one.
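
Concretely, the variant I'm testing looks roughly like this (a sketch
only; ObjectStore is a hypothetical stand-in for S3/MinIO or similar, and
the topic and broker names are placeholders):

import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ReferencePassingSketch {
    // Hypothetical minimal interface for whatever object store is used.
    interface ObjectStore {
        void put(String key, byte[] data);
    }

    static void publishBlock(ObjectStore store, byte[] block) {
        // The large payload goes into the object store...
        String key = "blocks/" + UUID.randomUUID();
        store.put(key, block);

        // ...and only the small key reference travels through Kafka.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer =
                     new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("block-refs", key));
        }
    }
}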

I'll do further tests. Thanks.



Re: Large data transfers with Ignite or Kafka?

Posted by Jeremy McMillan <je...@gridgain.com>.
That's a big question, and it isn't clear whether there's a large or small
ratio of reads to writes between these microservices, for example. It isn't
clear what your latency tolerance is for these large transfers either.

This sounds like a big endeavor, and if there's money to be made, your best
bet is to get architecture advice under an NDA so that the architecture can
take all of the cost/risk/benefit factors into consideration. Asking a free
software community for blind architecture advice will not get you much
closer to a decision than you already are.

If it's not constrained by deadlines and r&d budget, maybe your best bet is
to try a couple of things out and compare what you can squeeze out of each?
Maybe you want to experiment before you choose an architectural design? We
would still need to understand more of your performance goals to help
design an experiment.
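
For example, even a crude harness along these lines would let you compare
candidates on raw throughput (illustrative only; sendBlock() is a
stand-in for whichever transport you wire in, be it Ignite cache puts,
Kafka, or ZeroMQ):

import java.util.Random;
import java.util.function.Consumer;

public class TransferBenchmarkSketch {
    public static void main(String[] args) {
        byte[] block = new byte[100 * 1024 * 1024]; // 100 MB test payload
        new Random(42).nextBytes(block);
        measure("candidate A", block, TransferBenchmarkSketch::sendBlock);
    }

    static void measure(String name, byte[] block, Consumer<byte[]> send) {
        int runs = 10;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            send.accept(block);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%s: %.1f MB/s%n", name,
                runs * block.length / 1e6 / seconds);
    }

    static void sendBlock(byte[] block) {
        // Wire this up to the transport under test and block until the
        // receiver has the full payload.
    }
}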

It's hard to be the first or only person with a good idea. The difference
between what you're imagining and what already exists in the world may come
down to execution rather than imagination. Please consider sharing more
info. FWIW, that's the other side of the old "bike shed" parable if you're
seeking input from others.



Re: Large data transfers with Ignite or Kafka?

Posted by Thomas Kramer <do...@gmx.de>.
Hi all,

do you have any feedback on this? Or is this rather a question for
StackOverflow?

Thanks.

