Posted to dev@directmemory.apache.org by Roman Levenstein <ro...@gmail.com> on 2012/09/29 22:24:20 UTC

FYI: BigMemory Go announcement from Terracotta

Hi,

Terracotta announced that they offer a feature-limited version of
their BigMemory product for free under the name "BigMemory Go".
The main limitation is that you can use at most 32GB of off-heap
memory per JVM instance, and there is no support for replication and
clustering of caches across JVMs/nodes.
http://terracotta.org/products/bigmemorygo
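
For those less familiar with what "off-heap" means on the JVM: data is kept
outside the garbage-collected heap, typically in direct ByteBuffers, whose
total size is capped per JVM by -XX:MaxDirectMemorySize. A trivial, generic
illustration (plain JDK mechanics, not BigMemory internals) looks like this:

    import java.nio.ByteBuffer;

    public class OffHeapHello {
        public static void main(String[] args) {
            // Allocated outside the Java heap; not moved or scanned by the GC.
            // The JVM-wide cap is set with -XX:MaxDirectMemorySize=<size>.
            ByteBuffer offHeap = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB

            offHeap.putLong(0, 42L);                  // write at absolute offset 0
            System.out.println(offHeap.getLong(0));   // read it back -> 42
        }
    }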

It looks like off-heap caching solutions are gaining more attention
these days. One of the reasons for Terracotta's move could be a desire
to counter offerings from competitors and open-source alternatives.

Since it is available now, it would be very interesting to see some
benchmarks comparing the off-heap memory solutions from DirectMemory and
Terracotta. Is there anyone willing to give it a try? :-)
From their documentation it sounds like they use standard Java
serialization, which means that DirectMemory could be even faster than
BigMemory, because it uses more efficient serializers.
Their implementation of querying/indexing also does not sound very
optimized, so that may be another area where DirectMemory could do
better. If DirectMemory showed comparable or better performance
overall, it would add a lot of credibility, IMHO.
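
To make the comparison concrete, here is the kind of tiny serializer
micro-benchmark I have in mind. It is a minimal sketch using only the JDK;
the Codec interface and the Sample class are made up for illustration, and a
kryo- or protostuff-backed Codec could be plugged in next to the
Java-serialization baseline:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public class SerializerBench {

        // Minimal serializer abstraction so different libraries can be plugged in.
        interface Codec {
            byte[] encode(Object o) throws Exception;
            Object decode(byte[] bytes) throws Exception;
        }

        // Baseline: standard Java serialization (what BigMemory reportedly uses).
        static final Codec JAVA_SERIALIZATION = new Codec() {
            public byte[] encode(Object o) throws Exception {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(o);
                oos.close();
                return bos.toByteArray();
            }
            public Object decode(byte[] bytes) throws Exception {
                return new ObjectInputStream(new ByteArrayInputStream(bytes)).readObject();
            }
        };

        // Hypothetical sample payload; any Serializable value object would do.
        static class Sample implements Serializable {
            private static final long serialVersionUID = 1L;
            long id = 42L;
            String name = "some payload";
            double amount = 99.9;
        }

        static void run(String name, Codec codec, int iterations) throws Exception {
            Sample payload = new Sample();
            int size = codec.encode(payload).length;      // serialized size per object
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                codec.decode(codec.encode(payload));      // full round trip
            }
            long ms = (System.nanoTime() - start) / 1000000L;
            System.out.println(name + ": " + size + " bytes/object, "
                    + iterations + " round trips in " + ms + " ms");
        }

        public static void main(String[] args) throws Exception {
            run("java-serialization", JAVA_SERIALIZATION, 100000);
            // A kryo- or protostuff-backed Codec could be added and run the same way.
        }
    }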

Another thing that could be interesting is to look at their APIs and
see whether something could or should be modeled after them, or made
similar to them, so that users can easily switch from the
closed-source solutions to an open-source one.

The third thing: it could be interesting and useful to join forces
among the open-source projects that share the goal of providing
efficient serialization and off-heap memory.
Right now we have something like this:
- (distributed) off-heap caches: DirectMemory, BigCache
(http://bigcache.org/site/) and Hazelcast Enterprise (off-heap support
is closed-source at the moment)
- serialization: kryo, protostuff, lightning, protocol buffers and
many, many more implementations, which all look very similar.
IMHO, too much effort is wasted by every project re-implementing the
same functionality. If those projects (especially the off-heap caching
implementations) worked together, it would lead to much better results
much faster. What do you think? Any plans to co-operate with any of
those?

Regards,
  Roman

Re: FYI: BigMemory Go announcement from Terracotta

Posted by "Raffaele P. Guidi" <ra...@gmail.com>.
+1 for a benchmarking suite (interested in taking over?)

Regarding memcached - it powered Membase, an interesting NoSQL solution
that eventually merged with CouchDB into Couchbase. I wouldn't classify it
as a web-only solution.

Ciao,
    R
On 30 Sep 2012 at 13:09, "Roman Levenstein" <ro...@gmail.com> wrote:

> Hi Rafaelle,
>
> On Sat, Sep 29, 2012 at 11:43 PM, Raffaele P. Guidi
> <ra...@gmail.com> wrote:
> > I knew about bigmemory go (I even retweeted the announcement with the
> > DirectMemory account) and, yes, I think it could be a sign that open source
> > efforts (ours but also the one of the bigcache guys) are, if not scary for
> > those big guys, at least an option that cannot be overseen. Another way to
> > read it is that maybe the BigMemory market share is not _that_ profitable
> > (don't know anything about Terracotta BigMemory and Hazelcast Elastic
> > Memory sale numbers, though) and prices are getting down.
>
> And there is yet another possible explanation:
> The market for in-memory solutions (DBs, data-grids, etc) is starting
> to take-off.
> Therefore it is important to get a lot of customers/developers on hook
> right from the beginning, so that you can later own a big market
> share.
> And giving the lite versions of products for free is a well-known
> practice in such cases.
>
> > We also knew of the BigCache effort- someone of the team tried to contact
> > them with a "join efforts" proposal but - with no luck - which is a pity
> -
> > wanna try pushing at them? ;)
>
> Hmm. I tried a few times to contact them on a different occasion, but
> never got any response.
>
> > Your idea about learning from their API and adopting it is quite good -
> > actually so good that we already did it :P and in our offering we have a
> > compatibility module that allows to use DirectMemory with ehcache replacing
> > BigMemory. T
>
> This is really cool!
>
> > his last little known but relevant fact could ease things a lot
> > comparing performance of the two but:
> >
> >    1. DirectMemory is a young product not backed by any corporate effort -
> >    BigMemory has been probably quite better tested on the field and it could
> >    possibily perform better even though it could borrow some ideas from the
> >    little open source guys :)
> >    2. BigMemory (and also elastic memory) is a commercial product with a
> >    closed license - keep in mind that while every one could come up with a
> >    benchmark comparing the two (the three?) - the license could (as it usually
> >    does) _prohibit_ publishing comparative benchmarks results and this should
> >    be thoroughly checked before attempting to do that. It could possibly apply
> >    to the free version as well
>
> I agree with your point that it is eventually forbidden to publish results.
> But can they forbid a customer to do her own benchmarks? I guess this
> is not possible.
> So, what can be done, is to provide a good set of easy to run
> benchmarks (but not their results) for all 2 (or 3) of available
> solutions that can be performed by any user.
> This way you still obey to the license restrictions, but customers can
> now run on their own those benchmarks and make their conclusions ;-)
>
> > Said that, someone could take care of checking #2 reading their licensing
> > notice and, in any case, another good (and maybe better) fit for a
> > performance benchmarking could be a one-to-one with memcached (with a java
> > connector). A tough one (memcached is written in C or C++ with a long
> > record of successes) but, if you think about it, it is probably the most
> > widely used off-heap cache ever - and it's open source, so benchmarking
> > could be published with no issues. And when the going gets though...
>
> I'm not sure that memcached is a good choice as a competitor. May be
> for web apps it is the case.
> But IMHO, in-memory databases and similar systems are the real
> competitors and this is also where the market moves. Think about Big
> Data (Hadoop with in-memory acceleration, Cassandra with off-heap
> storage, etc), in-memory DBs (e.g. Hana from SAP) for real-time
> analytics systems and many more. Almost all of those systems
> use/would benefit from a good off-heap in-memory solutions, which
> usually run in the same process for performance reasons. I think
> DirectMemory could be a valuable component to build such systems if it
> would focus more on easier integration with such systems and provide
> explicit support for many of them.
>
> Ciao,
>      Roman
>
> > On Sat, Sep 29, 2012 at 10:24 PM, Roman Levenstein <romixlev@gmail.com>wrote:
> >
> >> Hi,
> >>
> >> Terracotta announced that they offer a feature-limited version of
> >> their BigMemory product for free under the name "BigMemory Go".
> >> The main limitation is that you can use at most 32GB of off-heap
> >> memory per JVM instance and there is no support for a replication and
> >> clustering of caches between JVMs/nodes.
> >> http://terracotta.org/products/bigmemorygo
> >>
> >> It looks like off-heap caching solutions are gaining more attention
> >> these days. One of the reasons for Terracotta's move could be a wish
> >> to counteract offerings from competitors and open-source analogs.
> >>
> >> Since it is available now, it would be very interesting to see some
> >> benchmarks comparing off-heap memory solutions from DirectMemory and
> >> Terracota. Is there anyone willing to give it a try? :-)
> >> From their documentation is sounds like they use a standard Java
> >> serialization, which means that DirectMemory could be even faster than
> >> BigMemory, because it uses more efficient serializers.
> >> Also their implementation of  querying/indexing does not sound like
> >> very optimized. May be it is another place, where DirectMemory could
> >> be better. If overall DirectMemory would show a comparable or better
> >> performance, it would add it a lot of credibility, IMHO.
> >>
> >> Another thing, which could be interesting is to look at their APIs and
> >> to see if something could be/should be modeled after it or made
> >> similar to it, so that users can easily switch from their
> >> closed-source solutions to an open-source solution.
> >>
> >> The third thing is: it could be interesting and useful to join forces
> >> among open-source projects that have a similar goal of providing
> >> efficient serialization and off-heap memory.
> >> Right now we have something like this:
> >> - (distributed) off-heap caches: DirectMemory, BigCache
> >> (http://bigcache.org/site/) and Hazelcast Enterprise (off-heap support
> >> is closed-source at the moment)
> >> - serialization: kryo, protostuff, lightning, protocol buffers and
> >> many, many more implementations, which all look very similar
> >> IMHO, too much effort is wasted re-implementing the same functionality
> >> every time by every project. If those projects (especially off-heap
> >> caching impls) would work together it would lead much faster to much
> >> better results. What do you think? Any plans to co-operate with any of
> >> those?
> >>
> >> Regards,
> >>   Roman
> >>
>

Re: FYI: BigMemory Go announcement from Terracotta

Posted by Roman Levenstein <ro...@gmail.com>.
Hi Raffaele,

On Sat, Sep 29, 2012 at 11:43 PM, Raffaele P. Guidi
<ra...@gmail.com> wrote:
> I knew about bigmemory go (I even retweeted the announcement with the
> DirectMemory account) and, yes, I think it could be a sign that open source
> efforts (ours but also the one of the bigcache guys) are, if not scary for
> those big guys, at least an option that cannot be overseen. Another way to
> read it is that maybe the BigMemory market share is not _that_ profitable
> (don't know anything about Terracotta BigMemory and Hazelcast Elastic
> Memory sale numbers, though) and prices are getting down.

And there is yet another possible explanation:
The market for in-memory solutions (DBs, data grids, etc.) is starting
to take off.
Therefore it is important to get a lot of customers/developers hooked
right from the beginning, so that you can later own a big market
share.
And giving away lite versions of products for free is a well-known
practice in such cases.

> We also knew of the BigCache effort- someone of the team tried to contact
> them with a "join efforts" proposal but - with no luck - which is a pity -
> wanna try pushing at them? ;)

Hmm. I tried a few times to contact them on a different occasion, but
never got any response.

> Your idea about learning from their API and adopting it is quite good -
> actually so good that we already did it :P and in our offering we have a
> compatibility module that allows to use DirectMemory with ehcache replacing
> BigMemory. T

This is really cool!
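
For anyone on the list who has not tried it yet, here is a rough sketch of
the builder-style DirectMemory usage that such a compatibility module would
wrap. It is written from memory of the 0.x API, so class and method names
may differ between versions and should be checked against the current
sources:

    import org.apache.directmemory.DirectMemory;
    import org.apache.directmemory.cache.CacheService;
    import org.apache.directmemory.measures.Ram;

    public class DirectMemoryQuickStart {
        public static void main(String[] args) {
            // Builder-style setup of an off-heap cache (sizes chosen arbitrarily).
            CacheService<String, String> cache = new DirectMemory<String, String>()
                    .setNumberOfBuffers(4)
                    .setSize(Ram.Mb(128))
                    .setInitialCapacity(100000)
                    .setConcurrencyLevel(4)
                    .newCacheService();

            cache.put("answer", "42");                 // value is serialized off-heap
            String value = cache.retrieve("answer");   // deserialized back on read
            System.out.println(value);
        }
    }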

> his last little known but relevant fact could ease things a lot
> comparing performance of the two but:
>
>    1. DirectMemory is a young product not backed by any corporate effort -
>    BigMemory has been probably quite better tested on the field and it could
>    possibily perform better even though it could borrow some ideas from the
>    little open source guys :)
>    2. BigMemory (and also elastic memory) is a commercial product with a
>    closed license - keep in mind that while every one could come up with a
>    benchmark comparing the two (the three?) - the license could (as it usually
>    does) _prohibit_ publishing comparative benchmarks results and this should
>    be thoroughly checked before attempting to do that. It could possibly apply
>    to the free version as well

I agree with your point that publishing results may well be forbidden.
But can they forbid a customer from running her own benchmarks? I guess
that is not possible.
So what can be done is to provide a good set of easy-to-run
benchmarks (but not their results) for all two (or three) of the
available solutions, which any user can execute.
This way you still obey the license restrictions, but customers can
run those benchmarks on their own and draw their conclusions ;-)

> Said that, someone could take care of checking #2 reading their licensing
> notice and, in any case, another good (and maybe better) fit for a
> performance benchmarking could be a one-to-one with memcached (with a java
> connector). A tough one (memcached is written in C or C++ with a long
> record of successes) but, if you think about it, it is probably the most
> widely used off-heap cache ever - and it's open source, so benchmarking
> could be published with no issues. And when the going gets though...

I'm not sure that memcached is a good choice as a competitor. Maybe
for web apps that is the case.
But IMHO, in-memory databases and similar systems are the real
competitors, and that is also where the market is moving. Think about Big
Data (Hadoop with in-memory acceleration, Cassandra with off-heap
storage, etc.), in-memory DBs (e.g. HANA from SAP) for real-time
analytics systems, and many more. Almost all of those systems use, or
would benefit from, a good off-heap in-memory solution, which
usually runs in the same process for performance reasons. I think
DirectMemory could be a valuable component for building such systems if
it focused more on easier integration with them and provided
explicit support for many of them.

Ciao,
     Roman

> On Sat, Sep 29, 2012 at 10:24 PM, Roman Levenstein <ro...@gmail.com>wrote:
>
>> Hi,
>>
>> Terracotta announced that they offer a feature-limited version of
>> their BigMemory product for free under the name "BigMemory Go".
>> The main limitation is that you can use at most 32GB of off-heap
>> memory per JVM instance and there is no support for a replication and
>> clustering of caches between JVMs/nodes.
>> http://terracotta.org/products/bigmemorygo
>>
>> It looks like off-heap caching solutions are gaining more attention
>> these days. One of the reasons for Terracotta's move could be a wish
>> to counteract offerings from competitors and open-source analogs.
>>
>> Since it is available now, it would be very interesting to see some
>> benchmarks comparing off-heap memory solutions from DirectMemory and
>> Terracota. Is there anyone willing to give it a try? :-)
>> From their documentation is sounds like they use a standard Java
>> serialization, which means that DirectMemory could be even faster than
>> BigMemory, because it uses more efficient serializers.
>> Also their implementation of  querying/indexing does not sound like
>> very optimized. May be it is another place, where DirectMemory could
>> be better. If overall DirectMemory would show a comparable or better
>> performance, it would add it a lot of credibility, IMHO.
>>
>> Another thing, which could be interesting is to look at their APIs and
>> to see if something could be/should be modeled after it or made
>> similar to it, so that users can easily switch from their
>> closed-source solutions to an open-source solution.
>>
>> The third thing is: it could be interesting and useful to join forces
>> among open-source projects that have a similar goal of providing
>> efficient serialization and off-heap memory.
>> Right now we have something like this:
>> - (distributed) off-heap caches: DirectMemory, BigCache
>> (http://bigcache.org/site/) and Hazelcast Enterprise (off-heap support
>> is closed-source at the moment)
>> - serialization: kryo, protostuff, lightning, protocol buffers and
>> many, many more implementations, which all look very similar
>> IMHO, too much effort is wasted re-implementing the same functionality
>> every time by every project. If those projects (especially off-heap
>> caching impls) would work together it would lead much faster to much
>> better results. What do you think? Any plans to co-operate with any of
>> those?
>>
>> Regards,
>>   Roman
>>

Re: FYI: BigMemory Go announcement from Terracotta

Posted by "Raffaele P. Guidi" <ra...@gmail.com>.
I knew about BigMemory Go (I even retweeted the announcement with the
DirectMemory account) and, yes, I think it could be a sign that open source
efforts (ours but also that of the BigCache guys) are, if not scary for
those big guys, at least an option that cannot be overlooked. Another way to
read it is that maybe the BigMemory market is not _that_ profitable
(I don't know anything about Terracotta BigMemory and Hazelcast Elastic
Memory sales numbers, though) and prices are coming down.

We also knew of the BigCache effort - someone on the team tried to contact
them with a "join efforts" proposal, but with no luck, which is a pity -
wanna try pushing them? ;)

Your idea about learning from their API and adopting it is quite good -
actually so good that we already did it :P and in our offering we have a
compatibility module that allows DirectMemory to be used with Ehcache,
replacing BigMemory. This last little-known but relevant fact could make
comparing the performance of the two much easier, but:

   1. DirectMemory is a young product not backed by any corporate effort -
   BigMemory has probably been much better tested in the field, and it could
   possibly perform better even though it could borrow some ideas from the
   little open source guys :)
   2. BigMemory (and also Elastic Memory) is a commercial product with a
   closed license - keep in mind that while everyone could come up with a
   benchmark comparing the two (the three?), the license could (as it usually
   does) _prohibit_ publishing comparative benchmark results, and this should
   be thoroughly checked before attempting to do that. It could possibly apply
   to the free version as well.

That said, someone could take care of checking #2 by reading their licensing
notice and, in any case, another good (and maybe better) fit for a
performance benchmark could be a one-to-one with memcached (through a Java
connector). A tough one (memcached is written in C, with a long
record of successes) but, if you think about it, it is probably the most
widely used off-heap cache ever - and it's open source, so benchmarks
could be published with no issues. And when the going gets tough...
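
If someone wants to pursue the memcached angle, the round trip through a
Java connector is small enough to sketch here - a minimal example using
spymemcached, assuming a memcached daemon is already running on
localhost:11211:

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class MemcachedRoundTrip {
        public static void main(String[] args) throws Exception {
            // Assumes memcached is already running locally on the default port.
            MemcachedClient client =
                    new MemcachedClient(new InetSocketAddress("localhost", 11211));

            client.set("answer", 3600, "42").get();   // store with 1h expiry, wait for the ack
            Object value = client.get("answer");      // synchronous get
            System.out.println(value);                // -> 42

            client.shutdown();
        }
    }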

Ciao,
    R

On Sat, Sep 29, 2012 at 10:24 PM, Roman Levenstein <ro...@gmail.com>wrote:

> Hi,
>
> Terracotta announced that they offer a feature-limited version of
> their BigMemory product for free under the name "BigMemory Go".
> The main limitation is that you can use at most 32GB of off-heap
> memory per JVM instance and there is no support for a replication and
> clustering of caches between JVMs/nodes.
> http://terracotta.org/products/bigmemorygo
>
> It looks like off-heap caching solutions are gaining more attention
> these days. One of the reasons for Terracotta's move could be a wish
> to counteract offerings from competitors and open-source analogs.
>
> Since it is available now, it would be very interesting to see some
> benchmarks comparing off-heap memory solutions from DirectMemory and
> Terracota. Is there anyone willing to give it a try? :-)
> From their documentation is sounds like they use a standard Java
> serialization, which means that DirectMemory could be even faster than
> BigMemory, because it uses more efficient serializers.
> Also their implementation of  querying/indexing does not sound like
> very optimized. May be it is another place, where DirectMemory could
> be better. If overall DirectMemory would show a comparable or better
> performance, it would add it a lot of credibility, IMHO.
>
> Another thing, which could be interesting is to look at their APIs and
> to see if something could be/should be modeled after it or made
> similar to it, so that users can easily switch from their
> closed-source solutions to an open-source solution.
>
> The third thing is: it could be interesting and useful to join forces
> among open-source projects that have a similar goal of providing
> efficient serialization and off-heap memory.
> Right now we have something like this:
> - (distributed) off-heap caches: DirectMemory, BigCache
> (http://bigcache.org/site/) and Hazelcast Enterprise (off-heap support
> is closed-source at the moment)
> - serialization: kryo, protostuff, lightning, protocol buffers and
> many, many more implementations, which all look very similar
> IMHO, too much effort is wasted re-implementing the same functionality
> every time by every project. If those projects (especially off-heap
> caching impls) would work together it would lead much faster to much
> better results. What do you think? Any plans to co-operate with any of
> those?
>
> Regards,
>   Roman
>