Posted to user@cassandra.apache.org by Kant Kodali <ka...@peernova.com> on 2016/11/05 10:24:34 UTC

Is there a way to do Read and Set at Cassandra level?

I have a requirement where I need to know the last value that was written
successfully, so that I can read it, do some computation, and include the
result in the subsequent write. For now we are doing a read before every
write, which significantly degrades performance. Lightweight transactions
are more of a compare-and-set than a read-and-set. The first thing I tried
was to eliminate this need at the application level, but it turns out to be
a hard requirement for us, so I am wondering if there is any way to
optimize it. I know batching could help, in the sense that I can do one
read per batch so the writes in that batch don't each take a read
performance hit, but are there any clever ideas or tricks I could use?
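
For context, the difference between the two patterns can be sketched as
follows. This is a toy simulation in Python that uses a plain dict as a
stand-in for a Cassandra partition (no cluster or driver involved); the
function names are illustrative only, not any real API:

```python
# A plain dict stands in for a Cassandra partition. Illustrative only.
table = {}

def read_before_write(key, compute):
    """The costly pattern: one read, then an unconditional write."""
    prev = table.get(key)      # extra round trip on every write
    table[key] = compute(prev)
    return table[key]

def compare_and_set(key, expected, new_value):
    """What an LWT-style conditional write gives you: the write is applied
    only if the current value matches `expected` -- you must already know
    the last value, it does not hand it to you."""
    if table.get(key) == expected:
        table[key] = new_value
        return True            # applied
    return False               # not applied; caller must re-read and retry

# Read-before-write: works, but pays a read per write.
read_before_write("counter", lambda prev: (prev or 0) + 1)

# Compare-and-set: succeeds only because we already know the value is 1.
ok = compare_and_set("counter", expected=1, new_value=2)
```

This is why a compare-and-set primitive alone does not remove the read:
the "expected" value has to come from somewhere.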

Re: Is there a way to do Read and Set at Cassandra level?

Posted by DuyHai Doan <do...@gmail.com>.
"But then don't I need to evict for every batch of writes?"

Yes, that's why I think an in-memory distributed data structure is a good
fit for your scenario. Using a log-structured merge tree like C* for this
use case is not the most efficient choice.
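
One way to picture the eviction question: if the structure keeps only the
latest value per key, each write overwrites its slot in place, so there is
nothing to evict per write; eviction only matters for capping the total
number of keys. A minimal sketch (a local OrderedDict stands in for a
distributed store; the class and its bound are hypothetical):

```python
from collections import OrderedDict

class LastValueCache:
    """Bounded map of key -> last written value. Writes overwrite in
    place; LRU eviction only runs when the key count exceeds the cap."""
    def __init__(self, max_keys=1000):
        self.max_keys = max_keys
        self.data = OrderedDict()

    def put(self, key, value):
        self.data[key] = value             # overwrite; no per-write eviction
        self.data.move_to_end(key)         # mark as most recently used
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # drop least recently used key

    def get(self, key):
        return self.data.get(key)

c = LastValueCache(max_keys=2)
c.put("a", 1); c.put("b", 2)
c.put("a", 3)      # overwrites "a", does not evict anything
c.put("c", 4)      # third key exceeds the cap, evicts the LRU key "b"
```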

On Sat, Nov 5, 2016 at 11:52 AM, Kant Kodali <ka...@peernova.com> wrote:

> But then don't I need to evict for every batch of writes? I thought cache
> would make sense when reads/writes > 1 per say. What do you think?
>
> On Sat, Nov 5, 2016 at 3:33 AM, DuyHai Doan <do...@gmail.com> wrote:
>
>> "I have a requirement where I need to know last value that is written
>> successfully so I could read that value and do some computation and include
>> it in the subsequent write"
>>
>> Maybe keeping the last written value in a distributed cache is cheaper
>> than doing a read before write in Cassandra ?
>>
>> On Sat, Nov 5, 2016 at 11:24 AM, Kant Kodali <ka...@peernova.com> wrote:
>>
>>> I have a requirement where I need to know last value that is written
>>> successfully so I could read that value and do some computation and include
>>> it in the subsequent write. For now we are doing read before write which
>>> significantly degrades the performance. Light weight transactions are more
>>> of a compare and set than a Read and Set. The very first thing I tried is
>>> to see if I can eliminate this need by the application but looks like it is
>>> a strong requirement for us so I am wondering if there is any way I can
>>> optimize that? I know batching could help in the sense I can do one read
>>> for every batch so that the writes in the batch doesn't take a read
>>> performance hit but I wonder if there is any clever ideas or tricks I can
>>> do?
>>>
>>
>>
>

Re: Is there a way to do Read and Set at Cassandra level?

Posted by Kant Kodali <ka...@peernova.com>.
But then don't I need to evict for every batch of writes? I thought a
cache would only make sense when the reads-to-writes ratio is greater than
1. What do you think?

On Sat, Nov 5, 2016 at 3:33 AM, DuyHai Doan <do...@gmail.com> wrote:

> "I have a requirement where I need to know last value that is written
> successfully so I could read that value and do some computation and include
> it in the subsequent write"
>
> Maybe keeping the last written value in a distributed cache is cheaper
> than doing a read before write in Cassandra ?
>
> On Sat, Nov 5, 2016 at 11:24 AM, Kant Kodali <ka...@peernova.com> wrote:
>
>> I have a requirement where I need to know last value that is written
>> successfully so I could read that value and do some computation and include
>> it in the subsequent write. For now we are doing read before write which
>> significantly degrades the performance. Light weight transactions are more
>> of a compare and set than a Read and Set. The very first thing I tried is
>> to see if I can eliminate this need by the application but looks like it is
>> a strong requirement for us so I am wondering if there is any way I can
>> optimize that? I know batching could help in the sense I can do one read
>> for every batch so that the writes in the batch doesn't take a read
>> performance hit but I wonder if there is any clever ideas or tricks I can
>> do?
>>
>
>

Re: Is there a way to do Read and Set at Cassandra level?

Posted by DuyHai Doan <do...@gmail.com>.
"I have a requirement where I need to know last value that is written
successfully so I could read that value and do some computation and include
it in the subsequent write"

Maybe keeping the last written value in a distributed cache is cheaper
than doing a read-before-write in Cassandra?
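
The suggestion above amounts to moving the hot-path read off Cassandra
entirely: read the previous value from a fast in-memory store, and only
ever write to Cassandra. A minimal sketch, with plain dicts standing in
for the cache (e.g. something like Redis) and the Cassandra table; all
names here are illustrative:

```python
cache = {}      # stand-in for a distributed in-memory cache
cassandra = {}  # stand-in for the Cassandra table

def write(key, compute):
    prev = cache.get(key)    # cheap in-memory read, not a Cassandra read
    value = compute(prev)
    cassandra[key] = value   # the only Cassandra operation on this path
    cache[key] = value       # remember the last successfully written value
    return value

write("k", lambda prev: (prev or 0) + 10)
write("k", lambda prev: prev + 10)
```

Note the sketch ignores failure handling: if the Cassandra write fails or
multiple writers race on the same key, the cache and table can diverge,
which a real implementation would need to address.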

On Sat, Nov 5, 2016 at 11:24 AM, Kant Kodali <ka...@peernova.com> wrote:

> I have a requirement where I need to know last value that is written
> successfully so I could read that value and do some computation and include
> it in the subsequent write. For now we are doing read before write which
> significantly degrades the performance. Light weight transactions are more
> of a compare and set than a Read and Set. The very first thing I tried is
> to see if I can eliminate this need by the application but looks like it is
> a strong requirement for us so I am wondering if there is any way I can
> optimize that? I know batching could help in the sense I can do one read
> for every batch so that the writes in the batch doesn't take a read
> performance hit but I wonder if there is any clever ideas or tricks I can
> do?
>
