Posted to user@cassandra.apache.org by horschi <ho...@gmail.com> on 2016/03/08 14:52:03 UTC

Dynamic TTLs / limits still not working in 2.2 ?

Hi,

according to CASSANDRA-4450
<https://issues.apache.org/jira/browse/CASSANDRA-4450> it should be fixed,
but I still can't use dynamic TTLs or limits in my CQL queries.

Query:
update mytable set data=:data where ts=:ts and randkey=:randkey using ttl
:timetolive

Exception:
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:138
missing EOF at 'using' (...:ts and randkey=:randkey [using] ttl...)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:100)

I am using Cassandra 2.2 (with DataStax Java driver 2.1.9) and I still see
this, even though the Jira ticket states fixVersion 2.0.

Has anyone used this successfully? Am I doing something wrong or is there
still a bug?

kind regards,
Christian


Tickets:
https://datastax-oss.atlassian.net/browse/JAVA-54
https://issues.apache.org/jira/browse/CASSANDRA-4450

Re: nulls in prepared statement & tombstones?

Posted by Steve Robenalt <sr...@highwire.org>.
Thanks Adam, that's good to know.

On Wed, Mar 9, 2016 at 7:42 AM, Adam Holmberg <ad...@datastax.com>
wrote:

> The referenced article is accurate as far as NULL is concerned, but please
> also note that there is now the ability to specify UNSET to avoid
> unnecessary tombstones (as of Cassandra 2.2.0):
> https://issues.apache.org/jira/browse/CASSANDRA-7304
>
> Adam
>
> On Tue, Mar 8, 2016 at 12:15 PM, Henry M <he...@gmail.com> wrote:
>
>> Thank you. So it's probably not specific to prepared statements, but rather
>> a more general statement. That makes sense.
>>
>>
>> On Tue, Mar 8, 2016 at 10:06 AM Steve Robenalt <sr...@highwire.org>
>> wrote:
>>
>>> Hi Henry,
>>>
>>> I would suspect that the tombstones are necessary to overwrite any
>>> previous values in the null'd columns. Since Cassandra avoids
>>> read-before-write, there's no way to be sure that the nulls were not
>>> intended to remove any such previous values, so the tombstones ensure that
>>> they don't re-appear.
>>>
>>> Steve
>>>
>>>
>>>
>>> On Tue, Mar 8, 2016 at 9:36 AM, Henry Manasseh <he...@gmail.com>
>>> wrote:
>>>
>>>> The following article makes the following statement which I am trying
>>>> to understand:
>>>>
>>>> *"Cassandra’s storage engine is optimized to avoid storing unnecessary
>>>> empty columns, but when using prepared statements those parameters that are
>>>> not provided result in null values being passed to Cassandra (and thus
>>>> tombstones being stored)." *
>>>> http://www.datastax.com/dev/blog/4-simple-rules-when-using-the-datastax-drivers-for-cassandra
>>>>
>>>> I was wondering if someone could help explain why sending nulls as part
>>>> of a prepared statement update would result in tombstones.
>>>>
>>>> Thank you,
>>>> - Henry
>>>>
>>>
>>>
>>>
>>> --
>>> Steve Robenalt
>>> Software Architect
>>> srobenalt@highwire.org <bz...@highwire.org>
>>> (office/cell): 916-505-1785
>>>
>>> HighWire Press, Inc.
>>> 425 Broadway St, Redwood City, CA 94063
>>> www.highwire.org
>>>
>>> Technology for Scholarly Communication
>>>
>>
>


-- 
Steve Robenalt
Software Architect
srobenalt@highwire.org <bz...@highwire.org>
(office/cell): 916-505-1785

HighWire Press, Inc.
425 Broadway St, Redwood City, CA 94063
www.highwire.org

Technology for Scholarly Communication

Re: nulls in prepared statement & tombstones?

Posted by Adam Holmberg <ad...@datastax.com>.
The referenced article is accurate as far as NULL is concerned, but please
also note that there is now the ability to specify UNSET to avoid
unnecessary tombstones (as of Cassandra 2.2.0):
https://issues.apache.org/jira/browse/CASSANDRA-7304
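
For example, a rough Java-driver sketch of what that looks like. The table and
column names below are made up, and it assumes a protocol-v4 connection (i.e.
Cassandra 2.2+ with a v4-capable driver such as 3.x), where a variable that is
never bound goes out as UNSET:

import com.datastax.driver.core.*;
import java.util.UUID;

// Sketch only: table "users" (id uuid PRIMARY KEY, name text, email text) is made up.
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Session session = cluster.connect("mykeyspace");

PreparedStatement ps = session.prepare(
    "UPDATE users SET name = :name, email = :email WHERE id = :id");

BoundStatement bound = ps.bind();
bound.setUUID("id", UUID.randomUUID());
bound.setString("name", "Henry");
// "email" is deliberately never bound: over protocol v4 it is sent as UNSET,
// so the existing value is left alone and no tombstone is written.
// Binding null instead would delete the column, i.e. write a tombstone.
session.execute(bound);

cluster.close();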

Adam

On Tue, Mar 8, 2016 at 12:15 PM, Henry M <he...@gmail.com> wrote:

> Thank you. So it's probably not specific to prepared statements, but rather
> a more general statement. That makes sense.
>
>
> On Tue, Mar 8, 2016 at 10:06 AM Steve Robenalt <sr...@highwire.org>
> wrote:
>
>> Hi Henry,
>>
>> I would suspect that the tombstones are necessary to overwrite any
>> previous values in the null'd columns. Since Cassandra avoids
>> read-before-write, there's no way to be sure that the nulls were not
>> intended to remove any such previous values, so the tombstones ensure that
>> they don't re-appear.
>>
>> Steve
>>
>>
>>
>> On Tue, Mar 8, 2016 at 9:36 AM, Henry Manasseh <he...@gmail.com>
>> wrote:
>>
>>> The following article makes the following statement which I am trying to
>>> understand:
>>>
>>> *"Cassandra’s storage engine is optimized to avoid storing unnecessary
>>> empty columns, but when using prepared statements those parameters that are
>>> not provided result in null values being passed to Cassandra (and thus
>>> tombstones being stored)." *
>>> http://www.datastax.com/dev/blog/4-simple-rules-when-using-the-datastax-drivers-for-cassandra
>>>
>>> I was wondering if someone could help explain why sending nulls as part
>>> of a prepared statement update would result in tombstones.
>>>
>>> Thank you,
>>> - Henry
>>>
>>
>>
>>
>> --
>> Steve Robenalt
>> Software Architect
>> srobenalt@highwire.org <bz...@highwire.org>
>> (office/cell): 916-505-1785
>>
>> HighWire Press, Inc.
>> 425 Broadway St, Redwood City, CA 94063
>> www.highwire.org
>>
>> Technology for Scholarly Communication
>>
>

Re: nulls in prepared statement & tombstones?

Posted by Henry M <he...@gmail.com>.
Thank you. So it's probably not specific to prepared statements, but rather
a more general statement. That makes sense.


On Tue, Mar 8, 2016 at 10:06 AM Steve Robenalt <sr...@highwire.org>
wrote:

> Hi Henry,
>
> I would suspect that the tombstones are necessary to overwrite any
> previous values in the null'd columns. Since Cassandra avoids
> read-before-write, there's no way to be sure that the nulls were not
> intended to remove any such previous values, so the tombstones ensure that
> they don't re-appear.
>
> Steve
>
>
>
> On Tue, Mar 8, 2016 at 9:36 AM, Henry Manasseh <he...@gmail.com>
> wrote:
>
>> The following article makes the following statement which I am trying to
>> understand:
>>
>> *"Cassandra’s storage engine is optimized to avoid storing unnecessary
>> empty columns, but when using prepared statements those parameters that are
>> not provided result in null values being passed to Cassandra (and thus
>> tombstones being stored)." *
>> http://www.datastax.com/dev/blog/4-simple-rules-when-using-the-datastax-drivers-for-cassandra
>>
>> I was wondering if someone could help explain why sending nulls as part
>> of a prepared statement update would result in tombstones.
>>
>> Thank you,
>> - Henry
>>
>
>
>
> --
> Steve Robenalt
> Software Architect
> srobenalt@highwire.org <bz...@highwire.org>
> (office/cell): 916-505-1785
>
> HighWire Press, Inc.
> 425 Broadway St, Redwood City, CA 94063
> www.highwire.org
>
> Technology for Scholarly Communication
>

Re: nulls in prepared statement & tombstones?

Posted by Steve Robenalt <sr...@highwire.org>.
Hi Henry,

I would suspect that the tombstones are necessary to overwrite any previous
values in the null'd columns. Since Cassandra avoids read-before-write,
there's no way to be sure that the nulls were not intended to remove any
such previous values, so the tombstones ensure that they don't re-appear.
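
A rough sketch of that in driver terms (the table is made up; "session" is
assumed to be an already-connected Session):

import com.datastax.driver.core.*;
import java.util.UUID;

// Sketch: "users" table is made up; session is an already-connected Session.
PreparedStatement ps = session.prepare(
    "UPDATE users SET email = :email WHERE id = :id");

// Binding null is equivalent to "UPDATE users SET email = null ...": without a
// read before the write, Cassandra cannot know whether an older value exists,
// so it writes a tombstone for "email" so that nothing older can come back.
session.execute(ps.bind()
    .setUUID("id", UUID.randomUUID())
    .setString("email", null));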

Steve



On Tue, Mar 8, 2016 at 9:36 AM, Henry Manasseh <he...@gmail.com>
wrote:

> The following article makes the following statement which I am trying to
> understand:
>
> *"Cassandra’s storage engine is optimized to avoid storing unnecessary
> empty columns, but when using prepared statements those parameters that are
> not provided result in null values being passed to Cassandra (and thus
> tombstones being stored)." *
> http://www.datastax.com/dev/blog/4-simple-rules-when-using-the-datastax-drivers-for-cassandra
>
> I was wondering if someone could help explain why sending nulls as part of
> a prepared statement update would result in tombstones.
>
> Thank you,
> - Henry
>



-- 
Steve Robenalt
Software Architect
srobenalt@highwire.org <bz...@highwire.org>
(office/cell): 916-505-1785

HighWire Press, Inc.
425 Broadway St, Redwood City, CA 94063
www.highwire.org

Technology for Scholarly Communication

nulls in prepared statement & tombstones?

Posted by Henry Manasseh <he...@gmail.com>.
The following article makes the following statement which I am trying to
understand:

*"Cassandra’s storage engine is optimized to avoid storing unnecessary
empty columns, but when using prepared statements those parameters that are
not provided result in null values being passed to Cassandra (and thus
tombstones being stored)." *
http://www.datastax.com/dev/blog/4-simple-rules-when-using-the-datastax-drivers-for-cassandra

I was wondering if someone could help explain why sending nulls as part of
a prepared statement update would result in tombstones.

Thank you,
- Henry

Re: Dynamic TTLs / limits still not working in 2.2 ?

Posted by horschi <ho...@gmail.com>.
Ok, I just realized the parameter should not be called ":limit" :-)

Also I upgraded my Java Driver from 2.1.6 to 2.1.9.

Both TTL and limit work fine now. Sorry again for the confusion.
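
For anyone finding this thread later, a rough sketch of the limit part,
presumably failing before because "limit" is a reserved word in CQL (the table
below is made up; any non-reserved marker name should do):

import com.datastax.driver.core.*;

// Sketch: events(key text, seq timeuuid, data text, PRIMARY KEY (key, seq)) is
// made up; session is an already-connected Session.
PreparedStatement ps = session.prepare(
    "SELECT data FROM events WHERE key = :key LIMIT :maxrows");

ResultSet rs = session.execute(ps.bind()
    .setString("key", "some-partition")
    .setInt("maxrows", 100));           // the dynamic limit

for (Row row : rs) {
    System.out.println(row.getString("data"));
}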

cheers,
Christian


On Tue, Mar 8, 2016 at 3:19 PM, horschi <ho...@gmail.com> wrote:

> Oh, I just realized I made a mistake with the TTL query:
>
> The TTL has to be specified before the set. Like this:
> update mytable using ttl :timetolive set data=:data where ts=:ts and
> randkey=:randkey
>
> And this of course works nicely. Sorry for the confusion.
>
>
> Nevertheless, I don't think this is the issue with my "select ... limit"
>> queries. But I will verify this and also try the workaround.
>
>
>
> On Tue, Mar 8, 2016 at 3:08 PM, horschi <ho...@gmail.com> wrote:
>
>> Hi Nick,
>>
>> I will try your workaround. Thanks a lot.
>>
>> I was not expecting the Java-Driver to have a bug, because in the Jira
>> Ticket (JAVA-54) it says "not a problem". So I assumed there is nothing to
>> do to support it :-)
>>
>> kind regards,
>> Christian
>>
>> On Tue, Mar 8, 2016 at 2:56 PM, Nicholas Wilson <
>> nicholas.wilson@realvnc.com> wrote:
>>
>>> Hi Christian,
>>>
>>>
>>> I ran into this problem last month; after some chasing I thought it was
>>> possibly a bug in the Datastax driver, which I'm also using. The CQL
>>> protocol itself supports dynamic TTLs fine.
>>>
>>>
>>> One workaround that seems to work is to use an unnamed bind marker for
>>> the TTL ('?') and then set it using the "[ttl]" reserved name as the bind
>>> marker name ('setLong("[ttl]", myTtl)'), which will set the correct field
>>> in the bound statement.
>>>
>>>
>>> Best,
>>>
>>> Nick​
>>>
>>>
>>> ------------------------------
>>> *From:* horschi <ho...@gmail.com>
>>> *Sent:* 08 March 2016 13:52
>>> *To:* user@cassandra.apache.org
>>> *Subject:* Dynamic TTLs / limits still not working in 2.2 ?
>>>
>>> Hi,
>>>
>>> according to CASSANDRA-4450
>>> <https://issues.apache.org/jira/browse/CASSANDRA-4450> it should be
>>> fixed, but I still can't use dynamic TTLs or limits in my CQL queries.
>>>
>>> Query:
>>> update mytable set data=:data where ts=:ts and randkey=:randkey using
>>> ttl :timetolive
>>>
>>> Exception:
>>> Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:138
>>> missing EOF at 'using' (...:ts and randkey=:randkey [using] ttl...)
>>> at
>>> com.datastax.driver.core.Responses$Error.asException(Responses.java:100)
>>>
>>> I am using Cassandra 2.2 (using Datastax java driver 2.1.9) and I still
>>> see this, even though the Jira ticket states fixVersion 2.0.
>>>
>>> Has anyone used this successfully? Am I doing something wrong or is
>>> there still a bug?
>>>
>>> kind regards,
>>> Christian
>>>
>>>
>>> Tickets:
>>> https://datastax-oss.atlassian.net/browse/JAVA-54
>>> https://issues.apache.org/jira/browse/CASSANDRA-4450
>>>
>>>
>>>
>>
>

Re: Dynamic TTLs / limits still not working in 2.2 ?

Posted by horschi <ho...@gmail.com>.
Oh, I just realized I made a mistake with the TTL query:

The TTL has to be specified before the set. Like this:
update mytable using ttl :timetolive set data=:data where ts=:ts and
randkey=:randkey

And this of course works nicely. Sorry for the confusion.
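
A quick sketch of the corrected statement with the Java driver (the column
types here are guesses: data text, ts bigint, randkey int; adjust the setters
to the real schema):

import com.datastax.driver.core.*;

// Sketch: session is an already-connected Session; column types are assumed.
PreparedStatement ps = session.prepare(
    "UPDATE mytable USING TTL :timetolive SET data = :data " +
    "WHERE ts = :ts AND randkey = :randkey");

session.execute(ps.bind()
    .setInt("timetolive", 86400)              // TTL in seconds (here: one day)
    .setString("data", "payload")
    .setLong("ts", System.currentTimeMillis())
    .setInt("randkey", 42));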


Nevertheless, I don't think this is the issue with my "select ... limit"
queries. But I will verify this and also try the workaround.



On Tue, Mar 8, 2016 at 3:08 PM, horschi <ho...@gmail.com> wrote:

> Hi Nick,
>
> I will try your workaround. Thanks a lot.
>
> I was not expecting the Java-Driver to have a bug, because in the Jira
> Ticket (JAVA-54) it says "not a problem". So I assumed there is nothing to
> do to support it :-)
>
> kind regards,
> Christian
>
> On Tue, Mar 8, 2016 at 2:56 PM, Nicholas Wilson <
> nicholas.wilson@realvnc.com> wrote:
>
>> Hi Christian,
>>
>>
>> I ran into this problem last month; after some chasing I thought it was
>> possibly a bug in the Datastax driver, which I'm also using. The CQL
>> protocol itself supports dynamic TTLs fine.
>>
>>
>> One workaround that seems to work is to use an unnamed bind marker for
>> the TTL ('?') and then set it using the "[ttl]" reserved name as the bind
>> marker name ('setLong("[ttl]", myTtl)'), which will set the correct field
>> in the bound statement.
>>
>>
>> Best,
>>
>> Nick​
>>
>>
>> ------------------------------
>> *From:* horschi <ho...@gmail.com>
>> *Sent:* 08 March 2016 13:52
>> *To:* user@cassandra.apache.org
>> *Subject:* Dynamic TTLs / limits still not working in 2.2 ?
>>
>> Hi,
>>
>> according to CASSANDRA-4450
>> <https://issues.apache.org/jira/browse/CASSANDRA-4450> it should be
>> fixed, but I still can't use dynamic TTLs or limits in my CQL queries.
>>
>> Query:
>> update mytable set data=:data where ts=:ts and randkey=:randkey using ttl
>> :timetolive
>>
>> Exception:
>> Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:138
>> missing EOF at 'using' (...:ts and randkey=:randkey [using] ttl...)
>> at
>> com.datastax.driver.core.Responses$Error.asException(Responses.java:100)
>>
>> I am using Cassandra 2.2 (using Datastax java driver 2.1.9) and I still
>> see this, even though the Jira ticket states fixVersion 2.0.
>>
>> Has anyone used this successfully? Am I doing something wrong or is there
>> still a bug?
>>
>> kind regards,
>> Christian
>>
>>
>> Tickets:
>> https://datastax-oss.atlassian.net/browse/JAVA-54
>> https://issues.apache.org/jira/browse/CASSANDRA-4450
>>
>>
>>
>

Re: Dynamic TTLs / limits still not working in 2.2 ?

Posted by horschi <ho...@gmail.com>.
Hi Nick,

I will try your workaround. Thanks a lot.

I was not expecting the Java-Driver to have a bug, because in the Jira
Ticket (JAVA-54) it says "not a problem". So I assumed there is nothing to
do to support it :-)

kind regards,
Christian

On Tue, Mar 8, 2016 at 2:56 PM, Nicholas Wilson <nicholas.wilson@realvnc.com
> wrote:

> Hi Christian,
>
>
> I ran into this problem last month; after some chasing I thought it was
> possibly a bug in the Datastax driver, which I'm also using. The CQL
> protocol itself supports dynamic TTLs fine.
>
>
> One workaround that seems to work is to use an unnamed bind marker for the
> TTL ('?') and then set it using the "[ttl]" reserved name as the bind
> marker name ('setLong("[ttl]", myTtl)'), which will set the correct field
> in the bound statement.
>
>
> Best,
>
> Nick​
>
>
> ------------------------------
> *From:* horschi <ho...@gmail.com>
> *Sent:* 08 March 2016 13:52
> *To:* user@cassandra.apache.org
> *Subject:* Dynamic TTLs / limits still not working in 2.2 ?
>
> Hi,
>
> according to CASSANDRA-4450
> <https://issues.apache.org/jira/browse/CASSANDRA-4450> it should be
> fixed, but I still can't use dynamic TTLs or limits in my CQL queries.
>
> Query:
> update mytable set data=:data where ts=:ts and randkey=:randkey using ttl
> :timetolive
>
> Exception:
> Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:138
> missing EOF at 'using' (...:ts and randkey=:randkey [using] ttl...)
> at com.datastax.driver.core.Responses$Error.asException(Responses.java:100)
>
> I am using Cassandra 2.2 (using Datastax java driver 2.1.9) and I still
> see this, even though the Jira ticket states fixVersion 2.0.
>
> Has anyone used this successfully? Am I doing something wrong or is there
> still a bug?
>
> kind regards,
> Christian
>
>
> Tickets:
> https://datastax-oss.atlassian.net/browse/JAVA-54
> https://issues.apache.org/jira/browse/CASSANDRA-4450
>
>
>

Re: Dynamic TTLs / limits still not working in 2.2 ?

Posted by Nicholas Wilson <ni...@realvnc.com>.
Hi Christian,


I ran into this problem last month; after some chasing I thought it was possibly a bug in the Datastax driver, which I'm also using. The CQL protocol itself supports dynamic TTLs fine.


One workaround that seems to work is to use an unnamed bind marker for the TTL ('?') and then set it using the "[ttl]" reserved name as the bind marker name ('setLong("[ttl]", myTtl)'), which will set the correct field in the bound statement.
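
A rough sketch of that workaround (column types are guesses; the anonymous
markers are bound by position, except the TTL one, which is addressed by its
reserved name):

import com.datastax.driver.core.*;

// Sketch: session is an already-connected Session; ts/randkey types are assumed.
PreparedStatement ps = session.prepare(
    "UPDATE mytable USING TTL ? SET data = ? WHERE ts = ? AND randkey = ?");

BoundStatement bound = ps.bind();
// The anonymous TTL marker is addressable under the reserved name "[ttl]".
// The mail above suggests setLong("[ttl]", ...); if the driver expects the
// CQL int type for the TTL, setInt is the equivalent call.
bound.setInt("[ttl]", 3600);
bound.setString(1, "payload");                 // data
bound.setLong(2, System.currentTimeMillis());  // ts
bound.setInt(3, 42);                           // randkey
session.execute(bound);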


Best,

Nick


________________________________
From: horschi <ho...@gmail.com>
Sent: 08 March 2016 13:52
To: user@cassandra.apache.org
Subject: Dynamic TTLs / limits still not working in 2.2 ?

Hi,

according to CASSANDRA-4450<https://issues.apache.org/jira/browse/CASSANDRA-4450> it should be fixed, but I still can't use dynamic TTLs or limits in my CQL queries.

Query:
update mytable set data=:data where ts=:ts and randkey=:randkey using ttl :timetolive

Exception:
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:138 missing EOF at 'using' (...:ts and randkey=:randkey [using] ttl...)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:100)

I am using Cassandra 2.2 (using Datastax java driver 2.1.9) and I still see this, even though the Jira ticket states fixVersion 2.0.

Has anyone used this successfully? Am I doing something wrong or is there still a bug?

kind regards,
Christian


Tickets:
https://datastax-oss.atlassian.net/browse/JAVA-54
https://issues.apache.org/jira/browse/CASSANDRA-4450