Posted to user@cassandra.apache.org by Oliver Herrmann <o....@gmail.com> on 2020/03/23 22:00:07 UTC

Table not updating

Hello,

we are facing a strange issue in one of our Cassandra clusters.
We are using prepared statements to update a table with consistency
LOCAL_QUORUM. When updating some tables it happens very often that data
values are not written to the database. When verifying the table using
cqlsh (with CONSISTENCY ALL) the row does not exist.
When using the prepared statements we do not bind values to all
placeholders for the data columns, but I think this should not be a
problem, right?

I checked system.log and debug.log for any hints but nothing is written
into these log files.
It's only happening in one specific cluster. When running the same software
in other clusters everything is working fine.

We are using Cassandra server version 3.11.1 and datastax cpp driver 2.13.0.

Any idea how to analyze/fix this problem?

Regards
Oliver

Re: Table not updating

Posted by Oliver Herrmann <o....@gmail.com>.
Hi Erick, thank you for the hint with NTP. For some reason ntpd was not
running on a few nodes and the time was off by 2 minutes. After restarting
ntpd and adjusting the time everything is working as expected. Thank you
very much. All the best.

On Tue, 24 Mar 2020 at 01:47, Erick Ramirez <erick.ramirez@datastax.com> wrote:

> Oliver, by chance are you also doing a TTL on the data? Or maybe you've
> already issued a DELETE that has a future timestamp?
>
> The only other way I can see this happening is when the nodes' clocks
> are skewed. Have you checked whether NTP has drifted by a large
> amount? Cheers!
>
> GOT QUESTIONS? Apache Cassandra experts from the community and DataStax
> have answers! Share your expertise on https://community.datastax.com/.
>
>

Re: Table not updating

Posted by Erick Ramirez <er...@datastax.com>.
Oliver, by chance are you also doing a TTL on the data? Or maybe you've
already issued a DELETE that has a future timestamp?

The only other way I can see this happening is when the nodes' clocks
are skewed. Have you checked whether NTP has drifted by a large
amount? Cheers!
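
A quick way to check for drift is to compare the clocks on all nodes
directly (these commands assume ntpd and the standard ntpq utility are
installed; adjust for chrony if that is what you run):

    # per node: offset from the configured time sources
    ntpq -p

    # or simply compare the wall clock across nodes
    date -u

Write timestamps come from the driver or the coordinator, so even a couple
of minutes of skew is enough for a newer update to silently lose to an
older write.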

GOT QUESTIONS? Apache Cassandra experts from the community and DataStax
have answers! Share your expertise on https://community.datastax.com/.

RE: Table not updating

Posted by "Durity, Sean R" <SE...@homedepot.com>.
Oh, I see it was clock drift in this case. Glad you found that out.

Sean Durity

From: Durity, Sean R <SE...@homedepot.com>
Sent: Tuesday, March 24, 2020 2:10 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] RE: Table not updating

I’m wondering about nulls. They are written as tombstones. So, it is an interesting question for a prepared statement where you are not binding all the variables. The driver or framework might be doing something you don’t expect.

Sean Durity

From: Sebastian Estevez <se...@datastax.com>
Sent: Monday, March 23, 2020 9:02 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Table not updating

I have seen cases where folks thought they were writing successfully to the database but were really hitting timeouts due to an unhandled future in their loading program. This may very well not be your issue but it's common enough that I thought I would mention it.

Hope you get to the bottom of it!


All the best,





Sebastián Estévez


On Mon, Mar 23, 2020 at 8:50 PM Jeff Jirsa <jj...@gmail.com> wrote:
You need to see what's in that place, it could be:

1) Delete in the future (viewable with SELECT WRITETIME(column) ...). This could be clock skew or using the wrong resolution timestamps (millis vs micros)
2) Some form of corruption if you don't have compression + crc_check_chance. It's possible (but unlikely) that you can have a really broken data file that simulates a deletion marker. You may be able to find this with sstable2json (older versions) or sstabledump (3.0+)

sstabledump your data files that have the key (nodetool getendpoints, nodetool getsstables, sstabledump), look for something unusual.



On Mon, Mar 23, 2020 at 4:00 PM Oliver Herrmann <o....@gmail.com> wrote:
Hello,
we are facing a strange issue in one of our Cassandra clusters.
We are using prepared statements to update a table with consistency LOCAL_QUORUM. When updating some tables it happens very often that data values are not written to the database. When verifying the table using cqlsh (with CONSISTENCY ALL) the row does not exist.
When using the prepared statements we do not bind values to all placeholders for the data columns, but I think this should not be a problem, right?
I checked system.log and debug.log for any hints but nothing is written into these log files.
It's only happening in one specific cluster. When running the same software in other clusters everything is working fine.

We are using Cassandra server version 3.11.1 and datastax cpp driver 2.13.0.

Any idea how to analyze/fix this problem?
Regards
Oliver



RE: Table not updating

Posted by IRONMAN Monttremblant <mo...@ironman.com>.
I'm receiving your messages, WHY?
I do not think I should...



JACYNTHE



Coordonnatrice, Service aux Athlètes

Coordinator, Athlete Services





------------------- Original Message -------------------
From: Durity, Sean R;
Received: Tue Mar 24 2020 14:17:10 GMT-0400 (Eastern Daylight Time)
To: user@cassandra.apache.org;
Subject: RE: Table not updating

I’m wondering about nulls. They are written as tombstones. So, it is an interesting question for a prepared statement where you are not binding all the variables. The driver or framework might be doing something you don’t expect.

Sean Durity

From: Sebastian Estevez <se...@datastax.com>
Sent: Monday, March 23, 2020 9:02 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Table not updating

I have seen cases where folks thought they were writing successfully to the database but were really hitting timeouts due to an unhandled future in their loading program. This may very well not be your issue but it's common enough that I thought I would mention it.

Hope you get to the bottom of it!


All the best,





Sebastián Estévez


On Mon, Mar 23, 2020 at 8:50 PM Jeff Jirsa <jj...@gmail.com> wrote:
You need to see what's in that place, it could be:

1) Delete in the future (viewable with SELECT WRITETIME(column) ...). This could be clock skew or using the wrong resolution timestamps (millis vs micros)
2) Some form of corruption if you don't have compression + crc_check_chance. It's possible (but unlikely) that you can have a really broken data file that simulates a deletion marker. You may be able to find this with sstable2json (older versions) or sstabledump (3.0+)

sstabledump your data files that have the key (nodetool getendpoints, nodetool getsstables, sstabledump), look for something unusual.



On Mon, Mar 23, 2020 at 4:00 PM Oliver Herrmann <o....@gmail.com> wrote:
Hello,
we are facing a strange issue in one of our Cassandra clusters.
We are using prepared statements to update a table with consistency LOCAL_QUORUM. When updating some tables it happens very often that data values are not written to the database. When verifying the table using cqlsh (with CONSISTENCY ALL) the row does not exist.
When using the prepared statements we do not bind values to all placeholders for the data columns, but I think this should not be a problem, right?
I checked system.log and debug.log for any hints but nothing is written into these log files.
It's only happening in one specific cluster. When running the same software in other clusters everything is working fine.

We are using Cassandra server version 3.11.1 and datastax cpp driver 2.13.0.

Any idea how to analyze/fix this problem?
Regards
Oliver



RE: Table not updating

Posted by "Durity, Sean R" <SE...@homedepot.com>.
I’m wondering about nulls. They are written as tombstones. So, it is an interesting question for a prepared statement where you are not binding all the variables. The driver or framework might be doing something you don’t expect.
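
To illustrate the difference (keyspace, table and column names below are
made up), binding NULL explicitly and leaving a column out of the
statement are not the same thing:

    -- explicit NULL: writes a tombstone for column b
    UPDATE my_ks.t SET a = 'x', b = null WHERE k = 1;

    -- column omitted: b is left untouched, no tombstone
    UPDATE my_ks.t SET a = 'x' WHERE k = 1;

What an unbound placeholder in a prepared statement turns into depends on
the driver and the native protocol version: on protocol v4 many drivers
send it as "unset" (no write at all), while older combinations may send
NULL. It is worth confirming what the cpp driver actually does in your case.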

Sean Durity

From: Sebastian Estevez <se...@datastax.com>
Sent: Monday, March 23, 2020 9:02 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Table not updating

I have seen cases where folks thought they were writing successfully to the database but were really hitting timeouts due to an unhandled future in their loading program. This may very well not be your issue but it's common enough that I thought I would mention it.

Hope you get to the bottom of it!


All the best,





Sebastián Estévez


On Mon, Mar 23, 2020 at 8:50 PM Jeff Jirsa <jj...@gmail.com> wrote:
You need to see what's in that place, it could be:

1) Delete in the future (viewable with SELECT WRITETIME(column) ...). This could be clock skew or using the wrong resolution timestamps (millis vs micros)
2) Some form of corruption if you don't have compression + crc_check_chance. It's possible (but unlikely) that you can have a really broken data file that simulates a deletion marker. You may be able to find this with sstable2json (older versions) or sstabledump (3.0+)

sstabledump your data files that have the key (nodetool getendpoints, nodetool getsstables, sstabledump), look for something unusual.



On Mon, Mar 23, 2020 at 4:00 PM Oliver Herrmann <o....@gmail.com> wrote:
Hello,
we are facing a strange issue in one of our Cassandra clusters.
We are using prepared statements to update a table with consistency LOCAL_QUORUM. When updating some tables it happens very often that data values are not written to the database. When verifying the table using cqlsh (with CONSISTENCY ALL) the row does not exist.
When using the prepared statements we do not bind values to all placeholders for the data columns, but I think this should not be a problem, right?
I checked system.log and debug.log for any hints but nothing is written into these log files.
It's only happening in one specific cluster. When running the same software in other clusters everything is working fine.

We are using Cassandra server version 3.11.1 and datastax cpp driver 2.13.0.

Any idea how to analyze/fix this problem?
Regards
Oliver



Re: Table not updating

Posted by Sebastian Estevez <se...@datastax.com>.
I have seen cases where folks thought they were writing successfully to the
database but were really hitting timeouts due to an unhandled future in
their loading program. This may very well not be your issue but it's common
enough that I thought I would mention it.
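
If it helps, the thing to look for on the client side is whether the error
code of every write future is actually checked. A minimal sketch with the
cpp driver (the session and the bound statement are assumed to come from
the usual setup code):

    #include <cassandra.h>
    #include <stdio.h>

    /* Execute a statement and consume its future. If the future is freed
       without being checked, a write timeout never surfaces anywhere. */
    static CassError execute_checked(CassSession* session,
                                     CassStatement* statement) {
      CassFuture* future = cass_session_execute(session, statement);
      cass_future_wait(future);

      CassError rc = cass_future_error_code(future);
      if (rc != CASS_OK) {
        const char* message;
        size_t message_length;
        cass_future_error_message(future, &message, &message_length);
        fprintf(stderr, "write failed: %.*s\n",
                (int)message_length, message);
        /* retry or surface the error here instead of dropping it */
      }
      cass_future_free(future);
      return rc;
    }

The same applies to every asynchronous write your loader issues, not just
the last one.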

Hope you get to the bottom of it!

All the best,



Sebastián Estévez


On Mon, Mar 23, 2020 at 8:50 PM Jeff Jirsa <jj...@gmail.com> wrote:

> You need to see what's in that place, it could be:
>
> 1) Delete in the future (viewable with SELECT WRITETIME(column) ...). This
> could be clock skew or using the wrong resolution timestamps (millis vs
> micros)
> 2) Some form of corruption if you don't have compression + crc_check_chance.
> It's possible (but unlikely) that you can have a really broken data
> file that simulates a deletion marker. You may be able to find this with
> sstable2json (older versions) or sstabledump (3.0+)
>
> sstabledump your data files that have the key (nodetool getendpoints,
> nodetool getsstables, sstabledump), look for something unusual.
>
>
>
> On Mon, Mar 23, 2020 at 4:00 PM Oliver Herrmann <o....@gmail.com>
> wrote:
>
>> Hello,
>>
>> we are facing a strange issue in one of our Cassandra clusters.
>> We are using prepared statements to update a table with consistency
>> LOCAL_QUORUM. When updating some tables it happens very often that data
>> values are not written to the database. When verifying the table using
>> cqlsh (with CONSISTENCY ALL) the row does not exist.
>> When using the prepared statements we do not bind values to all
>> placeholders for the data columns, but I think this should not be a
>> problem, right?
>>
>> I checked system.log and debug.log for any hints but nothing is written
>> into these log files.
>> It's only happening in one specific cluster. When running the same
>> software in other clusters everything is working fine.
>>
>> We are using Cassandra server version 3.11.1 and datastax cpp driver
>> 2.13.0.
>>
>> Any idea how to analyze/fix this problem?
>>
>> Regards
>> Oliver
>>
>>

Re: Table not updating

Posted by Jeff Jirsa <jj...@gmail.com>.
You need to see what's actually stored in that partition; it could be:

1) Delete in the future (viewable with SELECT WRITETIME(column) ...). This
could be clock skew or using the wrong resolution timestamps (millis vs
micros)
2) Some form of corruption if you don't have compression + crc_check_chance.
It's possible (but unlikely) that you can have a really broken data file
that simulates a deletion marker. You may be able to find this with
sstable2json (older versions) or sstabledump (3.0+)

Run sstabledump on the data files that contain the key (nodetool
getendpoints, nodetool getsstables, sstabledump) and look for something
unusual.
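
Concretely, something along these lines (keyspace, table and key are
placeholders):

    # which replicas own the key
    nodetool getendpoints my_ks my_table 'the_missing_key'

    # which sstables on this node contain the key
    nodetool getsstables my_ks my_table 'the_missing_key'

    # dump those partitions and look for deletion markers or odd timestamps
    sstabledump <one of the Data.db paths reported above>

Repeat the getsstables/sstabledump step on each replica that getendpoints
reports.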



On Mon, Mar 23, 2020 at 4:00 PM Oliver Herrmann <o....@gmail.com>
wrote:

> Hello,
>
> we are facing a strange issue in one of our Cassandra clusters.
> We are using prepared statements to update a table with consistency
> LOCAL_QUORUM. When updating some tables it happens very often that data
> values are not written to the database. When verifying the table using
> cqlsh (with CONSISTENCY ALL) the row does not exist.
> When using the prepared statements we do not bind values to all
> placeholders for the data columns, but I think this should not be a
> problem, right?
>
> I checked system.log and debug.log for any hints but nothing is written
> into these log files.
> It's only happening in one specific cluster. When running the same
> software in other clusters everything is working fine.
>
> We are using Cassandra server version 3.11.1 and datastax cpp driver 2.13.0.
>
> Any idea how to analyze/fix this problem?
>
> Regards
> Oliver
>
>