Posted to user@cassandra.apache.org by Soheil Pourbafrani <so...@gmail.com> on 2018/04/21 18:05:23 UTC

Cassandra doesn't insert all rows

I consume data from Kafka and insert it into a Cassandra cluster using the Java
API. The table has 4 primary key columns, including a millisecond-precision
timestamp. But when I run the code, it inserts only 120 to 190 rows and ignores
the rest of the incoming data!

What could be causing the problem? Bad insert code where the key fields
overwrite data, improper cluster configuration, something else?

Re: Cassandra doesn't insert all rows

Posted by Jeff Jirsa <jj...@gmail.com>.
Impossible to guess with that info, but maybe one of:

- “Wrong” consistency level for reads or writes
- Incorrect primary key definition (you’re overwriting data you don’t realize you’re overwriting)

Less likely:
- Broken cluster where hosts are flapping and you’re missing data on read
- using a version of Cassandra with bugs in short read protection
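Jeff's second point is a frequent culprit because Cassandra INSERTs are upserts: two writes with an identical primary key silently collapse into one row rather than producing two. A minimal sketch of the effect, with a plain HashMap standing in for a Cassandra table (the truncation-to-seconds key is a hypothetical illustration, not the poster's actual schema):

```java
import java.util.HashMap;
import java.util.Map;

public class UpsertCollision {
    // Simulate inserts into a table whose key is coarser than the event rate.
    // Cassandra-style upsert semantics: an identical key overwrites, never appends.
    static int distinctRows(long[] eventMillis) {
        Map<Long, Boolean> table = new HashMap<>();
        for (long t : eventMillis) {
            long key = t / 1000;   // key accidentally truncated to seconds
            table.put(key, true);  // every "INSERT" with this key is an overwrite
        }
        return table.size();
    }

    public static void main(String[] args) {
        // 1000 events, one per millisecond, all within the same wall-clock second
        long[] events = new long[1000];
        for (int i = 0; i < events.length; i++) {
            events[i] = 1_524_337_523_000L + i;
        }
        System.out.println(distinctRows(events)); // 1000 inserts, 1 surviving row
    }
}
```

If something similar is happening (for example the timestamp losing precision somewhere between Kafka and the bound statement), the row count stalling at 120-190 would simply reflect how many truly distinct keys arrived.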

-- 
Jeff Jirsa


> On Apr 21, 2018, at 2:05 PM, Soheil Pourbafrani <so...@gmail.com> wrote:
> 
> I consume data from Kafka and insert it into a Cassandra cluster using the Java API. The table has 4 primary key columns, including a millisecond-precision timestamp. But when I run the code, it inserts only 120 to 190 rows and ignores the rest of the incoming data!
> 
> What could be causing the problem? Bad insert code where the key fields overwrite data, improper cluster configuration, something else?

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@cassandra.apache.org
For additional commands, e-mail: user-help@cassandra.apache.org


Re: Cassandra doesn't insert all rows

Posted by "dinesh.joshi@yahoo.com.INVALID" <di...@yahoo.com.INVALID>.
Soheil, 
As Jeff mentioned, you need to provide more information; there are no known issues I can think of that would cause this behavior. It would be great if you could give us a reduced test case so we can try to reproduce the problem, or at least help you debug it. Could you share:

- the Cassandra version and the number of nodes
- the keyspace definition, replication factor, and read/write consistency levels
- a bit of the client code that does the writes
- any errors you saw on the client or the server side

These details would help us assist you further.
Thanks,
Dinesh 
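The RF / CL interplay Dinesh asks about can itself make rows look "missing" on read even when every write succeeded: if the read and write consistency levels don't overlap, a read may hit replicas the write never reached. The standard rule of thumb (reads are guaranteed to see the latest write when R + W > RF) can be sketched as:

```java
public class ConsistencyOverlap {
    // With R replicas contacted on read and W acknowledging each write out of
    // RF total, R + W > RF guarantees at least one read replica holds the write.
    static boolean stronglyConsistent(int readCl, int writeCl, int rf) {
        return readCl + writeCl > rf;
    }

    public static void main(String[] args) {
        // ONE/ONE with RF=3: a read may land on a replica the write missed
        System.out.println(stronglyConsistent(1, 1, 3)); // false
        // QUORUM/QUORUM with RF=3: 2 + 2 > 3, guaranteed overlap
        System.out.println(stronglyConsistent(2, 2, 3)); // true
    }
}
```

So if the poster is writing and reading at CL ONE with RF > 1, intermittently "missing" rows on read would be expected behavior until repair or read repair catches up.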

    On Saturday, April 21, 2018, 11:06:12 AM PDT, Soheil Pourbafrani <so...@gmail.com> wrote:  
 
 I consume data from Kafka and insert it into a Cassandra cluster using the Java API. The table has 4 primary key columns, including a millisecond-precision timestamp. But when I run the code, it inserts only 120 to 190 rows and ignores the rest of the incoming data!
What could be causing the problem? Bad insert code where the key fields overwrite data, improper cluster configuration, something else?