Posted to user@cassandra.apache.org by Hervé Rivière <he...@zenika.com> on 2015/08/18 11:50:15 UTC

Null pointer exception after delete in a table with statics

Hello,





I am hitting an <ErrorMessage code=0000 [Server error]
message="java.lang.NullPointerException"> when I query a table with static
fields (without a WHERE clause) on a two-node Cassandra 2.1.8 cluster.



There is no further indication in the log:

ERROR [SharedPool-Worker-1] 2015-08-18 10:39:02,549 QueryMessage.java:132 - Unexpected error during query
java.lang.NullPointerException: null
ERROR [SharedPool-Worker-1] 2015-08-18 10:39:02,550 ErrorMessage.java:251 - Unexpected exception during request
java.lang.NullPointerException: null





The scenario was:

1) Load data into the table with Spark (~12 million rows).

2) Issue deletes by primary key, using the static fields to keep a certain
state for each partition (sketched just below).

The null pointer exception occurs when I query the whole table after making
some of these deletions.
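
For illustration, the statements look like this (a minimal sketch: the key
and state values are placeholders, not my real data; the column names come
from the table definition further down):

-- Per-partition state kept in the static columns (placeholder values)
UPDATE my_table SET staticField1 = 'some_state', staticField2 = 'other_state'
WHERE pk1 = 'a' AND pk2 = 'b';

-- Delete of a single row, specifying the full primary key (placeholder values)
DELETE FROM my_table
WHERE pk1 = 'a' AND pk2 = 'b'
AND ck1 = '2015-08-18 10:00:00' AND ck2 = 'x' AND ck3 = 'y';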



I observed that:

- Before the delete statements, the table is perfectly readable.

- It is repeatable (I managed to isolate ~20 delete statements that trigger
the null pointer exception when executed from cqlsh; the failing read is
shown just after this list).

- It occurs only with some rows (nothing special about these rows compared
to the others).

- I did not manage to reproduce the problem by putting the problematic rows
into a toy table.

- Running repair, compact, and scrub on each node, before and after the
delete statements, changed nothing (the null pointer exception still occurs
after the deletes).

- Maybe it is related to static columns?
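
Concretely, the failing read is just the unrestricted scan, i.e. something
like:

SELECT * FROM my_table;  -- readable before the deletes, NullPointerException after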



The table structure is (the clustering order clause is given on the
clustering columns, as Cassandra requires):

CREATE TABLE my_table (
    pk1 text,
    pk2 text,
    ck1 timestamp,
    ck2 text,
    ck3 text,
    valuefield text,
    staticField1 text static,
    staticField2 text static,
    PRIMARY KEY ((pk1, pk2), ck1, ck2, ck3)
) WITH CLUSTERING ORDER BY (ck1 DESC, ck2 ASC, ck3 ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 0
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';

Has anyone already met this issue, or does anyone have an idea how to solve
or investigate this exception?





Thank you





Regards





--

Hervé

RE: Null pointer exception after delete in a table with statics

Posted by Hervé Rivière <he...@zenika.com>.
Hello Doan,



Thank you for your answer!



In my Spark job I changed spark.cassandra.input.split.size from 8000 down
to 200, which creates far more tasks per node
(spark.cassandra.input.fetch.size_in_rows isn't recognized by my v1.2.3
spark-cassandra-connector), but I still get the null pointer exception (at
the same row as before).



Actually my Spark job does two things: 1/ load the table from another
Cassandra table; 2/ update the two static fields according to specific rules.



I noticed that there is no problem making deletes after step 1/ (when all
the static fields are still null).



The null pointer exception occurs only after step 2/ (once there are some
non-null statics in the table).



I will try to merge steps 1 and 2 into one, and therefore issue only one
INSERT per row when loading the table (along the lines of the sketch
below), and see what happens.
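
Something like this single write per row (a sketch with placeholder values;
the column names are those of the schema above):

-- One INSERT carries both the row columns and the per-partition statics
-- (placeholder values)
INSERT INTO my_table (pk1, pk2, ck1, ck2, ck3, valuefield, staticField1, staticField2)
VALUES ('a', 'b', '2015-08-18 10:00:00', 'x', 'y', 'v1', 'some_state', 'other_state');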





--

Hervé






Re: Null pointer exception after delete in a table with statics

Posted by DuyHai Doan <do...@gmail.com>.
Weird, your issue reminds me of
https://issues.apache.org/jira/browse/CASSANDRA-8502, but it seems that one
has been fixed since 2.1.6 and you're using 2.1.8.

Can you try to reproduce it using small pages with Spark
(spark.cassandra.input.fetch.size_in_rows)?

Re: Null pointer exception after delete in a table with statics

Posted by Sebastian Estevez <se...@datastax.com>.
Can you include your read code?