Posted to issues@flink.apache.org by "Ozan Cicekci (JIRA)" <ji...@apache.org> on 2019/06/12 12:20:00 UTC

[jira] [Created] (FLINK-12820) Support ignoring null fields when writing to Cassandra

Ozan Cicekci created FLINK-12820:
------------------------------------

             Summary: Support ignoring null fields when writing to Cassandra
                 Key: FLINK-12820
                 URL: https://issues.apache.org/jira/browse/FLINK-12820
             Project: Flink
          Issue Type: Improvement
          Components: Connectors / Cassandra
    Affects Versions: 1.8.0
            Reporter: Ozan Cicekci


Currently, records with null fields are written to their corresponding columns in Cassandra as null. Writing null is effectively a 'delete' in Cassandra (it produces a tombstone), which is useful when nulls are meant to represent deletes in the data model. But nulls can also indicate missing data or a partial column update, and in that case we end up overwriting columns of existing records in Cassandra with nulls.


I believe it's already possible to ignore null values for POJOs with mapper options, as documented here:

[https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/cassandra.html#cassandra-sink-example-for-streaming-pojo-data-type]
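
For reference, a minimal sketch of that POJO escape hatch, based on the linked documentation (the contact point and pojoStream are placeholders):

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.mapping.Mapper;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;

// saveNullFields(false) tells the DataStax object mapper to skip null fields
// entirely instead of writing them (and thereby deleting the column values).
CassandraSink.addSink(pojoStream)
    .setClusterBuilder(new ClusterBuilder() {
        @Override
        protected Cluster buildCluster(Cluster.Builder builder) {
            return builder.addContactPoint("127.0.0.1").build();
        }
    })
    .setMapperOptions(() -> new Mapper.Option[] {
        Mapper.Option.saveNullFields(false)
    })
    .build();
{code}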


But this is not possible when using Scala tuples or case classes. Perhaps, behind a Cassandra sink configuration flag, null values could be unset for tuples and case classes using the option below:

[https://docs.datastax.com/en/drivers/java/3.0/com/datastax/driver/core/BoundStatement.html#unset-int-]
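
A sketch of what that could look like for a tuple sink, assuming a prepared INSERT with one bind variable per tuple field (the helper name is mine; unset requires native protocol v4, i.e. Cassandra 2.2+):

{code:java}
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import org.apache.flink.api.java.tuple.Tuple;

// Bind all tuple fields, then unset the null ones so the write leaves those
// columns untouched instead of deleting them with a tombstone.
static void writeIgnoringNullFields(Session session, PreparedStatement ps, Tuple tuple) {
    Object[] fields = new Object[tuple.getArity()];
    for (int i = 0; i < fields.length; i++) {
        fields[i] = tuple.getField(i);
    }
    BoundStatement statement = ps.bind(fields);
    for (int i = 0; i < fields.length; i++) {
        if (fields[i] == null) {
            statement.unset(i); // skip the column entirely rather than writing null
        }
    }
    session.execute(statement);
}
{code}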


Here is the equivalent configuration in spark-cassandra-connector:

[https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md#globally-treating-all-nulls-as-unset]
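
On the Flink side, this could take the shape of a builder flag. The setIgnoreNullFields method below is purely hypothetical, it is the flag this issue proposes, not an existing API:

{code:java}
// Hypothetical API sketch: setIgnoreNullFields(...) does not exist today.
CassandraSink.addSink(tupleStream)
    .setQuery("INSERT INTO example.words (word, count) VALUES (?, ?);")
    .setClusterBuilder(new ClusterBuilder() {
        @Override
        protected Cluster buildCluster(Cluster.Builder builder) {
            return builder.addContactPoint("127.0.0.1").build();
        }
    })
    .setIgnoreNullFields(true) // hypothetical: unset null fields instead of binding null
    .build();
{code}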


