Posted to user@spark.apache.org by Pablo Federigi <pa...@mercadolibre.com> on 2016/10/07 15:30:38 UTC

SaveToCassandra - how to handle failed inserts?

Hello

In the following example I'm using the saveToCassandra method from the
spark-cassandra-connector:

RDDJavaFunctions<Tuple2<String, Integer>> dsJF1 =
        CassandraJavaUtil.javaFunctions(result);
dsJF1.writerBuilder("test_keyspace", "test",
            CassandraJavaUtil.mapTupleToRow(String.class, Integer.class))
        .withColumnSelector(CassandraJavaUtil.someColumns("column1", "column2"))
        .saveToCassandra();

In the example above, suppose that result has 1000 records and just one
record fails to write to Cassandra (even after the configured retry
policy).

I'd like to know how to handle the records that the driver was unable to
write to Cassandra (for example, due to a timeout exception). Is there a
way to log the failed records?
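For context on the kind of pattern I'm after: since saveToCassandra doesn't expose a per-record failure hook (at least in the connector versions I've seen — worth confirming against your version), one workaround is to perform the writes yourself inside foreachPartition and catch exceptions per record, collecting failures for logging or a dead-letter table. The sketch below shows only that error-handling pattern in plain Java, with no Spark or Cassandra dependency; writeRow is a hypothetical stand-in for the real driver call (e.g. session.execute(boundStatement)), and it is rigged here to reject one record to simulate a timeout:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class FailedWriteDemo {

    // Hypothetical stand-in for the real write, e.g. session.execute(...).
    // Simulates a write timeout for records with a negative value.
    static void writeRow(Map.Entry<String, Integer> row) {
        if (row.getValue() < 0) {
            throw new RuntimeException("simulated write timeout for " + row.getKey());
        }
    }

    // Writes each row individually, collecting the ones that fail
    // instead of letting one bad record abort the whole batch.
    static List<Map.Entry<String, Integer>> writeAll(
            List<Map.Entry<String, Integer>> rows) {
        List<Map.Entry<String, Integer>> failed = new ArrayList<>();
        for (Map.Entry<String, Integer> row : rows) {
            try {
                writeRow(row);
            } catch (RuntimeException e) {
                // In a real job: log the record, or send it to a
                // dead-letter table/topic for later replay.
                failed.add(row);
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> rows = Arrays.asList(
                new SimpleEntry<>("a", 1),
                new SimpleEntry<>("b", -1), // this one will "fail"
                new SimpleEntry<>("c", 3));
        List<Map.Entry<String, Integer>> failed = writeAll(rows);
        System.out.println("failed records: " + failed.size());
        System.out.println("first failure: " + failed.get(0).getKey());
    }
}
```

In a Spark job this loop would live inside foreachPartition, with the session obtained from the connector per partition; the trade-off versus saveToCassandra is that you lose the connector's batching but gain per-record control over failures.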

Thanks,
Pablo

