Posted to user@cassandra.apache.org by George Webster <we...@gmail.com> on 2016/10/24 20:10:49 UTC

Question on write failures: logs show Uncaught exception on thread Thread[MutationStage-1,5,main]

Hey cassandra users,

When performing writes I have hit an issue where the server is unable to
apply them. The logs show:

WARN  [MutationStage-1] 2016-10-24 22:05:52,592
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread
Thread[MutationStage-1,5,main]: {}
java.lang.IllegalArgumentException: Mutation of 16.011MiB is too large for
the maximum size of 16.000MiB
at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:262) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:220) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.MutationVerbHandler.doVerb(MutationVerbHandler.java:69) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) ~[apache-cassandra-3.9.jar:3.9]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_101]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.9.jar:3.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]


Looking around on Google, I found this guide
https://support.datastax.com/hc/en-us/articles/207267063-Mutation-of-x-bytes-is-too-large-for-the-maxiumum-size-of-y-
that states I can increase the commitlog_segment_size_in_mb to solve the
problem.
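
For reference, the 16.000MiB ceiling in the error is half of
commitlog_segment_size_in_mb (which defaults to 32), since Cassandra caps a
single mutation at half a commit log segment. A minimal cassandra.yaml sketch
of the change the guide describes, with 64 as a purely illustrative value:

    # cassandra.yaml
    # The maximum mutation size is commitlog_segment_size_in_mb / 2, so
    # raising the segment size from the default 32 to 64 lifts the ceiling
    # from 16 MiB to 32 MiB. The node must be restarted to pick this up.
    commitlog_segment_size_in_mb: 64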

However, I wanted to ask if there are any drawbacks to doing so.

Thank you for your guidance.

Respectfully,
George

Re: Question on write failures: logs show Uncaught exception on thread Thread[MutationStage-1,5,main]

Posted by George Webster <we...@gmail.com>.
Thank you, that is quite helpful.

On Mon, Oct 24, 2016 at 11:00 PM, Edward Capriolo <ed...@gmail.com>
wrote:

> The driver will enforce a max batch size of 65k.
> This is an issue in versions of Cassandra like 2.1.x. There are control
> variables for the logged and unlogged batch sizes. You may have to tweak
> your commitlog size as well.
>
> I demonstrate this here:
> https://github.com/edwardcapriolo/ec/blob/master/src/test/java/Base/batch/BigBatches2_2_6_tweeked.java
>
> The latest tick-tock version I tried worked out of the box.
>
> The only drawback of batches is potential JVM pressure. I did some
> permutations of memory settings with the tests above. You can get a feel
> for the rate and batch size and the JVM pressure they cause.
>
> On Mon, Oct 24, 2016 at 4:10 PM, George Webster <we...@gmail.com>
> wrote:
>
>> Hey cassandra users,
>>
>> When performing writes I have hit an issue where the server is unable to
>> apply them. The logs show:
>>
>> WARN  [MutationStage-1] 2016-10-24 22:05:52,592
>> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on
>> thread Thread[MutationStage-1,5,main]: {}
>> java.lang.IllegalArgumentException: Mutation of 16.011MiB is too large
>> for the maximum size of 16.000MiB
>>
>>
>> Looking around on Google, I found this guide
>> https://support.datastax.com/hc/en-us/articles/207267063-Mutation-of-x-bytes-is-too-large-for-the-maxiumum-size-of-y-
>> that states I can increase the commitlog_segment_size_in_mb to solve the
>> problem.
>>
>> However, I wanted to ask if there are any drawbacks to doing so.
>>
>> Thank you for your guidance.
>>
>> Respectfully,
>> George
>>
>
>

Re: Question on write failures: logs show Uncaught exception on thread Thread[MutationStage-1,5,main]

Posted by Edward Capriolo <ed...@gmail.com>.
The driver will enforce a max batch size of 65k.
This is an issue in versions of Cassandra like 2.1.x. There are control
variables for the logged and unlogged batch sizes. You may have to tweak
your commitlog size as well.
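
If it helps, the knobs I mean are (I believe) these cassandra.yaml thresholds;
the values below are just the 3.x defaults, shown for illustration:

    # cassandra.yaml -- 3.x defaults, for illustration only
    batch_size_warn_threshold_in_kb: 5    # warn when a batch exceeds 5 KiB
    batch_size_fail_threshold_in_kb: 50   # reject batches larger than 50 KiB
    commitlog_segment_size_in_mb: 32      # a single mutation may be at most half of this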

I demonstrate this here:
https://github.com/edwardcapriolo/ec/blob/master/src/test/java/Base/batch/BigBatches2_2_6_tweeked.java

The latest tick-tock version I tried worked out of the box.

The only drawback of batches is potential JVM pressure. I did some
permutations of memory settings with the tests above. You can get a feel
for the rate and batch size and the JVM pressure they cause.
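
If you would rather not grow the commit log segments, here is a rough sketch
(not the code from the test above) of chunking writes into small unlogged
batches with the DataStax Java driver 3.x; the keyspace, table, and column
names are made up for illustration:

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;

    // Split a large write into small unlogged batches so that no single
    // mutation ever approaches the commit log size limit.
    public class ChunkedWrites {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder()
                                          .addContactPoint("127.0.0.1")
                                          .build();
                 Session session = cluster.connect("my_keyspace")) {
                PreparedStatement insert = session.prepare(
                        "INSERT INTO events (id, payload) VALUES (?, ?)");
                int rowsPerChunk = 100;  // keep each chunk well under the batch thresholds
                BatchStatement chunk = new BatchStatement(BatchStatement.Type.UNLOGGED);
                for (int i = 0; i < 10_000; i++) {
                    chunk.add(insert.bind(i, "payload-" + i));
                    if (chunk.size() == rowsPerChunk) {
                        session.execute(chunk);  // flush this chunk
                        chunk.clear();
                    }
                }
                if (chunk.size() > 0) {
                    session.execute(chunk);      // flush the remainder
                }
            }
        }
    }

Plain single statements (or executeAsync) work just as well; the point is only
that nothing close to 16MiB reaches the commit log in one piece.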

On Mon, Oct 24, 2016 at 4:10 PM, George Webster <we...@gmail.com> wrote:

> Hey cassandra users,
>
> When performing writes I have hit an issue where the server is unable to
> apply them. The logs show:
>
> WARN  [MutationStage-1] 2016-10-24 22:05:52,592
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread
> Thread[MutationStage-1,5,main]: {}
> java.lang.IllegalArgumentException: Mutation of 16.011MiB is too large
> for the maximum size of 16.000MiB
>
>
> Looking around on Google, I found this guide
> https://support.datastax.com/hc/en-us/articles/207267063-Mutation-of-x-bytes-is-too-large-for-the-maxiumum-size-of-y-
> that states I can increase the commitlog_segment_size_in_mb to solve the
> problem.
>
> However, I wanted to ask if there are any drawbacks to doing so.
>
> Thank you for your guidance.
>
> Respectfully,
> George
>