Posted to commits@cassandra.apache.org by "Jonathan Ellis (JIRA)" <ji...@apache.org> on 2014/06/12 22:31:03 UTC
[jira] [Resolved] (CASSANDRA-7385) sstableloader OutOfMemoryError: Java heap space
[ https://issues.apache.org/jira/browse/CASSANDRA-7385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jonathan Ellis resolved CASSANDRA-7385.
---------------------------------------
Resolution: Not a Problem
Reproduced In: (was: 1.2.16)
There isn't a one-size-fits-all Xmx for sstableloader, but needing to increase it for large jobs is not a bug.
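The fix amounts to giving the loader JVM a bigger heap. A minimal sketch of the reporter's sed workaround, applied to a stand-in launcher file so it is self-contained (on a real install the script is /usr/bin/sstableloader, and the 8G target is the value from the report, not a recommendation):

```shell
# Stand-in for the sstableloader launcher line that hard-codes the heap;
# the classpath and file name here are illustrative.
printf 'exec java -Xmx256M -cp "$CLASSPATH" org.apache.cassandra.tools.BulkLoader "$@"\n' > loader.sh

# Same substitution the reporter used, rewriting the heap flag in place
# (GNU sed -i; on BSD/macOS sed use -i '' instead).
sed -i -e 's/-Xmx256M/-Xmx8G/g' loader.sh

# Confirm the launcher now requests the larger heap.
grep -q 'Xmx8G' loader.sh && echo patched
```

Sizing the heap to the job (roughly, to the compression metadata of the SSTables being streamed) avoids the OutOfMemoryError below without changing Cassandra itself.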
> sstableloader OutOfMemoryError: Java heap space
> -----------------------------------------------
>
> Key: CASSANDRA-7385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7385
> Project: Cassandra
> Issue Type: Bug
> Components: Tools
> Reporter: Mike Heffner
>
> We hit the following exception with sstableloader while attempting to bulk load about 100GB of SSTs. We are now employing this workaround before starting an sstableloader run:
> sed -i -e 's/-Xmx256M/-Xmx8G/g' /usr/bin/sstableloader
> {code}
> ERROR 19:25:45,060 Error in ThreadPoolExecutor
> java.lang.OutOfMemoryError: Java heap space
> at org.apache.cassandra.io.util.FastByteArrayOutputStream.expand(FastByteArrayOutputStream.java:104)
> at org.apache.cassandra.io.util.FastByteArrayOutputStream.write(FastByteArrayOutputStream.java:235)
> at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
> at org.apache.cassandra.io.compress.CompressionMetadata$ChunkSerializer.serialize(CompressionMetadata.java:412)
> at org.apache.cassandra.io.compress.CompressionMetadata$ChunkSerializer.serialize(CompressionMetadata.java:407)
> at org.apache.cassandra.streaming.compress.CompressionInfo$CompressionInfoSerializer.serialize(CompressionInfo.java:59)
> at org.apache.cassandra.streaming.compress.CompressionInfo$CompressionInfoSerializer.serialize(CompressionInfo.java:46)
> at org.apache.cassandra.streaming.PendingFile$PendingFileSerializer.serialize(PendingFile.java:142)
> at org.apache.cassandra.streaming.StreamHeader$StreamHeaderSerializer.serialize(StreamHeader.java:67)
> at org.apache.cassandra.streaming.StreamHeader$StreamHeaderSerializer.serialize(StreamHeader.java:58)
> at org.apache.cassandra.net.MessagingService.constructStreamHeader(MessagingService.java:782)
> at org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:65)
> at org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Exception in thread "Streaming to /10.167.a.b:1" java.lang.OutOfMemoryError: Java heap space
> at org.apache.cassandra.io.util.FastByteArrayOutputStream.expand(FastByteArrayOutputStream.java:104)
> at org.apache.cassandra.io.util.FastByteArrayOutputStream.write(FastByteArrayOutputStream.java:235)
> at java.io.DataOutputStream.writeInt(DataOutputStream.java:199)
> at org.apache.cassandra.io.compress.CompressionMetadata$ChunkSerializer.serialize(CompressionMetadata.java:412)
> at org.apache.cassandra.io.compress.CompressionMetadata$ChunkSerializer.serialize(CompressionMetadata.java:407)
> at org.apache.cassandra.streaming.compress.CompressionInfo$CompressionInfoSerializer.serialize(CompressionInfo.java:59)
> at org.apache.cassandra.streaming.compress.CompressionInfo$CompressionInfoSerializer.serialize(CompressionInfo.java:46)
> at org.apache.cassandra.streaming.PendingFile$PendingFileSerializer.serialize(PendingFile.java:142)
> at org.apache.cassandra.streaming.StreamHeader$StreamHeaderSerializer.serialize(StreamHeader.java:67)
> at org.apache.cassandra.streaming.StreamHeader$StreamHeaderSerializer.serialize(StreamHeader.java:58)
> at org.apache.cassandra.net.MessagingService.constructStreamHeader(MessagingService.java:782)
> at org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:65)
> at org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
> at org.apache.cassandra.io.compress.CompressionMetadata.getChunksForSections(CompressionMetadata.java:210)
> at org.apache.cassandra.streaming.StreamOut.createPendingFiles(StreamOut.java:182)
> at org.apache.cassandra.streaming.StreamOut.transferSSTables(StreamOut.java:157)
> at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:145)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:67)
> {code}
--
This message was sent by Atlassian JIRA
(v6.2#6252)