Posted to commits@cassandra.apache.org by "Jason Harvey (JIRA)" <ji...@apache.org> on 2012/12/12 11:41:21 UTC

[jira] [Created] (CASSANDRA-5059) 1.0.11 -> 1.1.7 upgrade results in Bad file descriptor exception

Jason Harvey created CASSANDRA-5059:
---------------------------------------

             Summary: 1.0.11 -> 1.1.7 upgrade results in Bad file descriptor exception
                 Key: CASSANDRA-5059
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5059
             Project: Cassandra
          Issue Type: Bug
    Affects Versions: 1.1.7
         Environment: ubuntu
sun-java6 6.24-1build0.10.10.1
            Reporter: Jason Harvey


Upgraded a single node in my ring to 1.1.7. The upgrade process went normally with no errors. However, as soon as the node joined the ring, it started spewing this exception hundreds of times a second:

{code}
 WARN [ReadStage:22] 2012-12-12 02:00:56,181 FileUtils.java (line 116) Failed closing org.apache.cassandra.db.columniterator.SSTableSliceIterator@5959baa2
java.io.IOException: Bad file descriptor
        at sun.nio.ch.FileDispatcher.preClose0(Native Method)
        at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
        at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
        at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
        at java.io.FileInputStream.close(FileInputStream.java:258)
        at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
        at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
        at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
        at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
        at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
        at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
        at org.apache.cassandra.db.columniterator.SSTableSliceIterator.close(SSTableSliceIterator.java:132)
        at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:112)
        at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:300)
        at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
        at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1347)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1209)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1144)
        at org.apache.cassandra.db.Table.getRow(Table.java:378)
        at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
        at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
        at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
{code}

The node was not responding to reads on any CFs, so I was forced to do an emergency roll-back and abandon the upgrade.

The node has roughly 3800 sstables, using both LCS and SizeTiered compaction, with compressed and uncompressed CFs. Looks like the exception might have something to do with compression?
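For what it's worth, the trace shows close() re-entering itself (CompressedRandomAccessReader.close appears twice, once via FileChannelImpl.implCloseChannel), which reads like the same underlying descriptor being closed twice. A minimal sketch, not Cassandra code, of how two wrappers sharing one OS descriptor produce this kind of IOException (class name and file are illustrative):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class DoubleCloseDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("demo", ".dat");
        FileOutputStream out = new FileOutputStream(f);
        out.write("hello".getBytes());
        out.close();

        FileInputStream a = new FileInputStream(f);
        // Second stream wrapping the SAME underlying OS descriptor as 'a'.
        FileInputStream b = new FileInputStream(a.getFD());

        a.close(); // the shared descriptor is now closed
        try {
            // 'b' still thinks it is open, but its descriptor is gone;
            // depending on JDK version this surfaces as "Bad file descriptor"
            // or "Stream Closed".
            b.read();
        } catch (IOException e) {
            System.out.println("IOException: " + e.getMessage());
        } finally {
            f.delete();
        }
    }
}
```

If the 1.1.7 code path ends up with the channel and the reader both owning the fd, the second close would hit exactly this on every read.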

I verified that the service was not bumping into any open file descriptor limits.
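The check was along these lines on Linux, via /proc (using $$ here as a stand-in for the Cassandra pid):

```shell
# Compare a process's open descriptor count against its soft fd limit.
# $$ (this shell's own pid) stands in for the Cassandra pid.
pid=$$
open=$(ls /proc/$pid/fd | wc -l)
soft=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
echo "pid=$pid open=$open soft_limit=$soft"
```

The open count stayed far below the soft limit the whole time the exceptions were firing.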



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira