Posted to users@solr.apache.org by Pravinkumar Bhagat <Pr...@hexaware.com.INVALID> on 2022/03/21 06:46:46 UTC

No Space left on the device exception

Environment: RHEL 3.10.0-957.12.2.el7.x86_64
java.version: 1.8.0_282
OpenJDK 64-Bit Server VM (25.282-b08) for linux-amd64 JRE (1.8.0_282-b08)



Description: We are seeing "No space left on device" errors in the logs, and the Solr master instance fills its entire disk after a couple of weeks.

Common exceptions seen in the logs:

1> Exception writing document id sitecore://master/{f7283eb0-515e-4cfc-b851-b398dfb9206c}?lang=fr-ca&ver=1&ndx=sitecore_master_index to the index; possible analysis error.
2> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
3> java.io.IOException: No space left on device
4> org.apache.lucene.index.CorruptIndexException

We have already increased the disk size a couple of times: initially it was 150 GB, then 250 GB after the error recurred, and it is now 500 GB on the Solr master server (AWS EC2, Linux). When Solr runs out of space we also see excessive logging, with log files totalling around 20 to 30 GB.
Each time this happens the index data is also corrupted; deleting the Solr logs and index data folders, restarting the Solr server, and then rebuilding all indexes fixes the problem.

Configuration:

Sitecore 8.2 Update 7
Solr 6.6.5
Standalone replication setup with one Solr master and two slave instances.
Solr is hosted on AWS EC2 instances running RHEL, with 4 processors, 16 GB RAM, and a 3 GB JVM heap.

Master Solr instance: disk space: 500 GB

Root directory [/tmp] size: 81 GB

Slave1 Solr instance: disk space: 250 GB
Slave2 Solr instance: disk space: 250 GB
The biggest index is around 20 to 25 GB.
Total documents: 25,00000
A daily import utility scheduler creates/updates approx. 50 to 200 items.
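
To pinpoint which directory is actually consuming the disk before it fills, a quick sketch (paths are assumptions; adjust `DATA_DIR` to your install layout):

```shell
# Hypothetical layout: set DATA_DIR to wherever your cores live (assumption).
DATA_DIR=${DATA_DIR:-/var/solr/data}
# Per-core disk usage, largest first:
du -sh "$DATA_DIR"/*/data 2>/dev/null | sort -rh | head -20
# Break a suspect core down further (index/, tlog/, and any snapshot.* folders):
du -sh "$DATA_DIR"/sitecore_master_index/data/* 2>/dev/null | sort -rh
```

Running this periodically shows whether the index, the transaction logs, or stray snapshot folders are the part that grows.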

2022-03-10 14:06:11.537 ERROR (qtp232824863-2108) [   x:sitecore_master_index] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Exception writing document id sitecore://master/{f7283eb0-515e-4cfc-b851-b398dfb9206c}?lang=fr-ca&ver=1&ndx=sitecore_master_index to the index; possible analysis error.
    at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:206)
    at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:979)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1192)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:748)
    at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:261)
    at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188)
    at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:534)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:749)
    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:763)
    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1567)
    at org.apache.solr.update.DirectUpdateHandler2.updateDocument(DirectUpdateHandler2.java:924)
    at org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:913)
    at org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:302)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:239)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:194)
    ... 42 more
Caused by: java.io.IOException: No space left on device
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)
    at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)
    at java.nio.channels.Channels.writeFully(Channels.java:101)
    at java.nio.channels.Channels.access$000(Channels.java:61)
    at java.nio.channels.Channels$1.write(Channels.java:174)
    at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:419)
    at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
    at org.apache.lucene.store.OutputStreamIndexOutput.writeBytes(OutputStreamIndexOutput.java:53)
    at org.apache.lucene.store.RateLimitedIndexOutput.writeBytes(RateLimitedIndexOutput.java:73)
    at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52)
    at org.apache.lucene.codecs.lucene50.ForUtil.writeBlock(ForUtil.java:175)
    at org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.startDoc(Lucene50PostingsWriter.java:237)
    at org.apache.lucene.codecs.PushPostingsWriterBase.writeTerm(PushPostingsWriterBase.java:141)
    at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter$TermsWriter.write(BlockTreeTermsWriter.java:866)
    at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:344)
    at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:164)
    at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:216)
    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:101)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4356)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3931)
    at org.apache.solr.update.SolrIndexWriter.merge(SolrIndexWriter.java:188)
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:624)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:661)

Please also find the attached log file for more detail. Please let us know about possible solutions, and whether any additional information is needed.

Regards,
Pravinkumar Bhagat


This e-mail communication and any attachments to it are confidential and privileged to Hexaware and are strictly intended only for the personal and confidential use of the designated recipient(s) named above. If you are not the intended recipient of this message, you are hereby notified that any review, dissemination, distribution or copying of this message is strictly prohibited and may be unlawful.

Please notify the sender immediately and destroy all copies of this message along with all attachments thereto.

RE: No Space left on the device exception

Posted by Pravinkumar Bhagat <Pr...@hexaware.com.INVALID>.
Hi Jan,

Thank you so much. Yes, you were correct: we checked the snapshot folders and found that for some of the large cores, such as sitecore_master_index, multiple snapshot folders of several GB each have been created.

Could you please suggest how we can identify and stop the processes that are running the snapshot commands? Is there any related configuration in solrconfig.xml?

Regards,
Pravinkumar Bhagat

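
One common source of stray snapshot.NNNNNNNN folders is the replication handler's backup command, often triggered by an external script or scheduler, so it is worth auditing cron jobs and the Sitecore side. If backups are wanted but should not accumulate, the handler can cap how many it retains. A minimal sketch for the master's solrconfig.xml, assuming a standard master replication setup (the confFiles list here is illustrative, not taken from this thread; verify against the Solr 6.6 "Index Replication" reference page before applying):

```xml
<!-- Hypothetical sketch: cap retained backups so snapshot.* folders
     cannot pile up on disk. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
  <!-- Keep at most one backup; older snapshot folders are deleted. -->
  <str name="maxNumberOfBackups">1</str>
</requestHandler>
```

Existing backups can also be removed on demand via the replication handler's deletebackup command, documented on the same reference page.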

________________________________
From: Jan Høydahl <ja...@cominvent.com>
Sent: Monday, March 21, 2022 8:04:52 PM
To: users@solr.apache.org <us...@solr.apache.org>
Subject: Re: No Space left on the device exception


Can you do an "ls -lR" of your core folder after it has been running for some days?
And perhaps do this a few days apart to show exactly which part of the file system keeps growing.

I have sometimes seen additional index snapshot.NNNNNNNN folders taking up space, since some process was running snapshot commands.
Also, if you find ERROR or WARN log lines in Solr that could indicate early symptoms, that would be helpful.

Jan

> On 21 Mar 2022, at 13:02, Pravinkumar Bhagat <Pr...@hexaware.com.INVALID> wrote:
>
> Hi Colvin,
>
> We do have autoCommit and autoSoftCommit enabled in our solrconfig.xml files, while no MergePolicy or DeletionPolicy is configured.
> Apart from the commit settings, could you please also suggest any other required configuration changes for solrconfig.xml?
>
> <updateHandler class="solr.DirectUpdateHandler2">
>    <updateLog>
>      <str name="dir">${solr.ulog.dir:}</str>
>      <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
>    </updateLog>
>    <autoCommit>
>      <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>      <openSearcher>false</openSearcher>
>    </autoCommit>
>    <autoSoftCommit>
>      <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
>    </autoSoftCommit>
> </updateHandler>
>
> Regards,
> Pravinkumar Bhagat
> -----Original Message-----
> From: Colvin Cowie <co...@gmail.com>
> Sent: Monday, March 21, 2022 1:49 PM
> To: users@solr.apache.org
> Subject: Re: No Space left on the device exception
>
>
> Hello,
>
> *> we have seen excessive logging with log files data around 20 to 30 GB...
> Every time when this happens we are also facing index data corruption and needed to delete solr logs and indexes data folder *
>
> I assume you are talking about the transaction log (tlog) files rather than the application log (log4j) files?
> It sounds like you aren't committing your updates to the index, so the transaction logs will grow indefinitely.
>
> It's normal to use autoCommit, but you need to commit in one way or another
> anyway:
> https://solr.apache.org/guide/6_6/updatehandlers-in-solrconfig.html#UpdateHandlersinSolrConfig-Commits
>
> If you are using Solr as part of Sitecore, then you might need to contact Sitecore for support, since the problem may be coming from Sitecore's configuration/use of Solr.
>
> Colvin
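
Colvin's advice can be checked directly: issue an explicit hard commit and watch whether the tlog directory shrinks afterwards. A hedged sketch (host, port, and core name are assumptions based on defaults and the core named in the thread; requires a running Solr):

```shell
# Assumed endpoint and core name -- adjust to your environment.
SOLR_URL=${SOLR_URL:-http://localhost:8983/solr}
CORE=${CORE:-sitecore_master_index}
# Explicit hard commit; once updates are hard-committed, Solr can
# rotate old transaction-log files out instead of retaining them.
curl -s "$SOLR_URL/$CORE/update?commit=true" || echo "Solr not reachable at $SOLR_URL"
```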


Re: No Space left on the device exception

Posted by Jan Høydahl <ja...@cominvent.com>.
Can you do an "ls -lR" of your core folder after it has been running for some days?
And perhaps do this a few days apart to show exactly which part of the file system keeps growing.

I have sometimes seen additional index snapshot.NNNNNNNN folders taking up space, since some process was running snapshot commands.
Also, if you find ERROR or WARN log lines in Solr that could indicate early symptoms, that would be helpful.

Jan
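
Jan's suggestion can be scripted so the two listings are directly comparable; a small sketch (the core path is an assumption, adjust `CORE_DIR`):

```shell
# Assumed core directory -- point CORE_DIR at the core that keeps growing.
CORE_DIR=${CORE_DIR:-/var/solr/data/sitecore_master_index}
# Capture a dated recursive listing now (the directory may not exist
# on this machine, hence the fallback to true)...
ls -lR "$CORE_DIR" > "listing_$(date +%F).txt" 2>/dev/null || true
# ...repeat in a few days, then diff the captures to see which files
# or folders grew or appeared (dates here are placeholders):
# diff listing_2022-03-21.txt listing_2022-03-25.txt | less
```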

> 21. mar. 2022 kl. 13:02 skrev Pravinkumar Bhagat <Pr...@hexaware.com.INVALID>:
> 
> Hi Colvin,
> 
> We do have autoCommit and autoSoftCommit enabled in our solrconfig.xml files whereas MergePolicy or DeletionPolicy is not being configured.
> Apart from commit point , could you please also suggest any other required configuration changes for solrconfig.xml?
> 
> <updateHandler class="solr.DirectUpdateHandler2">
>    <updateLog>
>      <str name="dir">${solr.ulog.dir:}</str>
>      <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
>    </updateLog>
>    <autoCommit>
>      <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>      <openSearcher>false</openSearcher>
>    </autoCommit>
>    <autoSoftCommit>
>      <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
>    </autoSoftCommit>
> </updateHandler>
> 
> Regards,
> Pravinkumar Bhagat
> -----Original Message-----
> From: Colvin Cowie <co...@gmail.com>
> Sent: Monday, March 21, 2022 1:49 PM
> To: users@solr.apache.org
> Subject: Re: No Space left on the device exception
> 
> CAUTION: This email is originated from outside the organization. Do not click the links or open the attachments unless you recognize the sender and know the content is safe.
> 
> Hello,
> 
> *> we have seen excessive logging with log files data around 20 to 30 GB...
> Every time when this happens we are also facing index data corrruption and needed to delete solr logs and indexes data folder *
> 
> I assume you are talking about the transaction log (tlog) files rather than the application log (log4j) files?
> It sounds like you aren't committing your updates to the index, so the transaction logs will grow indefinitely.
> 
> It's normal to use autoCommit, but you need to commit in one way or another
> anyway:
> https://solr.apache.org/guide/6_6/updatehandlers-in-solrconfig.html#UpdateHandlersinSolrConfig-Commits
> 
> If you are using Solr as part of Sitecore, then you might need to contact Sitecore for support, since the problem may be coming from Sitecore's configuration/use of Solr.
> 
> Colvin
> 
> 
> On Mon, 21 Mar 2022 at 06:47, Pravinkumar Bhagat <Pr...@hexaware.com.invalid> wrote:
> 
>> Environment: RHEL 3.10.0-957.12.2.el7.x86_64
>> java.​version:1.8.0_282
>> OpenJDK 64-Bit Server VM (25.282-b08) for linux-amd64 JRE
>> (1.8.0_282-b08)
>> 
>> 
>> 
>> Description: Getting "No Space left on the device" in logs and it
>> reaches full disk size for solr Master instance after couple of weeks.
>> 
>> Common Exceptions seen in logs below:-
>> 
>> 1> Exception writing document id
>> sitecore://master/{f7283eb0-515e-4cfc-b851-b398dfb9206c}?lang=fr-ca&ve
>> r=1&ndx=sitecore_master_index to the index; possible analysis error.
>> 2> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is
>> closed
>> 3> java.io.IOException: No space left on device
>> 4> org.apache.lucene.index.CorruptIndexException
>> 
>> Already increased disk size couple of times ,initially we were having
>> 150 GB then we got the error so we increased it to 250 GB then again
>> we got the same error and now its 500GB for Solr AWS EC2 Master Server
>> having linux os. When solr goes out of space , we have seen excessive
>> logging with log files data around 20 to 30 GB.
>> Every time when this happens we are also facing index data corrruption
>> and needed to delete solr logs and indexes data folder and restart
>> solr server followed by rebuilding whole indexes again fixes the problem.
>> 
>> Configuration:-
>> 
>> Sitecore 8.2 Update 7
>> Solr 6.6.5
>> stand alone replication setup having One Solr Master and 2 Slave
>> instances Solr hosted on AWS EC2 boxes having RHE linux OS with 4
>> processors and 16 GB RAM , heap size 3 GB [JVM Memory].
>> 
>> Master solr instance: disk space : 500 GB
>> 
>> root directory size[/tmp] : 81 GB
>> 
>> Slave1 solr instance: disk space: 250 GB
>> Slave2 solr instance: disk Space: 250 GB Max size of biggest index is
>> around 20 GB to 25 GB Total 25,00000 documents Daily Import Utility
>> scheduler create/update approx. 50 to 200  items.
>> 
>> 2022-03-10 14:06:11.537 ERROR (qtp232824863-2108) [
>> x:sitecore_master_index] o.a.s.h.RequestHandlerBase
>> org.apache.solr.common.SolrException: Exception writing document id
>> sitecore://master/{f7283eb0-515e-4cfc-b851-b398dfb9206c}?lang=fr-ca&ve
>> r=1&ndx=sitecore_master_index to the index; possible analysis error.
>>    at
>> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:206)
>>    at
>> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
>>    at
>> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>>    at
>> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:979)
>>    at
>> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1192)
>>    at
>> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:748)
>>    at
>> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:261)
>>    at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188)
>>    at
>> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
>>    at
>> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
>>    at
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
>>    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
>>    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>>    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
>>    at
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>>    at
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>>    at
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>>    at
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>>    at
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>>    at
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>>    at
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>>    at
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>>    at
>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>>    at
>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>>    at
>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>>    at
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>>    at
>> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>>    at
>> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>>    at
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>>    at
>> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>>    at
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>>    at org.eclipse.jetty.server.Server.handle(Server.java:534)
>>    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>>    at
>> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>>    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>>    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>>    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>>    at
>> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>>    at
>> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>>    at
>> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>>    at
>> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>>    at
>> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>>    at java.lang.Thread.run(Thread.java:748)
>> Caused by: org.apache.lucene.store.AlreadyClosedException: this
>> IndexWriter is closed
>>    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:749)
>>    at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:763)
>>    at
>> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1567)
>>    at
>> org.apache.solr.update.DirectUpdateHandler2.updateDocument(DirectUpdateHandler2.java:924)
>>    at
>> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:913)
>>    at
>> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:302)
>>    at
>> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:239)
>>    at
>> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:194)
>>    ... 42 more
>> Caused by: java.io.IOException: No space left on device
>>    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>>    at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)
>>    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>>    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>>    at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)
>>    at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)
>>    at java.nio.channels.Channels.writeFully(Channels.java:101)
>>    at java.nio.channels.Channels.access$000(Channels.java:61)
>>    at java.nio.channels.Channels$1.write(Channels.java:174)
>>    at
>> org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:419)
>>    at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)
>>    at
>> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>>    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
>>    at
>> org.apache.lucene.store.OutputStreamIndexOutput.writeBytes(OutputStreamIndexOutput.java:53)
>>    at
>> org.apache.lucene.store.RateLimitedIndexOutput.writeBytes(RateLimitedIndexOutput.java:73)
>>    at org.apache.lucene.store.DataOutput.writeBytes(DataOutput.java:52)
>>    at
>> org.apache.lucene.codecs.lucene50.ForUtil.writeBlock(ForUtil.java:175)
>>    at
>> org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.startDoc(Lucene50PostingsWriter.java:237)
>>    at
>> org.apache.lucene.codecs.PushPostingsWriterBase.writeTerm(PushPostingsWriterBase.java:141)
>>    at
>> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter$TermsWriter.write(BlockTreeTermsWriter.java:866)
>>    at
>> org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:344)
>>    at
>> org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
>>    at
>> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:164)
>>    at
>> org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:216)
>>    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:101)
>>    at
>> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4356)
>>    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3931)
>>    at
>> org.apache.solr.update.SolrIndexWriter.merge(SolrIndexWriter.java:188)
>>    at
>> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:624)
>>    at
>> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:661)
>> 
>> Please also find the attached log file for detailed information. Please
>> let us know about possible solutions, and whether any additional
>> information is needed.
>> 
>> Regards,
>> Pravinkumar Bhagat
>> 
>> 
>> This e-mail communication and any attachments to it are confidential
>> and privileged to Hexaware and are strictly intended only for the
>> personal and confidential use of the designated recipient(s) named
>> above. If you are not the intended recipient of this message, you are
>> hereby notified that any review, dissemination, distribution or
>> copying of this message is strictly prohibited and may be unlawful.
>> 
>> Please notify the sender immediately and destroy all copies of this
>> message along with all attachments thereto.
>> 


RE: No Space left on the device exception

Posted by Pravinkumar Bhagat <Pr...@hexaware.com.INVALID>.
Hi Colvin,

We do have autoCommit and autoSoftCommit enabled in our solrconfig.xml files, whereas no MergePolicy or DeletionPolicy is configured.
Apart from the commit settings, could you please also suggest any other required configuration changes for solrconfig.xml?

<updateHandler class="solr.DirectUpdateHandler2">
    <updateLog>
      <str name="dir">${solr.ulog.dir:}</str>
      <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
    </updateLog>
    <autoCommit>
      <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
    <autoSoftCommit>
      <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
    </autoSoftCommit>
</updateHandler>
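For what it's worth, Solr's autoCommit can be bounded by uncommitted document count in addition to time, which is one way to keep transaction logs from growing without bound between hard commits. A sketch only; the 10000 value is illustrative, not a recommendation for our setup:

```xml
<autoCommit>
  <!-- hard commit after 15 s or 10,000 uncommitted docs, whichever comes first -->
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <maxDocs>10000</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>
```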

Regards,
Pravinkumar Bhagat
-----Original Message-----
From: Colvin Cowie <co...@gmail.com>
Sent: Monday, March 21, 2022 1:49 PM
To: users@solr.apache.org
Subject: Re: No Space left on the device exception

Hello,

> we have seen excessive logging with log files data around 20 to 30 GB...
Every time when this happens we are also facing index data corruption and needed to delete solr logs and indexes data folder

I assume you are talking about the transaction log (tlog) files rather than the application log (log4j) files?
It sounds like you aren't committing your updates to the index, so the transaction logs will grow indefinitely.

It's normal to use autoCommit, but you need to commit in one way or another
anyway:
https://solr.apache.org/guide/6_6/updatehandlers-in-solrconfig.html#UpdateHandlersinSolrConfig-Commits

If you are using Solr as part of Sitecore, then you might need to contact Sitecore for support, since the problem may be coming from Sitecore's configuration/use of Solr.

Colvin
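To confirm whether it is the index, the tlog directories, or the application logs that are filling the disk, a small generic script like the following can rank subdirectories by size. This is not Solr-specific; the path in the main guard is a hypothetical placeholder to adjust for your install:

```python
import os

def dir_sizes(root):
    """Return (name, total bytes) for each immediate subdirectory of root,
    largest first."""
    sizes = {}
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            total = 0
            # Walk the subtree and sum file sizes, skipping unreadable files.
            for dirpath, _, filenames in os.walk(entry.path):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass
            sizes[entry.name] = total
    return sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    solr_home = "/var/solr/data"  # hypothetical path; adjust for your install
    if os.path.isdir(solr_home):
        for name, size in dir_sizes(solr_home):
            print(f"{size / 2**30:8.2f} GiB  {name}")
```

Running it against the Solr home (or the logs directory) before cleaning up would show whether `tlog` or the log4j logs account for the 20 to 30 GB of growth.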





Re: No Space left on the device exception

Posted by Colvin Cowie <co...@gmail.com>.
Hello,

> we have seen excessive logging with log files data around 20 to 30 GB...
Every time when this happens we are also facing index data corruption and
needed to delete solr logs and indexes data folder

I assume you are talking about the transaction log (tlog) files rather than
the application log (log4j) files?
It sounds like you aren't committing your updates to the index, so the
transaction logs will grow indefinitely.

It's normal to use autoCommit, but you need to commit in one way or another
anyway:
https://solr.apache.org/guide/6_6/updatehandlers-in-solrconfig.html#UpdateHandlersinSolrConfig-Commits

If you are using Solr as part of Sitecore, then you might need to contact
Sitecore for support, since the problem may be coming from Sitecore's
configuration/use of Solr.

Colvin

