Posted to solr-user@lucene.apache.org by "soni.s" <so...@gmail.com> on 2012/08/06 10:09:38 UTC

error message in solr logs

Hi, we have a large Lucene index created using Solr. It's split into 16
cores, and each core contains almost 10GB of index. We have deployed 8
instances of Solr, each hosting two cores. The logic for identifying which
core a document resides in, based on the document id, is built into the
application. There are also other queries that hit all the cores across all
the Solr instances, because a query may not be based on the document id. We
use SolrJ to connect to and query the indexes and get results.
We have more reads than writes overall. A document is inserted once and
updated at most twice within a few days, but it could potentially be
searched tens of times a day.
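
For illustration, the routing logic boils down to something like the sketch
below. This is a minimal, hypothetical version in Java, not our actual code;
coreFor and numCores are made-up names:

static int coreFor(String docId) {
    final int numCores = 16;
    // mask off the sign bit so the index is always in 0..15,
    // even when hashCode() is negative
    return (docId.hashCode() & 0x7fffffff) % numCores;
}

The application then picks the cached SolrJ client for that core and sends
the add or query there.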

Lately we have been noticing the exception below in our Solr logs. It
happens once or twice a day, on a few cores.

SEVERE: org.apache.solr.common.SolrException: Invalid chunk header
        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:72)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:662)
Caused by: com.ctc.wstx.exc.WstxIOException: Invalid chunk header
        at com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:548)
        at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:604)
        at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:660)
        at com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:331)
        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:68)
        ... 17 more
Caused by: java.io.IOException: Invalid chunk header
        at org.apache.coyote.http11.filters.ChunkedInputFilter.doRead(ChunkedInputFilter.java:133)
        at org.apache.coyote.http11.InternalInputBuffer.doRead(InternalInputBuffer.java:710)
        at org.apache.coyote.Request.doRead(Request.java:428)
        at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:304)
        at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:405)
        at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:327)
        at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:193)
        at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)

The environment consists of:

OS: Enterprise Linux, 64-bit
Tomcat version: 6.0.26
Solr version: 3.3.0
JDK: 1.6
Total number of Solr documents: more than 20 million.

Can someone please let me know what this is, as googling around doesn't give
me much info? Overall I don't see much of a problem from the application's
point of view, but I wanted to know what this error is and what its impact
on the app could be in the future. Thanks for any help in advance.

Re: error message in solr logs

Posted by Chris Hostetter <ho...@fucit.org>.
: Lately we are noticing below exception in our solr logs. This happens
: sometimes once or twice a day on a few cores.

the error you are seeing here is a really low level HTTP communications 
error, below the level of Solr...

: Caused by: java.io.IOException: Invalid chunk header
:         at org.apache.coyote.http11.filters.ChunkedInputFilter.doRead(ChunkedInputFilter.java:133)
:         at org.apache.coyote.http11.InternalInputBuffer.doRead(InternalInputBuffer.java:710)
:         at org.apache.coyote.Request.doRead(Request.java:428)

"chunking" is a feature of HTTP that lets clients stream an arbitrary 
quantity of data w/o first computing and sending a Content-Length header, 
instead it can send smaller chunks of information prefaced by the length 
of the intividual chunks...

https://en.wikipedia.org/wiki/Chunked_transfer_encoding
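
For instance, a request body carrying the nine bytes "Wikipedia" could be
sent as two chunks, each prefaced by its size in hex, followed by a
zero-length chunk that terminates the body:

    4\r\n
    Wiki\r\n
    5\r\n
    pedia\r\n
    0\r\n
    \r\n

A receiver that finds anything other than a valid hex size where it expects
a chunk header fails in exactly the way ChunkedInputFilter does above.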

This error suggests that your indexing client (the one sending Solr the 
XML) says it is using chunked encoding but is sending malformed chunk 
headers.


-Hoss

Re: error message in solr logs

Posted by "soni.s" <so...@gmail.com>.
Thanks for the reply, Chris. But I am not very clear here, because we have
just one part of the app that adds to the index, and if that code were
sending wrong headers then it should do so for all records? Some parts of
the code are below; we use the SolrJ API, as I mentioned earlier:

// ...
SolrInputDocument doc = new SolrInputDocument();

for (String indexField : listOfFieldNames) {
    // the real value comes from a value object; a string literal is used
    // here as a placeholder
    doc.addField(indexField, "valueForTheFieldFromAValueObject");
}
commonsHttpSolrServer.add(doc);
// other code

What we do is load multiple instances of CommonsHttpSolrServer, each
connected to a node:core, cache them in memory, and use them for
searching/adding to the indexes, roughly as sketched below.
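
Roughly, the caching looks like this. It is a minimal sketch, not our real
code: SERVER_CACHE and serverFor are made-up names, and only the
CommonsHttpSolrServer constructor comes from the SolrJ 3.x API:

import java.net.MalformedURLException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

// one client per node:core URL, created once and then reused everywhere
private static final ConcurrentMap<String, CommonsHttpSolrServer> SERVER_CACHE =
        new ConcurrentHashMap<String, CommonsHttpSolrServer>();

static CommonsHttpSolrServer serverFor(String coreUrl) throws MalformedURLException {
    CommonsHttpSolrServer server = SERVER_CACHE.get(coreUrl);
    if (server == null) {
        // create on first use; subsequent lookups reuse the same instance
        SERVER_CACHE.putIfAbsent(coreUrl, new CommonsHttpSolrServer(coreUrl));
        server = SERVER_CACHE.get(coreUrl);
    }
    return server;
}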

So could this reuse of the server instances be causing a problem, or am I
missing something here?

Thanks in advance.


