Posted to dev@lucene.apache.org by "Konstantin Nazarenkov (JIRA)" <ji...@apache.org> on 2014/07/24 12:50:39 UTC

[jira] [Updated] (LUCENE-5846) NPE during call IndexWriter.updateDocument(idTerm, doc);

     [ https://issues.apache.org/jira/browse/LUCENE-5846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Nazarenkov updated LUCENE-5846:
------------------------------------------

    Description: 
stack trace:
java.lang.NullPointerException
	at org.apache.lucene.document.Field.tokenStream(Field.java:552)
	at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:103)
	at org.apache.lucene.index.DocFieldProcessor.processDocument(DocFieldProcessor.java:248)
	at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:253)
	at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:455)
	at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1534)
	at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1507)
	at com.my.search.requesthandler.EnrichingUpdateHandler.updateTransitiveEntities(EnrichingUpdateHandler.java:108)
	at com.my.search.requesthandler.EnrichingUpdateHandler.addDoc(EnrichingUpdateHandler.java:68)
	at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
	at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
	at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:730)
	at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:557)
	at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
	at org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:70)
	at org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:235)
	at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:512)
	at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:416)
	at org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:365)
	at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:231)
	at org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:444)
	at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:485)
	at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:464)

While debugging I found that all my string fields have tokenized=true, whereas the default org.apache.lucene.document.StringField.TYPE_STORED has tokenized=false (in schema.xml I used the type "string", not "text"). The field type that looks wrong to me is set in org.apache.lucene.document.DocumentStoredFieldVisitor.stringField(...):
line 69: final FieldType ft = new FieldType(TextField.TYPE_STORED);
Setting tokenized=true leads to what I consider the wrong flow in org.apache.lucene.document.Field.tokenStream(Analyzer analyzer): the condition !fieldType().tokenized() does not hold, so the branch that would handle the untokenized string value is skipped.
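To make the mismatch concrete, here is a minimal sketch (assuming the Lucene 4.7.x API; the demo class itself is only for illustration) that prints the two tokenized flags:

    import org.apache.lucene.document.FieldType;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;

    public class TokenizedFlagDemo {
        public static void main(String[] args) {
            // What I expected for a schema.xml "string" field:
            System.out.println(StringField.TYPE_STORED.tokenized());   // false

            // What DocumentStoredFieldVisitor.stringField(...) builds at line 69:
            FieldType ft = new FieldType(TextField.TYPE_STORED);
            System.out.println(ft.tokenized());                        // true
        }
    }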
Setting tokenized=false via reflection fixed the problem, but that is a hack :)
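A reflection-free alternative might be to rebuild the stored fields with an explicit non-tokenized type before handing the document back to updateDocument. A hedged sketch, assuming the document was loaded via IndexSearcher.doc(...), that all of its fields are plain stored strings, and that "id" is the unique key field (assumptions for the sketch, not necessarily how my handler works):

    import java.io.IOException;

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexableField;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;

    public class ReindexWithoutReflection {
        // Rebuild a stored document with StringField (tokenized=false) instead of
        // flipping the flag on the visitor-created FieldType via reflection.
        static void reindex(IndexSearcher searcher, IndexWriter writer, int docId) throws IOException {
            Document stored = searcher.doc(docId);   // fields come back via DocumentStoredFieldVisitor
            Document rebuilt = new Document();
            for (IndexableField f : stored.getFields()) {
                rebuilt.add(new StringField(f.name(), f.stringValue(), Field.Store.YES));
            }
            writer.updateDocument(new Term("id", stored.get("id")), rebuilt);   // "id" = assumed unique key
        }
    }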

The problem and its root cause above are my assumptions and could simply be a misconfiguration on my side, so please re-check.

> NPE during call IndexWriter.updateDocument(idTerm, doc);
> --------------------------------------------------------
>
>                 Key: LUCENE-5846
>                 URL: https://issues.apache.org/jira/browse/LUCENE-5846
>             Project: Lucene - Core
>          Issue Type: Bug
>    Affects Versions: 4.7.2
>            Reporter: Konstantin Nazarenkov
>


