Posted to java-user@lucene.apache.org by Gusenbauer Stefan <gu...@eduhi.at> on 2005/02/24 14:50:43 UTC

ngramj

Does anyone know a good tutorial or the Javadoc for ngramj? I need it
for guessing the language of the documents that are to be indexed.
thanks
stefan




Re: Not entire document being indexed?

Posted by Andrzej Bialecki <ab...@getopt.org>.
amigo@max3d.com wrote:
> Does anyone else have any ideas why the whole documents wouldn't be
> indexed, as described below?
> 
> Or perhaps someone can enlighten me on how to use Luke to find out if
> the whole document was indexed or not. I have not used Luke in this
> capacity before, so I'm not sure what to do or look for.

Well, you could try the "Reconstruct & Edit" function - this will give
you an idea of which tokens ended up in the index, and which one was the
last. In Luke 0.6, if the field is stored you will see two tabs - one
shows the stored content, the other displays the tokenized content with
tokens separated by commas. If the field was unstored, the only tab you
get is the reconstructed content. In either case, just scroll down and
check what the last tokens are.

You could also look for presence of some special terms that occur only 
at the end of that document, and check if they are present in the index.

There are really only a few reasons why this might be happening:

* your extractor has a bug, or
* the max token limit is wrongly set, or
* the indexing process doesn't close the IndexWriter properly.
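
For the last two, here is a minimal sketch of a setup that avoids both
(the index path, field name and document text are made up; this assumes
the current API, where maxFieldLength is a public field on IndexWriter):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class IndexOneDoc {
  public static void main(String[] args) throws Exception {
    // true = create a new index at this path
    IndexWriter writer =
        new IndexWriter("/tmp/testindex", new StandardAnalyzer(), true);

    // Reason #2: the default limit is 10,000 tokens and everything
    // past it is silently dropped - raise it well above the largest
    // document you expect to index.
    writer.maxFieldLength = 250000;

    String text = "... full plain text from your extractor ...";
    Document doc = new Document();
    doc.add(Field.Text("contents", text));
    writer.addDocument(doc);

    // Reason #3: close the writer so the last segment is flushed;
    // otherwise recently added documents may never reach the index.
    writer.optimize();
    writer.close();
  }
}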


-- 
Best regards,
Andrzej Bialecki
  ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com




Re: Not entire document being indexed?

Posted by Andrzej Bialecki <ab...@getopt.org>.
Pasha Bizhan wrote:

> Also, 230KB is far more than 20,000 tokens. Try setting
> writer.maxFieldLength to 250,000.

maxFieldLength's unit is a token, not a character.
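
A rough sanity check: 230KB of plain text is on the order of 40,000
words, i.e. roughly 40,000 tokens - well past a limit of 20,000. So a
value like this (using the public field on IndexWriter) leaves plenty
of headroom:

  // Counts tokens produced by the analyzer, not characters.
  writer.maxFieldLength = 250000;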

-- 
Best regards,
Andrzej Bialecki
  ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com




Re: Not entire document being indexed?

Posted by "amigo@max3d.com" <am...@max3d.com>.
Thanks Andrzej and Pasha for your prompt replies and suggestions.
I will try everything you have suggested and report back on the findings!

regards

-pedja



Pasha Bizhan said the following on 2/25/2005 6:32 PM:

>Hi, 
>
>> Or perhaps someone can enlighten me on how to use Luke to find out
>> if the whole document was indexed or not.
>
>Luke can help you answer the question: does my index contain the
>correct data?
>
>Try the following steps:
>- run Luke
>- open the index
>- find the specified document (Document tab)
>- click the "Reconstruct & Edit" button
>- select the field and look at its original stored content,
>reconstructed from the index
>
>Does this reconstructed content contain your last 2-3 paragraphs?
>
>Also, 230KB is far more than 20,000 tokens. Try setting
>writer.maxFieldLength to 250,000.
>
>Pasha Bizhan
>http://lucenedotnet.com
>

RE: Not entire document being indexed?

Posted by Pasha Bizhan <fc...@ok.ru>.
Hi, 

> From: amigo@max3d.com [mailto:amigo@max3d.com] 

> Or perhaps someone can enlighten me on how to use Luke to find out
> if the whole document was indexed or not.

Luke can help you answer the question: does my index contain the
correct data?

Try the following steps:
- run Luke
- open the index
- find the specified document (Document tab)
- click the "Reconstruct & Edit" button
- select the field and look at its original stored content,
reconstructed from the index

Does this reconstructed content contain your last 2-3 paragraphs?

Also, 230KB is far more than 20,000 tokens. Try setting
writer.maxFieldLength to 250,000.

Pasha Bizhan
http://lucenedotnet.com

> > For example, one document that is about 230KB in size when
> > converted to plain text returns no hits when I search for a phrase
> > from its last 2-3 paragraphs, yet searching for anything above
> > those paragraphs works just fine. WordExtractor does convert the
> > entire document to text, I've checked that.
> >
> > I've tried increasing the number of terms per field from the
> > default 10,000 to 20,000 with writer.maxFieldLength, but that
> > didn't make any difference - I still can't find phrases from the
> > last 2-3 paragraphs.
> >
> > Any ideas as to why this could be happening and how I could
> > rectify it?
> >
> >
> > thanks,
> >
> > -pedja




Re: Not entire document being indexed?

Posted by "amigo@max3d.com" <am...@max3d.com>.
Does anyone else have any ideas why the whole documents wouldn't be
indexed, as described below?

Or perhaps someone can enlighten me on how to use Luke to find out if
the whole document was indexed or not. I have not used Luke in this
capacity before, so I'm not sure what to do or look for.

thanks

-pedja


amigo@max3d.com said the following on 2/24/2005 2:08 PM:

> Hi everyone
>
> I'm having a bizarre problem with a few of the documents here that do
> not seem to get indexed entirely.
>
> I use the textmining WordExtractor to convert M$ Word documents to
> plain text and then index that text. For example, one document that is
> about 230KB in size when converted to plain text returns no hits when
> I search for a phrase from its last 2-3 paragraphs, yet searching for
> anything above those paragraphs works just fine. WordExtractor does
> convert the entire document to text, I've checked that.
>
> I've tried increasing the number of terms per field from the default
> 10,000 to 20,000 with writer.maxFieldLength, but that didn't make any
> difference - I still can't find phrases from the last 2-3 paragraphs.
>
> Any ideas as to why this could be happening and how I could rectify it?
>
>
> thanks,
>
> -pedja



Re: Not entire document being indexed?

Posted by "amigo@max3d.com" <am...@max3d.com>.
Hi Otis

Thanks for the reply. What exactly should I be looking for with Luke?

What would setting the max value to the maximum Integer do? Is this
some arbitrary value or...?


-pedja


Otis Gospodnetic said the following on 2/24/2005 2:24 PM:

>Use Luke to peek in your index and find out what really got indexed.
>You could also try the extreme case and set that max value to the max
>Integer.
>
>Otis
>
>--- "amigo@max3d.com" <am...@max3d.com> wrote:
>
>  
>
>>Hi everyone
>>
>>I'm having a bizarre problem with a few of the documents here that
>>do not seem to get indexed entirely.
>>
>>I use the textmining WordExtractor to convert M$ Word documents to
>>plain text and then index that text. For example, one document that
>>is about 230KB in size when converted to plain text returns no hits
>>when I search for a phrase from its last 2-3 paragraphs, yet
>>searching for anything above those paragraphs works just fine.
>>WordExtractor does convert the entire document to text, I've checked
>>that.
>>
>>I've tried increasing the number of terms per field from the default
>>10,000 to 20,000 with writer.maxFieldLength, but that didn't make
>>any difference - I still can't find phrases from the last 2-3
>>paragraphs.
>>
>>Any ideas as to why this could be happening and how I could rectify
>>it?
>>
>>
>>thanks,
>>
>>-pedja

Re: Not entire document being indexed?

Posted by Otis Gospodnetic <ot...@yahoo.com>.
Use Luke to peek in your index and find out what really got indexed.
You could also try the extreme case and set that max value to the max
Integer.
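
In code, that extreme case is a one-liner (assuming the public
maxFieldLength field used elsewhere in this thread):

  // Effectively disables truncation - no real document comes
  // anywhere near Integer.MAX_VALUE tokens.
  writer.maxFieldLength = Integer.MAX_VALUE;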

Otis

--- "amigo@max3d.com" <am...@max3d.com> wrote:

> Hi everyone
> 
> I'm having a bizarre problem with a few of the documents here that do
> not seem to get indexed entirely.
> 
> I use the textmining WordExtractor to convert M$ Word documents to
> plain text and then index that text. For example, one document that is
> about 230KB in size when converted to plain text returns no hits when
> I search for a phrase from its last 2-3 paragraphs, yet searching for
> anything above those paragraphs works just fine. WordExtractor does
> convert the entire document to text, I've checked that.
> 
> I've tried increasing the number of terms per field from the default
> 10,000 to 20,000 with writer.maxFieldLength, but that didn't make any
> difference - I still can't find phrases from the last 2-3 paragraphs.
> 
> Any ideas as to why this could be happening and how I could rectify
> it?
> 
> 
> thanks,
> 
> -pedja




Not entire document being indexed?

Posted by "amigo@max3d.com" <am...@max3d.com>.
Hi everyone

I'm having a bizarre problem with a few of the documents here that do
not seem to get indexed entirely.

I use the textmining WordExtractor to convert M$ Word documents to
plain text and then index that text. For example, one document that is
about 230KB in size when converted to plain text returns no hits when I
search for a phrase from its last 2-3 paragraphs, yet searching for
anything above those paragraphs works just fine. WordExtractor does
convert the entire document to text, I've checked that.
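
The conversion step looks roughly like this (a sketch - the file name
is made up, and I believe the method is extractText(InputStream), but
check it against your version of the textmining jar):

import java.io.FileInputStream;
import org.textmining.text.extraction.WordExtractor;

// Convert the .doc file to plain text before handing it to Lucene.
FileInputStream in = new FileInputStream("report.doc");
String text = new WordExtractor().extractText(in);
in.close();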

I've tried increasing the number of terms per field from the default
10,000 to 20,000 with writer.maxFieldLength, but that didn't make any
difference - I still can't find phrases from the last 2-3 paragraphs.

Any ideas as to why this could be happening and how I could rectify it?


thanks,

-pedja





Re: ngramj

Posted by "Kevin A. Burton" <bu...@newsmonster.org>.
petite_abeille wrote:

>
> On Feb 24, 2005, at 14:50, Gusenbauer Stefan wrote:
>
>> Does anyone know a good tutorial or the Javadoc for ngramj? I need
>> it for guessing the language of the documents that are to be
>> indexed.
>
>
> http://cvs.sourceforge.net/viewcvs.py/nutch/nutch/src/plugin/languageidentifier/

Wow.. interesting! Where'd this come from?

I actually wrote an implementation of NGram language categorization a 
while back. I'll have to check this out. I'm willing to bet mine's 
better though ;)

I was going to put it in Jakarta Commons...

Kevin

-- 

Use Rojo (RSS/Atom aggregator).  Visit http://rojo.com. Ask me for an 
invite!  Also see irc.freenode.net #rojo if you want to chat.

Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html

If you're interested in RSS, Weblogs, Social Networking, etc... then you 
should work for Rojo!  If you recommend someone and we hire them you'll 
get a free iPod!
    
Kevin A. Burton, Location - San Francisco, CA
       AIM/YIM - sfburtonator,  Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412




Re: ngramj

Posted by petite_abeille <pe...@mac.com>.
On Feb 24, 2005, at 14:50, Gusenbauer Stefan wrote:

> Does anyone know a good tutorial or the Javadoc for ngramj? I need
> it for guessing the language of the documents that are to be
> indexed.

http://cvs.sourceforge.net/viewcvs.py/nutch/nutch/src/plugin/languageidentifier/
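
That plugin does character-n-gram profiling. If you want a feel for the
technique itself, here is a toy sketch (my own illustration - not
NGramJ's or Nutch's actual API, and the training strings are far too
small to be useful in practice):

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Toy character-trigram language guesser: compare a document's trigram
// counts against small per-language samples and pick the closer one.
// Real libraries use ranked profiles trained on large corpora.
public class TrigramGuesser {

  static Map profile(String text) {
    Map counts = new HashMap();
    String s = text.toLowerCase();
    for (int i = 0; i + 3 <= s.length(); i++) {
      String gram = s.substring(i, i + 3);
      Integer c = (Integer) counts.get(gram);
      counts.put(gram, new Integer(c == null ? 1 : c.intValue() + 1));
    }
    return counts;
  }

  // Sum the document counts of trigrams also present in the language
  // sample; the language with the higher overlap wins.
  static long score(Map doc, Map lang) {
    long sum = 0;
    for (Iterator it = doc.keySet().iterator(); it.hasNext();) {
      Object gram = it.next();
      if (lang.containsKey(gram)) {
        sum += ((Integer) doc.get(gram)).intValue();
      }
    }
    return sum;
  }

  public static void main(String[] args) {
    Map en = profile("the quick brown fox jumps over the lazy dog");
    Map de = profile("der schnelle braune fuchs springt ueber den hund");
    Map doc = profile("guessing the language of the documents");
    System.out.println(score(doc, en) >= score(doc, de) ? "en" : "de");
  }
}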

Cheers

--
PA, Onnay Equitursay
http://alt.textdrive.com/

