Posted to dev@lucene.apache.org by "Robert Muir (JIRA)" <ji...@apache.org> on 2015/11/30 04:09:10 UTC
[jira] [Created] (LUCENE-6913) Standard/Classic/UAX tokenizers could be more ram efficient
Robert Muir created LUCENE-6913:
-----------------------------------
Summary: Standard/Classic/UAX tokenizers could be more ram efficient
Key: LUCENE-6913
URL: https://issues.apache.org/jira/browse/LUCENE-6913
Project: Lucene - Core
Issue Type: Improvement
Reporter: Robert Muir
These tokenizers map codepoints to character classes with the following datastructure (loaded in clinit):
{noformat}
private static char [] zzUnpackCMap(String packed) {
char [] map = new char[0x110000];
{noformat}
This requires 2MB of RAM for each tokenizer class (in trunk 6MB if all 3 classes are loaded, in branch_5x 10MB since there are 2 additional backwards-compat classes).
On the other hand, none of our tokenizers actually uses a huge number of character classes, so {{char}} is overkill: this map can safely be a {{byte[]}} and we can save half the memory. Perhaps it could make these tokenizers faster, too.
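A minimal sketch of the idea (hypothetical {{unpackCMapAsBytes}} helper, not actual Lucene/JFlex code): assuming the packed string encodes run-length (count, classValue) pairs as the generated unpacker does, the same loop can fill a {{byte[]}} instead of a {{char[]}}, as long as every class id fits in a byte:

```java
// Hypothetical sketch: unpack an RLE-packed character-class map into a
// byte[] (~1MB for 0x110000 entries) instead of a char[] (~2MB).
public class CMapSketch {

  // The packed string encodes (runLength, classValue) char pairs.
  static byte[] unpackCMapAsBytes(String packed) {
    byte[] map = new byte[0x110000]; // one entry per Unicode codepoint
    int i = 0; // read index into the packed string
    int j = 0; // write index into the unpacked map
    while (i < packed.length()) {
      int count = packed.charAt(i++);
      char value = packed.charAt(i++);
      if (value > 0xFF) {
        // Safety check: only valid if the tokenizer uses < 256 classes.
        throw new IllegalStateException("class id does not fit in a byte: " + (int) value);
      }
      while (count-- > 0) {
        map[j++] = (byte) value;
      }
    }
    return map;
  }

  public static void main(String[] args) {
    // Toy packed map: 3 codepoints of class 1, then 2 codepoints of class 7.
    String packed = "\u0003\u0001\u0002\u0007";
    byte[] map = unpackCMapAsBytes(packed);
    System.out.println(map[0] + " " + map[2] + " " + map[3]); // 1 1 7
  }
}
```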
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org