Posted to dev@lucene.apache.org by "Mark Miller (JIRA)" <ji...@apache.org> on 2014/05/19 03:57:37 UTC

[jira] [Created] (SOLR-6089) When using the HDFS block cache, when a file is deleted, its underlying data entries in the block cache are not removed, which is a problem with the global block cache option.

Mark Miller created SOLR-6089:
---------------------------------

             Summary: When using the HDFS block cache, when a file is deleted, its underlying data entries in the block cache are not removed, which is a problem with the global block cache option.
                 Key: SOLR-6089
                 URL: https://issues.apache.org/jira/browse/SOLR-6089
             Project: Solr
          Issue Type: Bug
          Components: hdfs
            Reporter: Mark Miller
            Assignee: Mark Miller


Patrick Hunt noticed this. Without the global block cache, the block cache was not reused after a directory was closed. Now that it is reused when using the global cache, leaving the underlying entries in place presents a problem if that directory is created again, because blocks from the previous directory may be read. This can happen when you remove a SolrCore and recreate it with the same data directory (or a collection with the same name). I could only reproduce it easily using index merges (core admin) with the sequence: merge index, delete collection, create collection, merge index. Reads on the final merged index can look corrupt, or queries may simply return no results.
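
To make the failure mode concrete, here is a minimal sketch with hypothetical names (FileBlockCache, BlockKey, removeFile) -- not Solr's actual BlockDirectory/BlockCache classes. It models a cache keyed by (file name, block index) and the kind of per-file eviction the delete path needs, assuming entries are otherwise left behind when a file is deleted:

// Illustrative sketch only; names and structure are assumptions, not Solr's API.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FileBlockCache {

    // Cache key: the file name plus the block index within that file.
    private record BlockKey(String fileName, long blockIndex) {}

    private final Map<BlockKey, byte[]> blocks = new ConcurrentHashMap<>();

    public void put(String fileName, long blockIndex, byte[] data) {
        blocks.put(new BlockKey(fileName, blockIndex), data.clone());
    }

    public byte[] get(String fileName, long blockIndex) {
        return blocks.get(new BlockKey(fileName, blockIndex));
    }

    // The missing step this issue describes: when a file is deleted, all of
    // its cached blocks must be dropped, otherwise a new file that reuses the
    // same name (e.g. after delete collection / create collection) can be
    // served stale blocks from the previous file.
    public void removeFile(String fileName) {
        blocks.keySet().removeIf(key -> key.fileName().equals(fileName));
    }
}

In this sketch, the directory's delete path would call removeFile(name) for each file it removes; without that call, the merge/delete/create/merge sequence above can read the old file's blocks out of the shared cache.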



--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org