Posted to dev@lucene.apache.org by "Kevin Risden (JIRA)" <ji...@apache.org> on 2019/08/02 16:35:00 UTC

[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

    [ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899024#comment-16899024 ] 

Kevin Risden commented on SOLR-6305:
------------------------------------

Updated patch from [~bpasko] with commit message and CHANGES. Looking at committing soon.

> Ability to set the replication factor for index files created by HDFSDirectoryFactory
> -------------------------------------------------------------------------------------
>
>                 Key: SOLR-6305
>                 URL: https://issues.apache.org/jira/browse/SOLR-6305
>             Project: Solr
>          Issue Type: Improvement
>          Components: Hadoop Integration, hdfs
>         Environment: hadoop-2.2.0
>            Reporter: Timothy Potter
>            Assignee: Kevin Risden
>            Priority: Major
>         Attachments: 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch, SOLR-6305.patch
>
>
> HdfsFileWriter doesn't allow us to create files in HDFS with a different replication factor than the configured DFS default because it uses:     
> {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
> Since we have two forms of replication going on when using HDFSDirectoryFactory (Solr's own replicas on top of HDFS block replication), it would be nice to be able to set the HDFS replication factor for the Solr directories to a lower value than the default. I realize this might reduce the chance of data locality, but since Solr cores each have their own path in HDFS, we should give operators the option to reduce it (a sketch of creating files with an explicit replication factor is included after this description).
> My original thinking was to just use Hadoop setrep to customize the replication factor, but that's a one-time operation and doesn't affect newly created files. For instance, I ran:
> {{hadoop fs -setrep -R 1 solr49/coll1}}
> My default dfs.replication is 3; the command above sets it to 1 just as an example.
> Then I added some more docs to coll1 and ran:
> {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
> 3 <-- should be 1
> So it looks like new files don't inherit the replication factor from their parent directory (one of the sketches after this description shows the programmatic equivalent).
> Not sure if we need to go as far as allowing a different replication factor per collection, but that should be considered if possible.
> I looked at the Hadoop 2.2.0 code to see if there was a way to work around this using the Configuration object, but nothing jumped out at me ... and the implementation of getServerDefaults(path) is just:
>   public FsServerDefaults getServerDefaults(Path p) throws IOException {
>     return getServerDefaults();
>   }
> Path is ignored ;-)
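
For reference, a minimal sketch (not the attached patch) of creating an HDFS file with an explicit replication factor instead of the server default. The property name solr.hdfs.replication.factor and the class name are made up for illustration:

{code:java}
// Minimal sketch, not the attached patch: create a file with an explicit
// replication factor instead of the server default. The property name
// "solr.hdfs.replication.factor" is hypothetical.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationAwareWriter {
  public static FSDataOutputStream create(FileSystem fs, Configuration conf, Path path)
      throws IOException {
    // Desired replication factor from the config, falling back to the DFS default.
    short replication = (short) conf.getInt("solr.hdfs.replication.factor",
        fs.getDefaultReplication(path));
    int bufferSize = conf.getInt("io.file.buffer.size", 4096);
    long blockSize = fs.getDefaultBlockSize(path);
    // This create() overload takes the replication factor explicitly, so the
    // path-ignoring getServerDefaults() quoted above never comes into play.
    return fs.create(path, true, bufferSize, replication, blockSize);
  }
}
{code}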
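
And a small sketch of the programmatic equivalent of the hadoop fs -stat %r / -setrep commands above, which only act on files that already exist (the segment path is taken from the example above):

{code:java}
// Small sketch: check and change the replication factor of an existing file.
// Like `hadoop fs -setrep`, setReplication() only touches existing files;
// files written later still get the configured default.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckReplication {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path segment = new Path("solr49/hdfs1/core_node1/data/index/segments_3");

    // Equivalent of `hadoop fs -stat %r <path>`.
    FileStatus status = fs.getFileStatus(segment);
    System.out.println(status.getPath() + " replication=" + status.getReplication());

    // Equivalent of `hadoop fs -setrep 1 <path>` for this one file.
    fs.setReplication(segment, (short) 1);
  }
}
{code}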



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org