Posted to issues@metron.apache.org by "Dale Richardson (Jira)" <ji...@apache.org> on 2019/11/21 07:04:00 UTC

[jira] [Commented] (METRON-2312) Solr collection create/delete scripts assume they are not in a chrooted environment by default

    [ https://issues.apache.org/jira/browse/METRON-2312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979032#comment-16979032 ] 

Dale Richardson commented on METRON-2312:
-----------------------------------------

Will do in the future.  

> Solr collection create/delete scripts assume they are not in a chrooted environment by default
> ----------------------------------------------------------------------------------------------
>
>                 Key: METRON-2312
>                 URL: https://issues.apache.org/jira/browse/METRON-2312
>             Project: Metron
>          Issue Type: Bug
>            Reporter: Dale Richardson
>            Assignee: Dale Richardson
>            Priority: Major
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> When installing SOLR cloud, it has long been recommended to use the cluster's zookeeper ensemble rather than installing your own mini-zk cluster just for SOLR.  For the past several years it has also been standard practice to use a chrooted / namespaced environment for storing SOLR information in zookeeper.   The practical effect of this is that '/solr' must be appended to any zookeeper ensemble URLs.  Chrooted zookeeper configurations are the default in both Lucidworks/HWX SOLR (from 4.0) and Cloudera SOLR (not sure which version, but for many years), and they have been the documented recommendation for Apache SOLR Cloud since approximately version 6.6.
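> As a concrete illustration (hostnames here are made up), a plain ensemble address versus a chrooted one looks roughly like this:
>     # without a chroot - SOLR state lives at the zookeeper root
>     node1:2181,node2:2181,node3:2181
>     # with the conventional /solr chroot - SOLR state lives under the /solr znode
>     node1:2181,node2:2181,node3:2181/solr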
> The end result is that if Metron is dealing with a SOLR cluster that has been installed or updated any time in the past couple of years, it is almost certainly dealing with a SOLR configuration stored in a chrooted zookeeper environment.
> The problem is that the Metron SOLR collection create/delete scripts assume we are not using a chrooted environment, and they fail badly when the expected SOLR configuration is not present at the expected location in zookeeper.  Buried in the readme are instructions on how to modify the zookeeper environment variables before running the scripts so that the chrooted address is used, and when the scripts are invoked by Ambari they are called with the correct chrooted quorum URL, because there is a separate configuration item that can be set to indicate the chrooted zookeeper address for SOLR.
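> For reference, the readme workaround amounts to something like the following (the ZOOKEEPER variable and script name here are from memory and may not match the actual scripts exactly):
>     # point the collection script at the chrooted quorum before running it
>     export ZOOKEEPER=node1:2181,node2:2181,node3:2181/solr
>     $METRON_HOME/bin/create_collection.sh bro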
> Having just been burnt by this, I think we should at least:
>  # Cleanly catch the failure of the zkcli command in the collection scripts when it queries for zookeeper state that is not present
>  # If the zkcli error is caught, make a suggestion in the error message to check for a chrooted SOLR cloud zookeeper configuration (a rough sketch of both points follows below).
>  
>  
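> A rough sketch of what points 1 and 2 could look like inside the collection scripts (the zkcli path and the ZOOKEEPER variable are placeholders, not the actual script contents):
>     # run zkcli and check its exit status instead of letting a failure fall through
>     if ! "$SOLR_HOME/server/scripts/cloud-scripts/zkcli.sh" -zkhost "$ZOOKEEPER" -cmd list > /dev/null 2>&1; then
>         echo "ERROR: could not read SOLR state from zookeeper at '$ZOOKEEPER'." >&2
>         echo "If SOLR cloud was installed with a chrooted zookeeper configuration," >&2
>         echo "append the chroot (e.g. '/solr') to the zookeeper quorum and retry." >&2
>         exit 1
>     fi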


