Posted to issues@bigtop.apache.org by "Evans Ye (JIRA)" <ji...@apache.org> on 2015/09/10 20:10:46 UTC

[jira] [Comment Edited] (BIGTOP-1746) Introduce the concept of roles in bigtop cluster deployment

    [ https://issues.apache.org/jira/browse/BIGTOP-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739254#comment-14739254 ] 

Evans Ye edited comment on BIGTOP-1746 at 9/10/15 6:10 PM:
-----------------------------------------------------------

Hey [~vishnu] I figured out how this works. Adding one line to hiera.yaml lets us look up configuration by hostname: 
{code}
[root@bigtop2 puppet]# cat hiera.yaml
---
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "bigtop/%{fqdn}"
  - site
  - "bigtop/%{hadoop_hiera_ha_path}"
  - bigtop/cluster
{code}
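
As a quick sanity check, hiera can be queried directly with an explicit fqdn to confirm that the per-host file wins the lookup (rough sketch; it assumes the standalone hiera CLI is installed, and {{bigtop::roles}} is just a placeholder key used for illustration):
{code}
# Query hiera for a given fqdn; the host-specific file should be consulted first.
# "bigtop::roles" is a hypothetical key, shown only to illustrate the lookup order.
[root@bigtop2 puppet]# hiera -c /etc/puppet/hiera.yaml bigtop::roles fqdn=bigtop2.docker
["namenode", "resourcemanager"]
{code}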

Then we can have host-specific role definitions in the following arrangement:
{code}
hieradata/
- bigtop/
  - bigtop1.docker.yaml
  - bigtop2.docker.yaml
  - bigtop3.docker.yaml
{code}
and then ship the hieradata directory as configuration across the cluster.
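
For example, a host-specific file might look something like this (just a sketch; the exact key names depend on how the roles get wired into the manifests):
{code}
# hieradata/bigtop/bigtop2.docker.yaml (hypothetical content)
---
bigtop::roles:
  - namenode
  - resourcemanager
{code}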

Is this what you had in mind with your design? 


was (Author: evans_ye):
Hey [~vishnu] I figured out how this works. Adding one line *"bigtop/%{fqdn}"* into hiera.yaml: 
{code}
[root@bigtop2 puppet]# cat hiera.yaml
---
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "bigtop/%{fqdn}"
  - site
  - "bigtop/%{hadoop_hiera_ha_path}"
  - bigtop/cluster
{code}

Then we can have host-specific role definitions in the following arrangement:
{code}
hieradata/
- bigtop/
  - bigtop1.docker.yaml
  - bigtop2.docker.yaml
  - bigtop3.docker.yaml
{code}
and then ship the hieradata directory as configuration across the cluster.

Is this what you had in mind with your design? 

> Introduce the concept of roles in bigtop cluster deployment
> -----------------------------------------------------------
>
>                 Key: BIGTOP-1746
>                 URL: https://issues.apache.org/jira/browse/BIGTOP-1746
>             Project: Bigtop
>          Issue Type: New Feature
>          Components: deployment
>    Affects Versions: 0.8.0
>            Reporter: vishnu gajendran
>            Assignee: vishnu gajendran
>              Labels: features
>             Fix For: 1.1.0
>
>         Attachments: BIGTOP-1746.patch, BIGTOP-1746.patch, BIGTOP-1746.patch, BIGTOP-1746.patch, BIGTOP-1746.patch
>
>
> Currently, during cluster deployment, puppet categorizes nodes as head_node, worker_nodes, gateway_nodes, standby_node based on user specified info. This functionality gives the user control over picking a particular node as head_node, standby_node, gateway_node and the rest as worker_nodes. But I would like to have more fine-grained control over which daemons should run on which node. For example, I do not want to run the namenode and a datanode on the same node. This functionality can be introduced with the concept of roles. Each node can be assigned a set of roles. For example, Node A can be assigned ["namenode", "resourcemanager"] roles, Node B can be assigned ["datanode", "nodemanager"], and Node C can be assigned ["nodemanager", "hadoop-client"]. Now, each node will only run the specified daemons. A prerequisite for this kind of deployment is that each node is given the configuration it needs to know; for example, each datanode should know which node is the namenode. This functionality will allow users to customize the cluster deployment according to their needs. 
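
To illustrate the idea, the manifests could gate each daemon on the node's role list, roughly like this (an illustrative sketch only, not the actual Bigtop manifests; the $roles variable and the class names are assumptions here):
{code}
# Sketch of role-gated deployment (illustrative only).
# $roles would hold this node's role list, e.g. ["namenode", "resourcemanager"].
if ("namenode" in $roles) {
  include hadoop::namenode
}
if ("datanode" in $roles) {
  include hadoop::datanode
}
if ("nodemanager" in $roles) {
  include hadoop::nodemanager
}
{code}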


