Posted to commits@dlab.apache.org by bh...@apache.org on 2019/07/25 13:16:10 UTC

[incubator-dlab] branch v2.1.1 updated (fbeb333 -> 202f1bb)

This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a change to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git.


    from fbeb333  [DLAB-583]: added meta data fixes
     new 40ae3ab  README.md edited
     new 71d62f3  Readme.md updated
     new ad975af  README.md updated1
     new 75070be  README.md updated
     new 3a372dc  README.md updated
     new 478992f  README.md file updated
     new 6ea03ea  collapsing test
     new 285fe4a  collapsing test 1
     new dfe93ce  collapsing test 2
     new d11f535  collapsing test 3
     new e6f4f2c  collapsing test 3
     new d7d9ef4  collapsing test 4
     new 0386d13  collapsing test 5
     new b0ac05f  collapsing test 6
     new 268eaeb  collapsing test 7
     new 19547ae  collapsing test 8
     new b6b8d1d  collapsing test 8
     new d355e64  collapsing test final
     new b618185  AWS/Azure/GCP sections expanding added
     new 9b55dfe  contents updated
     new 4de43d9  contents fixed
     new 29aaf4e  contents fixed
     new 202f1bb  Azure expanding issues fixed

The 23 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 README.md | 582 +++++++++++++++++++++++++++++++++++---------------------------
 1 file changed, 333 insertions(+), 249 deletions(-)

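The 23 commits in the range above can also be inspected locally after fetching the v2.1.1 branch. A minimal sketch (a convenience snippet, not part of the notification tooling), using the same fbeb333..202f1bb range quoted in this notification:

```python
import subprocess

# List the new revisions on v2.1.1, oldest first, one line per commit.
subprocess.run(["git", "log", "--oneline", "--reverse", "fbeb333..202f1bb"], check=True)
```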

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org


[incubator-dlab] 17/23: collapsing test 8

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit b6b8d1d94b1446c82ff889450683bf03a2d4a59e
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 18:11:12 2019 +0300

    collapsing test 8
---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index ce500f1..1cf8a21 100644
--- a/README.md
+++ b/README.md
@@ -206,7 +206,7 @@ These directories contain the log files for each template and for DLab back-end
 Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
 
-<details><summary>In Amazon cloud <i>(click to expand)</i></summary>
+<details><summary color="#FF7F50">In Amazon cloud</summary>
 
 Prerequisites:
 
@@ -325,7 +325,7 @@ Preparation steps for deployment:
 - Put SSH key file created through Amazon Console on the instance with the same name
 - Install Git and clone DLab repository</details>
 
-<details><summary>In Azure cloud <i>(click to expand)</i></summary>
+<details><summary color="#87CEEB">In Azure cloud</i></summary>
 
 Prerequisites:
 
@@ -338,7 +338,7 @@ Prerequisites:
 - Microsoft Graph
 - Windows Azure Service Management API</details>
 
-<details><summary>In Google cloud (GCP) <i style="color: grey">(click to expand)</i></summary>
+<details><summary color="#4169E1">In Google cloud (GCP)</i></summary>
 
 Prerequisites:
 


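The "collapsing test" commits in this series iterate on the <details>/<summary> markup that makes each cloud-provider block in README.md collapsible on GitHub. The color attribute tried in this commit is not standard HTML for <summary>, and later commits in the series fall back to a plain italic "(click to expand)" hint. A rough illustration of the pattern being tested (a hypothetical helper, not part of the DLab sources):

```python
def collapsible(title: str, body: str) -> str:
    # Wrap a README section so GitHub renders it collapsed, with the same
    # italic "(click to expand)" hint the later commits settle on.
    return (
        f"<details><summary>{title} <i>(click to expand)</i></summary>\n\n"
        f"{body}\n\n</details>\n"
    )

print(collapsible("In Amazon cloud", "Prerequisites:\n\n- SSH key for EC2 instances"))
```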


[incubator-dlab] 02/23: Readme.md updated

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 71d62f33c5cd64b6c08cd2fb5e046d3860a41b50
Author: Mykola Bodnar1 <my...@epam.com>
AuthorDate: Wed Jun 5 13:27:42 2019 +0300

    Readme.md updated
---
 README.md | 124 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 110 insertions(+), 14 deletions(-)

diff --git a/README.md b/README.md
index d4f879f..ed30994 100644
--- a/README.md
+++ b/README.md
@@ -130,6 +130,7 @@ Creation of self-service node – is the first step for deploying DLab. SSN is a
 
 Elastic(Static) IP address is assigned to an SSN Node, so you are free to stop|start it and and SSN node's IP address won’t change.
 
+<<<<<<< HEAD
 ## Edge node
 
 Setting up Edge node is the first step that user is asked to do once logged into DLab. This node is used as proxy server and SSH gateway for the user. Through Edge node users can access Notebook via HTTP and SSH. Edge Node has a Squid HTTP web proxy pre-installed.
@@ -270,6 +271,9 @@ If you want to deploy DLab from inside of your AWS account, you can use the foll
 - Clone DLab repository and run deploy script.
 
 ## Structure of main DLab directory <a name="DLab_directory"></a>
+=======
+### Structure of main DLab directory <a name="DLab_directory"></a>
+>>>>>>> 84ff8aad0... README.md updated
 
 DLab’s SSN node main directory structure is as follows:
 
@@ -290,7 +294,7 @@ DLab’s SSN node main directory structure is as follows:
 -   webapp – contains all .jar files for DLab Web UI and back-end
     services.
 
-## Structure of log directory <a name="log_directory"></a>
+### Structure of log directory <a name="log_directory"></a>
 
 SSN node structure of log directory is as follows:
 
@@ -311,9 +315,36 @@ These directories contain the log files for each template and for DLab back-end
 -   selfservice.log – Self-Service log file;
 -   edge, notebook, dataengine, dataengine-service – contains logs of Python scripts.
 
+## Edge node
+
+Setting up Edge node is the first step that user is asked to do once logged into DLab. This node is used as proxy server and SSH gateway for the user. Through Edge node users can access Notebook via HTTP and SSH. Edge Node has a Squid HTTP web proxy pre-installed.
+
+## Notebook node
+
+The next step is setting up a Notebook node (or a Notebook server). It is a server with pre-installed applications and libraries for data processing, data cleaning and transformations, numerical simulations, statistical modeling, machine learning, etc. Following analytical tools are currently supported in DLab and can be installed on a Notebook node:
+
+-   Jupyter
+-   RStudio
+-   Apache Zeppelin
+-   TensorFlow + Jupyter
+-   Deep Learning + Jupyter
+
+Apache Spark is also installed for each of the analytical tools above.
+
+**Note:** terms 'Apache Zeppelin' and 'Apache Spark' hereinafter may be referred to as 'Zeppelin' and 'Spark' respectively or may have original reference.
+
+## Data engine cluster
+
+After deploying Notebook node, user can create one of the cluster for it:
+-   Data engine - Spark standalone cluster
+-   Data engine service - cloud managed cluster platform (EMR for AWS or Dataproc for GCP)
+That simplifies running big data frameworks, such as Apache Hadoop and Apache Spark to process and analyze vast amounts of data. Adding cluster is not mandatory and is only needed in case additional computational resources are required for job execution.
+----------------------
+# DLab Deployment <a name="DLab_Deployment"></a>
+
 ## Self-Service Node <a name="Self_Service_Node"></a>
 
-### Create
+### Preparing environment for DLab deployment <a name="Env_for_DLab"></a>
 
 Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
@@ -386,18 +417,78 @@ Prerequisites:
 }
 ```
 
+<<<<<<< HEAD
 >>>>>>> eb92433f3... README.md edited
+=======
+Preparation steps for deployment:
+
+- Create an EC2 instance with the following settings:
+    - The instance should have access to Internet in order to install required prerequisites
+    - The instance should have access to further DLab installation
+    - AMI - Ubuntu 16.04
+    - IAM role with [policy](#AWS_SSN_policy) should be assigned to the instance
+- Put SSH key file created through Amazon Console on the instance
+- Install Git and clone DLab repository
+
+#### In Azure cloud
+
+Prerequisites:
+
+- IAM user with Contributor permissions.
+- Service principal and JSON based auth file with clientId, clientSecret and tenantId.
+
+**Note:** The following permissions should be assigned to the service principal:
+
+- Windows Azure Active Directory
+- Microsoft Graph
+- Windows Azure Service Management API
+
+#### In Google cloud (GCP)
+
+Prerequisites:
+
+- Service account and JSON auth file for it. In order to get JSON auth file, Key should be created for service account through Google cloud console.
+- Google Cloud Storage JSON API should be enabled
+
+Preparation steps for deployment:
+
+- Create an VM instance with the following settings:
+    - The instance should have access to Internet in order to install required prerequisites
+    - Boot disk OS Image - Ubuntu 16.04
+- Generate SSH key pair and rename private key with .pem extension
+- Put JSON auth file created through Google cloud console to users home directory
+- Install Git and clone DLab repository
+
+### Executing deployment script
+
+>>>>>>> 84ff8aad0... README.md updated
 To build SSN node, following steps should be executed:
 
-1.  Clone Git repository and make sure that all following [pre-requisites](#Pre-requisites) are installed.
-2.  Go to *dlab* directory.
-3.  Execute following script:
+- Connect to the instance via SSH and run the following commands:
+
 ```
-/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab_test --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXXXX --aws_region us-west-2 --conf_os_family debian --conf_cloud_provider aws --aws_vpc_id vpc-xxxxx --aws_subnet_id subnet-xxxxx --aws_security_groups_ids sg-xxxxx,sg-xxxx --key_path /root/ --conf_key_name Test --conf_tag_resource_id dlab --aws_account_id xxxxxxxx --aws_billing_bucket billing_bucket --aws_report_path /billing/direct [...]
+sudo su
+apt-get update
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
+add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
+apt-get update
+apt-cache policy docker-ce
+apt-get install -y docker-ce=17.06.2~ce-0~ubuntu
+usermod -a -G docker *username*
+apt-get install python-pip
+pip install fabric==1.14.0
 ```
+- Go to *dlab* directory
+- Run deployment script:
 
 This python script will build front-end and back-end part of DLab, create SSN docker image and run Docker container for creating SSN node.
 
+#### In Amazon cloud
+
+```
+/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name XXXXXX --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXXXX --aws_region xx-xxxxx-x --conf_os_family debian --conf_cloud_provider aws --aws_vpc_id vpc-xxxxx --aws_subnet_id subnet-xxxxx --aws_security_groups_ids sg-xxxxx,sg-xxxx --key_path /path/to/key/ --conf_key_name key_name --conf_tag_resource_id dlab --aws_account_id xxxxxxxx --aws_billing_bucket billing_bucket --aws_report_path /billi [...]
+```
+
 List of parameters for SSN node deployment:
 
 | Parameter                 | Description/Value                                                                       |
@@ -444,6 +535,7 @@ After SSN node deployment following AWS resources will be created:
 #### In Azure cloud
 
 <<<<<<< HEAD
+<<<<<<< HEAD
 =======
 Prerequisites:
 
@@ -464,12 +556,12 @@ To build SSN node, following steps should be executed:
 3.  To have working billing functionality please review Billing configuration note and use proper parameters for SSN node deployment
 4.  To use Data Lake Store please review Azure Data Lake usage pre-requisites note and use proper parameters for SSN node deployment
 5.  Execute following deploy_dlab.py script:
+=======
+>>>>>>> 84ff8aad0... README.md updated
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab_test --azure_region westus2 --conf_os_family debian --conf_cloud_provider azure --azure_vpc_name vpc-test --azure_subnet_name subnet-test --azure_security_group_name sg-test1,sg-test2 --key_path /root/ --conf_key_name Test --azure_auth_path /dir/file.json  --action create
 ```
 
-This python script will build front-end and back-end part of DLab, create SSN docker image and run Docker container for creating SSN node.
-
 List of parameters for SSN node deployment:
 
 | Parameter                         | Description/Value                                                                       |
@@ -511,6 +603,9 @@ To know azure\_offer\_number open [Azure Portal](https://portal.azure.com), go t
 Please see [RateCard API](https://msdn.microsoft.com/en-us/library/mt219004.aspx) to get more details about azure\_offer\_number,
 azure\_currency, azure\_locale, azure\_region_info. These DLab deploy properties correspond to RateCard API request parameters.
 
+To have working billing functionality please review Billing configuration note and use proper parameters for SSN node deployment
+To use Data Lake Store please review Azure Data Lake usage pre-requisites note and use proper parameters for SSN node deployment
+
 **Note:** Azure Data Lake usage pre-requisites:
 
 1. Configure application in Azure portal and grant proper permissions to it.
@@ -536,6 +631,7 @@ After SSN node deployment following Azure resources will be created:
 #### In Google cloud (GCP)
 
 <<<<<<< HEAD
+<<<<<<< HEAD
 =======
 Prerequisites:
 
@@ -548,12 +644,12 @@ To build SSN node, following steps should be executed:
 1.  Clone Git repository and make sure that all following [pre-requisites](#Pre-requisites) are installed.
 2.  Go to *dlab* directory.
 3.  Execute following script:
+=======
+>>>>>>> 84ff8aad0... README.md updated
 ```
-/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab --gcp_region us-west1 --gcp_zone us-west1-a --conf_os_family debian --conf_cloud_provider gcp --key_path /key/path/ --conf_key_name key_name --gcp_ssn_instance_size n1-standard-1 --gcp_project_id project_id --gcp_service_account_path /path/to/auth/file.json --action create
+/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --gcp_region xx-xxxxx --gcp_zone xxx-xxxxx-x --conf_os_family debian --conf_cloud_provider gcp --key_path /path/to/key/ --conf_key_name key_name --gcp_ssn_instance_size n1-standard-1 --gcp_project_id project_id --gcp_service_account_path /path/to/auth/file.json --action create
 ```
 
-This python script will build front-end and back-end part of DLab, create SSN docker image and run Docker container for creating SSN node.
-
 List of parameters for SSN node deployment:
 
 | Parameter                    | Description/Value                                                                       |
@@ -586,14 +682,14 @@ After SSN node deployment following GCP resources will be created:
 -   Bucket – its name will be \<service\_base\_name\>-ssn-bucket. This bucket will contain necessary dependencies and configuration files for Notebook nodes (such as .jar files, YARN configuration, etc.)
 -   Bucket for for collaboration between Dlab users. Its name will be \<service\_base\_name\>-shared-bucket
 
-### Terminate
+### Terminating Self-Service Node
 
 Terminating SSN node will also remove all nodes and components related to it. Basically, terminating Self-service node will terminate all DLab’s infrastructure.
 Example of command for terminating DLab environment:
 
 #### In Amazon
 ```
-/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXX --aws_region us-west-2 --key_path /root/ --conf_key_name Test --conf_os_family debian --conf_cloud_provider aws --action terminate
+/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXX --aws_region xx-xxxxx-x --key_path /path/to/key/ --conf_key_name key_name --conf_os_family debian --conf_cloud_provider aws --action terminate
 ```
 List of parameters for SSN node termination:
 
@@ -630,7 +726,7 @@ List of parameters for SSN node termination:
 
 #### In Google cloud
 ```
-/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --gcp_project_id project_id --conf_service_base_name dlab --gcp_region us-west1 --gcp_zone us-west1-a --key_path /root/ --conf_key_name key_name --conf_os_family debian --conf_cloud_provider gcp --gcp_service_account_path /path/to/auth/file.json --action terminate
+/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --gcp_project_id project_id --conf_service_base_name dlab-test --gcp_region xx-xxxxx --gcp_zone xx-xxxxx-x --key_path /path/to/key/ --conf_key_name key_name --conf_os_family debian --conf_cloud_provider gcp --gcp_service_account_path /path/to/auth/file.json --action terminate
 ```
 List of parameters for SSN node termination:
 


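The hunks above replace the hard-coded sample values in the deploy_dlab.py command lines (region, key path, key name) with neutral placeholders. A minimal sketch of assembling such an invocation from named parameters — a hypothetical wrapper, not part of the DLab scripts; the parameter names come from the README's AWS example and every value is a placeholder:

```python
import shlex

# Parameter names are taken from the README's AWS deployment example; all
# values below are placeholders and must be replaced with real settings.
params = {
    "conf_service_base_name": "dlab-test",
    "aws_access_key": "XXXXXXX",
    "aws_secret_access_key": "XXXXXXXXXX",
    "aws_region": "xx-xxxxx-x",
    "conf_os_family": "debian",
    "conf_cloud_provider": "aws",
    "aws_vpc_id": "vpc-xxxxx",
    "aws_subnet_id": "subnet-xxxxx",
    "aws_security_groups_ids": "sg-xxxxx,sg-xxxx",
    "key_path": "/path/to/key/",
    "conf_key_name": "key_name",
    "conf_tag_resource_id": "dlab",
    "action": "create",
}

cmd = ["/usr/bin/python", "infrastructure-provisioning/scripts/deploy_dlab.py"]
for name, value in params.items():
    cmd += ["--" + name, value]

# Print for review; pass cmd to subprocess.run(cmd, check=True) to execute it.
print(" ".join(shlex.quote(part) for part in cmd))
```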


[incubator-dlab] 01/23: README.md edited

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 40ae3ab80302f7f688ec8c8d9613190af6b1c18a
Author: Mykola Bodnar1 <my...@epam.com>
AuthorDate: Tue Jun 4 18:25:15 2019 +0300

    README.md edited
---
 README.md | 183 ++++++++++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 136 insertions(+), 47 deletions(-)

diff --git a/README.md b/README.md
index 88d49de..d4f879f 100644
--- a/README.md
+++ b/README.md
@@ -250,7 +250,7 @@ If you want to deploy DLab from inside of your AWS account, you can use the foll
 
 - Create an EC2 instance with the following settings:
     - Shape of the instance shouldn't be less than t2.medium
-    - The instance should have access to Internet in order to install required prerequisites 
+    - The instance should have access to Internet in order to install required prerequisites
     - The instance should have access to further DLab installation
     - AMI - Ubuntu 16.04
     - IAM role with [policy](#AWS_SSN_policy) should be assigned to the instance
@@ -264,6 +264,7 @@ If you want to deploy DLab from inside of your AWS account, you can use the foll
     apt-cache policy docker-ce
     apt-get install -y docker-ce=17.06.2~ce-0~ubuntu
     usermod -a -G docker ubuntu
+    apt-get install python-pip
     pip install fabric==1.14.0
 ```
 - Clone DLab repository and run deploy script.
@@ -314,11 +315,78 @@ These directories contain the log files for each template and for DLab back-end
 
 ### Create
 
-Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud. 
+Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
 
 #### In Amazon cloud
 
+<<<<<<< HEAD
+=======
+Prerequisites:
+
+ - SSH key for EC2 instances. This key could be created through Amazon Console.
+ - IAM user
+ - AWS access key ID and secret access key
+ - The following permissions should be assigned for IAM user:
+ <a name="AWS_SSN_policy"></a>
+```
+{
+	"Version": "2012-10-17",
+	"Statement": [
+		{
+			"Action": [
+				"iam:ListRoles",
+				"iam:CreateRole",
+				"iam:CreateInstanceProfile",
+				"iam:PutRolePolicy",
+				"iam:AddRoleToInstanceProfile",
+				"iam:PassRole",
+				"iam:GetInstanceProfile",
+				"iam:ListInstanceProfilesForRole",
+				"iam:RemoveRoleFromInstanceProfile",
+				"iam:DeleteInstanceProfile"
+			],
+			"Effect": "Allow",
+			"Resource": "*"
+		},
+		{
+			"Action": [
+				"ec2:DescribeImages",
+				"ec2:CreateTags",
+				"ec2:DescribeRouteTables",
+				"ec2:CreateRouteTable",
+				"ec2:AssociateRouteTable",
+				"ec2:DescribeVpcEndpoints",
+				"ec2:CreateVpcEndpoint",
+				"ec2:ModifyVpcEndpoint",
+				"ec2:DescribeInstances",
+				"ec2:RunInstances",
+				"ec2:DescribeAddresses",
+				"ec2:AllocateAddress",
+				"ec2:DescribeInstances",
+				"ec2:AssociateAddress",
+				"ec2:DisassociateAddress",
+				"ec2:ReleaseAddress",
+				"ec2:TerminateInstances"
+			],
+			"Effect": "Allow",
+			"Resource": "*"
+		},
+		{
+			"Action": [
+				"s3:ListAllMyBuckets",
+				"s3:CreateBucket",
+				"s3:PutBucketTagging",
+				"s3:GetBucketTagging"
+			],
+			"Effect": "Allow",
+			"Resource": "*"
+		}
+	]
+}
+```
+
+>>>>>>> eb92433f3... README.md edited
 To build SSN node, following steps should be executed:
 
 1.  Clone Git repository and make sure that all following [pre-requisites](#Pre-requisites) are installed.
@@ -339,7 +407,7 @@ List of parameters for SSN node deployment:
 | aws\_secret\_access\_key  | AWS user secret access key                                                              |
 | aws\_region               | AWS region                                                                              |
 | conf\_os\_family          | Name of the Linux distributive family, which is supported by DLab (Debian/RedHat)       |
-| conf\_cloud\_provider     | Name of the cloud provider, which is supported by DLab (AWS) 
+| conf\_cloud\_provider     | Name of the cloud provider, which is supported by DLab (AWS)
 | conf\_duo\_vpc\_enable    | "true" - for installing DLab into two Virtual Private Clouds (VPCs) or "false" - for installing DLab into one VPC. Also this parameter isn't required when deploy DLab in one VPC
 | aws\_vpc\_id              | ID of the VPC                                                   |
 | aws\_subnet\_id           | ID of the public subnet                                                                 |
@@ -375,6 +443,20 @@ After SSN node deployment following AWS resources will be created:
 
 #### In Azure cloud
 
+<<<<<<< HEAD
+=======
+Prerequisites:
+
+- IAM user with Contributor permissions.
+- Service principal and JSON based auth file with clientId, clientSecret and tenantId.
+
+**Note:** The following permissions should be assigned to the service principal:
+
+- Windows Azure Active Directory
+- Microsoft Graph
+- Windows Azure Service Management API
+
+>>>>>>> eb92433f3... README.md edited
 To build SSN node, following steps should be executed:
 
 1.  Clone Git repository and make sure that all following [pre-requisites](#Pre-requisites) are installed
@@ -435,9 +517,9 @@ azure\_currency, azure\_locale, azure\_region_info. These DLab deploy properties
 - Open *Azure Active Directory* tab, then *App registrations* and click *New application registration*
 - Fill in ui form with the following parameters *Name* - put name of the new application, *Application type* - select Native, *Sign-on URL* put any valid url as it will be updated later
 - Grant proper permissions to the application. Select the application you just created on *App registration* view, then click *Required permissions*, then *Add->Select an API-> In search field type MicrosoftAzureQueryService* and press *Select*, then check the box *Have full access to the Azure Data Lake service* and save the changes. Repeat the same actions for *Windows Azure Active Directory* API (available on *Required permissions->Add->Select an API*) and the box *Sign in and read us [...]
-- Get *Application ID* from application properties  it will be used as azure_application_id for deploy_dlap.py script 
+- Get *Application ID* from application properties  it will be used as azure_application_id for deploy_dlap.py script
 2. Usage of Data Lake resource predicts shared folder where all users can write or read any data. To manage access to this folder please create ot use existing group in Active Directory. All users from this group will have RW access to the shared folder. Put ID(in Active Directory) of the group as *azure_ad_group_id* parameter to deploy_dlab.py script
-3. After execution of deploy_dlab.py script go to the application created in step 1 and change *Redirect URIs* value to the https://SSN_HOSTNAME/ where SSN_HOSTNAME - SSN node hostname 
+3. After execution of deploy_dlab.py script go to the application created in step 1 and change *Redirect URIs* value to the https://SSN_HOSTNAME/ where SSN_HOSTNAME - SSN node hostname
 
 After SSN node deployment following Azure resources will be created:
 
@@ -449,10 +531,18 @@ After SSN node deployment following Azure resources will be created:
 -   Virtual network and Subnet (if they have not been specified) for SSN and EDGE nodes
 -   Storage account and blob container for necessary further dependencies and configuration files for Notebook nodes (such as .jar files, YARN configuration, etc.)
 -   Storage account and blob container for collaboration between Dlab users
--   If support of Data Lake is enabled: Data Lake and shared directory will be created 
+-   If support of Data Lake is enabled: Data Lake and shared directory will be created
 
 #### In Google cloud (GCP)
 
+<<<<<<< HEAD
+=======
+Prerequisites:
+
+- IAM user
+- Service account and JSON auth file for it. In order to get JSON auth file, Key should be created for service account through Google cloud console.
+
+>>>>>>> eb92433f3... README.md edited
 To build SSN node, following steps should be executed:
 
 1.  Clone Git repository and make sure that all following [pre-requisites](#Pre-requisites) are installed.
@@ -608,7 +698,7 @@ The following Azure resources will be created:
 -   Security Groups for all further user's master nodes of data engine cluster
 -   Security Groups for all further user's slave nodes of data engine cluster
 -   User's private subnet. All further nodes (Notebooks, data engine clusters) will be provisioned in different subnet than SSN.
--   User's storage account and blob container 
+-   User's storage account and blob container
 
 List of parameters for Edge node creation:
 
@@ -635,7 +725,7 @@ The following GCP resources will be created:
 -   Security Groups for all further user's master nodes of data engine cluster
 -   Security Groups for all further user's slave nodes of data engine cluster
 -   User's private subnet. All further nodes (Notebooks, data engine clusters) will be provisioned in different subnet than SSN.
--   User's bucket 
+-   User's bucket
 
 List of parameters for Edge node creation:
 
@@ -993,7 +1083,7 @@ List of parameters for Notebook node to **get list** of available libraries:
   "pip2": {"requests": "N/A", "configparser": "N/A"},
   "pip3": {"configparser": "N/A"},
   "r_pkg": {"rmarkdown": "1.5"},
-  "others": {"Keras": "N/A"} 
+  "others": {"Keras": "N/A"}
 }
 ```
 
@@ -1269,7 +1359,7 @@ List of parameters for Dataengine-service node to **get list** of available libr
   "pip2": {"requests": "N/A", "configparser": "N/A"},
   "pip3": {"configparser": "N/A"},
   "r_pkg": {"rmarkdown": "1.5"},
-  "others": {"Keras": "N/A"} 
+  "others": {"Keras": "N/A"}
 }
 ```
 
@@ -1475,7 +1565,7 @@ List of parameters for Dataengine node to **get list** of available libraries:
   "pip2": {"requests": "N/A", "configparser": "N/A"},
   "pip3": {"configparser": "N/A"},
   "r_pkg": {"rmarkdown": "1.5"},
-  "others": {"Keras": "N/A"} 
+  "others": {"Keras": "N/A"}
 }
 ```
 
@@ -1708,7 +1798,7 @@ To deploy Gitlab server, set all needed parameters in ```gitlab.ini``` and run s
 
 **Note:** Terminate process uses ```node_name``` to find instance.
 
-**Note:** GitLab wouldn't be terminated with all environment termination process. 
+**Note:** GitLab wouldn't be terminated with all environment termination process.
 
 ## Troubleshooting <a name="Troubleshooting"></a>
 
@@ -1843,12 +1933,12 @@ Some class names may have endings like Aws or Azure(e.g. ComputationalResourceAw
 
 #### Security service
 
-Security service is REST based service for user authentication against LDAP/LDAP + AWS/Azure OAuth2 depending on module configuration and cloud provider. 
-LDAP only provides with authentication end point that allows to verify authenticity of users against LDAP instance. 
+Security service is REST based service for user authentication against LDAP/LDAP + AWS/Azure OAuth2 depending on module configuration and cloud provider.
+LDAP only provides with authentication end point that allows to verify authenticity of users against LDAP instance.
 If you use AWS cloud provider LDAP + AWS authentication could be useful as it allows to combine LDAP authentication and verification if user has any role in AWS account
 
 DLab provides OAuth2(client credentials and authorization code flow) security authorization mechanism for Azure users. This kind of authentication is required when you are going to use Data Lake. If Data Lake is not enabled you have two options LDAP or OAuth2
-If OAuth2 is in use security-service validates user's permissions to configured permission scope(resource in Azure). 
+If OAuth2 is in use security-service validates user's permissions to configured permission scope(resource in Azure).
 If Data Lake is enabled default permission scope(can be configured manually after deploy DLab) is Data Lake Store account so only if user has any role in scope of Data Lake Store Account resource he/she will be allowed to log in
 If Data Lake is disabled but Azure OAuth2 is in use default permission scope will be Resource Group where DLab is created and only users who have any roles in the resource group will be allowed to log in.
 
@@ -1867,7 +1957,7 @@ Sources are located in dlab/services/self-service/src/main/resources/webapp
 | Home page (list of resources) | HomeComponent<br>nested several main components like ResourcesGrid for notebooks data rendering and filtering, using custom MultiSelectDropdown component;<br>multiple modal dialogs components used for new instances creation, displaying detailed info and actions confirmation. |
 | Health Status page            | HealthStatusComponent<br>*HealthStatusGridComponent* displays list of instances, their types, statutes, ID’s and uses *healthStatusService* for handling main actions. |
 | Help pages                    | Static pages that contains information and instructions on how to access Notebook Server and generate SSH key pair. Includes only *NavbarComponent*. |
-| Error page                    | Simple static page letting users know that opened page does not exist. Includes only *NavbarComponent*. | 
+| Error page                    | Simple static page letting users know that opened page does not exist. Includes only *NavbarComponent*. |
 | Reporting page                | ReportingComponent<br>ReportingGridComponent displays billing detailed info with built-in filtering and DateRangePicker component for custom range filtering;<br>uses *BillingReportService* for handling main actions and exports report data to .csv file. |
 
 ## How to setup local development environment <a name="setup_local_environment"></a>
@@ -1931,7 +2021,7 @@ mongo:
 *Unix*
 
 ```
-ln -s ../../infrastructure-provisioning/src/ssn/templates/ssn.yml ssn.yml 
+ln -s ../../infrastructure-provisioning/src/ssn/templates/ssn.yml ssn.yml
 ```
 
 *Windows*
@@ -1978,7 +2068,7 @@ export * from './(aws|azure).dictionary';
 npm run build.prod
 ```
 
-### Prepare HTTPS prerequisites 
+### Prepare HTTPS prerequisites
 
 To enable a SSL connection the web server should have a Digital Certificate.
 To create a server certificate, follow these steps:
@@ -1993,7 +2083,7 @@ To create a server certificate, follow these steps:
 
 Please find below set of commands to create certificate, depending on OS.
 
-#### Create Unix/Ubuntu server certificate 
+#### Create Unix/Ubuntu server certificate
 
 Pay attention that the last command has to be executed with administrative permissions.
 ```
@@ -2387,7 +2477,7 @@ Also depending on customization, there might be differences in attributes config
 
 **CN** vs **UID**.
 
-The relation between users and groups also varies from vendor to vendor. 
+The relation between users and groups also varies from vendor to vendor.
 
 For example, in Open LDAP the group object may contain set (from 0 to many) attributes **"memberuid"** with values equal to user`s attribute **“uid”**.
 
@@ -2396,8 +2486,8 @@ On a group size there is attribute **"member"** (from 0 to many values) and its
 
 
 To fit the unified way of LDAP usage, we introduced configuration file with set of properties and customized scripts (python and JavaScript based).
-On backend side, all valuable attributes are further collected and passed to these scripts. 
-To apply some customization it is required to update a few properties in **security.yml** and customize the scripts. 
+On backend side, all valuable attributes are further collected and passed to these scripts.
+To apply some customization it is required to update a few properties in **security.yml** and customize the scripts.
 
 
 ### Properties overview
@@ -2417,14 +2507,14 @@ Additional parameters that are populated during deployment and may be changed in
 - **ldapConnectionConfig.ldapHost: ldap host**
 - **ldapConnectionConfig.ldapPort: ldap port**
 - **ldapConnectionConfig.credentials: ldap credentials**
- 
+
 ### Scripts overview
 
 There are 3 scripts in security.yml:
-- **userLookUp** (python based)    - responsible for user lookup in LDap and returns additional user`s attributes; 
+- **userLookUp** (python based)    - responsible for user lookup in LDap and returns additional user`s attributes;
 - **userInfo** (python based)      - enriches user with additional data;
 - **groupInfo** (javascript based) – responsible for mapping between users and groups;
- 
+
 ### Script structure
 
 The scripts above were created to flexibly manage user`s security configuration. They all are part of **security.yml** configuration. All scripts have following structure:
@@ -2439,14 +2529,14 @@ The scripts above were created to flexibly manage user`s security configuration.
     - **searchResultProcessor:**
       - **language**
       - **code**
-     
-Major properties are: 
+
+Major properties are:
 - **attributes**             - list of attributes that will be retrieved from LDAP (-name, -cn, -uid, -member, etc);
-- **filter**               - the filter, based on which the object will be retrieved from LDAP; 
+- **filter**               - the filter, based on which the object will be retrieved from LDAP;
 - **searchResultProcessor**    - optional. If only LDAP object attributes retrieving is required, this property should be empty. For example, “userLookup” script only retrieves list of "attributes". Otherwise, code customization (like user enrichment, user to groups matching, etc.) should be added into sub-properties below:
-  - **language**                - the script language - "python" or "JavaScript" 
+  - **language**                - the script language - "python" or "JavaScript"
   - **code**                    - the script code.
-     
+
 
 ### "userLookUp" script
 
@@ -2463,34 +2553,34 @@ Script code:
     expirationTimeMsec: 600000
     scope: SUBTREE
     attributes:
-      - cn 
+      - cn
       - gidNumber
       - mail
       - memberOf
     timeLimit: 0
     base: ou=users,ou=alxn,dc=alexion,dc=cloud
     filter: "(&(objectCategory=person)(objectClass=user)(mail=%mail%))"
-    
+
 In the example above, the user login passed from GUI is a mail (**ldapSearchAttribute: mail**) and based on the filer (**filter: "(&(objectCategory=person)(objectClass=user)(mail=%mail%))")** so, the service would search user by its **“mail”**.
 If corresponding users are found - the script will return additional user`s attributes:
   - cn
   - gidNumber
   - mail
   - memberOf
-   
+
 User`s authentication into LDAP would be done for DN with following template **ldapBindTemplate: 'cn=%s,ou=users,ou=alxn,dc=alexion,dc=cloud'**, where CN is attribute retrieved by  **“userLookUp”** script.
 
 ## Azure OAuth2 Authentication <a name="Azure_OAuth2_Authentication"></a>
-DLab supports OAuth2 authentication that is configured automatically in Security Service and Self Service after DLab deployment. 
-Please see explanation details about configuration parameters for Self Service and Security Service below. 
-DLab supports client credentials(username + password) and authorization code flow for authentication. 
+DLab supports OAuth2 authentication that is configured automatically in Security Service and Self Service after DLab deployment.
+Please see explanation details about configuration parameters for Self Service and Security Service below.
+DLab supports client credentials(username + password) and authorization code flow for authentication.
 
 
 ### Azure OAuth2 Self Service configuration
 
     azureLoginConfiguration:
         useLdap: false
-        tenant: xxxx-xxxx-xxxx-xxxx 
+        tenant: xxxx-xxxx-xxxx-xxxx
         authority: https://login.microsoftonline.com/
         clientId: xxxx-xxxx-xxxx-xxxx
         redirectUrl: https://dlab.azure.cloudapp.azure.com/
@@ -2499,7 +2589,7 @@ DLab supports client credentials(username + password) and authorization code flo
         silent: true
         loginPage: https://dlab.azure.cloudapp.azure.com/
         maxSessionDurabilityMilliseconds: 288000000
-        
+
 where:
 - **useLdap** - defines if LDAP authentication is enabled(true/false). If false Azure OAuth2 takes place with configuration properties below
 - **tenant** - tenant id of your company
@@ -2508,25 +2598,25 @@ where:
 - **redirectUrl** - redirect URL to DLab application after try to login to Azure using OAuth2
 - **responseMode** - defines how Azure sends authorization code or error information to DLab during log in procedure
 - **prompt** - defines kind of prompt during Oauth2 login
-- **silent** - defines if DLab tries to log in user without interaction(true/false), if false DLab tries to login user with configured prompt 
+- **silent** - defines if DLab tries to log in user without interaction(true/false), if false DLab tries to login user with configured prompt
 - **loginPage** - start page of DLab application
-- **maxSessionDurabilityMilliseconds** - max user session durability. user will be asked to login after this period of time and when he/she creates ot starts notebook/cluster. This operation is needed to update refresh_token that is used by notebooks to access Data Lake Store 
-        
+- **maxSessionDurabilityMilliseconds** - max user session durability. user will be asked to login after this period of time and when he/she creates ot starts notebook/cluster. This operation is needed to update refresh_token that is used by notebooks to access Data Lake Store
+
 To get more info about *responseMode*, *prompt* parameters please visit [Authorize access to web applications using OAuth 2.0 and Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-protocols-oauth-code)
-        
-   
+
+
 ### Azure OAuth2 Security Service configuration
 
     azureLoginConfiguration:
         useLdap: false
-        tenant: xxxx-xxxx-xxxx-xxxx 
+        tenant: xxxx-xxxx-xxxx-xxxx
         authority: https://login.microsoftonline.com/
         clientId: xxxx-xxxx-xxxx-xxxx
         redirectUrl: https://dlab.azure.cloudapp.azure.com/
-        validatePermissionScope: true 
+        validatePermissionScope: true
         permissionScope: subscriptions/xxxx-xxxx-xxxx-xxxx/resourceGroups/xxxx-xxxx/providers/Microsoft.DataLakeStore/accounts/xxxx/providers/Microsoft.Authorization/
         managementApiAuthFile: /dlab/keys/azure_authentication.json
-        
+
 where:
 - **useLdap** - defines if LDAP authentication is enabled(true/false). If false Azure OAuth2 takes place with configuration properties below
 - **tenant** - tenant id of your company
@@ -2536,4 +2626,3 @@ where:
 - **validatePermissionScope** - defines(true/false) if user's permissions should be validated to resource that is provided in permissionScope parameter. User will be logged in onlu in case he/she has any role in resource IAM described with permissionScope parameter
 - **permissionScope** - describes Azure resource where user should have any role to pass authentication. If user has no role in resource IAM he/she will not be logged in  
 - **managementApiAuthFile** - authentication file that is used to query Microsoft Graph API to check user roles in resource described in permissionScope  
-


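The security section in the hunk above describes the userLookUp script configured in security.yml: it searches LDAP under the configured base with the configured filter and returns the cn, gidNumber, mail and memberOf attributes; the returned cn then fills the ldapBindTemplate for the actual user bind. A standalone sketch of the same lookup, assuming the Python ldap3 package — host, port and bind credentials are placeholders; the base, filter and attribute list are the ones shown in the security.yml example:

```python
from ldap3 import Server, Connection, SUBTREE

server = Server("ldap.example.com", port=389)        # placeholder host/port
conn = Connection(
    server,
    user="cn=service,ou=users,ou=alxn,dc=alexion,dc=cloud",  # placeholder bind DN
    password="***",
    auto_bind=True,
)

mail = "user@example.com"                            # login passed from the GUI
conn.search(
    search_base="ou=users,ou=alxn,dc=alexion,dc=cloud",
    search_filter="(&(objectCategory=person)(objectClass=user)(mail=%s))" % mail,
    search_scope=SUBTREE,
    attributes=["cn", "gidNumber", "mail", "memberOf"],
)

for entry in conn.entries:
    # cn is what 'cn=%s,ou=users,ou=alxn,dc=alexion,dc=cloud' is filled with
    # when the user is finally authenticated against LDAP.
    print(entry.cn, entry.mail)
```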


[incubator-dlab] 09/23: collapsing test 2

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit dfe93cec14d3ee7a1fc45ac3d4d770dc0944fe75
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 17:15:05 2019 +0300

    collapsing test 2
---
 README.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index 8687300..d51dc33 100644
--- a/README.md
+++ b/README.md
@@ -338,7 +338,7 @@ Prerequisites:
 - Microsoft Graph
 - Windows Azure Service Management API</details>
 
-<details><summary>In Google cloud (GCP)
+<details><summary>In Google cloud (GCP)</summary>
 
 Prerequisites:
 
@@ -352,7 +352,7 @@ Preparation steps for deployment:
     - Boot disk OS Image - Ubuntu 16.04
 - Generate SSH key pair and rename private key with .pem extension
 - Put JSON auth file created through Google cloud console to users home directory
-- Install Git and clone DLab repository</summary>
+- Install Git and clone DLab repository</details>
 
 ### Executing deployment script
 
@@ -377,7 +377,7 @@ pip install fabric==1.14.0
 
 This python script will build front-end and back-end part of DLab, create SSN docker image and run Docker container for creating SSN node.
 
-#### In Amazon cloud
+<details><summary>In Amazon cloud</summary>
 
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXXXX --aws_region xx-xxxxx-x --conf_os_family debian --conf_cloud_provider aws --aws_vpc_id vpc-xxxxx --aws_subnet_id subnet-xxxxx --aws_security_groups_ids sg-xxxxx,sg-xxxx --key_path /path/to/key/ --conf_key_name key_name --conf_tag_resource_id dlab --aws_account_id xxxxxxxx --aws_billing_bucket billing_bucket --aws_report_path /bi [...]
@@ -424,9 +424,9 @@ After SSN node deployment following AWS resources will be created:
 -   Security Group for SSN node (if it was specified, script will attach the provided one)
 -   VPC, Subnet (if they have not been specified) for SSN and EDGE nodes
 -   S3 bucket – its name will be \<service\_base\_name\>-ssn-bucket. This bucket will contain necessary dependencies and configuration files for Notebook nodes (such as .jar files, YARN configuration, etc.)
--   S3 bucket for for collaboration between Dlab users. Its name will be \<service\_base\_name\>-shared-bucket
+-   S3 bucket for for collaboration between Dlab users. Its name will be \<service\_base\_name\>-shared-bucket</details>
 
-#### In Azure cloud
+<details><summary>In Azure cloud</summary>
 
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab_test --azure_region westus2 --conf_os_family debian --conf_cloud_provider azure --azure_vpc_name vpc-test --azure_subnet_name subnet-test --azure_security_group_name sg-test1,sg-test2 --key_path /root/ --conf_key_name Test --azure_auth_path /dir/file.json  --action create
@@ -497,9 +497,9 @@ After SSN node deployment following Azure resources will be created:
 -   Virtual network and Subnet (if they have not been specified) for SSN and EDGE nodes
 -   Storage account and blob container for necessary further dependencies and configuration files for Notebook nodes (such as .jar files, YARN configuration, etc.)
 -   Storage account and blob container for collaboration between Dlab users
--   If support of Data Lake is enabled: Data Lake and shared directory will be created
+-   If support of Data Lake is enabled: Data Lake and shared directory will be created</details>
 
-#### In Google cloud (GCP)
+<details><summary>In Google cloud (GCP)</summary>
 
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --gcp_region xx-xxxxx --gcp_zone xxx-xxxxx-x --conf_os_family debian --conf_cloud_provider gcp --key_path /path/to/key/ --conf_key_name key_name --gcp_ssn_instance_size n1-standard-1 --gcp_project_id project_id --gcp_service_account_path /path/to/auth/file.json --action create
@@ -535,7 +535,7 @@ After SSN node deployment following GCP resources will be created:
 -   Security Groups for SSN node (if it was specified, script will attach the provided one)
 -   VPC, Subnet (if they have not been specified) for SSN and EDGE nodes
 -   Bucket – its name will be \<service\_base\_name\>-ssn-bucket. This bucket will contain necessary dependencies and configuration files for Notebook nodes (such as .jar files, YARN configuration, etc.)
--   Bucket for for collaboration between Dlab users. Its name will be \<service\_base\_name\>-shared-bucket
+-   Bucket for for collaboration between Dlab users. Its name will be \<service\_base\_name\>-shared-bucket</details>
 
 ### Terminating Self-Service Node
 


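The GCP prerequisites in the hunk above rely on a service-account key in JSON form, later passed to deploy_dlab.py as --gcp_service_account_path. A small pre-flight check one might run before deployment — a hypothetical helper, not part of DLab; the field names are the standard ones found in a Google service-account key file:

```python
import json
import sys

path = "/path/to/auth/file.json"     # placeholder, as in the README example

with open(path) as f:
    key = json.load(f)

# Standard fields of a Google service-account key file.
for field in ("type", "project_id", "private_key", "client_email"):
    if field not in key:
        sys.exit("%s is missing '%s' - not a service-account key?" % (path, field))

if key["type"] != "service_account":
    sys.exit("unexpected key type: %s" % key["type"])

print("OK: key for %s in project %s" % (key["client_email"], key["project_id"]))
```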


[incubator-dlab] 11/23: collapsing test 3

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit e6f4f2c4cac7e2850eb17f3cf6db2a2363e666de
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 17:19:23 2019 +0300

    collapsing test 3
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index ee75a59..922ecb5 100644
--- a/README.md
+++ b/README.md
@@ -560,6 +560,7 @@ List of parameters for SSN node termination:
 | conf\_cloud\_provider      | Name of the cloud provider, which is supported by DLab (AWS)                       |
 | action                     | terminate                                                                          |
 </details>
+
 <details><summary>In Azure</summary>
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --azure_vpc_name vpc-test --azure_resource_group_name resource-group-test --azure_region westus2 --key_path /root/ --conf_key_name Test --conf_os_family debian --conf_cloud_provider azure --azure_auth_path /dir/file.json --action terminate
@@ -598,6 +599,7 @@ List of parameters for SSN node termination:
 | gcp\_project\_id             | ID of GCP project                                                                       |
 | action                       | In case of SSN node termination, this parameter should be set to “terminate”            |
 </details>
+
 ## Edge Node <a name="Edge_Node"></a>
 
 Gateway node (or an Edge node) is an instance(virtual machine) provisioned in a public subnet. It serves as an entry point for accessing user’s personal analytical environment. It is created by an end-user, whose public key will be uploaded there. Only via Edge node, DLab user can access such application resources as notebook servers and dataengine clusters. Also, Edge Node is used to setup SOCKS proxy to access notebook servers via Web UI and SSH. Elastic(Static) IP address is assigned  [...]




[incubator-dlab] 15/23: collapsing test 7

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 268eaeba24be509cb7a1e0e262018c08a88868b0
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 18:01:57 2019 +0300

    collapsing test 7
---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index a3449cd..fc48af2 100644
--- a/README.md
+++ b/README.md
@@ -206,7 +206,7 @@ These directories contain the log files for each template and for DLab back-end
 Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
 
-<details><summary>In Amazon cloud <i>(click to expand)<i></summary>
+<details><summary>In Amazon cloud <i>(click to expand)</i></summary>
 
 Prerequisites:
 
@@ -325,7 +325,7 @@ Preparation steps for deployment:
 - Put SSH key file created through Amazon Console on the instance with the same name
 - Install Git and clone DLab repository</details>
 
-<details><summary>In Azure cloud</summary>
+<details><summary>In Azure cloud <i>(click to expand)</i></summary>
 
 Prerequisites:
 
@@ -338,7 +338,7 @@ Prerequisites:
 - Microsoft Graph
 - Windows Azure Service Management API</details>
 
-<details><summary>In Google cloud (GCP)</summary>
+<details><summary>In Google cloud (GCP) <i>(click to expand)</i></summary>
 
 Prerequisites:
 




[incubator-dlab] 04/23: README.md updated

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 75070be7e7eb3ef4e8db0aa8345e922e6aa1e127
Author: Mykola Bodnar1 <my...@epam.com>
AuthorDate: Wed Jun 5 13:37:56 2019 +0300

    README.md updated
---
 README.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index fa4d4d5..2facf9c 100644
--- a/README.md
+++ b/README.md
@@ -496,7 +496,7 @@ This python script will build front-end and back-end part of DLab, create SSN do
 #### In Amazon cloud
 
 ```
-/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name XXXXXX --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXXXX --aws_region xx-xxxxx-x --conf_os_family debian --conf_cloud_provider aws --aws_vpc_id vpc-xxxxx --aws_subnet_id subnet-xxxxx --aws_security_groups_ids sg-xxxxx,sg-xxxx --key_path /path/to/key/ --conf_key_name key_name --conf_tag_resource_id dlab --aws_account_id xxxxxxxx --aws_billing_bucket billing_bucket --aws_report_path /billi [...]
+/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXXXX --aws_region xx-xxxxx-x --conf_os_family debian --conf_cloud_provider aws --aws_vpc_id vpc-xxxxx --aws_subnet_id subnet-xxxxx --aws_security_groups_ids sg-xxxxx,sg-xxxx --key_path /path/to/key/ --conf_key_name key_name --conf_tag_resource_id dlab --aws_account_id xxxxxxxx --aws_billing_bucket billing_bucket --aws_report_path /bi [...]
 ```
 
 List of parameters for SSN node deployment:
@@ -616,8 +616,9 @@ To know azure\_offer\_number open [Azure Portal](https://portal.azure.com), go t
 Please see [RateCard API](https://msdn.microsoft.com/en-us/library/mt219004.aspx) to get more details about azure\_offer\_number,
 azure\_currency, azure\_locale, azure\_region_info. These DLab deploy properties correspond to RateCard API request parameters.
 
-To have working billing functionality please review Billing configuration note and use proper parameters for SSN node deployment
-To use Data Lake Store please review Azure Data Lake usage pre-requisites note and use proper parameters for SSN node deployment
+To have working billing functionality please review Billing configuration note and use proper parameters for SSN node deployment.
+
+To use Data Lake Store please review Azure Data Lake usage pre-requisites note and use proper parameters for SSN node deployment.
 
 **Note:** Azure Data Lake usage pre-requisites:
 




[incubator-dlab] 19/23: AWS/Azure/GCP sections expanding added

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit b618185fcd550efaa8366886ba9aa1a4a7af2f7f
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Wed Jul 3 14:47:51 2019 +0300

    AWS/Azure/GCP sections expanding added
---
 README.md | 154 +++++++++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 97 insertions(+), 57 deletions(-)

diff --git a/README.md b/README.md
index cec502f..6df8314 100644
--- a/README.md
+++ b/README.md
@@ -325,7 +325,7 @@ Preparation steps for deployment:
 - Put SSH key file created through Amazon Console on the instance with the same name
 - Install Git and clone DLab repository</details>
 
-<details><summary>In Azure cloud</i></summary>
+<details><summary>In Azure cloud <i>(click to expand)</i></summary>
 
 Prerequisites:
 
@@ -338,7 +338,7 @@ Prerequisites:
 - Microsoft Graph
 - Windows Azure Service Management API</details>
 
-<details><summary>In Google cloud (GCP)</i></summary>
+<details><summary>In Google cloud (GCP) <i>(click to expand)</i></summary>
 
 Prerequisites:
 
@@ -377,7 +377,7 @@ pip install fabric==1.14.0
 
 This python script will build front-end and back-end part of DLab, create SSN docker image and run Docker container for creating SSN node.
 
-<details><summary>In Amazon cloud</summary>
+<details><summary>In Amazon cloud <i>(click to expand)</i></summary>
 
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXXXX --aws_region xx-xxxxx-x --conf_os_family debian --conf_cloud_provider aws --aws_vpc_id vpc-xxxxx --aws_subnet_id subnet-xxxxx --aws_security_groups_ids sg-xxxxx,sg-xxxx --key_path /path/to/key/ --conf_key_name key_name --conf_tag_resource_id dlab --aws_account_id xxxxxxxx --aws_billing_bucket billing_bucket --aws_report_path /bi [...]
@@ -426,7 +426,7 @@ After SSN node deployment following AWS resources will be created:
 -   S3 bucket – its name will be \<service\_base\_name\>-ssn-bucket. This bucket will contain necessary dependencies and configuration files for Notebook nodes (such as .jar files, YARN configuration, etc.)
 -   S3 bucket for for collaboration between Dlab users. Its name will be \<service\_base\_name\>-shared-bucket</details>
 
-<details><summary>In Azure cloud</summary>
+<details><summary>In Azure cloud <i>(click to expand)</i></summary>
 
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab_test --azure_region westus2 --conf_os_family debian --conf_cloud_provider azure --azure_vpc_name vpc-test --azure_subnet_name subnet-test --azure_security_group_name sg-test1,sg-test2 --key_path /root/ --conf_key_name Test --azure_auth_path /dir/file.json  --action create
@@ -499,7 +499,7 @@ After SSN node deployment following Azure resources will be created:
 -   Storage account and blob container for collaboration between Dlab users
 -   If support of Data Lake is enabled: Data Lake and shared directory will be created</details>
 
-<details><summary>In Google cloud (GCP)</summary>
+<details><summary>In Google cloud (GCP) <i>(click to expand)</i></summary>
 
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --gcp_region xx-xxxxx --gcp_zone xxx-xxxxx-x --conf_os_family debian --conf_cloud_provider gcp --key_path /path/to/key/ --conf_key_name key_name --gcp_ssn_instance_size n1-standard-1 --gcp_project_id project_id --gcp_service_account_path /path/to/auth/file.json --action create
@@ -542,7 +542,7 @@ After SSN node deployment following GCP resources will be created:
 Terminating SSN node will also remove all nodes and components related to it. Basically, terminating Self-service node will terminate all DLab’s infrastructure.
 Example of command for terminating DLab environment:
 
-<details><summary>In Amazon</summary>
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXX --aws_region xx-xxxxx-x --key_path /path/to/key/ --conf_key_name key_name --conf_os_family debian --conf_cloud_provider aws --action terminate
@@ -562,7 +562,7 @@ List of parameters for SSN node termination:
 | action                     | terminate                                                                          |
 </details>
 
-<details><summary>In Azure</summary>
+<details><summary>In Azure <i>(click to expand)</i></summary>
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --azure_vpc_name vpc-test --azure_resource_group_name resource-group-test --azure_region westus2 --key_path /root/ --conf_key_name Test --conf_os_family debian --conf_cloud_provider azure --azure_auth_path /dir/file.json --action terminate
 ```
@@ -581,7 +581,7 @@ List of parameters for SSN node termination:
 | action                     | terminate                                                                          |
 </details>
 
-<details><summary>In Google cloud</summary>
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --gcp_project_id project_id --conf_service_base_name dlab-test --gcp_region xx-xxxxx --gcp_zone xx-xxxxx-x --key_path /path/to/key/ --conf_key_name key_name --conf_os_family debian --conf_cloud_provider gcp --gcp_service_account_path /path/to/auth/file.json --action terminate
@@ -610,7 +610,7 @@ Gateway node (or an Edge node) is an instance(virtual machine) provisioned in a
 
 In order to create Edge node using DLab Web UI – log in and click on the button “Upload” (depending on authorization provider that was chosen on deployment stage, user may be taken from [LDAP](#LDAP_Authentication) or from [Azure AD (Oauth2)](#Azure_OAuth2_Authentication)). Choose user’s SSH public key and after that click on the button “Create”. Edge node will be deployed and corresponding instance (virtual machine) will be started.
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 The following AWS resources will be created:
 -   Edge EC2 instance
@@ -640,8 +640,9 @@ List of parameters for Edge node creation:
 | aws\_private\_subnet\_prefix   | Prefix of the private subnet                                                      |
 | conf\_tag\_resource\_id        | The name of tag for billing reports                                               |
 | action                         | create                                                                            |
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 The following Azure resources will be created:
 -   Edge virtual machine
@@ -668,8 +669,9 @@ List of parameters for Edge node creation:
 | azure\_vpc\_name               | Name of Azure Virtual network where all infrastructure is being deployed          |
 | azure\_subnet\_name            | Name of the Azure public subnet where Edge will be deployed                       |
 | action                         | create                                                                            |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 The following GCP resources will be created:
 -   Edge VM instance
@@ -696,12 +698,13 @@ List of parameters for Edge node creation:
 | gcp\_subnet\_name              | Name of the GCP public subnet where Edge will be deployed                         |
 | gcp\_project\_id               | ID of GCP project                                                                 |
 | action                         | create                                                                            |
+</details>
 
 ### Start/Stop <a name=""></a>
 
 To start/stop Edge node, click on the button which looks like a cycle on the top right corner, then click on the button which is located in “Action” field and in the drop-down menu click on the appropriate action.
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 List of parameters for Edge node starting/stopping:
 
@@ -712,8 +715,9 @@ List of parameters for Edge node starting/stopping:
 | edge\_user\_name          | Name of the user                                             |
 | aws\_region               | AWS region where infrastructure was deployed                 |
 | action                    | start/stop                                                   |
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 List of parameters for Edge node starting:
 
@@ -735,8 +739,9 @@ List of parameters for Edge node stopping:
 | edge\_user\_name             | Name of the user                                                          |
 | azure\_resource\_group\_name | Name of the resource group where all DLab resources are being provisioned |
 | action                       | stop                                                                      |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 List of parameters for Edge node starting/stopping:
 
@@ -749,6 +754,7 @@ List of parameters for Edge node starting/stopping:
 | gcp\_zone                      | GCP zone where infrastructure was deployed                                        |
 | gcp\_project\_id               | ID of GCP project                                                                 |
 | action                         | start/stop                                                                        |
+</details>
 
 ### Recreate <a name=""></a>
 
@@ -758,7 +764,7 @@ If Edge node was removed for some reason, to re-create it, click on the status b
 
 List of parameters for Edge node recreation:
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 | Parameter                  | Description/Value                                                                 |
 |----------------------------|-----------------------------------------------------------------------------------|
@@ -774,8 +780,9 @@ List of parameters for Edge node recreation:
 | edge\_elastic\_ip          | AWS Elastic IP address which was associated to Edge node                          |
 | conf\_tag\_resource\_id    | The name of tag for billing reports                                               |
 | action                     | Create                                                                            |
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 | Parameter                    | Description/Value                                                                 |
 |------------------------------|-----------------------------------------------------------------------------------|
@@ -789,8 +796,10 @@ List of parameters for Edge node recreation:
 | azure\_resource\_group\_name | Name of the resource group where all DLab resources are being provisioned         |
 | azure\_subnet\_name          | Name of the Azure public subnet where Edge was deployed                           |
 | action                       | Create                                                                            |
+</details>
+
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
-#### In Google cloud
 | Parameter                  | Description/Value                                                                     |
 |--------------------------------|-----------------------------------------------------------------------------------|
 | conf\_resource                 | edge                                                                              |
@@ -804,6 +813,7 @@ List of parameters for Edge node recreation:
 | gcp\_subnet\_name              | Name of the GCP public subnet where Edge will be deployed                         |
 | gcp\_project\_id               | ID of GCP project                                                                 |
 | action                         | create                                                                            |
+</details>
 
 ## Notebook node <a name="Notebook_node"></a>
 
@@ -815,7 +825,7 @@ To create Notebook node, click on the “Create new” button. Then, in drop-dow
 
 List of parameters for Notebook node creation:
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 | Parameter                     | Description/Value                                                                 |
 |-------------------------------|-----------------------------------------------------------------------------------|
@@ -833,8 +843,9 @@ List of parameters for Notebook node creation:
 | action                        | Create                                                                            |
 
 **Note:** For the format of git_creds see "Manage git credentials" below.
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 | Parameter                       | Description/Value                                                                 |
 |---------------------------------|-----------------------------------------------------------------------------------|
@@ -850,8 +861,9 @@ List of parameters for Notebook node creation:
 | application                     | Type of the notebook template (jupyter/rstudio/zeppelin/tensor/deeplearning)      |
 | git\_creds                      | User git credentials in JSON format                                               |
 | action                          | Create                                                                            |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 | Parameter                     | Description/Value                                                                 |
 |-------------------------------|-----------------------------------------------------------------------------------|
@@ -868,6 +880,7 @@ List of parameters for Notebook node creation:
 | application                   | Type of the notebook template (jupyter/rstudio/zeppelin/tensor/deeplearning)      |
 | git\_creds                    | User git credentials in JSON format                                               |
 | action                        | Create                                                                            |
+</details>
 
 ### Stop
 
@@ -875,7 +888,7 @@ In order to stop Notebook node, click on the “gear” button in Actions column
 
 List of parameters for Notebook node stopping:
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 | Parameter                 | Description/Value                                            |
 |---------------------------|--------------------------------------------------------------|
@@ -886,8 +899,9 @@ List of parameters for Notebook node stopping:
 | notebook\_instance\_name  | Name of the Notebook instance to stop                        |
 | aws\_region               | AWS region where infrastructure was deployed                 |
 | action                    | Stop                                                         |
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 | Parameter                       | Description/Value                                                                 |
 |---------------------------------|-----------------------------------------------------------------------------------|
@@ -898,8 +912,9 @@ List of parameters for Notebook node stopping:
 | notebook\_instance\_name        | Name of the Notebook instance to stop                                             |
 | azure\_resource\_group\_name    | Name of the resource group where all DLab resources are being provisioned         |
 | action                          | Stop                                                                              |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 | Parameter                 | Description/Value                                            |
 |---------------------------|--------------------------------------------------------------|
@@ -912,6 +927,7 @@ List of parameters for Notebook node stopping:
 | gcp\_zone                 | GCP zone where infrastructure was deployed                   |
 | gcp\_project\_id          | ID of GCP project                                            |
 | action                    | Stop                                                         |
+</details>
 
 ### Start
 
@@ -919,7 +935,7 @@ In order to start Notebook node, click on the button, which looks like gear in 
 
 List of parameters for Notebook node start:
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 | Parameter                 | Description/Value                                            |
 |---------------------------|--------------------------------------------------------------|
@@ -933,8 +949,9 @@ List of parameters for Notebook node start:
 | action                    | start                                                        |
 
 **Note:** For the format of git_creds see "Manage git credentials" below.
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 | Parameter                       | Description/Value                                                                 |
 |---------------------------------|-----------------------------------------------------------------------------------|
@@ -947,8 +964,9 @@ List of parameters for Notebook node start:
 | azure\_region                   | Azure region where infrastructure was deployed                                    |
 | git\_creds                      | User git credentials in JSON format                                               |
 | action                          | start                                                                             |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 | Parameter                 | Description/Value                                            |
 |---------------------------|--------------------------------------------------------------|
@@ -962,6 +980,7 @@ List of parameters for Notebook node start:
 | gcp\_project\_id          | ID of GCP project                                            |
 | git\_creds                | User git credentials in JSON format                          |
 | action                    | start                                                        |
+</details>
 
 ### Terminate
 
@@ -969,7 +988,7 @@ In order to terminate Notebook node, click on the button, which looks like gear
 
 List of parameters for Notebook node termination:
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 | Parameter                 | Description/Value                                            |
 |---------------------------|--------------------------------------------------------------|
@@ -982,8 +1001,9 @@ List of parameters for Notebook node termination:
 | action                    | terminate                                                         |
 
 **Note:** If terminate action is called, all connected data engine clusters will be removed.
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 | Parameter                       | Description/Value                                                                 |
 |---------------------------------|-----------------------------------------------------------------------------------|
@@ -993,8 +1013,9 @@ List of parameters for Notebook node termination:
 | notebook\_instance\_name        | Name of the Notebook instance to terminate                                        |
 | azure\_resource\_group\_name    | Name of the resource group where all DLab resources are being provisioned         |
 | action                          | terminate                                                                         |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 | Parameter                 | Description/Value                                            |
 |---------------------------|--------------------------------------------------------------|
@@ -1007,12 +1028,13 @@ List of parameters for Notebook node termination:
 | gcp\_project\_id          | ID of GCP project                                            |
 | git\_creds                | User git credentials in JSON format                          |
 | action                    | terminate                                                    |
+</details>
 
 ### List/Install additional libraries
 
 In order to list available libraries (OS/Python2/Python3/R/Others) on Notebook node, click on the button, which looks like gear in “Action” field. Then in drop-down menu choose “Manage libraries” action.
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 List of parameters for Notebook node to **get list** of available libraries:
 
@@ -1072,8 +1094,9 @@ List of parameters for Notebook node to **install** additional libraries:
   ...
 }
 ```
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 List of parameters for Notebook node to **get list** of available libraries:
 
@@ -1101,9 +1124,9 @@ List of parameters for Notebook node to **install** additional libraries:
 | application                   | Type of the notebook template (jupyter/rstudio/zeppelin/tensor/deeplearning)         |
 | libs                          | List of additional libraries in JSON format with type (os_pkg/pip2/pip3/r_pkg/others)|
 | action                        | lib_install                                                                          |
+</details>
 
-
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 List of parameters for Notebook node to **get list** of available libraries:
 
@@ -1133,12 +1156,13 @@ List of parameters for Notebook node to **install** additional libraries:
 | application                   | Type of the notebook template (jupyter/rstudio/zeppelin/tensor/deeplearning)         |
 | libs                          | List of additional libraries in JSON format with type (os_pkg/pip2/pip3/r_pkg/others)|
 | action                        | lib_install                                                                          |
+</details>
 
 ### Manage git credentials
 
 In order to manage git credentials on Notebook node, click on the button “Git credentials”. Then in menu you can add or edit existing credentials.
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 List of parameters for Notebook node to **manage git credentials**:
 
@@ -1170,8 +1194,9 @@ List of parameters for Notebook node to **manage git credentials**:
 **Note:** Leave "hostname" field empty to apply login/password by default for all services.
 
 **Note:** You can also use "Personal access tokens" instead of passwords.
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 | Parameter                     | Description/Value                                                                 |
 |-------------------------------|-----------------------------------------------------------------------------------|
@@ -1183,8 +1208,9 @@ List of parameters for Notebook node to **manage git credentials**:
 | azure\_resource\_group\_name  | Name of the resource group where all DLab resources are being provisioned         |
 | git\_creds                    | User git credentials in JSON format                                               |
 | action                        | git\_creds                                                                        |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 | Parameter                     | Description/Value                                                                 |
 |-------------------------------|-----------------------------------------------------------------------------------|
@@ -1198,6 +1224,7 @@ List of parameters for Notebook node to **manage git credentials**:
 | notebook\_instance\_name      | Name of the Notebook instance                                                     |
 | git\_creds                    | User git credentials in JSON format                                               |
 | action                        | git\_creds                                                                        |
+</details>
 
 ## Dataengine-service cluster <a name="Dataengine-service cluster"></a>
 
@@ -1209,7 +1236,7 @@ To create dataengine-service cluster click on the “gear” button in Actions c
 
 List of parameters for dataengine-service cluster creation:
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 | Parameter                   | Description/Value                                                        |
 |-----------------------------|--------------------------------------------------------------------------|
@@ -1228,8 +1255,9 @@ List of parameters for dataengine-service cluster creation:
 | action                      | create                                                                   |
 
 **Note:** If “Spot instances” is enabled, dataengine-service Slave nodes will be created as EC2 Spot instances.
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 | Parameter                       | Description/Value                                                        |
 |---------------------------------|--------------------------------------------------------------------------|
@@ -1250,6 +1278,7 @@ List of parameters for dataengine-service cluster creation:
 | gcp\_zone                       | GCP zone name                                                            |
 | conf\_tag\_resource\_id         | The name of tag for billing reports                                      |
 | action                          | create                                                                   |
+</details>
 
 ### Terminate
 
@@ -1257,7 +1286,7 @@ In order to terminate dataengine-service cluster, click on “x” button which
 
 List of parameters for dataengine-service cluster termination:
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 | Parameter                 | Description/Value                                                   |
 |---------------------------|---------------------------------------------------------------------|
@@ -1269,8 +1298,9 @@ List of parameters for dataengine-service cluster termination:
 | notebook\_instance\_name  | Name of the Notebook instance which dataengine-service is linked to |
 | aws\_region               | AWS region where infrastructure was deployed                        |
 | action                    | Terminate                                                           |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 | Parameter                 | Description/Value                                                   |
 |---------------------------|---------------------------------------------------------------------|
@@ -1284,12 +1314,13 @@ List of parameters for dataengine-service cluster termination:
 | gcp\_zone                 | GCP zone name                                                       |
 | dataproc\_cluster\_name   | Dataproc cluster name                                               |
 | action                    | Terminate                                                           |
+</details>
 
 ### List/Install additional libraries
 
 In order to list available libraries (OS/Python2/Python3/R/Others) on Dataengine-service, click on the button, which looks like gear in “Action” field. Then in drop-down menu choose “Manage libraries” action.
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 List of parameters for Dataengine-service node to **get list** of available libraries:
 
@@ -1347,8 +1378,9 @@ List of parameters for Dataengine-service to **install** additional libraries:
   ...
 }
 ```
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 List of parameters for Dataengine-service node to **get list** of available libraries:
 
@@ -1375,7 +1407,7 @@ List of parameters for Dataengine-service node to **install** additional librari
 | gcp\_region                   | GCP region name                                                                   |
 | gcp\_zone                     | GCP zone name                                                                     |
 | action                        | lib_install                                                                       |
-
+</details>
 
 ## Dataengine cluster <a name="Dataengine cluster"></a>
 
@@ -1387,7 +1419,7 @@ To create Spark standalone cluster click on the “gear” button in Actions col
 
 List of parameters for dataengine cluster creation:
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 | Parameter                      | Description/Value                                                                 |
 |--------------------------------|-----------------------------------------------------------------------------------|
@@ -1402,8 +1434,9 @@ List of parameters for dataengine cluster creation:
 | aws\_dataengine\_master\_size  | Size of master node                                                               |
 | aws\_dataengine\_slave\_size   | Size of slave node                                                                |
 | action                         | create                                                                            |
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 | Parameter                      | Description/Value                                                                 |
 |--------------------------------|-----------------------------------------------------------------------------------|
@@ -1421,8 +1454,9 @@ List of parameters for dataengine cluster creation:
 | azure\_resource\_group\_name   | Name of the resource group where all DLab resources are being provisioned         |
 | azure\_subnet\_name            | Name of the Azure public subnet where Edge was deployed                           |
 | action                         | create                                                                            |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 | Parameter                    | Description/Value                                                                 |
 |------------------------------|-----------------------------------------------------------------------------------|
@@ -1441,7 +1475,7 @@ List of parameters for dataengine cluster creation:
 | gcp\_zone                    | GCP zone name                                                                     |
 | edge\_user\_name             | Value that previously was used when Edge being provisioned                        |
 | action                       | create                                                                            |
-
+</details>
 
 ### Terminate
 
@@ -1449,7 +1483,7 @@ In order to terminate dataengine cluster, click on “x” button which is locat
 
 List of parameters for dataengine cluster termination:
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 | Parameter                    | Description/Value                                                        |
 |------------------------------|--------------------------------------------------------------------------|
@@ -1461,8 +1495,9 @@ List of parameters for dataengine cluster termination:
 | computational\_name          | Name of cluster                                                          |
 | aws\_region                  | AWS region where infrastructure was deployed                             |
 | action                       | Terminate                                                                |
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 | Parameter                    | Description/Value                                                        |
 |------------------------------|--------------------------------------------------------------------------|
@@ -1475,8 +1510,9 @@ List of parameters for dataengine cluster termination:
 | azure\_region                | Azure region where infrastructure was deployed                           |
 | azure\_resource\_group\_name | Name of the resource group where all DLab resources are being provisioned|
 | action                       | Terminate                                                                |
+</details>
 
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 | Parameter                    | Description/Value                                                        |
 |------------------------------|--------------------------------------------------------------------------|
@@ -1490,12 +1526,13 @@ List of parameters for dataengine cluster termination:
 | gcp\_region                  | GCP region where infrastructure was deployed                             |
 | gcp\_zone                    | GCP zone name                                                            |
 | action                       | Terminate                                                                |
+</details>
 
 ### List/Install additional libraries
 
 In order to list available libraries (OS/Python2/Python3/R/Others) on Dataengine, click on the button, which looks like gear in “Action” field. Then in drop-down menu choose “Manage libraries” action.
 
-#### In Amazon
+<details><summary>In Amazon <i>(click to expand)</i></summary>
 
 List of parameters for Dataengine node to **get list** of available libraries:
 
@@ -1552,8 +1589,9 @@ List of parameters for Dataengine node to **install** additional libraries:
   ...
 }
 ```
+</details>
 
-#### In Azure
+<details><summary>In Azure <i>(click to expand)</i></summary>
 
 List of parameters for Dataengine node to **get list** of available libraries:
 
@@ -1580,9 +1618,9 @@ List of parameters for Dataengine node to **install** additional libraries:
 | computational\_id             | Name of cluster                                                                   |
 | application                   | Type of the notebook template (jupyter/rstudio/zeppelin/tensor/deeplearning)      |
 | action                        | lib_install                                                                       |
+</details>
 
-
-#### In Google cloud
+<details><summary>In Google cloud <i>(click to expand)</i></summary>
 
 List of parameters for Dataengine node to **get list** of available libraries:
 
@@ -1611,7 +1649,7 @@ List of parameters for Dataengine node to **install** additional libraries:
 | gcp\_zone                     | GCP zone name                                                                     |
 | computational\_id             | Name of cluster                                                                   |
 | action                        | lib_install                                                                       |
-
+</details>
 
 ## Configuration files <a name="Configuration_files"></a>
 
@@ -1656,7 +1694,7 @@ To use your own certificate please do the following:
 
 ## Billing report <a name="Billing_Report"></a>
 
-### AWS
+<details><summary>AWS <i>(click to expand)</i></summary>
 
 Billing module is implemented as a separate jar file and can be run in the following modes:
 
@@ -1686,8 +1724,9 @@ If you want billing to work as a separate process from the Self-Service use foll
 ```
 java -cp /opt/dlab/webapp/lib/billing/billing-aws.x.y.jar com.epam.dlab.BillingScheduler --conf /opt/dlab/conf/billing.yml
 ```
+</details>
 
-### Azure
+<details><summary>Azure <i>(click to expand)</i></summary>
 
 Billing module is implemented as a separate jar file and can be run in the following modes:
 
@@ -1698,6 +1737,7 @@ If you want to start billing module as a separate process use the following comm
 ```
 java -jar /opt/dlab/webapp/lib/billing/billing-azure.x.y.jar /opt/dlab/conf/billing.yml
 ```
+</details>
 
 ## Backup and Restore <a name="Backup_and_Restore"></a>
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org


[incubator-dlab] 14/23: collapsing test 6

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit b0ac05ffa022172fa3e7b5a33244b2b7172f147c
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 17:54:10 2019 +0300

    collapsing test 6
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 0ea3329..a3449cd 100644
--- a/README.md
+++ b/README.md
@@ -206,7 +206,7 @@ These directories contain the log files for each template and for DLab back-end
 Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
 
-<details><summary>In Amazon cloud (_click to expand_)</summary>
+<details><summary>In Amazon cloud <i>(click to expand)<i></summary>
 
 Prerequisites:
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org
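
A note on the series of "collapsing test" commits above: GitHub renders the `<details><summary>` line as raw HTML, so Markdown emphasis such as `(*click to expand*)` or `(_click to expand_)` shows up literally rather than in italics; switching to an HTML `<i>` tag is presumably the reason for this change. A minimal sketch of the two variants:

```
<!-- asterisks/underscores are not converted to italics inside the raw-HTML summary line -->
<details><summary>In Amazon cloud (_click to expand_)</summary>

<!-- an <i> tag does render as italics; note it needs a matching closing </i> -->
<details><summary>In Amazon cloud <i>(click to expand)</i></summary>
```

The `+` line in this particular commit closes the tag as `<i>` instead of `</i>`; the final form shown in commit b618185 earlier in this digest uses the properly closed `</i>`.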


[incubator-dlab] 06/23: README.md file updated

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 478992f084b18444ea33fdf9f956e940598c5f06
Author: Mykola Bodnar1 <my...@epam.com>
AuthorDate: Wed Jun 5 16:46:37 2019 +0300

    README.md file updated
---
 README.md | 206 --------------------------------------------------------------
 1 file changed, 206 deletions(-)

diff --git a/README.md b/README.md
index 13504ab..29e5814 100644
--- a/README.md
+++ b/README.md
@@ -130,154 +130,7 @@ Creation of self-service node – is the first step for deploying DLab. SSN is a
 
 Elastic(Static) IP address is assigned to an SSN Node, so you are free to stop|start it and SSN node's IP address won’t change.
 
-<<<<<<< HEAD
-<<<<<<< HEAD
-## Edge node
-
-Setting up Edge node is the first step that user is asked to do once logged into DLab. This node is used as proxy server and SSH gateway for the user. Through Edge node users can access Notebook via HTTP and SSH. Edge Node has a Squid HTTP web proxy pre-installed.
-
-## Notebook node
-
-The next step is setting up a Notebook node (or a Notebook server). It is a server with pre-installed applications and libraries for data processing, data cleaning and transformations, numerical simulations, statistical modeling, machine learning, etc. Following analytical tools are currently supported in DLab and can be installed on a Notebook node:
-
--   Jupyter
--   RStudio
--   Apache Zeppelin
--   TensorFlow + Jupyter
--   Deep Learning + Jupyter
-
-Apache Spark is also installed for each of the analytical tools above.
-
-**Note:** terms 'Apache Zeppelin' and 'Apache Spark' hereinafter may be referred to as 'Zeppelin' and 'Spark' respectively or may have original reference.
-
-## Data engine cluster
-
-After deploying Notebook node, user can create one of the cluster for it:
--   Data engine - Spark standalone cluster
--   Data engine service - cloud managed cluster platform (EMR for AWS or Dataproc for GCP)
-That simplifies running big data frameworks, such as Apache Hadoop and Apache Spark to process and analyze vast amounts of data. Adding cluster is not mandatory and is only needed in case additional computational resources are required for job execution.
-----------------------
-# DLab Deployment <a name="DLab_Deployment"></a>
-
-## Prerequisites<a name="Prerequisites"></a>
-#### In Amazon cloud
-Prerequisites:
- 
- - SSH key for EC2 instances. This key could be created through Amazon Console.
- - IAM user
- - AWS access key ID and secret access key
- - The following permissions should be assigned for IAM user:
- <a name="AWS_SSN_policy"></a>
-```
-{
-	"Version": "2012-10-17",
-	"Statement": [
-		{
-			"Action": [
-				"iam:ListRoles",
-				"iam:CreateRole",
-				"iam:CreateInstanceProfile",
-				"iam:PutRolePolicy",
-				"iam:AddRoleToInstanceProfile",
-				"iam:PassRole",
-				"iam:GetInstanceProfile",
-				"iam:ListInstanceProfilesForRole"
-				"iam:RemoveRoleFromInstanceProfile",
-				"iam:DeleteInstanceProfile",
-				"iam:TagRole"
-			],
-			"Effect": "Allow",
-			"Resource": "*"
-		},
-		{
-			"Action": [
-				"ec2:DescribeImages",
-				"ec2:CreateTags",
-				"ec2:DescribeRouteTables",
-				"ec2:CreateRouteTable",
-				"ec2:AssociateRouteTable",
-				"ec2:DescribeVpcEndpoints",
-				"ec2:CreateVpcEndpoint",
-				"ec2:ModifyVpcEndpoint",
-				"ec2:DescribeInstances",
-				"ec2:RunInstances",
-				"ec2:DescribeAddresses",
-				"ec2:AllocateAddress",
-				"ec2:DescribeInstances",
-				"ec2:AssociateAddress",
-				"ec2:DisassociateAddress",
-				"ec2:ReleaseAddress",
-				"ec2:TerminateInstances"
-			],
-			"Effect": "Allow",
-			"Resource": "*"
-		},
-		{
-			"Action": [
-				"s3:ListAllMyBuckets",
-				"s3:CreateBucket",
-				"s3:PutBucketTagging",
-				"s3:GetBucketTagging"
-			],
-			"Effect": "Allow",
-			"Resource": "*"
-		}
-	]
-}
-```
-
-#### In Azure cloud
-
-Prerequisites:
-
-- IAM user with Contributor permissions.
-- Service principal and JSON based auth file with clientId, clientSecret and tenantId. 
-
-**Note:** The following permissions should be assigned to the service principal:
-
-- Windows Azure Active Directory
-- Microsoft Graph
-- Windows Azure Service Management API
-
-#### In Google cloud (GCP)
-
-Prerequisites:
-
-- IAM user
-- Service account and JSON auth file for it. In order to get JSON auth file, Key should be created for service account through Google cloud console.
-## Preparing environment for DLab deployment <a name="Env_for_DLab"></a>
-
-#### In Amazon cloud
-If you want to deploy DLab from inside of your AWS account, you can use the following instruction:
-
-- Create an EC2 instance with the following settings:
-    - Shape of the instance shouldn't be less than t2.medium
-    - The instance should have access to Internet in order to install required prerequisites
-    - The instance should have access to further DLab installation
-    - AMI - Ubuntu 16.04
-    - IAM role with [policy](#AWS_SSN_policy) should be assigned to the instance
-- Connect to the instance via SSH and run the following commands:
-```
-    sudo su
-    apt-get update
-    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
-    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
-    apt-get update
-    apt-cache policy docker-ce
-    apt-get install -y docker-ce=17.06.2~ce-0~ubuntu
-    usermod -a -G docker ubuntu
-    apt-get install python-pip
-    pip install fabric==1.14.0
-```
-- Clone DLab repository and run deploy script.
-
-## Structure of main DLab directory <a name="DLab_directory"></a>
-=======
 ### Structure of main DLab directory <a name="DLab_directory"></a>
->>>>>>> 84ff8aad0... README.md updated
-=======
-### Structure of main DLab directory <a name="DLab_directory"></a>
->>>>>>> 84ff8aad0... README.md updated
 
 DLab’s SSN node main directory structure is as follows:
 
@@ -355,8 +208,6 @@ For each cloud provider, prerequisites are different.
 
 #### In Amazon cloud
 
-<<<<<<< HEAD
-=======
 Prerequisites:
 
  - SSH key for EC2 instances. This key could be created through Amazon Console.
@@ -421,12 +272,6 @@ Prerequisites:
 }
 ```
 
-<<<<<<< HEAD
-<<<<<<< HEAD
->>>>>>> eb92433f3... README.md edited
-=======
-=======
->>>>>>> 84ff8aad0... README.md updated
 Preparation steps for deployment:
 
 - Create an EC2 instance with the following settings:
@@ -468,10 +313,6 @@ Preparation steps for deployment:
 
 ### Executing deployment script
 
-<<<<<<< HEAD
->>>>>>> 84ff8aad0... README.md updated
-=======
->>>>>>> 84ff8aad0... README.md updated
 To build SSN node, following steps should be executed:
 
 - Connect to the instance via SSH and run the following commands:
@@ -544,33 +385,6 @@ After SSN node deployment following AWS resources will be created:
 
 #### In Azure cloud
 
-<<<<<<< HEAD
-<<<<<<< HEAD
-<<<<<<< HEAD
-=======
-Prerequisites:
-
-- IAM user with Contributor permissions.
-- Service principal and JSON based auth file with clientId, clientSecret and tenantId.
-
-**Note:** The following permissions should be assigned to the service principal:
-
-- Windows Azure Active Directory
-- Microsoft Graph
-- Windows Azure Service Management API
-
->>>>>>> eb92433f3... README.md edited
-To build SSN node, following steps should be executed:
-
-1.  Clone Git repository and make sure that all following [pre-requisites](#Pre-requisites) are installed
-2.  Go to *dlab* directory
-3.  To have working billing functionality please review Billing configuration note and use proper parameters for SSN node deployment
-4.  To use Data Lake Store please review Azure Data Lake usage pre-requisites note and use proper parameters for SSN node deployment
-5.  Execute following deploy_dlab.py script:
-=======
->>>>>>> 84ff8aad0... README.md updated
-=======
->>>>>>> 84ff8aad0... README.md updated
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab_test --azure_region westus2 --conf_os_family debian --conf_cloud_provider azure --azure_vpc_name vpc-test --azure_subnet_name subnet-test --azure_security_group_name sg-test1,sg-test2 --key_path /root/ --conf_key_name Test --azure_auth_path /dir/file.json  --action create
 ```
@@ -644,25 +458,6 @@ After SSN node deployment following Azure resources will be created:
 
 #### In Google cloud (GCP)
 
-<<<<<<< HEAD
-<<<<<<< HEAD
-<<<<<<< HEAD
-=======
-Prerequisites:
-
-- IAM user
-- Service account and JSON auth file for it. In order to get JSON auth file, Key should be created for service account through Google cloud console.
-
->>>>>>> eb92433f3... README.md edited
-To build SSN node, following steps should be executed:
-
-1.  Clone Git repository and make sure that all following [pre-requisites](#Pre-requisites) are installed.
-2.  Go to *dlab* directory.
-3.  Execute following script:
-=======
->>>>>>> 84ff8aad0... README.md updated
-=======
->>>>>>> 84ff8aad0... README.md updated
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --gcp_region xx-xxxxx --gcp_zone xxx-xxxxx-x --conf_os_family debian --conf_cloud_provider gcp --key_path /path/to/key/ --conf_key_name key_name --gcp_ssn_instance_size n1-standard-1 --gcp_project_id project_id --gcp_service_account_path /path/to/auth/file.json --action create
 ```
@@ -760,7 +555,6 @@ List of parameters for SSN node termination:
 | gcp\_project\_id             | ID of GCP project                                                                       |
 | action                       | In case of SSN node termination, this parameter should be set to “terminate”            |
 
-
 ## Edge Node <a name="Edge_Node"></a>
 
 Gateway node (or an Edge node) is an instance(virtual machine) provisioned in a public subnet. It serves as an entry point for accessing user’s personal analytical environment. It is created by an end-user, whose public key will be uploaded there. Only via Edge node, DLab user can access such application resources as notebook servers and dataengine clusters. Also, Edge Node is used to setup SOCKS proxy to access notebook servers via Web UI and SSH. Elastic(Static) IP address is assigned  [...]


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org


[incubator-dlab] 10/23: collapsing test 3

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit d11f535df8a21e8b9ef3a24009d74a9e467a5aa4
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 17:18:41 2019 +0300

    collapsing test 3
---
 README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index d51dc33..ee75a59 100644
--- a/README.md
+++ b/README.md
@@ -542,7 +542,7 @@ After SSN node deployment following GCP resources will be created:
 Terminating SSN node will also remove all nodes and components related to it. Basically, terminating Self-service node will terminate all DLab’s infrastructure.
 Example of command for terminating DLab environment:
 
-#### In Amazon
+<details><summary>In Amazon</summary>
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXX --aws_region xx-xxxxx-x --key_path /path/to/key/ --conf_key_name key_name --conf_os_family debian --conf_cloud_provider aws --action terminate
 ```
@@ -559,8 +559,8 @@ List of parameters for SSN node termination:
 | conf\_os\_family           | Name of the Linux distributive family, which is supported by DLab (Debian/RedHat)  |
 | conf\_cloud\_provider      | Name of the cloud provider, which is supported by DLab (AWS)                       |
 | action                     | terminate                                                                          |
-
-#### In Azure
+</details>
+<details><summary>In Azure</summary>
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --azure_vpc_name vpc-test --azure_resource_group_name resource-group-test --azure_region westus2 --key_path /root/ --conf_key_name Test --conf_os_family debian --conf_cloud_provider azure --azure_auth_path /dir/file.json --action terminate
 ```
@@ -578,8 +578,8 @@ List of parameters for SSN node termination:
 | azure\_auth\_path          | Full path to auth json file                                                        |
 | action                     | terminate                                                                          |
 
-
-#### In Google cloud
+</details>
+<details><summary>In Google cloud</summary>
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --gcp_project_id project_id --conf_service_base_name dlab-test --gcp_region xx-xxxxx --gcp_zone xx-xxxxx-x --key_path /path/to/key/ --conf_key_name key_name --conf_os_family debian --conf_cloud_provider gcp --gcp_service_account_path /path/to/auth/file.json --action terminate
 ```
@@ -597,7 +597,7 @@ List of parameters for SSN node termination:
 | gcp\_service\_account\_path  | Full path to auth json file                                                             |
 | gcp\_project\_id             | ID of GCP project                                                                       |
 | action                       | In case of SSN node termination, this parameter should be set to “terminate”            |
-
+</details>
 ## Edge Node <a name="Edge_Node"></a>
 
 Gateway node (or an Edge node) is an instance(virtual machine) provisioned in a public subnet. It serves as an entry point for accessing user’s personal analytical environment. It is created by an end-user, whose public key will be uploaded there. Only via Edge node, DLab user can access such application resources as notebook servers and dataengine clusters. Also, Edge Node is used to setup SOCKS proxy to access notebook servers via Web UI and SSH. Elastic(Static) IP address is assigned  [...]


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org


[incubator-dlab] 08/23: collapsing test 1

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 285fe4ac987660a1a282120327ca9281f829ba02
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 17:12:22 2019 +0300

    collapsing test 1
---
 README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index e98a971..8687300 100644
--- a/README.md
+++ b/README.md
@@ -206,7 +206,7 @@ These directories contain the log files for each template and for DLab back-end
 Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
 
-<details><summary>#### In Amazon cloud</summary>
+<details><summary>In Amazon cloud</summary>
 
 Prerequisites:
 
@@ -325,7 +325,7 @@ Preparation steps for deployment:
 - Put SSH key file created through Amazon Console on the instance with the same name
 - Install Git and clone DLab repository</details>
 
-#### In Azure cloud
+<details><summary>In Azure cloud</summary>
 
 Prerequisites:
 
@@ -336,9 +336,9 @@ Prerequisites:
 
 - Windows Azure Active Directory
 - Microsoft Graph
-- Windows Azure Service Management API
+- Windows Azure Service Management API</details>
 
-#### In Google cloud (GCP)
+<details><summary>In Google cloud (GCP)
 
 Prerequisites:
 
@@ -352,7 +352,7 @@ Preparation steps for deployment:
     - Boot disk OS Image - Ubuntu 16.04
 - Generate SSH key pair and rename private key with .pem extension
 - Put JSON auth file created through Google cloud console to users home directory
-- Install Git and clone DLab repository
+- Install Git and clone DLab repository</summary>
 
 ### Executing deployment script
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org


[incubator-dlab] 23/23: Azure expanding issues fixed

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 202f1bb70338153b3817065188cc4381c480adb4
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Thu Jul 4 10:19:42 2019 +0300

    Azure expanding issues fixed
---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 0895303..fdad3d0 100644
--- a/README.md
+++ b/README.md
@@ -563,6 +563,7 @@ List of parameters for SSN node termination:
 </details>
 
 <details><summary>In Azure <i>(click to expand)</i></summary>
+
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --azure_vpc_name vpc-test --azure_resource_group_name resource-group-test --azure_region westus2 --key_path /root/ --conf_key_name Test --conf_os_family debian --conf_cloud_provider azure --azure_auth_path /dir/file.json --action terminate
 ```
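
A note on why this one-line change matters: GitHub's Markdown parser treats everything inside a `<details>` block as raw HTML until it reaches a blank line, so a fenced code block placed directly after `</summary>` is not rendered as code; presumably that is the "expanding issue" this commit fixes. A sketch of the corrected layout (the outer fence is widened and the command shortened here only for illustration):

````
<details><summary>In Azure <i>(click to expand)</i></summary>

```
/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py ... --action terminate
```
</details>
````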


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org


[incubator-dlab] 05/23: README.md updated

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 3a372dca59405dd32d6810550c5555f860fb0cf7
Author: Mykola Bodnar1 <my...@epam.com>
AuthorDate: Wed Jun 5 15:04:01 2019 +0300

    README.md updated
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 2facf9c..13504ab 100644
--- a/README.md
+++ b/README.md
@@ -434,7 +434,7 @@ Preparation steps for deployment:
     - The instance should have access to further DLab installation
     - AMI - Ubuntu 16.04
     - IAM role with [policy](#AWS_SSN_policy) should be assigned to the instance
-- Put SSH key file created through Amazon Console on the instance
+- Put SSH key file created through Amazon Console on the instance with the same name
 - Install Git and clone DLab repository
 
 #### In Azure cloud


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org


[incubator-dlab] 13/23: collapsing test 5

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 0386d13af42a13c69a58b9c50bbfafb411355b44
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 17:53:13 2019 +0300

    collapsing test 5
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1146088..0ea3329 100644
--- a/README.md
+++ b/README.md
@@ -206,7 +206,7 @@ These directories contain the log files for each template and for DLab back-end
 Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
 
-<details><summary>In Amazon cloud (*click to expand*)</summary>
+<details><summary>In Amazon cloud (_click to expand_)</summary>
 
 Prerequisites:
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org


[incubator-dlab] 16/23: collapsing test 8

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 19547ae96fac4901fd5f496cb2a6130f75e9173e
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 18:03:43 2019 +0300

    collapsing test 8
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index fc48af2..ce500f1 100644
--- a/README.md
+++ b/README.md
@@ -338,7 +338,7 @@ Prerequisites:
 - Microsoft Graph
 - Windows Azure Service Management API</details>
 
-<details><summary>In Google cloud (GCP) <i>(click to expand)</i></summary>
+<details><summary>In Google cloud (GCP) <i style="color: grey">(click to expand)</i></summary>
 
 Prerequisites:
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org


[incubator-dlab] 03/23: README.md updated1

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit ad975af173f52e3edb2151c95589d115af74de0f
Author: Mykola Bodnar1 <my...@epam.com>
AuthorDate: Wed Jun 5 13:27:42 2019 +0300

    README.md updated1
---
 README.md | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/README.md b/README.md
index ed30994..fa4d4d5 100644
--- a/README.md
+++ b/README.md
@@ -131,6 +131,7 @@ Creation of self-service node – is the first step for deploying DLab. SSN is a
 Elastic(Static) IP address is assigned to an SSN Node, so you are free to stop|start it and SSN node's IP address won’t change.
 
 <<<<<<< HEAD
+<<<<<<< HEAD
 ## Edge node
 
 Setting up Edge node is the first step that user is asked to do once logged into DLab. This node is used as proxy server and SSH gateway for the user. Through Edge node users can access Notebook via HTTP and SSH. Edge Node has a Squid HTTP web proxy pre-installed.
@@ -274,6 +275,9 @@ If you want to deploy DLab from inside of your AWS account, you can use the foll
 =======
 ### Structure of main DLab directory <a name="DLab_directory"></a>
 >>>>>>> 84ff8aad0... README.md updated
+=======
+### Structure of main DLab directory <a name="DLab_directory"></a>
+>>>>>>> 84ff8aad0... README.md updated
 
 DLab’s SSN node main directory structure is as follows:
 
@@ -418,8 +422,11 @@ Prerequisites:
 ```
 
 <<<<<<< HEAD
+<<<<<<< HEAD
 >>>>>>> eb92433f3... README.md edited
 =======
+=======
+>>>>>>> 84ff8aad0... README.md updated
 Preparation steps for deployment:
 
 - Create an EC2 instance with the following settings:
@@ -461,6 +468,9 @@ Preparation steps for deployment:
 
 ### Executing deployment script
 
+<<<<<<< HEAD
+>>>>>>> 84ff8aad0... README.md updated
+=======
 >>>>>>> 84ff8aad0... README.md updated
 To build SSN node, following steps should be executed:
 
@@ -536,6 +546,7 @@ After SSN node deployment following AWS resources will be created:
 
 <<<<<<< HEAD
 <<<<<<< HEAD
+<<<<<<< HEAD
 =======
 Prerequisites:
 
@@ -558,6 +569,8 @@ To build SSN node, following steps should be executed:
 5.  Execute following deploy_dlab.py script:
 =======
 >>>>>>> 84ff8aad0... README.md updated
+=======
+>>>>>>> 84ff8aad0... README.md updated
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab_test --azure_region westus2 --conf_os_family debian --conf_cloud_provider azure --azure_vpc_name vpc-test --azure_subnet_name subnet-test --azure_security_group_name sg-test1,sg-test2 --key_path /root/ --conf_key_name Test --azure_auth_path /dir/file.json  --action create
 ```
@@ -632,6 +645,7 @@ After SSN node deployment following Azure resources will be created:
 
 <<<<<<< HEAD
 <<<<<<< HEAD
+<<<<<<< HEAD
 =======
 Prerequisites:
 
@@ -646,6 +660,8 @@ To build SSN node, following steps should be executed:
 3.  Execute following script:
 =======
 >>>>>>> 84ff8aad0... README.md updated
+=======
+>>>>>>> 84ff8aad0... README.md updated
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --gcp_region xx-xxxxx --gcp_zone xxx-xxxxx-x --conf_os_family debian --conf_cloud_provider gcp --key_path /path/to/key/ --conf_key_name key_name --gcp_ssn_instance_size n1-standard-1 --gcp_project_id project_id --gcp_service_account_path /path/to/auth/file.json --action create
 ```
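
Note that the hunks above add Git merge-conflict markers (`<<<<<<< HEAD`, `=======`, `>>>>>>> 84ff8aad0...`) to README.md itself, which usually means an unresolved merge was committed. A generic way to catch such leftovers before committing (not part of this commit) is a quick grep:

```
# List unresolved conflict markers in the file touched by this commit...
git grep -nE '^(<<<<<<<|=======|>>>>>>>)' -- README.md

# ...or across the whole working tree; the '=======' separator is omitted
# here to avoid matching Markdown heading underlines.
git grep -nE '^(<<<<<<< |>>>>>>> )'
```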




[incubator-dlab] 07/23: collapsing test

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 6ea03eac2e188c19ede94af2547da88f1fa83bb2
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 17:10:18 2019 +0300

    collapsing test
---
 README.md | 205 +++++++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 124 insertions(+), 81 deletions(-)

diff --git a/README.md b/README.md
index 29e5814..e98a971 100644
--- a/README.md
+++ b/README.md
@@ -130,6 +130,33 @@ Creation of self-service node – is the first step for deploying DLab. SSN is a
 
 Elastic(Static) IP address is assigned to an SSN Node, so you are free to stop|start it and SSN node's IP address won’t change.
 
+## Edge node
+
+Setting up Edge node is the first step that user is asked to do once logged into DLab. This node is used as proxy server and SSH gateway for the user. Through Edge node users can access Notebook via HTTP and SSH. Edge Node has a Squid HTTP web proxy pre-installed.
+
+## Notebook node
+
+The next step is setting up a Notebook node (or a Notebook server). It is a server with pre-installed applications and libraries for data processing, data cleaning and transformations, numerical simulations, statistical modeling, machine learning, etc. Following analytical tools are currently supported in DLab and can be installed on a Notebook node:
+
+-   Jupyter
+-   RStudio
+-   Apache Zeppelin
+-   TensorFlow + Jupyter
+-   Deep Learning + Jupyter
+
+Apache Spark is also installed for each of the analytical tools above.
+
+**Note:** terms 'Apache Zeppelin' and 'Apache Spark' hereinafter may be referred to as 'Zeppelin' and 'Spark' respectively or may have original reference.
+
+## Data engine cluster
+
+After deploying Notebook node, user can create one of the cluster for it:
+-   Data engine - Spark standalone cluster
+-   Data engine service - cloud managed cluster platform (EMR for AWS or Dataproc for GCP)
+That simplifies running big data frameworks, such as Apache Hadoop and Apache Spark to process and analyze vast amounts of data. Adding cluster is not mandatory and is only needed in case additional computational resources are required for job execution.
+----------------------
+# DLab Deployment <a name="DLab_Deployment"></a>
+
 ### Structure of main DLab directory <a name="DLab_directory"></a>
 
 DLab’s SSN node main directory structure is as follows:
@@ -172,33 +199,6 @@ These directories contain the log files for each template and for DLab back-end
 -   selfservice.log – Self-Service log file;
 -   edge, notebook, dataengine, dataengine-service – contains logs of Python scripts.
 
-## Edge node
-
-Setting up Edge node is the first step that user is asked to do once logged into DLab. This node is used as proxy server and SSH gateway for the user. Through Edge node users can access Notebook via HTTP and SSH. Edge Node has a Squid HTTP web proxy pre-installed.
-
-## Notebook node
-
-The next step is setting up a Notebook node (or a Notebook server). It is a server with pre-installed applications and libraries for data processing, data cleaning and transformations, numerical simulations, statistical modeling, machine learning, etc. Following analytical tools are currently supported in DLab and can be installed on a Notebook node:
-
--   Jupyter
--   RStudio
--   Apache Zeppelin
--   TensorFlow + Jupyter
--   Deep Learning + Jupyter
-
-Apache Spark is also installed for each of the analytical tools above.
-
-**Note:** terms 'Apache Zeppelin' and 'Apache Spark' hereinafter may be referred to as 'Zeppelin' and 'Spark' respectively or may have original reference.
-
-## Data engine cluster
-
-After deploying Notebook node, user can create one of the cluster for it:
--   Data engine - Spark standalone cluster
--   Data engine service - cloud managed cluster platform (EMR for AWS or Dataproc for GCP)
-That simplifies running big data frameworks, such as Apache Hadoop and Apache Spark to process and analyze vast amounts of data. Adding cluster is not mandatory and is only needed in case additional computational resources are required for job execution.
-----------------------
-# DLab Deployment <a name="DLab_Deployment"></a>
-
 ## Self-Service Node <a name="Self_Service_Node"></a>
 
 ### Preparing environment for DLab deployment <a name="Env_for_DLab"></a>
@@ -206,69 +206,112 @@ That simplifies running big data frameworks, such as Apache Hadoop and Apache Sp
 Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
 
-#### In Amazon cloud
+<details><summary>#### In Amazon cloud</summary>
 
 Prerequisites:
 
  - SSH key for EC2 instances. This key could be created through Amazon Console.
  - IAM user
  - AWS access key ID and secret access key
+ - VPC ID
+ - Subnet ID
  - The following permissions should be assigned for IAM user:
  <a name="AWS_SSN_policy"></a>
 ```
 {
-	"Version": "2012-10-17",
-	"Statement": [
-		{
-			"Action": [
-				"iam:ListRoles",
-				"iam:CreateRole",
-				"iam:CreateInstanceProfile",
-				"iam:PutRolePolicy",
-				"iam:AddRoleToInstanceProfile",
-				"iam:PassRole",
-				"iam:GetInstanceProfile",
-				"iam:ListInstanceProfilesForRole",
-				"iam:RemoveRoleFromInstanceProfile",
-				"iam:DeleteInstanceProfile"
-			],
-			"Effect": "Allow",
-			"Resource": "*"
-		},
-		{
-			"Action": [
-				"ec2:DescribeImages",
-				"ec2:CreateTags",
-				"ec2:DescribeRouteTables",
-				"ec2:CreateRouteTable",
-				"ec2:AssociateRouteTable",
-				"ec2:DescribeVpcEndpoints",
-				"ec2:CreateVpcEndpoint",
-				"ec2:ModifyVpcEndpoint",
-				"ec2:DescribeInstances",
-				"ec2:RunInstances",
-				"ec2:DescribeAddresses",
-				"ec2:AllocateAddress",
-				"ec2:DescribeInstances",
-				"ec2:AssociateAddress",
-				"ec2:DisassociateAddress",
-				"ec2:ReleaseAddress",
-				"ec2:TerminateInstances"
-			],
-			"Effect": "Allow",
-			"Resource": "*"
-		},
-		{
-			"Action": [
-				"s3:ListAllMyBuckets",
-				"s3:CreateBucket",
-				"s3:PutBucketTagging",
-				"s3:GetBucketTagging"
-			],
-			"Effect": "Allow",
-			"Resource": "*"
-		}
-	]
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Action": [
+                "iam:CreatePolicy",
+                "iam:AttachRolePolicy",
+                "iam:DetachRolePolicy",
+                "iam:DeletePolicy",
+                "iam:DeleteRolePolicy",
+                "iam:GetRolePolicy",
+                "iam:GetPolicy",
+                "iam:GetUser",
+                "iam:ListUsers",
+                "iam:ListAccessKeys",
+                "iam:ListUserPolicies",
+                "iam:ListAttachedRolePolicies",
+                "iam:ListPolicies",
+                "iam:ListRolePolicies",
+                "iam:ListRoles",
+                "iam:CreateRole",
+                "iam:CreateInstanceProfile",
+                "iam:PutRolePolicy",
+                "iam:AddRoleToInstanceProfile",
+                "iam:PassRole",
+                "iam:GetInstanceProfile",
+                "iam:ListInstanceProfilesForRole",
+                "iam:RemoveRoleFromInstanceProfile",
+                "iam:DeleteInstanceProfile",
+                "iam:ListInstanceProfiles",
+                "iam:DeleteRole",
+                "iam:GetRole"
+            ],
+            "Effect": "Allow",
+            "Resource": "*"
+        },
+        {
+            "Action": [
+                "ec2:AuthorizeSecurityGroupEgress",
+                "ec2:AuthorizeSecurityGroupIngress",
+                "ec2:DeleteRouteTable",
+                "ec2:DeleteSubnet",
+                "ec2:DeleteTags",
+                "ec2:DescribeSubnets",
+                "ec2:DescribeVpcs",
+                "ec2:DescribeInstanceStatus",
+                "ec2:ModifyInstanceAttribute",
+                "ec2:RevokeSecurityGroupIngress",
+                "ec2:DescribeImages",
+                "ec2:CreateTags",
+                "ec2:DescribeRouteTables",
+                "ec2:CreateRouteTable",
+                "ec2:AssociateRouteTable",
+                "ec2:DescribeVpcEndpoints",
+                "ec2:CreateVpcEndpoint",
+                "ec2:ModifyVpcEndpoint",
+                "ec2:DescribeInstances",
+                "ec2:RunInstances",
+                "ec2:DescribeAddresses",
+                "ec2:AllocateAddress",
+                "ec2:AssociateAddress",
+                "ec2:DisassociateAddress",
+                "ec2:ReleaseAddress",
+                "ec2:TerminateInstances",
+                "ec2:AuthorizeSecurityGroupIngress",
+                "ec2:AuthorizeSecurityGroupEgress",
+                "ec2:DescribeSecurityGroups",
+                "ec2:CreateSecurityGroup",
+                "ec2:DeleteSecurityGroup",
+                "ec2:RevokeSecurityGroupEgress"
+                
+            ],
+            "Effect": "Allow",
+            "Resource": "*"
+        },
+        {
+            "Action": [
+                "s3:GetBucketLocation",
+                "s3:PutBucketPolicy",
+                "s3:GetBucketPolicy",
+                "s3:DeleteBucket",
+                "s3:DeleteObject",
+                "s3:GetObject",
+                "s3:ListBucket",
+                "s3:PutEncryptionConfiguration"
+                "s3:ListAllMyBuckets",
+                "s3:CreateBucket",
+                "s3:PutBucketTagging",
+                "s3:GetBucketTagging"
+            ],
+            "Effect": "Allow",
+            "Resource": "*"
+        }
+    ]
 }
 ```
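
Two remarks on the policy block above. First, as committed it appears to be missing a comma after `"s3:PutEncryptionConfiguration"`, so the JSON will not parse until that line is fixed. Second, once the document is valid it can be registered and attached to the deployment IAM user with standard AWS CLI calls; the file, policy and user names below are illustrative and do not come from the commit:

```
# Sanity-check the JSON first (fails on the missing comma noted above).
python -m json.tool dlab-ssn-policy.json > /dev/null

# Register the policy and attach it to the IAM user that runs deploy_dlab.py.
aws iam create-policy \
    --policy-name dlab-ssn-policy \
    --policy-document file://dlab-ssn-policy.json
aws iam attach-user-policy \
    --user-name <dlab-deployment-user> \
    --policy-arn arn:aws:iam::<account-id>:policy/dlab-ssn-policy
```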
 
@@ -280,7 +323,7 @@ Preparation steps for deployment:
     - AMI - Ubuntu 16.04
     - IAM role with [policy](#AWS_SSN_policy) should be assigned to the instance
 - Put SSH key file created through Amazon Console on the instance with the same name
-- Install Git and clone DLab repository
+- Install Git and clone DLab repository</details>
 
 #### In Azure cloud
 




[incubator-dlab] 20/23: contents updated

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 9b55dfe4a4e704695cee540edde68126092fd358
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Wed Jul 3 15:45:40 2019 +0300

    contents updated
---
 README.md | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 6df8314..eb148d1 100644
--- a/README.md
+++ b/README.md
@@ -15,14 +15,12 @@ CONTENTS
 
 [DLab Deployment](#DLab_Deployment)
 
-&nbsp; &nbsp; &nbsp; &nbsp; [Prerequisites](#Prerequisites)
-
-&nbsp; &nbsp; &nbsp; &nbsp; [Preparing environment for DLab deployment](#Env_for_DLab)
-
 &nbsp; &nbsp; &nbsp; &nbsp; [Structure of main DLab directory](#DLab_directory)
 
 &nbsp; &nbsp; &nbsp; &nbsp; [Structure of log directory](#log_directory)
 
+&nbsp; &nbsp; &nbsp; &nbsp; [Preparing environment for DLab deployment](#Env_for_DLab)
+
 &nbsp; &nbsp; &nbsp; &nbsp; [Self-Service Node](#Self_Service_Node)
 
 &nbsp; &nbsp; &nbsp; &nbsp; [Edge Node](#Edge_Node)




[incubator-dlab] 22/23: contents fixed

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 29aaf4e183a1323a9a15ecf167338d6486baf4c8
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Wed Jul 3 16:02:57 2019 +0300

    contents fixed
---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 5582450..0895303 100644
--- a/README.md
+++ b/README.md
@@ -27,9 +27,9 @@ CONTENTS
 
 &nbsp; &nbsp; &nbsp; &nbsp; [Notebook node](#Notebook_node)
 
-&nbsp; &nbsp; &nbsp; &nbsp; [Dataengine-service cluster](#Dataengine-service cluster)
+&nbsp; &nbsp; &nbsp; &nbsp; [Dataengine-service cluster](#Dataengine-service_cluster)
 
-&nbsp; &nbsp; &nbsp; &nbsp; [Dataengine cluster](#Dataengine cluster)
+&nbsp; &nbsp; &nbsp; &nbsp; [Dataengine cluster](#Dataengine_cluster)
 
 &nbsp; &nbsp; &nbsp; &nbsp; [Configuration files](#Configuration_files)
 
@@ -1226,7 +1226,7 @@ List of parameters for Notebook node to **manage git credentials**:
 | action                        | git\_creds                                                                        |
 </details>
 
-## Dataengine-service cluster <a name="Dataengine-service cluster"></a>
+## Dataengine-service cluster <a name="Dataengine-service_cluster"></a>
 
 Dataengine-service is a cluster provided by the cloud as a service (EMR on AWS). It can be created if more computational resources are needed for executing analytical algorithms and models triggered from analytical tools. Job execution will be scaled to cluster mode, increasing performance and decreasing execution time.
 
@@ -1409,7 +1409,7 @@ List of parameters for Dataengine-service node to **install** additional librari
 | action                        | lib_install                                                                       |
 </details>
 
-## Dataengine cluster <a name="Dataengine cluster"></a>
+## Dataengine cluster <a name="Dataengine_cluster"></a>
 
 Dataengine is a cluster based on the Standalone Spark framework. It can be created if more computational resources are needed for executing analytical algorithms, but without additional expenses for a cloud-provided service.
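
The anchor renames in this commit (spaces replaced with underscores) are what make the table-of-contents links resolve, since the README uses explicit `<a name=...>` anchors as the hunks show. A generic cross-check that every `[...](#anchor)` link in README.md has a matching `<a name="anchor">` target, offered as an illustrative sketch rather than part of the commit:

```
# Anchors referenced from links...
grep -oE '\]\(#[^)]+\)' README.md | sed 's/](#\(.*\))/\1/' | sort -u > used_anchors.txt
# ...versus anchors actually defined.
grep -oE '<a name="[^"]+"' README.md | sed 's/<a name="\(.*\)"/\1/' | sort -u > defined_anchors.txt
# Links whose target is not defined anywhere:
comm -23 used_anchors.txt defined_anchors.txt
```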
 




[incubator-dlab] 12/23: collapsing test 4

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit d7d9ef417f5470231db9bc56897e93b8949d0534
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Tue Jul 2 17:51:36 2019 +0300

    collapsing test 4
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 922ecb5..1146088 100644
--- a/README.md
+++ b/README.md
@@ -206,7 +206,7 @@ These directories contain the log files for each template and for DLab back-end
 Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
 
-<details><summary>In Amazon cloud</summary>
+<details><summary>In Amazon cloud (*click to expand*)</summary>
 
 Prerequisites:
 




[incubator-dlab] 18/23: collapsing test final

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit d355e6471fdab4a74b3e438c7a1109adf7d9f4ff
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Wed Jul 3 11:19:47 2019 +0300

    collapsing test final
---
 README.md | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 1cf8a21..cec502f 100644
--- a/README.md
+++ b/README.md
@@ -206,7 +206,7 @@ These directories contain the log files for each template and for DLab back-end
 Deployment of DLab starts from creating Self-Service(SSN) node. DLab can be deployed in AWS, Azure and Google cloud.
 For each cloud provider, prerequisites are different.
 
-<details><summary color="#FF7F50">In Amazon cloud</summary>
+<details><summary>In Amazon cloud <i>(click to expand)</i></summary>
 
 Prerequisites:
 
@@ -325,7 +325,7 @@ Preparation steps for deployment:
 - Put SSH key file created through Amazon Console on the instance with the same name
 - Install Git and clone DLab repository</details>
 
-<details><summary color="#87CEEB">In Azure cloud</i></summary>
+<details><summary>In Azure cloud</i></summary>
 
 Prerequisites:
 
@@ -338,7 +338,7 @@ Prerequisites:
 - Microsoft Graph
 - Windows Azure Service Management API</details>
 
-<details><summary color="#4169E1">In Google cloud (GCP)</i></summary>
+<details><summary>In Google cloud (GCP)</i></summary>
 
 Prerequisites:
 
@@ -543,6 +543,7 @@ Terminating SSN node will also remove all nodes and components related to it. Ba
 Example of command for terminating DLab environment:
 
 <details><summary>In Amazon</summary>
+
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --conf_service_base_name dlab-test --aws_access_key XXXXXXX --aws_secret_access_key XXXXXXXX --aws_region xx-xxxxx-x --key_path /path/to/key/ --conf_key_name key_name --conf_os_family debian --conf_cloud_provider aws --action terminate
 ```
@@ -578,9 +579,10 @@ List of parameters for SSN node termination:
 | conf\_key\_name            | Name of the uploaded SSH key file (without “.pem” extension)                       |
 | azure\_auth\_path          | Full path to auth json file                                                        |
 | action                     | terminate                                                                          |
-
 </details>
+
 <details><summary>In Google cloud</summary>
+
 ```
 /usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py --gcp_project_id project_id --conf_service_base_name dlab-test --gcp_region xx-xxxxx --gcp_zone xx-xxxxx-x --key_path /path/to/key/ --conf_key_name key_name --conf_os_family debian --conf_cloud_provider gcp --gcp_service_account_path /path/to/auth/file.json --action terminate
 ```
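
The hunks above show terminate commands for AWS and Google cloud together with the Azure parameter table. For completeness, an Azure terminate call would look roughly like the sketch below, assembled from the Azure create command quoted earlier in this thread and the parameter table above; the exact set of required flags may differ, so treat every value as illustrative:

```
/usr/bin/python infrastructure-provisioning/scripts/deploy_dlab.py \
    --conf_service_base_name dlab-test \
    --azure_region westus2 \
    --conf_os_family debian \
    --conf_cloud_provider azure \
    --key_path /path/to/key/ \
    --conf_key_name key_name \
    --azure_auth_path /dir/file.json \
    --action terminate
```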




[incubator-dlab] 21/23: contents fixed

Posted by bh...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

bhliva pushed a commit to branch v2.1.1
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git

commit 4de43d9958ee49660110a89d5a6dbd5001cad9a9
Author: Mykola_Bodnar1 <bo...@gmail.com>
AuthorDate: Wed Jul 3 16:00:24 2019 +0300

    contents fixed
---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index eb148d1..5582450 100644
--- a/README.md
+++ b/README.md
@@ -27,7 +27,9 @@ CONTENTS
 
 &nbsp; &nbsp; &nbsp; &nbsp; [Notebook node](#Notebook_node)
 
-&nbsp; &nbsp; &nbsp; &nbsp; [EMR cluster](#EMR_cluster)
+&nbsp; &nbsp; &nbsp; &nbsp; [Dataengine-service cluster](#Dataengine-service cluster)
+
+&nbsp; &nbsp; &nbsp; &nbsp; [Dataengine cluster](#Dataengine cluster)
 
 &nbsp; &nbsp; &nbsp; &nbsp; [Configuration files](#Configuration_files)
 

