Posted to commits@dlab.apache.org by of...@apache.org on 2020/04/24 11:28:33 UTC

[incubator-dlab] branch develop updated: [DLAB-1698]: Updated user guide according to release 2.3 (#692)

This is an automated email from the ASF dual-hosted git repository.

ofuks pushed a commit to branch develop
in repository https://gitbox.apache.org/repos/asf/incubator-dlab.git


The following commit(s) were added to refs/heads/develop by this push:
     new 915a149  [DLAB-1698]: Updated user guide according to release 2.3 (#692)
915a149 is described below

commit 915a149412017087ffcec1b1a19b0f3c483e62b9
Author: viravit <vi...@epam.com>
AuthorDate: Fri Apr 24 14:28:27 2020 +0300

    [DLAB-1698]: Updated user guide according to release 2.3 (#692)
    
    [DLAB-1698]: Updated user guide according to release 2.3
---
 USER_GUIDE.md                                      | 476 ++++++++++-----------
 doc/billing_filter.png                             | Bin 18705 -> 41447 bytes
 doc/billing_page.png                               | Bin 18577 -> 264721 bytes
 doc/bin_icon.png                                   | Bin 0 -> 4379 bytes
 doc/computational_scheduler.png                    | Bin 33900 -> 35577 bytes
 doc/computational_scheduler_create.png             | Bin 3893 -> 3277 bytes
 doc/connect_endpoint.png                           | Bin 0 -> 202030 bytes
 doc/create_notebook_from_ami.png                   | Bin 27066 -> 35594 bytes
 doc/dataproc_create.png                            | Bin 0 -> 129369 bytes
 doc/delete_btn.png                                 | Bin 0 -> 4155 bytes
 doc/delete_group.png                               | Bin 34704 -> 45914 bytes
 doc/emr_creating.png                               | Bin 37126 -> 43196 bytes
 doc/emr_terminate_confirm.png                      | Bin 1760535 -> 14257 bytes
 doc/endpoint_list.png                              | Bin 0 -> 181738 bytes
 doc/environment_management.png                     | Bin 66404 -> 90301 bytes
 doc/git_creds_window.png                           | Bin 7190623 -> 25654 bytes
 doc/git_creds_window2.png                          | Bin 5946035 -> 26511 bytes
 doc/main_page.png                                  | Bin 4746590 -> 35533 bytes
 doc/main_page2.png                                 | Bin 8157879 -> 49611 bytes
 doc/main_page3.png                                 | Bin 8157879 -> 48735 bytes
 doc/main_page_filter.png                           | Bin 62679 -> 79991 bytes
 doc/manage_env_confirm.png                         | Bin 10464 -> 14049 bytes
 doc/manage_environment.png                         | Bin 21334 -> 18263 bytes
 doc/manage_role.png                                | Bin 108456 -> 28068 bytes
 doc/managemanage_resource_actions.png              | Bin 4997 -> 4976 bytes
 doc/notebook_create.png                            | Bin 41323 -> 33033 bytes
 doc/notebook_info.png                              | Bin 157517 -> 42371 bytes
 doc/notebook_libs_status.png                       | Bin 50720 -> 59233 bytes
 doc/notebook_scheduler.png                         | Bin 36928 -> 39368 bytes
 doc/notebook_terminated.png                        | Bin 38113 -> 56038 bytes
 doc/notebook_terminating.png                       | Bin 39506 -> 56292 bytes
 doc/pen_icon.png                                   | Bin 0 -> 4171 bytes
 doc/project_menu.png                               | Bin 0 -> 86667 bytes
 doc/project_view.png                               | Bin 0 -> 234276 bytes
 doc/roles.png                                      | Bin 0 -> 198223 bytes
 doc/scheduler_by_inactivity.png                    | Bin 0 -> 22076 bytes
 doc/spark_stop_confirm.png                         | Bin 10920 -> 12767 bytes
 doc/upload_or_generate_user_key.png                | Bin 17078 -> 37302 bytes
 .../resources-grid/resources-grid.component.ts     |   1 +
 39 files changed, 234 insertions(+), 243 deletions(-)

diff --git a/USER_GUIDE.md b/USER_GUIDE.md
index d821a93..78876fc 100644
--- a/USER_GUIDE.md
+++ b/USER_GUIDE.md
@@ -10,7 +10,7 @@ DLab is an essential toolset for analytics. It is a self-service Web Console, us
 
 [Login](#login)
 
-[Setup a Gateway/Edge node](#setup_edge_node)
+[Create project](#setup_edge_node)
 
 [Setting up analytical environment and managing computational power](#setup_environmen)
 
@@ -26,35 +26,34 @@ DLab is an essential toolset for analytics. It is a self-service Web Console, us
 
 &nbsp; &nbsp; &nbsp; &nbsp; [Deploy Computational resource](#computational_deploy)
 
-&nbsp; &nbsp; &nbsp; &nbsp; [Stop Apache Spark cluster](#spark_stop)
+&nbsp; &nbsp; &nbsp; &nbsp; [Stop Standalone Apache Spark cluster](#spark_stop)
 
 &nbsp; &nbsp; &nbsp; &nbsp; [Terminate Computational resource](#computational_terminate)
 
+&nbsp; &nbsp; &nbsp; &nbsp; [Scheduler](#scheduler)
+
 &nbsp; &nbsp; &nbsp; &nbsp; [Collaboration space](#collaboration_space)
 
 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Manage Git credentials](#git_creds)
 
 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Git UI tool (ungit)](#git_ui)
 
-[DLab Health Status Page](#health_page)
+[Administration](#administration)
 
-&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Backup](#backup)
+&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Manage roles](#manage_roles)
 
-&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Manage environment](#manage_environment)
+&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Project management](#project_management)
 
-&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Manage roles](#manage_roles)
+&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Environment management](#environment_management)
 
-&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [SSN monitor](#ssn_monitor)
+&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Multiple Cloud endpoints](#multiple_cloud_endpoints)
 
-[DLab billing report](#billing_page)
+&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; [Manage DLab quotas](#manage_dlab_quotas)
 
-[DLab Environment Management Page](#environment_management)
+[DLab billing report](#billing_page)
 
 [Web UI filters](#filter)
 
-[Scheduler](#scheduler)
-
-[Key reupload](#key_reupload)
 
 ---------
 # Login <a name="login"></a>
@@ -65,6 +64,9 @@ DLab Web Application authenticates users against:
 
 -   OpenLdap;
 -   Cloud Identity and Access Management service user validation;
+-   KeyCloak integration for seamless SSO experience *;
+
+    * NOTE: in case DLab has been installed and configured to use SSO, click on "Login with SSO" and use your corporate credentials
 
 | Login error messages               | Reason                                                                           |
 |------------------------------------|----------------------------------------------------------------------------------|
@@ -76,7 +78,7 @@ DLab Web Application authenticates users against:
 
 To stop working with DLab - click on Log Out link at the top right corner of DLab.
 
-After login user will see warning in case of exceeding quota or close to this limit.
+After login, the user sees a warning in case the quota is exceeded or is close to its limit.
 
 <p align="center" class="facebox-popup"> 
     <img src="doc/exceeded quota.png" alt="Exceeded quota" width="400">
@@ -87,38 +89,35 @@ After login user will see warning in case of exceeding quota or close to this li
 </p>
 
 ----------------------------------
-# Setup a Gateway/Edge node <a name="setup_edge_node"></a>
+# Create project <a name="setup_edge_node"></a>
 
-When you log into DLab Web Application, the first thing you will have to setup is a Gateway Node, or an “Edge” Node.
+When you log into the DLab Web interface, the first thing you need to do is to create a new project.
 
-To do this click on “Upload” button on “Create initial infrastructure”, select your personal public key and hit “Create” button or click on "Generate" button on “Create initial infrastructure” and save your private key.
+To do this click on the “Upload” button on the “Projects” page, select your personal public key (or click on the "Generate" button), specify the endpoint and group, enable or disable the 'Use shared image' option and hit the “Create” button. Do not forget to save your private key.
 
 <p align="center" class="facebox-popup"> 
-    <img src="doc/upload_or_generate_user_key.png" alt="Upload or generate user key" width="400">
+    <img src="doc/upload_or_generate_user_key.png" alt="Upload or generate user key" width="100%">
 </p>
 
-Please note that you need to have a key pair combination (public and private key) to work with DLab. To figure out how to create public and private key, please click on “Where can I get public key?” on “Create initial infrastructure” dialog. DLab build-in wiki page will guide Windows, MasOS and Linux on how to generate SSH key pairs quickly.
+Please note that you need to have a key pair combination (public and private key) to work with DLab. To figure out how to create a public and private key, please click on “Where can I get public key?” on the “Projects” page. The DLab built-in wiki page guides Windows, MacOS and Linux users on how to generate SSH key pairs quickly.
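+
+For reference, a typical way to generate such an SSH key pair on Linux or MacOS looks like the sketch below (the key file name and e-mail comment are only examples; the built-in wiki page also covers Windows):
+
+```bash
+# Generate a 4096-bit RSA key pair; the public part (.pub) is what you upload to DLab,
+# while the private part stays on your machine and must be kept secret.
+ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f ~/.ssh/dlab_key
+
+# Print the public key so it can be copied into the key upload dialog.
+cat ~/.ssh/dlab_key.pub
+```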
 
-After you hit "Create" or "Generate" button, creation of Edge node will start. This process is a one-time operation for each Data Scientist and it might take up-to 10 minutes for DLab to setup initial infrastructure for you. During this process, you will see following popup in your browser:
+Creation of a Project starts after hitting the "Create" button. This process is a one-time operation for each Data Scientist and it might take up to 10 minutes for DLab to set up the initial infrastructure for you. During this process the project is in the "Creating" status.
 
-<p align="center"> 
-    <img src="doc/loading_key.png" alt="Loading user key" width="350">
-</p>
+'Use shared image' enabled means that an image of a particular notebook type is created when the first notebook of that type is created in DLab. This image is available for all DLab users and is used for provisioning of further notebooks of the same type within DLab. 'Use shared image' disabled means that an image of a particular notebook type is also created when the first notebook of that type is created, but this AMI is available only for users within the same project.
 
-As soon as an Edge node is created, Data Scientist will see a blank “List of Resources” page. The message “To start working, please create new environment” will be displayed:
+As soon as the Project is created, a Data Scientist can create a notebook server on the “List of Resources” page. The message “To start working, please create new environment” appears on the “List of Resources” page:
 
 ![Main page](doc/main_page.png)
 
 ---------------------------------------------------------------------------------------
 # Setting up analytical environment and managing computational power <a name="setup_environmen"></a>
 
-----------------------
+
 ## Create notebook server <a name="notebook_create"></a>
 
 To create new analytical environment from “List of Resources” page click on "Create new" button.
 
-“Create analytical tool” popup will show-up. Data Scientist can choose a preferable analytical tool to be setup. Adding new analytical tools is supported by architecture, so you can expect new templates to show up in upcoming releases.
-
+The "Create analytical tool" popup shows up. Data Scientist can choose the preferred project, endpoint and analytical tool. Adding new analytical toolset is supported by architecture, so you can expect new templates to show up in upcoming releases.
 Currently by means of DLab, Data Scientists can select between any of the following templates:
 
 -   Jupyter
@@ -127,6 +126,8 @@ Currently by means of DLab, Data Scientists can select between any of the follow
 -   RStudio with TensorFlow
 -   Jupyter with TensorFlow
 -   Deep Learning (Jupyter + MXNet, Caffe, Caffe2, TensorFlow, CNTK, Theano, Torch and Keras)
+-   JupyterLab
+-   Superset (implemented on GCP)
 
 <p align="center"> 
     <img src="doc/notebook_create.png" alt="Create notebook" width="574">
@@ -134,9 +135,9 @@ Currently by means of DLab, Data Scientists can select between any of the follow
 
 After specifying desired template, you should fill in the “Name” and “Instance shape”.
 
-Name field – is just for visual differentiation between analytical tools on “List of resources” dashboard.
+Keep in mind that the "Name" field is just for visual differentiation between analytical tools on the “List of resources” dashboard.
 
-Instance shape dropdown, contains configurable list of shapes, which should be chosen depending on the type of analytical work to be performed. Following groups of instance shapes will be showing up with default setup configuration:
+The Instance shape dropdown contains a configurable list of shapes, which should be chosen depending on the type of analytical work to be performed. The following groups of instance shapes show up with the default setup configuration:
 
 <p align="center"> 
     <img src="doc/select_shape.png" alt="Select shape" width="250">
@@ -144,25 +145,29 @@ Instance shape dropdown, contains configurable list of shapes, which should be c
 
 These groups have T-Shirt based shapes (configurable), that can help Data Scientist to either save money\* and leverage not very powerful shapes (for working with relatively small datasets), or that could boost the performance of analytics by selecting more powerful instance shape.
 
-\* Please refer to official documentation from Amazon that will help you understand what [instance shapes](https://aws.amazon.com/ec2/instance-types/) would be most preferable in your particular DLAB setup. Also, you can use [AWS calculator](https://calculator.s3.amazonaws.com/index.html) to roughly estimate the cost of your environment.
+\* Please refer to official documentation from Amazon that helps you to understand what [instance shapes](https://aws.amazon.com/ec2/instance-types/) are the most preferable in your particular DLAB setup. Also, you can use [AWS calculator](https://calculator.s3.amazonaws.com/index.html) to roughly estimate the cost of your environment.
+
+\* Please refer to official documentation from GCP that helps you to understand what [instance shapes](https://cloud.google.com/compute/docs/machine-types) are the most preferable in your particular DLAB setup. Also, you can use [GCP calculator](https://cloud.google.com/products/calculator) to roughly estimate the cost of your environment.
+
+\* Please refer to official documentation from Microsoft Azure that helps you to understand what [virtual machine shapes](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/series/) are the most preferable in your particular DLAB setup. Also, you can use [Microsoft Azure calculator](https://azure.microsoft.com/en-us/pricing/calculator/?&ef_id=EAIaIQobChMItPmK5uj-6AIVj-iaCh0BFgVYEAAYASAAEgJ4KfD_BwE:G:s&OCID=AID2000606_SEM_UOMYUjFz&MarinID=UOMYUjFz_364338000380_microsoft%20 [...]
 
-You can override the default configurations for local spark. The configuration object is referenced as a JSON file. To tune spark configuration check off "Spark configurations" check box and insert JSON format in text box.
+You can override the default configurations of the local Spark. The configuration object is referenced as a JSON file. To tune the Spark configuration, check off the "Spark configurations" check box and insert the configuration in JSON format in the text box.
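+
+For illustration only, a configuration pasted into that text box could look like the snippet below. It follows the EMR-style classification layout, and the property names/values are just standard Spark options used as examples; the exact schema supported by your DLab setup may differ:
+
+```json
+[
+  {
+    "Classification": "spark-defaults",
+    "Properties": {
+      "spark.driver.memory": "4g",
+      "spark.executor.memory": "4g"
+    }
+  }
+]
+```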
 
-After you Select the template, fill in the Name and choose needed instance shape - you need to click on "Create" button for your instance to start creating. Corresponding record will show up in your dashboard:
+After you select the template, fill in the Name and specify the desired instance shape, click on the "Create" button for your analytical toolset to be created. A corresponding record shows up in your dashboard:
 
 ![Dashboard](doc/main_page2.png)
 
-As soon as notebook server is created, its status will change to Running:
+As soon as the notebook server is created, its status changes to Running:
 
 ![Running notebook](doc/main_page3.png)
 
-When you click on the name of your Analytical tool in the dashboard – analytical tool popup will show up:
+When you click on the name of your Analytical tool in the dashboard – the analytical tool popup shows up:
 
 <p align="center"> 
     <img src="doc/notebook_info.png" alt="Notebook info" width="574">
 </p>
 
-In the header you will see version of analytical tool, its status and shape.
+In the header you see the version of the analytical tool, its status and shape.
 
 In the body of the dialog:
 
@@ -170,60 +175,54 @@ In the body of the dialog:
 -   Analytical tool URL
 -   Git UI tool (ungit)
 -   Shared bucket for all users
--   Bucket that has been provisioned for your needs
+-   Project bucket for project members
 
-To access analytical tool Web UI you proceed with one of the options:
-
--   use direct URL's to access notebooks (your access will be established via reverse proxy, so you don't need to have Edge node tunnel up and running)
--   SOCKS proxy based URL's to access notebooks (via tunnel to Edge node)
-
-If you use direct urls you don't need to open tunnel for Edge node and set SOCKS proxy.
-If you use indirect urls you need to configure SOCKS proxy and open tunnel for Edge node. Please follow the steps described on “Read instruction how to create the tunnel” page to configure SOCKS proxy for Windows/MAC/Linux machines. “Read instruction how to create the tunnel” is available on DLab notebook popup.
+To access the analytical tool Web UI you use direct URLs (your access is established via a reverse proxy, so you don't need to have an Edge node tunnel up and running).
 
 ### Manage libraries <a name="manage_libraries"></a>
 
-On every analytical tool instance you can install additional libraries by clicking on gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the Actions column for a needed Notebook and hit Manage libraries:
+On every analytical tool instance you can install additional libraries by clicking on the gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the "Actions" column for the needed Notebook and hitting "Manage libraries":
 
 <p align="center"> 
     <img src="doc/notebook_menu_manage_libraries.png" alt="Notebook manage_libraries" width="150">
 </p>
 
-After clicking you will see the window with 3 fields:
--   Field for selecting an active resource to install libraries on
+After clicking you see a window with 3 fields:
+-   Field for selecting an active resource to install libraries on
 -   Field for selecting group of packages (apt/yum, Python 2, Python 3, R, Java, Others)
 -   Field for search available packages with autocomplete function except for Java. java library you should enter using the next format: "groupID:artifactID:versionID"
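+    For example, a Java library in that format could be entered as `org.apache.commons:commons-lang3:3.9` (an arbitrary illustration: groupID `org.apache.commons`, artifactID `commons-lang3`, versionID `3.9`).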
 
 ![Install libraries dialog](doc/install_libs_form.png)
 
-You need to wait for a while after resource choosing till list of all available libraries will be received.
+After choosing the resource, you need to wait for a while until the list of all available libraries is received.
 
 ![Libraries list loading](doc/notebook_list_libs.png)
 
-**Note:** apt or yum packages depends on your DLab OS family.
+**Note:** The choice of apt or yum packages depends on your DLab OS family.
 
 **Note:** In group Others you can find other Python (2/3) packages, which haven't classifiers of version.
 
 ![Resource select_lib](doc/notebook_select_lib.png)
 
-After selecting library, you can see it on the right and could delete in from this list before installing.
+After selecting a library, you can see it in the middle of the window and can delete it from this list before installation.
 
 ![Resource selected_lib](doc/notebook_selected_libs.png)
 
-After clicking on "Install" button you will see process of installation with appropriate status.
+After clicking on "Install" button you see process of installation with appropriate status.
 
 ![Resources libs_status](doc/notebook_libs_status.png)
 
-**Note:** If package can't be installed you will see "Failed" in status column and button to retry installation.
+**Note:** If a package can't be installed you see "Failed" in the status column and a button to retry the installation.
 
 ### Create image <a name="create_image"></a>
 
-Out of each analytical tool instance you can create an AMI image (notebook should be in Running status), including all libraries, which have been installed on it. You can use that AMI to speed-up provisioining of further analytical tool, if you would like to re-use existing configuration. To create an AMI click on a gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the Actions menu for a needed Notebook and hit "Create AMI":
+Out of each analytical tool instance you can create an AMI image (the notebook should be in the Running status), including all libraries which have been installed on it. You can use that AMI to speed up provisioning of further analytical tools, if you want to re-use an existing configuration. To create an AMI click on a gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the "Actions" menu for the needed Notebook and hit "Create AMI":
 
 <p align="center"> 
     <img src="doc/notebook_menu_create_ami.png" alt="Notebook create_ami" width="150">
 </p>
 
-On Create AMI popup you will be asked to fill in:
+On "Create AMI" popup you should fill:
 -   text box for an AMI name (mandatory)
 -   text box for an AMI description (optional)
 
@@ -231,11 +230,11 @@ On Create AMI popup you will be asked to fill in:
     <img src="doc/create_ami.png" alt="Create AMI" width="480">
 </p>
 
-After clicking on "Assign" button the Notebook status will change to Creating AMI. Once an image is created the Notebook status changes back to Running.
+After clicking on "Create" button the Notebook status changes to "Creating image". Once an image is created the Notebook status changes back to "Running".
 
-To create new analytical environment from custom image click "Create new" button on “List of Resources” page. 
+To create a new analytical environment from a custom image click on the "Create new" button on the “List of Resources” page. 
 
-“Create analytical tool” popup will show-up. Choose a template of a Notebook for which the custom image is created:
+The “Create analytical tool” popup shows up. Choose the project, endpoint and template of a Notebook for which the custom image has been created:
 
 <p align="center"> 
     <img src="doc/create_notebook_from_ami.png" alt="Create notebook from AMI" width="560">
@@ -243,56 +242,59 @@ To create new analytical environment from custom image click "Create new" button
 
 Before clicking "Create" button you should choose the image from "Select AMI" and fill in the "Name" and "Instance shape".
 
+**NOTE:** This functionality is implemented for AWS and Azure.
+
 --------------------------
 ## Stop Notebook server <a name="notebook_stop"></a>
 
-Once you have stopped working with an analytical tool and you would like to release cloud resources for the sake of the costs, you might want to Stop the notebook. You will be able to Start the notebook again after a while and proceed with your analytics.
+Once you have stopped working with an analytical tool and you need to release Cloud resources for the sake of the costs, you might want to stop the notebook. You are able to start the notebook later and proceed with your analytical work.
 
-To Stop the Notebook click on a gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the Actions column for a needed Notebook and hit Stop:
+To stop the Notebook click on a gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the "Actions" column for a needed Notebook and hit "Stop":
 
 <p align="center"> 
     <img src="doc/notebook_menu_stop.png" alt="Notebook stopping" width="150">
 </p>
 
-Hit OK in confirmation popup.
+Hit "OK" in confirmation popup.
 
-**NOTE:** if any Computational resources except for Spark cluster have been connected to your notebook server – they will be automatically terminated if you stop the notebook and Spark cluster will be automatically stopped.
+**NOTE:** A connected Data Engine Service becomes Terminated, while a connected Data Engine (Standalone Apache Spark cluster), if any, becomes Stopped.
 
 <p align="center"> 
     <img src="doc/notebook_stop_confirm.png" alt="Notebook stop confirm" width="400">
 </p>
 
-After you confirm you intent to Stop the notebook - the status will be changed to Stopping and will become Stopped in a while. Spark cluster status will be changed to Stopped and other Computational resource status  will be changed to Terminated.
+After you confirm your intent to stop the notebook - the status changes to "Stopping" and later becomes "Stopped". 
 
 --------------------------------
 ## Terminate Notebook server <a name="notebook_terminate"></a>
 
-Once you have finished working with an analytical tool and you would like to release cloud resources for the sake of the costs, you might want to Terminate the notebook. You will not be able to Start the notebook which has been Terminated. Instead, you will have to create new Notebook server if you will need to proceed your analytical activities.
+Once you have finished working with an analytical tool and you don't need cloud resources anymore, for the sake of the costs we recommend terminating the notebook. You are not able to start a notebook which has been terminated. Instead, you have to create a new Notebook if you need to proceed with your analytical activities.
 
-To Terminate the Notebook click on a gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the Actions column for a needed Notebook and hit Terminate:
+**NOTE:** Make sure you back up your data (if it exists on the Notebook) and playbooks before termination.
 
-**NOTE:** if any Computational resources have been linked to your notebook server – they will be automatically terminated if you stop the notebook.
+To terminate the Notebook click on a gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the "Actions" column for a needed Notebook and hit "Terminate":
 
-Confirm termination of the notebook and afterward notebook status will be changed to **Terminating**:
+**NOTE:** If any Computational resources have been linked to your notebook server – they are automatically terminated if you terminate the notebook.
+
+Confirm termination of the notebook and afterwards the notebook status changes to "Terminating":
 
 ![Notebook terminating](doc/notebook_terminating.png)
 
-Once corresponding instances are terminated on cloud, status will finally
-change to Terminated:
+Once corresponding instances become terminated in the Cloud console, the status finally changes to "Terminated":
 
 ![Notebook terminated](doc/notebook_terminated.png)
 
 ---------------
 ## Deploy Computational resource <a name="computational_deploy"></a>
 
-After deploying Notebook node, you can deploy Computational resource and it will be automatically linked with your Notebook server. Computational resource is a managed cluster platform, that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark on cloud to process and analyze vast amounts of data. Adding Computational resource is not mandatory and is needed in case computational resources are required for job execution.
+After deploying a Notebook node, you can deploy a Computational resource and it is automatically linked with your Notebook server. A Computational resource is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on the cloud to process and analyze vast amounts of data. Adding a Computational resource is not mandatory and is only needed in case computational resources are required for job execution.
 
-On “Create Computational Resource” popup you will have to choose Computational resource version (configurable) and specify alias for it. To setup a cluster that meets your needs – you will have to define:
+On the “Create Computational Resource” popup you have to choose the Computational resource version (configurable) and specify an alias for it. To set up a cluster that meets your needs – you have to define:
 
 -   Total number of instances (min 2 and max 14, configurable);
 -   Master and Slave instance shapes (list is configurable and supports all available cloud instance shapes, supported in your cloud region);
 
-Also, if you would like to save some costs for your Computational resource you can create it based on [spot instances](https://aws.amazon.com/ec2/spot/), which are often available at a discount price (this functionality is only available for AWS cloud):
+Also, if you want to save some costs for your Computational resource you can create it based on [spot instances](https://aws.amazon.com/ec2/spot/) (this functionality is for AWS cloud) or [preemptible instances](https://cloud.google.com/compute/docs/instances/preemptible) (this functionality is for GCP), which are often available at a discount price:
 
 -   Select Spot Instance checkbox;
 -   Specify preferable bid for your spot instance in % (between 20 and 90, configurable).
@@ -304,41 +306,48 @@ This picture shows menu for creating Computational resource for AWS:
     <img src="doc/emr_create.png" alt="Create Computational resource on AWS" width="760">
 </p>
 
-You can override the default configurations for applications by supplying a configuration object for applications when you create a cluster (this functionality is only available for Amazon EMR cluster ). The configuration object is referenced as a JSON file.
+You can override the default configurations for applications by supplying a configuration object when you create a cluster (this functionality is only available for Amazon EMR cluster). The configuration object is referenced as a JSON file.
 To tune computational resource configuration check off "Cluster configurations" check box and insert JSON format in text box:
 
 <p align="center"> 
     <img src="doc/emr_create_configuration.png" alt="Create Custom Computational resource on AWS" width="760">
 </p>
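+
+Below is a minimal illustration of such a configuration object in the Amazon EMR classification format; the classifications and property values are examples only, and the full list of supported classifications is defined by the EMR documentation:
+
+```json
+[
+  {
+    "Classification": "spark-defaults",
+    "Properties": {
+      "spark.dynamicAllocation.enabled": "true"
+    }
+  },
+  {
+    "Classification": "yarn-site",
+    "Properties": {
+      "yarn.nodemanager.vmem-check-enabled": "false"
+    }
+  }
+]
+```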
 
+This picture shows menu for creating Computational resource for GCP:
+<p align="center"> 
+    <img src="doc/dataproc_create.png" alt="Create Computational resource on GCP" width="760">
+</p>
+
+To create Data Engine Service (Dataproc) with preemptible instances check off 'preemptible node count'. You can add from 1 to 11 preemptible instances.
+
 This picture shows menu for creating Computational resource for Azure:
 <p align="center"> 
     <img src="doc/dataengine_creating_menu.png" alt="Create Computational resource on Azure" width="760">
 </p>
 
-If you click on "Create" button Computational resource creation will kick off. You will see corresponding record on DLab Web UI in status **Creating**:
+If you click on "Create" button Computational resource creation kicks off. You see corresponding record on DLab Web UI in status "Creating":
 
 ![Creating Computational resource](doc/emr_creating.png)
 
-Once Computational resources are provisioned, their status will be changed to **Running**.
+Once Computational resources are provisioned, their status changes to "Running".
 
-Clicking on Computational resource name in DLab dashboard will open Computational resource details popup:
+After clicking on the Computational resource name in the DLab dashboard you see the Computational resource details popup:
 
 <p align="center"> 
     <img src="doc/emr_info.png" alt="Computational resource info" width="480">
 </p>
 
-Also you can go to computational resource master UI via link "Apache Spark Master' or "EMR Master" (this functionality is only available for AWS cloud).
+Also you can go to the computational resource master UI via the links "Spark job tracker URL", "EMR job tracker URL" or "Dataproc job tracker URL".
 
 Since Computational resource is up and running - you are now able to leverage cluster computational power to run your analytical jobs on.
 
 To do that open any of the analytical tools and select proper kernel/interpreter:
 
-**Jupyter** – goto Kernel and choose preferable interpreter between local and Computational resource ones. Currently we have added support of Python 2/3, Spark, Scala, R into Jupyter.
+**Jupyter** – go to Kernel and choose preferable interpreter between local and Computational resource ones. Currently we have added support of Python 2/3, Spark, Scala, R in Jupyter.
 
 ![Jupiter](doc/jupiter.png)
 
-**Zeppelin** – goto Interpreter Biding menu and switch between local and Computational resource there. Once needed interpreter is selected click on Save.
+**Zeppelin** – go to the Interpreter Binding menu and switch between local and Computational resource interpreters there. Once the needed interpreter is selected, click on "Save".
 
 ![Zeppelin](doc/zeppelin.png)
 
@@ -354,11 +363,11 @@ Insert following “magics” before blocks of your code to start executing your
 ![RStudio](doc/rstudio.png)
 
 ---------------
-## Stop  Apache Spark cluster <a name="spark_stop"></a>
+## Stop Standalone Apache Spark cluster <a name="spark_stop"></a>
 
-Once you have stopped working with a spark cluster and you would like to release cloud resources for the sake of the costs, you might want to Stop Apache Spark cluster. You will be able to Start apache Spark cluster again after a while and proceed with your analytics.
+Once you have stopped working with a Standalone Apache Spark cluster (Data Engine) and you need to release cloud resources for the sake of the costs, you might want to stop the Standalone Apache Spark cluster. You are able to start the Standalone Apache Spark cluster again after a while and proceed with your analytics.
 
-To Stop Apache Spark cluster click on <img src="doc/stop_icon.png" alt="stop" width="20"> button close to spark cluster alias.
+To stop Standalone Apache Spark cluster click on <img src="doc/stop_icon.png" alt="stop" width="20"> button close to Standalone Apache Spark cluster alias.
 
 Hit "YES" in confirmation popup.
 
@@ -366,48 +375,103 @@ Hit "YES" in confirmation popup.
     <img src="doc/spark_stop_confirm.png" alt="Spark stop confirm" width="400">
 </p>
 
-After you confirm your intent to Apache Spark cluster - the status will be changed to Stopping and will become Stopped in a while.
+After you confirm your intent to stop Standalone Apache Spark cluster - the status changes to "Stopping" and soon becomes "Stopped".
 
 ------------------
 ## Terminate Computational resource <a name="computational_terminate"></a>
 
-To release cluster computational resources click on <img src="doc/cross_icon.png" alt="cross" width="16"> button close to Computational resource alias. Confirm decommissioning of Computational resource by hitting Yes:
+To release computational resources click on <img src="doc/cross_icon.png" alt="cross" width="16"> button close to Computational resource alias. Confirm decommissioning of Computational resource by hitting "Yes":
 
 <p align="center"> 
     <img src="doc/emr_terminate_confirm.png" alt="Computational resource terminate confirm" width="400">
 </p>
 
-In a while Computational resource cluster will get **Terminated**. Corresponding cloud instances will also removed on cloud.
+In a while the Computational resource becomes "Terminated". The corresponding cloud instances are also removed on the cloud.
+
+------------------
+## Scheduler <a name="scheduler"></a>
+
+The Scheduler component allows you to automatically schedule Start and Stop triggers for a Notebook/Computational resource, while 
+for Data Engine or Data Engine Service it can only trigger the Stop or Terminate action correspondingly. There are 2 types of scheduler:
+- Scheduler by time;
+- Scheduler by inactivity.
+
+Scheduler by time is for Notebook/Data Engine Start/Stop and for Data Engine/Data Engine Service termination.
+Scheduler by inactivity is for Notebook/Data Engine stopping.
+
+To create scheduler for a Notebook click on an <img src="doc/gear_icon.png" alt="gear" width="20"> icon in the "Actions" column for a needed Notebook and hit "Scheduler":
+
+<p align="center"> 
+    <img src="doc/notebook_menu_scheduler.png" alt="Notebook scheduler action" width="150">
+</p>
+
+A popup with the following fields shows up:
+
+- start/finish dates - date range when scheduler is active;
+- start/end time - time when notebook should be running;
+- timezone - your time zone;
+- repeat on - days when scheduler should be active;
+- possibility to synchronize notebook scheduler with computational schedulers;
+- possibility not to stop notebook in case of running job on Standalone Apache Spark cluster.
+
+<p align="center"> 
+    <img src="doc/notebook_scheduler.png" alt="Notebook scheduler" width="400">
+</p>
+
+If you want to stop the Notebook upon exceeding idle time you should enable "Scheduler by inactivity", fill in your inactivity period (in minutes) and click on the "Save" button. The Notebook is stopped upon exceeding the idle time value.
+
+<p align="center"> 
+    <img src="doc/scheduler_by_inactivity.png" alt="Scheduler by Inactivity.png" width="400">
+</p>
+
+Also scheduler can be configured for a Standalone Apache Spark cluster. To configure scheduler for Standalone Apache Spark cluster click on this icon <img src="doc/icon_scheduler_computational.png" alt="scheduler_computational" width="16">:
+
+<p align="center"> 
+    <img src="doc/computational_scheduler_create.png" alt="Computational scheduler create" width="400">
+</p>
+
+There is a possibility to inherit scheduler start settings from notebook, if such scheduler is present:
+
+<p align="center"> 
+    <img src="doc/computational_scheduler.png" alt="Computational scheduler" width="400">
+</p>
+
+The Notebook/Standalone Apache Spark cluster is started/stopped automatically according to the scheduler settings.
+Please also note that if the notebook is configured to be stopped, all running data engines associated with it are stopped (for Standalone Apache Spark cluster) or terminated (for Data Engine Service) together with the notebook.
+
+After login the user is notified that corresponding resources are about to be stopped/terminated in some time.
+
+<p align="center"> 
+    <img src="doc/scheduler reminder.png" alt="Scheduler reminder" width="400">
+</p>
 
 --------------------------------
 ## Collaboration space <a name="collaboration_space"></a>
 
 ### Manage Git credentials <a name="git_creds"></a>
 
-To work with Git (pull, push) via UI tool (ungit) you could add multiple credentials in DLab UI, which will be set on all running instances with analytical tools.
+To work with Git (pull, push) via the UI tool (ungit) you can add multiple credentials in the DLab UI, which are set on all running instances with analytical tools.
 
-When you click on the button "Git credentials" – following popup will show up:
+When you click on the button "Git credentials" – the following popup shows up:
 
 <p align="center"> 
     <img src="doc/git_creds_window.png" alt="Git_creds_window" width="760">
 </p>
 
 In this window you need to add:
--   Your Git server hostname, without **http** or **https**, for example: gitlab.com, github.com, or your internal GitLab server, which can be deployed with DLab.
+-   Your Git server hostname, without **http** or **https**, for example: gitlab.com, github.com, bitbucket.com, or your internal Git server.
 -   Your Username and Email - used to display author of commit in git.
 -   Your Login and Password - for authorization into git server.
 
-**Note:** If you have GitLab server, which was deployed with DLab, you should use your LDAP credentials for access to GitLab.
-
-Once all fields are filled in and you click on "Assign" button, you will see the list of all your Git credentials.
+Once all fields are filled in and you click on "Assign" button, you see the list of all your Git credentials.
 
-Clicking on "Apply changes" button, your credentials will be sent to all running instances with analytical tools. It takes a few seconds for changes to be applied.
+Clicking on "Apply changes" button, your credentials are sent to all running instances with analytical tools. It takes a few seconds for changes to be applied.
 
 <p align="center"> 
     <img src="doc/git_creds_window2.png" alt="Git_creds_window1" width="760">
 </p>
 
-On this tab you can also edit your credentials (click on pen icon) or delete (click on bin icon).
+On this tab you can also edit your credentials (click on the pen icon <img src="doc/pen_icon.png" alt="pen" width="15">) or delete them (click on the bin icon <img src="doc/bin_icon.png" alt="bin" width="15">).
 
 ### Git UI tool (ungit) <a name="git_ui"></a>
 
@@ -417,7 +481,7 @@ On every analytical tool instance you can see Git UI tool (ungit):
     <img src="doc/notebook_info.png" alt="Git_ui_link" width="520">
 </p>
 
-Before start working with git repositories, you need to change working directory on the top of window to:
+Before you start working with Git repositories, you need to change the working directory at the top of the window to:
 
 **/home/dlab-user/** or **/opt/zeppelin/notebook** for Zeppelin analytical tool and press Enter.
 
@@ -431,184 +495,168 @@ After creating repository you can see all commits and branches:
 
 ![Git_ui_ungit_work](doc/ungit_work.png)
 
-On the top of window in the red field UI show us changed or new files to commit. You can uncheck or add some files to gitignore.
+At the top of the window, in the red field, the UI shows changed or new files to commit. You can uncheck files or add some files to .gitignore.
 
 **Note:** Git always checks you credentials. If this is your first commit after adding/changing credentials and after clicking on "Commit" button nothing happened - just click on "Commit" button again.
 
 On the right pane of window you also can see buttons to fetch last changes of repository, add upstreams and switch between branches.
 
-To see all modified files - click on the "circle" button on the center:
+To see all modified files - click on the "Circle" button in the center:
 
 ![Git_ui_ungit_changes](doc/ungit_changes.png)
 
-After commit you will see your local version and remote repository. To push you changes - click on your current branch and press "Push" button.
+After the commit you see your local version and the remote repository. To push your changes - click on your current branch and press the "Push" button.
 
 ![Git_ui_ungit_push](doc/ungit_push.png)
 
-Also clicking on "circle" button you can uncommit or revert changes.
+Also clicking on "Circle" button you can uncommit or revert changes.
 
 --------------------------------
-# DLab Health Status Page <a name="health_page"></a>
+# Administration <a name="administration"></a>
 
-Health Status page is an administration page allowing users to start/stop/recreate gateway node. This might be useful in cases when someone manually deleted corresponding Edge node instance from cloud. This would have made DLab as an application corrupted in general. If any actions are manually done to Edge node instance directly via Cloud Web Console – those changes will be synchronized with DLab automatically and shortly Edge Node status will be updated in DLab.
+## Manage roles <a name="manage_roles"></a>
 
-To access Health status page either navigate to it via main menu:
+Administrator can choose which instance shape(s), notebook(s) and computational resources are allowed to be created for certain group(s) or user(s). Administrator can also assign an administrator per project, who is able to manage roles within the particular project.
+To do it click on the "Add group" button. The "Add group" popup shows up:
 
 <p align="center"> 
-    <img src="doc/main_menu.png" alt="Main menu" width="250">
+    <img src="doc/manage_role.png" alt="Manage roles" width="780">
 </p>
 
-or by clicking on an icon close to logged in user name in the top right
-corner of the DLab:
-
--   green ![OK](doc/status_icon_ok.png), if Edge node status is Running;
--   red ![Error](doc/status_icon_error.png),if Edge node is Stopped or Terminated;
-
-![Health_status](doc/health_status.png)
-
-To Stop Edge Node please click on actions icon on Health Status page and hit "Stop".
+Roles consist of:
+- Administration - allows executing administrative operations either for the whole DLab or only within a particular project;
+- Billing - allows viewing billing either only for the user's own resources or for all users;
+- Compute - list of Compute types which are allowed for creation;
+- Compute shapes - list of Compute shapes which are allowed for creation;
+- Notebook - list of Notebook templates which are allowed for creation;
+- Notebook shapes - list of Notebook shapes which are allowed for creation.
 
 <p align="center"> 
-    <img src="doc/edge_stop.png" alt="EDGE stop" width="150">
+    <img src="doc/roles.png" alt="Roles" width="450">
 </p>
 
-Confirm you want to stop Edge node by clicking "Yes":
+To add a group, enter the group name, choose certain actions which should be allowed for the group, optionally add discrete user(s) (not mandatory) and then click the "Create" button.
+After adding the group it appears on the "Manage roles" popup.
+
+Administrator can remove a group or user. For that you should only click on the bin icon <img src="doc/bin_icon.png" alt="bin" width="15"> for a certain group or on the icon <img src="doc/delete_btn.png" alt="delete" width="13"> for a particular user. After that hit "Yes" in the confirmation popup.
 
 <p align="center"> 
-    <img src="doc/edge_stop_confirm.png" alt="EDGE stop confirm" width="400">
+    <img src="doc/delete_group.png" alt="Delete group" width="780">
 </p>
 
-In case you Edge node is Stopped or Terminated – you will have to Start or Recreate it correspondingly to proceed working with DLab. This can done as well via context actions menu.
-
-### Backup <a name="backup"></a>
+## Project management <a name="project_management"></a>
 
-Administrator can use backup functionality. In order to do it click Backup button. "Backup options" popup will show-up. You can choose a preferable option to be backed up.
+After project creation (this step is described in [create project](#setup_edge_node)) administrator is able to manage the project by clicking on gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the "Actions" column for the needed project.
 
 <p align="center"> 
-    <img src="doc/backup_options.png" alt="Backup options" width="400">
+    <img src="doc/project_view.png" alt="Project view" width="780">
 </p>
 
-Confirm you want to do backup by clicking "Apply".
-
-### Manage environment <a name="manage_environment"></a>
-
-Administrator can manage users environment clicking on Manage environment button. "Manage environment" popup will show-up. All users environments will be shown which at least one instance has Running status:
+The following menu shows up:
 
 <p align="center"> 
-    <img src="doc/manage_environment.png" alt="Manage environment" width="520">
+    <img src="doc/project_menu.png" alt="Project menu" width="150">
 </p>
 
-If Administrator hit "Stop" icon <img src="doc/stop_icon_env.png" alt="stop" width="22"> all running instances except for dataengine service will be stopped and dataengine service will be terminated. User will be able to Start instances again except for dataengine service after a while and proceed with his analytics.
+Administrator can edit already existing project:
+- Add or remove group;
+- Add new endpoint;
+- Switch off/on 'Use shared image' option.
 
-If Administrator hit "Terminate" icon <img src="doc/terminate_icon_env.png" alt="terminate" width="22"> all running and stopped instances will be terminated. User will not be able to Start the inctance which has been Terminated. Instead, user will have to Upload his personal public key or Generate ssh key pairs.
+To edit the project hit "Edit project" and choose the option which you want to add, remove or change. To apply changes click on the "Update" button.
 
-Administrator should confirm user environment stopping or termination by clicking Yes:
+To stop the Edge node hit "Stop edge node". After that confirm "OK" in the confirmation popup. All related instances change their status from "Running" to "Stopping" and soon become "Stopped". You are able to start the Edge node again after a while and proceed with your work. Do not forget to start your notebook again if you want to continue with your analytics, because starting the Edge node does not start related instances.
 
-<p align="center"> 
-    <img src="doc/manage_env_confirm.png" alt="Manage environment confirm" width="550">
-</p>
+To terminate the Edge node hit "Terminate edge node". After that confirm "OK" in the confirmation popup. All related instances change their status to "Terminating" and soon become "Terminated".
 
-Administrator can manage total billing quota for DLab as well as billing quota per user(s).To do this enter appropriate number in text box(es) per user(s) or/and total budget. Hit "Apply" button.
+## Environment management <a name="environment_management"></a>
 
-### Manage roles <a name="manage_roles"></a>
+DLab Environment Management page is an administration page allowing the administrator to see the list of all users' environments and to stop/terminate all of them.
 
-Administrator can choose what instance shape(s) and notebook(s) can be allowed for certain group(s) or user(s).
-To do it click on "Manage roles" button. "Manage roles" popup will show-up:
+To access the Environment management page navigate to it via the main menu:
 
 <p align="center"> 
-    <img src="doc/manage_role.png" alt="Manage roles" width="780">
+    <img src="doc/environment_management.png" alt="Environment management">
 </p>
 
-To add group enter group name, choose certain action which should be allowed for group and also you can add discrete user(s) (not mandatory) and then click "Create" button.
-New group will be added and appears on "Manage roles" popup.
-
-Administrator can remove group or user. For that you should only click on "Delete group" button for certain group or click on delete icon <img src="doc/cross_icon.png" alt="delete" width="16"> for particular user. After that Hit "Yes" in confirmation popup.
-
+To stop or terminate the Notebook click on a gear icon <img src="doc/gear_icon.png" alt="gear" width="20"> in the "Actions" column for a needed Notebook and hit "Stop" or "Terminate" action:
 <p align="center"> 
-    <img src="doc/delete_group.png" alt="Delete group" width="780">
+    <img src="doc/manage_env_actions.png" alt="Manage environment actions" width="160">
 </p>
 
-### SSN monitor <a name="ssn_monitor"></a>
+**NOTE:** A connected Data Engine Service is terminated and a related Data Engine is stopped during Notebook stopping. During Notebook termination related Computational resources are automatically terminated.
 
-Administrator can monitor SSN HDD, Memory and CPU. 
-Clicking on "SSN monitor button" will open "SSN monitor" popup. 
-There are three tabs on  'SSN monitor' popup: CPU, HDD, Memory:
+To stop or release specific cluster click an appropriate button close to cluster alias.
 
 <p align="center"> 
-    <img src="doc/cpu.png" alt="SSN CPU" width="480">
+    <img src="doc/managemanage_resource_actions.png" alt="Manage resource action" width="300">
 </p>
 
-<p align="center"> 
-    <img src="doc/memory.png" alt="SSN memory" width="480">
-</p>
+Confirm stopping/decommissioning of the Computational resource by hitting "Yes":
 
 <p align="center"> 
-    <img src="doc/hdd.png" alt="SSN HDD" width="480">
+    <img src="doc/manage_env_confirm.png" alt="Manage environment action confirm" width="400">
 </p>
 
---------------------------------
-# DLab Billing report <a name="billing_page"></a>
+**NOTE:** Terminate action is available only for notebooks and computational resources, not for Edge Nodes.
 
-On this page you can see all billing information, including all costs assosiated with service base name of SSN.
+### Multiple Cloud Endpoints <a name="multiple_cloud_endpoints"></a>
 
-![Billing page](doc/billing_page.png)
+Administrator can connect to any of the Cloud endpoints: AWS, GCP, Azure. For that the administrator should click on the "Endpoints" button. The "Connect endpoint" popup shows up:
 
-In the header you can see 3 fields:
--   Service base name of your environment
--   Resource tag ID
--   Date period of available billing report
+<p align="center"> 
+    <img src="doc/connect_endpoint.png" alt="Connect endpoint" width="520">
+</p>
 
-On the center of header you can choose period of report in datepicker:
+Once all fields are filled in and you click on "Connect" button, you are able to see the list of all your added endpoints on "Endpoint list" tab:
 
 <p align="center"> 
-    <img src="doc/billing_datepicker.png" alt="Billing datepicker" width="400">
+    <img src="doc/endpoint_list.png" alt="Endpoint list" width="520">
 </p>
 
-You can save billing report in csv format hitting "Export" button.
+Administrator can deactivate the whole analytical environment via the bin icon <img src="doc/bin_icon.png" alt="bin" width="15">. All related instances change their statuses to "Terminating" and soon become "Terminated".
 
-You can also filter data by each column:
+### Manage DLab quotas <a name="manage_dlab_quotas"></a>
 
-![Billing filter](doc/billing_filter.png)
+Administrator can set quotas per project and for the whole DLab. To do it click on the "Manage DLab quotas" button. The "Manage DLab quotas" popup shows up. Administrator can see all active projects:
 
-**Note:** Administrator can see billing report of all users, and only he can see/filter "User" column.
+<p align="center"> 
+    <img src="doc/manage_environment.png" alt="Manage environment" width="520">
+</p>
 
-In the footer of billing report, you can see Total cost for all environments.
+After filling in the fields and clicking on the "Apply" button, the new quotas are applied for the project and DLab.
+If project and DLab quotas are exceeded, a warning shows up during login.
 
---------------------------------
-# DLab Environment Management Page <a name="environment_management"></a>
+<p align="center" class="facebox-popup"> 
+    <img src="doc/exceeded quota.png" alt="Exceeded quota" width="400">
+</p>
 
-DLab Environment Management page is an administration page allowing admins to show the list of all users` environments and to stop/terminate all of them of separate specific resource.
+In such case the user cannot create a new instance and an already "Running" instance changes its status to "Stopping" (except for Data Engine Service, whose status changes to "Terminating") and soon becomes "Stopped" or "Terminated" appropriately.
 
-To access Environment management page either navigate to it via main menu:
+--------------------------------
 
-<p align="center"> 
-    <img src="doc/main_menu_env.png" alt="Main menu" width="250">
-</p>
+# DLab Billing report <a name="billing_page"></a>
 
-<p align="center"> 
-    <img src="doc/environment_management.png" alt="Environment management">
-</p>
+On this page you can see all billing information, including all costs associated with the service base name of SSN.
 
-To Stop or Terminate the Notebook click on a gear icon gear in the Actions column for a needed Notebook and hit Stop or Terminate action:
-<p align="center"> 
-    <img src="doc/manage_env_actions.png" alt="Manage environment actions" width="160">
-</p>
+![Billing page](doc/billing_page.png)
 
-Any Computational resources except for Spark clusters will be automatically terminated and Spark clusters will be stopped in case of Stop action hitting, and all resources will be killed in case of Terminate action hitting.
+In the header you can see 2 fields:
+-   Service base name of your environment
+-   Date period of available billing report
 
-To stop or release specific cluster click an appropriate button close to cluster alias.
+In the center of the header you can choose the period of the report in the datepicker:
 
 <p align="center"> 
-    <img src="doc/managemanage_resource_actions.png" alt="Manage resource action" width="300">
+    <img src="doc/billing_datepicker.png" alt="Billing datepicker" width="400">
 </p>
 
-Confirm stopping/decommissioning of the Computational resource by hitting Yes:
+You can save the billing report in CSV format by hitting the "Export" button.
 
-<p align="center"> 
-    <img src="doc/manage_env_confirm.png" alt="Manage environment action confirm" width="400">
-</p>
+You can also filter data by environment name, user, project, resource type, instance size, product. 
+On top of that you can sort data by user, project, service charges.
 
-**NOTE:** terminate action is available only for notebooks and computational resources, not for Edge Nodes.
+In the footer of billing report, you can see "Total" cost for all environments.
 
 --------------------------------
 
@@ -628,61 +676,3 @@ To do this, simply click on icon <img src="doc/filter_icon.png" alt="filter" wid
 Once your list of filtered by any of the columns, icon <img src="doc/filter_icon.png" alt="filter" width="16"> changes to <img src="doc/sort_icon.png" alt="filter" width="16"> for a filtered columns only.
 
 There is also an option for quick and easy way to filter out all inactive instances (Failed and Terminated) by clicking on “Show active” button in the ribbon. To switch back to the list of all resources, click on “Show all”.
-
-# Scheduler <a name="scheduler"></a>
-
-Scheduler component allows to automatically schedule start/stop of notebook/cluster. There are 2 types of schedulers available:
-- notebook scheduler;
-- data engine scheduler (currently spark cluster only);
-
-To create scheduler for a notebook click on a <img src="doc/gear_icon.png" alt="gear" width="20"> icon in the Actions column for a needed Notebook and hit Scheduler:
-
-<p align="center"> 
-    <img src="doc/notebook_menu_scheduler.png" alt="Notebook scheduler action" width="150">
-</p>
-After clicking you will see popup with the following fields:
-
-- start/finish dates - date range when scheduler is active;
-- start/end time - time when notebook should be running;
-- offset - your zone offset;
-- repeat on - days when scheduler should be active
-- possibility to synchronize notebook scheduler with computational schedulers
-
-<p align="center"> 
-    <img src="doc/notebook_scheduler.png" alt="Notebook scheduler" width="400">
-</p>
-
-Also scheduler can be configured for a spark cluster. To configure scheduler for spark cluster <img src="doc/icon_scheduler_computational.png" alt="scheduler_computational" width="16"> should be clicked (near computational status):
-
-<p align="center"> 
-    <img src="doc/computational_scheduler_create.png" alt="Computational scheduler create" width="400">
-</p>
-
-There is a possibility to inherit scheduler start settings from notebook, if such scheduler is present:
-
-<p align="center"> 
-    <img src="doc/computational_scheduler.png" alt="Computational scheduler" width="400">
-</p>
-
-Once any scheduler is set up, notebook/spark cluster will be started/stopped automatically.
-Please also note that if notebook is configured to be stopped, all running data engines assosiated with it will be stopped (for spark cluster) or terminated (for data engine serice) with notebook.
-
-After login user will be notified  that corresponding resources are about to be stopped/terminated in some time.
-
-<p align="center"> 
-    <img src="doc/scheduler reminder.png" alt="Scheduler reminder" width="400">
-</p>
-
-# Key reupload <a name="key_reupload"></a>
-In case when user private key was corrupted, lost etc. DLAB provide a possibility to reupload user public key.
-It can be done on manage environment page using ACTIONS menu on edge instance:
-
-<p align="center"> 
-    <img src="doc/reupload_key_action.png" alt="Reupload key action" width="200">
-</p>
-
-After that similar to create initial environment dialog appeared where you can upload new key or generate new key-pair:
- 
- <p align="center"> 
-     <img src="doc/reupload_key_dialog.png" alt="Reupload key dialog" width="400">
- </p>
diff --git a/doc/billing_filter.png b/doc/billing_filter.png
index 09a0acd..e1dbd78 100644
Binary files a/doc/billing_filter.png and b/doc/billing_filter.png differ
diff --git a/doc/billing_page.png b/doc/billing_page.png
index cc08102..33bd674 100644
Binary files a/doc/billing_page.png and b/doc/billing_page.png differ
diff --git a/doc/bin_icon.png b/doc/bin_icon.png
new file mode 100644
index 0000000..d289b5f
Binary files /dev/null and b/doc/bin_icon.png differ
diff --git a/doc/computational_scheduler.png b/doc/computational_scheduler.png
index b00c626..d87a22f 100644
Binary files a/doc/computational_scheduler.png and b/doc/computational_scheduler.png differ
diff --git a/doc/computational_scheduler_create.png b/doc/computational_scheduler_create.png
index 463351d..5d1ef24 100644
Binary files a/doc/computational_scheduler_create.png and b/doc/computational_scheduler_create.png differ
diff --git a/doc/connect_endpoint.png b/doc/connect_endpoint.png
new file mode 100644
index 0000000..054b3e8
Binary files /dev/null and b/doc/connect_endpoint.png differ
diff --git a/doc/create_notebook_from_ami.png b/doc/create_notebook_from_ami.png
index 7e4453e..11cfde0 100644
Binary files a/doc/create_notebook_from_ami.png and b/doc/create_notebook_from_ami.png differ
diff --git a/doc/dataproc_create.png b/doc/dataproc_create.png
new file mode 100644
index 0000000..cbab3f4
Binary files /dev/null and b/doc/dataproc_create.png differ
diff --git a/doc/delete_btn.png b/doc/delete_btn.png
new file mode 100644
index 0000000..6229abf
Binary files /dev/null and b/doc/delete_btn.png differ
diff --git a/doc/delete_group.png b/doc/delete_group.png
index d5c38e3..9b7c878 100644
Binary files a/doc/delete_group.png and b/doc/delete_group.png differ
diff --git a/doc/emr_creating.png b/doc/emr_creating.png
index 7fb7fde..1e20418 100644
Binary files a/doc/emr_creating.png and b/doc/emr_creating.png differ
diff --git a/doc/emr_terminate_confirm.png b/doc/emr_terminate_confirm.png
index b1fa871..5eb515e 100644
Binary files a/doc/emr_terminate_confirm.png and b/doc/emr_terminate_confirm.png differ
diff --git a/doc/endpoint_list.png b/doc/endpoint_list.png
new file mode 100644
index 0000000..ea8586f
Binary files /dev/null and b/doc/endpoint_list.png differ
diff --git a/doc/environment_management.png b/doc/environment_management.png
index e4c2cda..ba0399c 100644
Binary files a/doc/environment_management.png and b/doc/environment_management.png differ
diff --git a/doc/git_creds_window.png b/doc/git_creds_window.png
index fdf7a41..ed41936 100644
Binary files a/doc/git_creds_window.png and b/doc/git_creds_window.png differ
diff --git a/doc/git_creds_window2.png b/doc/git_creds_window2.png
index 1481df0..f13444f 100644
Binary files a/doc/git_creds_window2.png and b/doc/git_creds_window2.png differ
diff --git a/doc/main_page.png b/doc/main_page.png
index 4338603..b6f1e17 100644
Binary files a/doc/main_page.png and b/doc/main_page.png differ
diff --git a/doc/main_page2.png b/doc/main_page2.png
index 5305a05..3d3af40 100644
Binary files a/doc/main_page2.png and b/doc/main_page2.png differ
diff --git a/doc/main_page3.png b/doc/main_page3.png
index 255de05..1812925 100644
Binary files a/doc/main_page3.png and b/doc/main_page3.png differ
diff --git a/doc/main_page_filter.png b/doc/main_page_filter.png
index 5818548..cd764ec 100644
Binary files a/doc/main_page_filter.png and b/doc/main_page_filter.png differ
diff --git a/doc/manage_env_confirm.png b/doc/manage_env_confirm.png
index 91f3d30..ae4b543 100644
Binary files a/doc/manage_env_confirm.png and b/doc/manage_env_confirm.png differ
diff --git a/doc/manage_environment.png b/doc/manage_environment.png
index ead01e1..73060ff 100644
Binary files a/doc/manage_environment.png and b/doc/manage_environment.png differ
diff --git a/doc/manage_role.png b/doc/manage_role.png
index 152cf7c..9db76c2 100644
Binary files a/doc/manage_role.png and b/doc/manage_role.png differ
diff --git a/doc/managemanage_resource_actions.png b/doc/managemanage_resource_actions.png
index 23c58d4..bd1394c 100644
Binary files a/doc/managemanage_resource_actions.png and b/doc/managemanage_resource_actions.png differ
diff --git a/doc/notebook_create.png b/doc/notebook_create.png
index 18a674b..9ca407e 100644
Binary files a/doc/notebook_create.png and b/doc/notebook_create.png differ
diff --git a/doc/notebook_info.png b/doc/notebook_info.png
index 4cc01a2..83e8e22 100644
Binary files a/doc/notebook_info.png and b/doc/notebook_info.png differ
diff --git a/doc/notebook_libs_status.png b/doc/notebook_libs_status.png
index 5f49722..8aa861d 100644
Binary files a/doc/notebook_libs_status.png and b/doc/notebook_libs_status.png differ
diff --git a/doc/notebook_scheduler.png b/doc/notebook_scheduler.png
index 31bd9ac..81502c3 100644
Binary files a/doc/notebook_scheduler.png and b/doc/notebook_scheduler.png differ
diff --git a/doc/notebook_terminated.png b/doc/notebook_terminated.png
index fb6399b..408e5ee 100644
Binary files a/doc/notebook_terminated.png and b/doc/notebook_terminated.png differ
diff --git a/doc/notebook_terminating.png b/doc/notebook_terminating.png
index d20b967..b62a492 100644
Binary files a/doc/notebook_terminating.png and b/doc/notebook_terminating.png differ
diff --git a/doc/pen_icon.png b/doc/pen_icon.png
new file mode 100644
index 0000000..c6a3a7f
Binary files /dev/null and b/doc/pen_icon.png differ
diff --git a/doc/project_menu.png b/doc/project_menu.png
new file mode 100644
index 0000000..c6d4976
Binary files /dev/null and b/doc/project_menu.png differ
diff --git a/doc/project_view.png b/doc/project_view.png
new file mode 100644
index 0000000..2415ac5
Binary files /dev/null and b/doc/project_view.png differ
diff --git a/doc/roles.png b/doc/roles.png
new file mode 100644
index 0000000..f7468a6
Binary files /dev/null and b/doc/roles.png differ
diff --git a/doc/scheduler_by_inactivity.png b/doc/scheduler_by_inactivity.png
new file mode 100644
index 0000000..decebac
Binary files /dev/null and b/doc/scheduler_by_inactivity.png differ
diff --git a/doc/spark_stop_confirm.png b/doc/spark_stop_confirm.png
index 59b6bf9..7b6bc34 100644
Binary files a/doc/spark_stop_confirm.png and b/doc/spark_stop_confirm.png differ
diff --git a/doc/upload_or_generate_user_key.png b/doc/upload_or_generate_user_key.png
index 2766334..6d6e6e1 100644
Binary files a/doc/upload_or_generate_user_key.png and b/doc/upload_or_generate_user_key.png differ
diff --git a/services/self-service/src/main/resources/webapp/src/app/resources/resources-grid/resources-grid.component.ts b/services/self-service/src/main/resources/webapp/src/app/resources/resources-grid/resources-grid.component.ts
index 8dfdf4e..3a7aabd 100644
--- a/services/self-service/src/main/resources/webapp/src/app/resources/resources-grid/resources-grid.component.ts
+++ b/services/self-service/src/main/resources/webapp/src/app/resources/resources-grid/resources-grid.component.ts
@@ -46,6 +46,7 @@ import {NotebookModel} from '../exploratory/notebook.model';
 
 
 
+
 @Component({
   selector: 'resources-grid',
   templateUrl: 'resources-grid.component.html',


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@dlab.apache.org
For additional commands, e-mail: commits-help@dlab.apache.org