Posted to commits@superset.apache.org by ru...@apache.org on 2020/08/31 23:49:32 UTC

[incubator-superset-site] branch asf-site updated: Revert "Merge pull request #1 from pkdotson/website-refresh"

This is an automated email from the ASF dual-hosted git repository.

rusackas pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-superset-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new b241e82  Revert "Merge pull request #1 from pkdotson/website-refresh"
b241e82 is described below

commit b241e8201c9d383d54f9fc037a52cc498fba2a14
Author: Evan Rusackas <ev...@preset.io>
AuthorDate: Mon Aug 31 16:48:40 2020 -0700

    Revert "Merge pull request #1 from pkdotson/website-refresh"
    
    This reverts commit 0f170a03e4e28efc986f24d5b488e5baa35fb08d, reversing
    changes made to 36907df869a6c3ca11f93f46d5672ad4a8463f3b.
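    The message above is git's auto-generated text for reverting a merge commit. A revert like this is typically produced with `git revert -m 1`, which names the parent whose state should be restored (parent 1 is the branch the merge landed on). This is a sketch of the likely invocation, not the maintainer's recorded command:

    ```shell
    # Revert the merge commit, keeping the mainline (parent 1) state from
    # before the pull request was merged. The SHA is the merge being undone.
    git revert -m 1 0f170a03e4e28efc986f24d5b488e5baa35fb08d
    ```

    git then writes the "This reverts commit …, reversing changes made to …" message automatically, citing the merge SHA and its first parent.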
---
 .buildinfo                                         |     4 +
 .gitignore                                         |    76 -
 LICENSE                                            |    14 -
 README.md                                          |     7 +-
 _images/add_db.png                                 |   Bin 0 -> 157717 bytes
 _images/add_new_chart.png                          |   Bin 0 -> 42447 bytes
 _images/advanced_analytics_base.png                |   Bin 0 -> 122647 bytes
 _images/annotation.png                             |   Bin 0 -> 101822 bytes
 _images/annotation_settings.png                    |   Bin 0 -> 22421 bytes
 _images/apache_feather.png                         |   Bin 0 -> 138140 bytes
 _images/area.png                                   |   Bin 0 -> 14469 bytes
 _images/average_aggregate_for_cost.png             |   Bin 0 -> 31741 bytes
 _images/bank_dash.png                              |   Bin 0 -> 1600232 bytes
 _images/bar.png                                    |   Bin 0 -> 9058 bytes
 _images/big_number.png                             |   Bin 0 -> 103045 bytes
 _images/big_number_total.png                       |   Bin 0 -> 4925 bytes
 _images/blue_bar_insert_component.png              |   Bin 0 -> 56554 bytes
 _images/box_plot.png                               |   Bin 0 -> 9496 bytes
 _images/bubble.png                                 |   Bin 0 -> 22779 bytes
 _images/bullet.png                                 |   Bin 0 -> 2174 bytes
 _images/cal_heatmap.png                            |   Bin 0 -> 11238 bytes
 _images/chord.png                                  |   Bin 0 -> 39273 bytes
 _images/chose_a_datasource.png                     |   Bin 0 -> 21013 bytes
 _images/compare.png                                |   Bin 0 -> 32918 bytes
 _images/country_map.png                            |   Bin 0 -> 41210 bytes
 _images/create_role.png                            |   Bin 0 -> 51474 bytes
 _images/csv_to_database_configuration.png          |   Bin 0 -> 30607 bytes
 _images/deck_arc.png                               |   Bin 0 -> 38815 bytes
 _images/deck_geojson.png                           |   Bin 0 -> 42386 bytes
 _images/deck_grid.png                              |   Bin 0 -> 143670 bytes
 _images/deck_hex.png                               |   Bin 0 -> 85015 bytes
 _images/deck_multi.png                             |   Bin 0 -> 106790 bytes
 _images/deck_path.png                              |   Bin 0 -> 75705 bytes
 _images/deck_polygon.png                           |   Bin 0 -> 37261 bytes
 _images/deck_scatter.png                           |   Bin 0 -> 120091 bytes
 _images/deck_screengrid.png                        |   Bin 0 -> 76990 bytes
 _images/deckgl_dash.png                            |   Bin 0 -> 6777438 bytes
 _images/directed_force.png                         |   Bin 0 -> 42753 bytes
 _images/dist_bar.png                               |   Bin 0 -> 8752 bytes
 _images/druid_agg.png                              |   Bin 0 -> 104052 bytes
 _images/dual_line.png                              |   Bin 0 -> 19229 bytes
 _images/edit-record.png                            |   Bin 0 -> 4940 bytes
 _images/edit_annotation.png                        |   Bin 0 -> 34104 bytes
 _images/event_flow.png                             |   Bin 0 -> 17191 bytes
 _images/explore.png                                |   Bin 0 -> 659975 bytes
 _images/filter_box.png                             |   Bin 0 -> 8550 bytes
 _images/filter_on_origin_country.png               |   Bin 0 -> 44695 bytes
 _images/heatmap.png                                |   Bin 0 -> 39866 bytes
 _images/histogram.png                              |   Bin 0 -> 9717 bytes
 _images/horizon.png                                |   Bin 0 -> 24924 bytes
 _images/iframe.png                                 |   Bin 0 -> 50998 bytes
 _images/line.png                                   |   Bin 0 -> 42915 bytes
 _images/mapbox.png                                 |   Bin 0 -> 85714 bytes
 _images/markdown.png                               |   Bin 0 -> 9030 bytes
 _images/markup.png                                 |   Bin 0 -> 23186 bytes
 _images/no_filter_on_time_filter.png               |   Bin 0 -> 35991 bytes
 _images/paired_ttest.png                           |   Bin 0 -> 23323 bytes
 _images/para.png                                   |   Bin 0 -> 52039 bytes
 _images/parse_dates_column.png                     |   Bin 0 -> 22004 bytes
 _images/partition.png                              |   Bin 0 -> 11364 bytes
 _images/pie.png                                    |   Bin 0 -> 6007 bytes
 _images/pivot_table.png                            |   Bin 0 -> 54527 bytes
 _images/publish_dashboard.png                      |   Bin 0 -> 67785 bytes
 _images/resample.png                               |   Bin 0 -> 88488 bytes
 _images/resize_tutorial_table_on_dashboard.png     |   Bin 0 -> 40391 bytes
 _images/rolling_mean.png                           |   Bin 0 -> 99706 bytes
 _images/rose.png                                   |   Bin 0 -> 37386 bytes
 {src/images => _images}/s.png                      |   Bin
 _images/sankey.png                                 |   Bin 0 -> 43735 bytes
 _images/save_tutorial_table.png                    |   Bin 0 -> 8282 bytes
 _images/select_dates_pivot_table.png               |   Bin 0 -> 35466 bytes
 _images/select_table_visualization_type.png        |   Bin 0 -> 47283 bytes
 _images/separator.png                              |   Bin 0 -> 16632 bytes
 _images/sqllab.png                                 |   Bin 0 -> 791403 bytes
 _images/sum_cost_column.png                        |   Bin 0 -> 36632 bytes
 _images/sunburst.png                               |   Bin 0 -> 26030 bytes
 _images/table.png                                  |   Bin 0 -> 22572 bytes
 _images/time_comparison_absolute_difference.png    |   Bin 0 -> 98829 bytes
 _images/time_comparison_two_series.png             |   Bin 0 -> 137009 bytes
 _images/time_pivot.png                             |   Bin 0 -> 13209 bytes
 _images/time_table.png                             |   Bin 0 -> 17464 bytes
 _images/treemap.png                                |   Bin 0 -> 16623 bytes
 _images/tutorial_01_sources_database.png           |   Bin 0 -> 19291 bytes
 _images/tutorial_02_add_database.png               |   Bin 0 -> 24994 bytes
 _images/tutorial_03_database_name.png              |   Bin 0 -> 13947 bytes
 .../tutorial_04_sqlalchemy_connection_string.png   |   Bin 0 -> 52808 bytes
 _images/tutorial_05_connection_popup.png           |   Bin 0 -> 84173 bytes
 _images/tutorial_06_list_of_tables.png             |   Bin 0 -> 23859 bytes
 _images/tutorial_07_save_button.png                |   Bin 0 -> 8210 bytes
 _images/tutorial_08_sources_tables.png             |   Bin 0 -> 18728 bytes
 _images/tutorial_09_add_new_table.png              |   Bin 0 -> 18777 bytes
 _images/tutorial_10_table_name.png                 |   Bin 0 -> 26728 bytes
 _images/tutorial_11_choose_db.png                  |   Bin 0 -> 22024 bytes
 _images/tutorial_12_table_creation_success_msg.png |   Bin 0 -> 33013 bytes
 _images/tutorial_13_edit_table_config.png          |   Bin 0 -> 32220 bytes
 _images/tutorial_14_field_config.png               |   Bin 0 -> 61811 bytes
 _images/tutorial_15_click_table_name.png           |   Bin 0 -> 7863 bytes
 _images/tutorial_16_datasource_chart_type.png      |   Bin 0 -> 13822 bytes
 _images/tutorial_17_choose_time_range.png          |   Bin 0 -> 11627 bytes
 _images/tutorial_18_choose_metric.png              |   Bin 0 -> 12536 bytes
 _images/tutorial_19_click_query.png                |   Bin 0 -> 5734 bytes
 _images/tutorial_20_count_star_result.png          |   Bin 0 -> 5333 bytes
 _images/tutorial_21_group_by.png                   |   Bin 0 -> 6840 bytes
 _images/tutorial_22_group_by_result.png            |   Bin 0 -> 22576 bytes
 _images/tutorial_23_group_by_more_dimensions.png   |   Bin 0 -> 8191 bytes
 _images/tutorial_24_max_metric.png                 |   Bin 0 -> 6731 bytes
 _images/tutorial_25_max_temp_filter.png            |   Bin 0 -> 11654 bytes
 _images/tutorial_26_row_limit.png                  |   Bin 0 -> 4927 bytes
 _images/tutorial_27_top_10_max_temps.png           |   Bin 0 -> 49871 bytes
 _images/tutorial_28_bar_chart.png                  |   Bin 0 -> 14757 bytes
 _images/tutorial_29_bar_chart_series_metrics.png   |   Bin 0 -> 20374 bytes
 _images/tutorial_30_bar_chart_results.png          |   Bin 0 -> 75926 bytes
 _images/tutorial_31_save_slice_to_dashboard.png    |   Bin 0 -> 33789 bytes
 _images/tutorial_32_save_slice_confirmation.png    |   Bin 0 -> 24041 bytes
 _images/tutorial_33_dashboard.png                  |   Bin 0 -> 5232 bytes
 _images/tutorial_34_weather_dashboard.png          |   Bin 0 -> 6703 bytes
 _images/tutorial_35_slice_on_dashboard.png         |   Bin 0 -> 66781 bytes
 _images/tutorial_36_adjust_dimensions.gif          |   Bin 0 -> 126264 bytes
 _images/tutorial_line_chart.png                    |   Bin 0 -> 97148 bytes
 _images/tutorial_pivot_table.png                   |   Bin 0 -> 57761 bytes
 _images/tutorial_table.png                         |   Bin 0 -> 35897 bytes
 _images/upload_a_csv.png                           |   Bin 0 -> 38213 bytes
 _images/word_cloud.png                             |   Bin 0 -> 24048 bytes
 _images/world_map.png                              |   Bin 0 -> 28598 bytes
 _modules/index.html                                |   192 +
 _modules/superset/jinja_context.html               |   525 +
 _sources/admintutorial.rst.txt                     |   325 +
 _sources/druid.rst.txt                             |    64 +
 _sources/druid.txt                                 |    48 +
 _sources/faq.rst.txt                               |   339 +
 _sources/faq.txt                                   |   198 +
 _sources/gallery.rst.txt                           |   206 +
 _sources/gallery.txt                               |    89 +
 _sources/import_export_datasources.rst.txt         |   125 +
 _sources/index.rst.txt                             |   175 +
 _sources/index.txt                                 |    86 +
 _sources/installation.rst.txt                      |  1581 ++
 _sources/installation.txt                          |   552 +
 _sources/issue_code_reference.rst.txt              |    39 +
 _sources/misc.rst.txt                              |    27 +
 _sources/security.rst.txt                          |   178 +
 _sources/security.txt                              |   162 +
 _sources/sqllab.rst.txt                            |   177 +
 _sources/sqllab.txt                                |    64 +
 _sources/tutorial.rst.txt                          |   325 +
 _sources/tutorial.txt                              |   308 +
 _sources/tutorials.rst.txt                         |    25 +
 _sources/usertutorial.rst.txt                      |   507 +
 _sources/videos.rst.txt                            |    22 +
 _sources/videos.txt                                |    54 +
 _sources/visualization.rst.txt                     |  2007 ++
 _sources/visualization.txt                         |  1759 ++
 _static/ajax-loader.gif                            |   Bin 0 -> 673 bytes
 _static/basic.css                                  |   768 +
 _static/comment-bright.png                         |   Bin 0 -> 756 bytes
 _static/comment-close.png                          |   Bin 0 -> 829 bytes
 _static/comment.png                                |   Bin 0 -> 641 bytes
 _static/css/badge_only.css                         |     1 +
 _static/css/theme.css                              |     6 +
 _static/docs.css                                   |    77 +
 _static/doctools.js                                |   315 +
 _static/documentation_options.js                   |    12 +
 _static/down-pressed.png                           |   Bin 0 -> 222 bytes
 _static/down.png                                   |   Bin 0 -> 202 bytes
 _static/file.png                                   |   Bin 0 -> 286 bytes
 _static/fonts/Inconsolata-Bold.ttf                 |   Bin 0 -> 109948 bytes
 _static/fonts/Inconsolata-Regular.ttf              |   Bin 0 -> 96964 bytes
 _static/fonts/Inconsolata.ttf                      |   Bin 0 -> 63184 bytes
 _static/fonts/Lato-Bold.ttf                        |   Bin 0 -> 656544 bytes
 _static/fonts/Lato-BoldItalic.ttf                  |   Bin 0 -> 698364 bytes
 _static/fonts/Lato-Italic.ttf                      |   Bin 0 -> 722900 bytes
 _static/fonts/Lato-Regular.ttf                     |   Bin 0 -> 656568 bytes
 _static/fonts/Lato/lato-bold.eot                   |   Bin 0 -> 256056 bytes
 _static/fonts/Lato/lato-bold.ttf                   |   Bin 0 -> 600856 bytes
 _static/fonts/Lato/lato-bold.woff                  |   Bin 0 -> 309728 bytes
 _static/fonts/Lato/lato-bold.woff2                 |   Bin 0 -> 184912 bytes
 _static/fonts/Lato/lato-bolditalic.eot             |   Bin 0 -> 266158 bytes
 _static/fonts/Lato/lato-bolditalic.ttf             |   Bin 0 -> 622572 bytes
 _static/fonts/Lato/lato-bolditalic.woff            |   Bin 0 -> 323344 bytes
 _static/fonts/Lato/lato-bolditalic.woff2           |   Bin 0 -> 193308 bytes
 _static/fonts/Lato/lato-italic.eot                 |   Bin 0 -> 268604 bytes
 _static/fonts/Lato/lato-italic.ttf                 |   Bin 0 -> 639388 bytes
 _static/fonts/Lato/lato-italic.woff                |   Bin 0 -> 328412 bytes
 _static/fonts/Lato/lato-italic.woff2               |   Bin 0 -> 195704 bytes
 _static/fonts/Lato/lato-regular.eot                |   Bin 0 -> 253461 bytes
 _static/fonts/Lato/lato-regular.ttf                |   Bin 0 -> 607720 bytes
 _static/fonts/Lato/lato-regular.woff               |   Bin 0 -> 309192 bytes
 _static/fonts/Lato/lato-regular.woff2              |   Bin 0 -> 182708 bytes
 _static/fonts/RobotoSlab-Bold.ttf                  |   Bin 0 -> 170616 bytes
 _static/fonts/RobotoSlab-Regular.ttf               |   Bin 0 -> 169064 bytes
 _static/fonts/RobotoSlab/roboto-slab-v7-bold.eot   |   Bin 0 -> 79520 bytes
 _static/fonts/RobotoSlab/roboto-slab-v7-bold.ttf   |   Bin 0 -> 170616 bytes
 _static/fonts/RobotoSlab/roboto-slab-v7-bold.woff  |   Bin 0 -> 87624 bytes
 _static/fonts/RobotoSlab/roboto-slab-v7-bold.woff2 |   Bin 0 -> 67312 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-regular.eot    |   Bin 0 -> 78331 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-regular.ttf    |   Bin 0 -> 169064 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-regular.woff   |   Bin 0 -> 86288 bytes
 .../fonts/RobotoSlab/roboto-slab-v7-regular.woff2  |   Bin 0 -> 66444 bytes
 _static/fonts/fontawesome-webfont.eot              |   Bin 0 -> 165742 bytes
 _static/fonts/fontawesome-webfont.svg              |  2671 +++
 _static/fonts/fontawesome-webfont.ttf              |   Bin 0 -> 165548 bytes
 _static/fonts/fontawesome-webfont.woff             |   Bin 0 -> 98024 bytes
 _static/fonts/fontawesome-webfont.woff2            |   Bin 0 -> 77160 bytes
 _static/images/apache_feather.png                  |   Bin 0 -> 138140 bytes
 _static/images/babies.png                          |   Bin 0 -> 59832 bytes
 _static/images/bubble.png                          |   Bin 0 -> 470048 bytes
 _static/images/cloud.png                           |   Bin 0 -> 718612 bytes
 _static/images/create_role.png                     |   Bin 0 -> 51474 bytes
 _static/images/dash.png                            |   Bin 0 -> 336285 bytes
 _static/images/druid_agg.png                       |   Bin 0 -> 104052 bytes
 _static/images/favicon.png                         |   Bin 0 -> 10863 bytes
 _static/images/icons/cancel-x.svg                  |    27 +
 _static/images/icons/check.svg                     |    22 +
 _static/images/icons/checkbox-half.svg             |    22 +
 _static/images/icons/checkbox-off.svg              |    21 +
 _static/images/icons/checkbox-on.svg               |    22 +
 _static/images/icons/circle-check-solid.svg        |    22 +
 _static/images/icons/circle-check.svg              |    22 +
 _static/images/icons/close.svg                     |    21 +
 _static/images/icons/compass.svg                   |    22 +
 _static/images/icons/dataset_physical.svg          |    21 +
 _static/images/icons/dataset_virtual.svg           |    22 +
 _static/images/icons/error.svg                     |    22 +
 _static/images/icons/pencil.svg                    |    21 +
 _static/images/icons/search.svg                    |    29 +
 _static/images/icons/share.svg                     |    25 +
 _static/images/icons/sort-asc.svg                  |    24 +
 _static/images/icons/sort-desc.svg                 |    24 +
 _static/images/icons/sort.svg                      |    21 +
 _static/images/icons/trash.svg                     |    21 +
 _static/images/icons/warning.svg                   |    22 +
 _static/images/loading.gif                         |   Bin 0 -> 79023 bytes
 _static/images/noimg.png                           |   Bin 0 -> 1101 bytes
 {src => _static}/images/s.png                      |   Bin
 _static/images/screenshots/bank_dash.png           |   Bin 0 -> 1600232 bytes
 _static/images/screenshots/deckgl_dash.png         |   Bin 0 -> 6777438 bytes
 _static/images/screenshots/explore.png             |   Bin 0 -> 659975 bytes
 _static/images/screenshots/sqllab.png              |   Bin 0 -> 791403 bytes
 _static/images/screenshots/visualizations.png      |   Bin 0 -> 2016718 bytes
 _static/images/superset-logo-horiz.png             |   Bin 0 -> 11310 bytes
 _static/images/superset-logo@2x.png                |   Bin 0 -> 4132 bytes
 _static/images/superset.png                        |   Bin 0 -> 4722 bytes
 _static/images/superset_screenshot.png             |   Bin 0 -> 565023 bytes
 _static/images/tutorial/add_db.png                 |   Bin 0 -> 157717 bytes
 .../tutorial/tutorial_01_sources_database.png      |   Bin 0 -> 19291 bytes
 .../images/tutorial/tutorial_02_add_database.png   |   Bin 0 -> 24994 bytes
 .../images/tutorial/tutorial_03_database_name.png  |   Bin 0 -> 13947 bytes
 .../tutorial_04_sqlalchemy_connection_string.png   |   Bin 0 -> 52808 bytes
 .../tutorial/tutorial_05_connection_popup.png      |   Bin 0 -> 84173 bytes
 .../images/tutorial/tutorial_06_list_of_tables.png |   Bin 0 -> 23859 bytes
 .../images/tutorial/tutorial_07_save_button.png    |   Bin 0 -> 8210 bytes
 .../images/tutorial/tutorial_08_sources_tables.png |   Bin 0 -> 18728 bytes
 .../images/tutorial/tutorial_09_add_new_table.png  |   Bin 0 -> 18777 bytes
 _static/images/tutorial/tutorial_10_table_name.png |   Bin 0 -> 26728 bytes
 _static/images/tutorial/tutorial_11_choose_db.png  |   Bin 0 -> 22024 bytes
 .../tutorial_12_table_creation_success_msg.png     |   Bin 0 -> 33013 bytes
 .../tutorial/tutorial_13_edit_table_config.png     |   Bin 0 -> 32220 bytes
 .../images/tutorial/tutorial_14_field_config.png   |   Bin 0 -> 61811 bytes
 .../tutorial/tutorial_15_click_table_name.png      |   Bin 0 -> 7863 bytes
 .../tutorial/tutorial_16_datasource_chart_type.png |   Bin 0 -> 13822 bytes
 .../tutorial/tutorial_17_choose_time_range.png     |   Bin 0 -> 11627 bytes
 .../images/tutorial/tutorial_18_choose_metric.png  |   Bin 0 -> 12536 bytes
 .../images/tutorial/tutorial_19_click_query.png    |   Bin 0 -> 5734 bytes
 .../tutorial/tutorial_20_count_star_result.png     |   Bin 0 -> 5333 bytes
 _static/images/tutorial/tutorial_21_group_by.png   |   Bin 0 -> 6840 bytes
 .../tutorial/tutorial_22_group_by_result.png       |   Bin 0 -> 22576 bytes
 .../tutorial_23_group_by_more_dimensions.png       |   Bin 0 -> 8191 bytes
 _static/images/tutorial/tutorial_24_max_metric.png |   Bin 0 -> 6731 bytes
 .../tutorial/tutorial_25_max_temp_filter.png       |   Bin 0 -> 11654 bytes
 _static/images/tutorial/tutorial_26_row_limit.png  |   Bin 0 -> 4927 bytes
 .../tutorial/tutorial_27_top_10_max_temps.png      |   Bin 0 -> 49871 bytes
 _static/images/tutorial/tutorial_28_bar_chart.png  |   Bin 0 -> 14757 bytes
 .../tutorial_29_bar_chart_series_metrics.png       |   Bin 0 -> 20374 bytes
 .../tutorial/tutorial_30_bar_chart_results.png     |   Bin 0 -> 75926 bytes
 .../tutorial_31_save_slice_to_dashboard.png        |   Bin 0 -> 33789 bytes
 .../tutorial_32_save_slice_confirmation.png        |   Bin 0 -> 24041 bytes
 _static/images/tutorial/tutorial_33_dashboard.png  |   Bin 0 -> 5232 bytes
 .../tutorial/tutorial_34_weather_dashboard.png     |   Bin 0 -> 6703 bytes
 .../tutorial/tutorial_35_slice_on_dashboard.png    |   Bin 0 -> 66781 bytes
 .../tutorial/tutorial_36_adjust_dimensions.gif     |   Bin 0 -> 126264 bytes
 _static/images/usertutorial/add_new_chart.png      |   Bin 0 -> 42447 bytes
 .../usertutorial/advanced_analytics_base.png       |   Bin 0 -> 122647 bytes
 _static/images/usertutorial/annotation.png         |   Bin 0 -> 101822 bytes
 .../images/usertutorial/annotation_settings.png    |   Bin 0 -> 22421 bytes
 .../usertutorial/average_aggregate_for_cost.png    |   Bin 0 -> 31741 bytes
 .../usertutorial/blue_bar_insert_component.png     |   Bin 0 -> 56554 bytes
 _static/images/usertutorial/chose_a_datasource.png |   Bin 0 -> 21013 bytes
 .../usertutorial/csv_to_database_configuration.png |   Bin 0 -> 30607 bytes
 _static/images/usertutorial/edit-record.png        |   Bin 0 -> 4940 bytes
 _static/images/usertutorial/edit_annotation.png    |   Bin 0 -> 34104 bytes
 .../usertutorial/filter_on_origin_country.png      |   Bin 0 -> 44695 bytes
 _static/images/usertutorial/markdown.png           |   Bin 0 -> 9030 bytes
 .../usertutorial/no_filter_on_time_filter.png      |   Bin 0 -> 35991 bytes
 _static/images/usertutorial/parse_dates_column.png |   Bin 0 -> 22004 bytes
 _static/images/usertutorial/publish_dashboard.png  |   Bin 0 -> 67785 bytes
 _static/images/usertutorial/resample.png           |   Bin 0 -> 88488 bytes
 .../resize_tutorial_table_on_dashboard.png         |   Bin 0 -> 40391 bytes
 _static/images/usertutorial/rolling_mean.png       |   Bin 0 -> 99706 bytes
 .../images/usertutorial/save_tutorial_table.png    |   Bin 0 -> 8282 bytes
 .../usertutorial/select_dates_pivot_table.png      |   Bin 0 -> 35466 bytes
 .../select_table_visualization_type.png            |   Bin 0 -> 47283 bytes
 _static/images/usertutorial/sum_cost_column.png    |   Bin 0 -> 36632 bytes
 .../time_comparison_absolute_difference.png        |   Bin 0 -> 98829 bytes
 .../usertutorial/time_comparison_two_series.png    |   Bin 0 -> 137009 bytes
 .../images/usertutorial/tutorial_line_chart.png    |   Bin 0 -> 97148 bytes
 .../images/usertutorial/tutorial_pivot_table.png   |   Bin 0 -> 57761 bytes
 _static/images/usertutorial/tutorial_table.png     |   Bin 0 -> 35897 bytes
 _static/images/usertutorial/upload_a_csv.png       |   Bin 0 -> 38213 bytes
 _static/images/viz_thumbnails/area.png             |   Bin 0 -> 14469 bytes
 _static/images/viz_thumbnails/bar.png              |   Bin 0 -> 9058 bytes
 _static/images/viz_thumbnails/big_number.png       |   Bin 0 -> 103045 bytes
 _static/images/viz_thumbnails/big_number_total.png |   Bin 0 -> 4925 bytes
 _static/images/viz_thumbnails/box_plot.png         |   Bin 0 -> 9496 bytes
 _static/images/viz_thumbnails/bubble.png           |   Bin 0 -> 22779 bytes
 _static/images/viz_thumbnails/bullet.png           |   Bin 0 -> 2174 bytes
 _static/images/viz_thumbnails/cal_heatmap.png      |   Bin 0 -> 11238 bytes
 _static/images/viz_thumbnails/chord.png            |   Bin 0 -> 39273 bytes
 _static/images/viz_thumbnails/compare.png          |   Bin 0 -> 32918 bytes
 _static/images/viz_thumbnails/country_map.png      |   Bin 0 -> 41210 bytes
 _static/images/viz_thumbnails/deck_arc.png         |   Bin 0 -> 38815 bytes
 _static/images/viz_thumbnails/deck_geojson.png     |   Bin 0 -> 42386 bytes
 _static/images/viz_thumbnails/deck_grid.png        |   Bin 0 -> 143670 bytes
 _static/images/viz_thumbnails/deck_hex.png         |   Bin 0 -> 85015 bytes
 _static/images/viz_thumbnails/deck_multi.png       |   Bin 0 -> 106790 bytes
 _static/images/viz_thumbnails/deck_path.png        |   Bin 0 -> 75705 bytes
 _static/images/viz_thumbnails/deck_polygon.png     |   Bin 0 -> 37261 bytes
 _static/images/viz_thumbnails/deck_scatter.png     |   Bin 0 -> 120091 bytes
 _static/images/viz_thumbnails/deck_screengrid.png  |   Bin 0 -> 76990 bytes
 _static/images/viz_thumbnails/directed_force.png   |   Bin 0 -> 42753 bytes
 _static/images/viz_thumbnails/dist_bar.png         |   Bin 0 -> 8752 bytes
 _static/images/viz_thumbnails/dual_line.png        |   Bin 0 -> 19229 bytes
 _static/images/viz_thumbnails/event_flow.png       |   Bin 0 -> 17191 bytes
 _static/images/viz_thumbnails/filter_box.png       |   Bin 0 -> 8550 bytes
 _static/images/viz_thumbnails/heatmap.png          |   Bin 0 -> 39866 bytes
 _static/images/viz_thumbnails/histogram.png        |   Bin 0 -> 9717 bytes
 _static/images/viz_thumbnails/horizon.png          |   Bin 0 -> 24924 bytes
 _static/images/viz_thumbnails/iframe.png           |   Bin 0 -> 50998 bytes
 _static/images/viz_thumbnails/line.png             |   Bin 0 -> 42915 bytes
 _static/images/viz_thumbnails/line_multi.png       |   Bin 0 -> 54363 bytes
 _static/images/viz_thumbnails/mapbox.png           |   Bin 0 -> 85714 bytes
 _static/images/viz_thumbnails/markup.png           |   Bin 0 -> 23186 bytes
 _static/images/viz_thumbnails/multi.png            |   Bin 0 -> 108443 bytes
 _static/images/viz_thumbnails/paired_ttest.png     |   Bin 0 -> 23323 bytes
 _static/images/viz_thumbnails/para.png             |   Bin 0 -> 52039 bytes
 _static/images/viz_thumbnails/partition.png        |   Bin 0 -> 11364 bytes
 _static/images/viz_thumbnails/pie.png              |   Bin 0 -> 6007 bytes
 _static/images/viz_thumbnails/pivot_table.png      |   Bin 0 -> 54527 bytes
 _static/images/viz_thumbnails/rose.png             |   Bin 0 -> 37386 bytes
 _static/images/viz_thumbnails/sankey.png           |   Bin 0 -> 43735 bytes
 _static/images/viz_thumbnails/separator.png        |   Bin 0 -> 16632 bytes
 _static/images/viz_thumbnails/sunburst.png         |   Bin 0 -> 26030 bytes
 _static/images/viz_thumbnails/table.png            |   Bin 0 -> 22572 bytes
 _static/images/viz_thumbnails/time_pivot.png       |   Bin 0 -> 13209 bytes
 _static/images/viz_thumbnails/time_table.png       |   Bin 0 -> 17464 bytes
 _static/images/viz_thumbnails/treemap.png          |   Bin 0 -> 16623 bytes
 _static/images/viz_thumbnails/word_cloud.png       |   Bin 0 -> 24048 bytes
 _static/images/viz_thumbnails/world_map.png        |   Bin 0 -> 28598 bytes
 _static/img/apache_feather.png                     |   Bin 0 -> 138140 bytes
 _static/img/babies.png                             |   Bin 0 -> 59832 bytes
 _static/img/babytux.jpg                            |   Bin 0 -> 10131 bytes
 _static/img/bubble.png                             |   Bin 0 -> 470048 bytes
 _static/img/cloud.png                              |   Bin 0 -> 718612 bytes
 _static/img/create_role.png                        |   Bin 0 -> 51474 bytes
 _static/img/dash.png                               |   Bin 0 -> 336285 bytes
 _static/img/docs/apache_feather.png                |   Bin 0 -> 138140 bytes
 _static/img/docs/create_role.png                   |   Bin 0 -> 51474 bytes
 _static/img/docs/druid_agg.png                     |   Bin 0 -> 104052 bytes
 _static/img/docs/screenshots/bank_dash.png         |   Bin 0 -> 1532812 bytes
 _static/img/docs/screenshots/deckgl_dash.png       |   Bin 0 -> 6777438 bytes
 _static/img/docs/screenshots/explore.png           |   Bin 0 -> 674489 bytes
 _static/img/docs/screenshots/sqllab.png            |   Bin 0 -> 514789 bytes
 _static/img/docs/screenshots/visualizations.png    |   Bin 0 -> 2016718 bytes
 _static/img/docs/tutorial/add_db.png               |   Bin 0 -> 157717 bytes
 .../docs/tutorial/tutorial_01_sources_database.png |   Bin 0 -> 19291 bytes
 .../img/docs/tutorial/tutorial_02_add_database.png |   Bin 0 -> 24994 bytes
 .../docs/tutorial/tutorial_03_database_name.png    |   Bin 0 -> 13947 bytes
 .../tutorial_04_sqlalchemy_connection_string.png   |   Bin 0 -> 52808 bytes
 .../docs/tutorial/tutorial_05_connection_popup.png |   Bin 0 -> 84173 bytes
 .../docs/tutorial/tutorial_06_list_of_tables.png   |   Bin 0 -> 23859 bytes
 .../img/docs/tutorial/tutorial_07_save_button.png  |   Bin 0 -> 8210 bytes
 .../docs/tutorial/tutorial_08_sources_tables.png   |   Bin 0 -> 18728 bytes
 .../docs/tutorial/tutorial_09_add_new_table.png    |   Bin 0 -> 18777 bytes
 .../img/docs/tutorial/tutorial_10_table_name.png   |   Bin 0 -> 26728 bytes
 .../img/docs/tutorial/tutorial_11_choose_db.png    |   Bin 0 -> 22024 bytes
 .../tutorial_12_table_creation_success_msg.png     |   Bin 0 -> 33013 bytes
 .../tutorial/tutorial_13_edit_table_config.png     |   Bin 0 -> 32220 bytes
 .../img/docs/tutorial/tutorial_14_field_config.png |   Bin 0 -> 61811 bytes
 .../docs/tutorial/tutorial_15_click_table_name.png |   Bin 0 -> 7863 bytes
 .../tutorial/tutorial_16_datasource_chart_type.png |   Bin 0 -> 13822 bytes
 .../tutorial/tutorial_17_choose_time_range.png     |   Bin 0 -> 11627 bytes
 .../docs/tutorial/tutorial_18_choose_metric.png    |   Bin 0 -> 12536 bytes
 .../img/docs/tutorial/tutorial_19_click_query.png  |   Bin 0 -> 5734 bytes
 .../tutorial/tutorial_20_count_star_result.png     |   Bin 0 -> 5333 bytes
 _static/img/docs/tutorial/tutorial_21_group_by.png |   Bin 0 -> 6840 bytes
 .../docs/tutorial/tutorial_22_group_by_result.png  |   Bin 0 -> 22576 bytes
 .../tutorial_23_group_by_more_dimensions.png       |   Bin 0 -> 8191 bytes
 .../img/docs/tutorial/tutorial_24_max_metric.png   |   Bin 0 -> 6731 bytes
 .../docs/tutorial/tutorial_25_max_temp_filter.png  |   Bin 0 -> 11654 bytes
 .../img/docs/tutorial/tutorial_26_row_limit.png    |   Bin 0 -> 4927 bytes
 .../docs/tutorial/tutorial_27_top_10_max_temps.png |   Bin 0 -> 49871 bytes
 .../img/docs/tutorial/tutorial_28_bar_chart.png    |   Bin 0 -> 14757 bytes
 .../tutorial_29_bar_chart_series_metrics.png       |   Bin 0 -> 20374 bytes
 .../tutorial/tutorial_30_bar_chart_results.png     |   Bin 0 -> 75926 bytes
 .../tutorial_31_save_slice_to_dashboard.png        |   Bin 0 -> 33789 bytes
 .../tutorial_32_save_slice_confirmation.png        |   Bin 0 -> 24041 bytes
 .../img/docs/tutorial/tutorial_33_dashboard.png    |   Bin 0 -> 5232 bytes
 .../tutorial/tutorial_34_weather_dashboard.png     |   Bin 0 -> 6703 bytes
 .../tutorial/tutorial_35_slice_on_dashboard.png    |   Bin 0 -> 66781 bytes
 .../tutorial/tutorial_36_adjust_dimensions.gif     |   Bin 0 -> 126264 bytes
 _static/img/druid_agg.png                          |   Bin 0 -> 104052 bytes
 _static/img/favicon.png                            |   Bin 0 -> 6927 bytes
 _static/img/loading.gif                            |   Bin 0 -> 79023 bytes
 _static/img/noimg.png                              |   Bin 0 -> 1101 bytes
 _static/img/s.png                                  |   Bin 0 -> 11833 bytes
 _static/img/screenshots/bank_dash.png              |   Bin 0 -> 1532812 bytes
 _static/img/screenshots/deckgl_dash.png            |   Bin 0 -> 6777438 bytes
 _static/img/screenshots/explore.png                |   Bin 0 -> 674489 bytes
 _static/img/screenshots/sqllab.png                 |   Bin 0 -> 514789 bytes
 _static/img/screenshots/visualizations.png         |   Bin 0 -> 2016718 bytes
 _static/img/superset-logo@2x.png                   |   Bin 0 -> 4132 bytes
 _static/img/superset.png                           |   Bin 0 -> 4722 bytes
 _static/img/superset_screenshot.png                |   Bin 0 -> 565023 bytes
 _static/img/tutorial/add_db.png                    |   Bin 0 -> 157717 bytes
 .../img/tutorial/tutorial_01_sources_database.png  |   Bin 0 -> 19291 bytes
 _static/img/tutorial/tutorial_02_add_database.png  |   Bin 0 -> 24994 bytes
 _static/img/tutorial/tutorial_03_database_name.png |   Bin 0 -> 13947 bytes
 .../tutorial_04_sqlalchemy_connection_string.png   |   Bin 0 -> 52808 bytes
 .../img/tutorial/tutorial_05_connection_popup.png  |   Bin 0 -> 84173 bytes
 .../img/tutorial/tutorial_06_list_of_tables.png    |   Bin 0 -> 23859 bytes
 _static/img/tutorial/tutorial_07_save_button.png   |   Bin 0 -> 8210 bytes
 .../img/tutorial/tutorial_08_sources_tables.png    |   Bin 0 -> 18728 bytes
 _static/img/tutorial/tutorial_09_add_new_table.png |   Bin 0 -> 18777 bytes
 _static/img/tutorial/tutorial_10_table_name.png    |   Bin 0 -> 26728 bytes
 _static/img/tutorial/tutorial_11_choose_db.png     |   Bin 0 -> 22024 bytes
 .../tutorial_12_table_creation_success_msg.png     |   Bin 0 -> 33013 bytes
 .../img/tutorial/tutorial_13_edit_table_config.png |   Bin 0 -> 32220 bytes
 _static/img/tutorial/tutorial_14_field_config.png  |   Bin 0 -> 61811 bytes
 .../img/tutorial/tutorial_15_click_table_name.png  |   Bin 0 -> 7863 bytes
 .../tutorial/tutorial_16_datasource_chart_type.png |   Bin 0 -> 13822 bytes
 .../img/tutorial/tutorial_17_choose_time_range.png |   Bin 0 -> 11627 bytes
 _static/img/tutorial/tutorial_18_choose_metric.png |   Bin 0 -> 12536 bytes
 _static/img/tutorial/tutorial_19_click_query.png   |   Bin 0 -> 5734 bytes
 .../img/tutorial/tutorial_20_count_star_result.png |   Bin 0 -> 5333 bytes
 _static/img/tutorial/tutorial_21_group_by.png      |   Bin 0 -> 6840 bytes
 .../img/tutorial/tutorial_22_group_by_result.png   |   Bin 0 -> 22576 bytes
 .../tutorial_23_group_by_more_dimensions.png       |   Bin 0 -> 8191 bytes
 _static/img/tutorial/tutorial_24_max_metric.png    |   Bin 0 -> 6731 bytes
 .../img/tutorial/tutorial_25_max_temp_filter.png   |   Bin 0 -> 11654 bytes
 _static/img/tutorial/tutorial_26_row_limit.png     |   Bin 0 -> 4927 bytes
 .../img/tutorial/tutorial_27_top_10_max_temps.png  |   Bin 0 -> 49871 bytes
 _static/img/tutorial/tutorial_28_bar_chart.png     |   Bin 0 -> 14757 bytes
 .../tutorial_29_bar_chart_series_metrics.png       |   Bin 0 -> 20374 bytes
 .../img/tutorial/tutorial_30_bar_chart_results.png |   Bin 0 -> 75926 bytes
 .../tutorial_31_save_slice_to_dashboard.png        |   Bin 0 -> 33789 bytes
 .../tutorial_32_save_slice_confirmation.png        |   Bin 0 -> 24041 bytes
 _static/img/tutorial/tutorial_33_dashboard.png     |   Bin 0 -> 5232 bytes
 .../img/tutorial/tutorial_34_weather_dashboard.png |   Bin 0 -> 6703 bytes
 .../tutorial/tutorial_35_slice_on_dashboard.png    |   Bin 0 -> 66781 bytes
 .../img/tutorial/tutorial_36_adjust_dimensions.gif |   Bin 0 -> 126264 bytes
 _static/img/usertutorial/add_new_chart.png         |   Bin 0 -> 42447 bytes
 .../img/usertutorial/advanced_analytics_base.png   |   Bin 0 -> 122647 bytes
 _static/img/usertutorial/annotation.png            |   Bin 0 -> 101822 bytes
 _static/img/usertutorial/annotation_settings.png   |   Bin 0 -> 22421 bytes
 .../usertutorial/average_aggregate_for_cost.png    |   Bin 0 -> 31741 bytes
 .../img/usertutorial/blue_bar_insert_component.png |   Bin 0 -> 56554 bytes
 _static/img/usertutorial/chose_a_datasource.png    |   Bin 0 -> 21013 bytes
 .../usertutorial/csv_to_database_configuration.png |   Bin 0 -> 30607 bytes
 _static/img/usertutorial/edit-record.png           |   Bin 0 -> 4940 bytes
 _static/img/usertutorial/edit_annotation.png       |   Bin 0 -> 34104 bytes
 .../img/usertutorial/filter_on_origin_country.png  |   Bin 0 -> 44695 bytes
 _static/img/usertutorial/markdown.png              |   Bin 0 -> 9030 bytes
 .../img/usertutorial/no_filter_on_time_filter.png  |   Bin 0 -> 35991 bytes
 _static/img/usertutorial/parse_dates_column.png    |   Bin 0 -> 22004 bytes
 _static/img/usertutorial/publish_dashboard.png     |   Bin 0 -> 67785 bytes
 _static/img/usertutorial/resample.png              |   Bin 0 -> 88488 bytes
 .../resize_tutorial_table_on_dashboard.png         |   Bin 0 -> 40391 bytes
 _static/img/usertutorial/rolling_mean.png          |   Bin 0 -> 99706 bytes
 _static/img/usertutorial/save_tutorial_table.png   |   Bin 0 -> 8282 bytes
 .../img/usertutorial/select_dates_pivot_table.png  |   Bin 0 -> 35466 bytes
 .../select_table_visualization_type.png            |   Bin 0 -> 47283 bytes
 _static/img/usertutorial/sum_cost_column.png       |   Bin 0 -> 36632 bytes
 .../time_comparison_absolute_difference.png        |   Bin 0 -> 98829 bytes
 .../usertutorial/time_comparison_two_series.png    |   Bin 0 -> 137009 bytes
 _static/img/usertutorial/tutorial_line_chart.png   |   Bin 0 -> 97148 bytes
 _static/img/usertutorial/tutorial_pivot_table.png  |   Bin 0 -> 57761 bytes
 _static/img/usertutorial/tutorial_table.png        |   Bin 0 -> 35897 bytes
 _static/img/usertutorial/upload_a_csv.png          |   Bin 0 -> 38213 bytes
 _static/img/viz_thumbnails/area.png                |   Bin 0 -> 14469 bytes
 _static/img/viz_thumbnails/bar.png                 |   Bin 0 -> 9058 bytes
 _static/img/viz_thumbnails/big_number.png          |   Bin 0 -> 103045 bytes
 _static/img/viz_thumbnails/big_number_total.png    |   Bin 0 -> 4925 bytes
 _static/img/viz_thumbnails/box_plot.png            |   Bin 0 -> 9496 bytes
 _static/img/viz_thumbnails/bubble.png              |   Bin 0 -> 22779 bytes
 _static/img/viz_thumbnails/bullet.png              |   Bin 0 -> 2174 bytes
 _static/img/viz_thumbnails/cal_heatmap.png         |   Bin 0 -> 11238 bytes
 _static/img/viz_thumbnails/chord.png               |   Bin 0 -> 39273 bytes
 _static/img/viz_thumbnails/compare.png             |   Bin 0 -> 32918 bytes
 _static/img/viz_thumbnails/country_map.png         |   Bin 0 -> 41210 bytes
 _static/img/viz_thumbnails/deck_arc.png            |   Bin 0 -> 38815 bytes
 _static/img/viz_thumbnails/deck_geojson.png        |   Bin 0 -> 42386 bytes
 _static/img/viz_thumbnails/deck_grid.png           |   Bin 0 -> 143670 bytes
 _static/img/viz_thumbnails/deck_hex.png            |   Bin 0 -> 85015 bytes
 _static/img/viz_thumbnails/deck_multi.png          |   Bin 0 -> 106790 bytes
 _static/img/viz_thumbnails/deck_path.png           |   Bin 0 -> 75705 bytes
 _static/img/viz_thumbnails/deck_polygon.png        |   Bin 0 -> 37261 bytes
 _static/img/viz_thumbnails/deck_scatter.png        |   Bin 0 -> 120091 bytes
 _static/img/viz_thumbnails/deck_screengrid.png     |   Bin 0 -> 76990 bytes
 _static/img/viz_thumbnails/directed_force.png      |   Bin 0 -> 42753 bytes
 _static/img/viz_thumbnails/dist_bar.png            |   Bin 0 -> 8752 bytes
 _static/img/viz_thumbnails/dual_line.png           |   Bin 0 -> 19229 bytes
 _static/img/viz_thumbnails/event_flow.png          |   Bin 0 -> 17191 bytes
 _static/img/viz_thumbnails/filter_box.png          |   Bin 0 -> 8550 bytes
 _static/img/viz_thumbnails/heatmap.png             |   Bin 0 -> 39866 bytes
 _static/img/viz_thumbnails/histogram.png           |   Bin 0 -> 9717 bytes
 _static/img/viz_thumbnails/horizon.png             |   Bin 0 -> 24924 bytes
 _static/img/viz_thumbnails/iframe.png              |   Bin 0 -> 50998 bytes
 _static/img/viz_thumbnails/line.png                |   Bin 0 -> 42915 bytes
 _static/img/viz_thumbnails/line_multi.png          |   Bin 0 -> 54363 bytes
 _static/img/viz_thumbnails/mapbox.png              |   Bin 0 -> 85714 bytes
 _static/img/viz_thumbnails/markup.png              |   Bin 0 -> 23186 bytes
 _static/img/viz_thumbnails/multi.png               |   Bin 0 -> 108443 bytes
 _static/img/viz_thumbnails/paired_ttest.png        |   Bin 0 -> 23323 bytes
 _static/img/viz_thumbnails/para.png                |   Bin 0 -> 52039 bytes
 _static/img/viz_thumbnails/partition.png           |   Bin 0 -> 11364 bytes
 _static/img/viz_thumbnails/pie.png                 |   Bin 0 -> 6007 bytes
 _static/img/viz_thumbnails/pivot_table.png         |   Bin 0 -> 54527 bytes
 _static/img/viz_thumbnails/rose.png                |   Bin 0 -> 37386 bytes
 _static/img/viz_thumbnails/sankey.png              |   Bin 0 -> 43735 bytes
 _static/img/viz_thumbnails/separator.png           |   Bin 0 -> 16632 bytes
 _static/img/viz_thumbnails/sunburst.png            |   Bin 0 -> 26030 bytes
 _static/img/viz_thumbnails/table.png               |   Bin 0 -> 22572 bytes
 _static/img/viz_thumbnails/time_pivot.png          |   Bin 0 -> 13209 bytes
 _static/img/viz_thumbnails/time_table.png          |   Bin 0 -> 17464 bytes
 _static/img/viz_thumbnails/treemap.png             |   Bin 0 -> 16623 bytes
 _static/img/viz_thumbnails/word_cloud.png          |   Bin 0 -> 24048 bytes
 _static/img/viz_thumbnails/world_map.png           |   Bin 0 -> 28598 bytes
 _static/img/viz_thumbnails_large/area.png          |   Bin 0 -> 105237 bytes
 _static/img/viz_thumbnails_large/bar.png           |   Bin 0 -> 50564 bytes
 _static/img/viz_thumbnails_large/big_number.png    |   Bin 0 -> 51404 bytes
 .../img/viz_thumbnails_large/big_number_total.png  |   Bin 0 -> 27637 bytes
 _static/img/viz_thumbnails_large/box_plot.png      |   Bin 0 -> 56136 bytes
 _static/img/viz_thumbnails_large/bubble.png        |   Bin 0 -> 135455 bytes
 _static/img/viz_thumbnails_large/bullet.png        |   Bin 0 -> 8764 bytes
 _static/img/viz_thumbnails_large/cal_heatmap.png   |   Bin 0 -> 31627 bytes
 _static/img/viz_thumbnails_large/chord.png         |   Bin 0 -> 407616 bytes
 _static/img/viz_thumbnails_large/compare.png       |   Bin 0 -> 258894 bytes
 _static/img/viz_thumbnails_large/country_map.png   |   Bin 0 -> 303004 bytes
 _static/img/viz_thumbnails_large/deck_arc.png      |   Bin 0 -> 230107 bytes
 _static/img/viz_thumbnails_large/deck_geojson.png  |   Bin 0 -> 181512 bytes
 _static/img/viz_thumbnails_large/deck_grid.png     |   Bin 0 -> 2125810 bytes
 _static/img/viz_thumbnails_large/deck_hex.png      |   Bin 0 -> 1090997 bytes
 _static/img/viz_thumbnails_large/deck_multi.png    |   Bin 0 -> 991412 bytes
 _static/img/viz_thumbnails_large/deck_path.png     |   Bin 0 -> 523094 bytes
 _static/img/viz_thumbnails_large/deck_polygon.png  |   Bin 0 -> 443630 bytes
 _static/img/viz_thumbnails_large/deck_scatter.png  |   Bin 0 -> 795739 bytes
 .../img/viz_thumbnails_large/deck_screengrid.png   |   Bin 0 -> 591701 bytes
 .../img/viz_thumbnails_large/directed_force.png    |   Bin 0 -> 247382 bytes
 _static/img/viz_thumbnails_large/dist_bar.png      |   Bin 0 -> 52519 bytes
 _static/img/viz_thumbnails_large/dual_line.png     |   Bin 0 -> 165716 bytes
 _static/img/viz_thumbnails_large/event_flow.png    |   Bin 0 -> 108626 bytes
 _static/img/viz_thumbnails_large/filter_box.png    |   Bin 0 -> 49653 bytes
 _static/img/viz_thumbnails_large/heatmap.png       |   Bin 0 -> 435496 bytes
 _static/img/viz_thumbnails_large/histogram.png     |   Bin 0 -> 64899 bytes
 _static/img/viz_thumbnails_large/horizon.png       |   Bin 0 -> 165253 bytes
 _static/img/viz_thumbnails_large/iframe.png        |   Bin 0 -> 755166 bytes
 _static/img/viz_thumbnails_large/line.png          |   Bin 0 -> 321509 bytes
 _static/img/viz_thumbnails_large/line_multi.png    |   Bin 0 -> 116138 bytes
 _static/img/viz_thumbnails_large/mapbox.png        |   Bin 0 -> 225567 bytes
 _static/img/viz_thumbnails_large/markup.png        |   Bin 0 -> 227846 bytes
 _static/img/viz_thumbnails_large/multi.png         |   Bin 0 -> 761211 bytes
 _static/img/viz_thumbnails_large/paired_ttest.png  |   Bin 0 -> 236049 bytes
 _static/img/viz_thumbnails_large/para.png          |   Bin 0 -> 471027 bytes
 _static/img/viz_thumbnails_large/partition.png     |   Bin 0 -> 198125 bytes
 _static/img/viz_thumbnails_large/pie.png           |   Bin 0 -> 28302 bytes
 _static/img/viz_thumbnails_large/pivot_table.png   |   Bin 0 -> 276020 bytes
 _static/img/viz_thumbnails_large/rose.png          |   Bin 0 -> 506254 bytes
 _static/img/viz_thumbnails_large/sankey.png        |   Bin 0 -> 205313 bytes
 _static/img/viz_thumbnails_large/separator.png     |   Bin 0 -> 101451 bytes
 _static/img/viz_thumbnails_large/sunburst.png      |   Bin 0 -> 173806 bytes
 _static/img/viz_thumbnails_large/table.png         |   Bin 0 -> 109326 bytes
 _static/img/viz_thumbnails_large/time_pivot.png    |   Bin 0 -> 84481 bytes
 _static/img/viz_thumbnails_large/time_table.png    |   Bin 0 -> 65153 bytes
 _static/img/viz_thumbnails_large/treemap.png       |   Bin 0 -> 96420 bytes
 _static/img/viz_thumbnails_large/word_cloud.png    |   Bin 0 -> 117846 bytes
 _static/img/viz_thumbnails_large/world_map.png     |   Bin 0 -> 136501 bytes
 _static/jquery-1.11.1.js                           | 10308 +++++++++
 _static/jquery-3.1.0.js                            | 10074 ++++++++
 _static/jquery-3.2.1.js                            | 10253 +++++++++
 _static/jquery-3.4.1.js                            | 10598 +++++++++
 _static/jquery.js                                  |     2 +
 _static/js/modernizr.min.js                        |     4 +
 _static/js/theme.js                                |     3 +
 _static/language_data.js                           |   297 +
 _static/minus.png                                  |   Bin 0 -> 90 bytes
 _static/plus.png                                   |   Bin 0 -> 90 bytes
 _static/pygments.css                               |    69 +
 _static/s.png                                      |   Bin 0 -> 11833 bytes
 _static/searchtools.js                             |   515 +
 _static/underscore-1.3.1.js                        |   999 +
 _static/underscore.js                              |    31 +
 _static/up-pressed.png                             |   Bin 0 -> 214 bytes
 _static/up.png                                     |   Bin 0 -> 203 bytes
 _static/websupport.js                              |   808 +
 admintutorial.html                                 |   422 +
 doczrc.js                                          |    12 -
 druid.html                                         |   252 +
 faq.html                                           |   483 +
 gallery.html                                       |   253 +
 gatsby-browser.js                                  |     7 -
 gatsby-config.js                                   |    35 -
 gatsby-node.js                                     |     7 -
 gatsby-ssr.js                                      |     7 -
 genindex.html                                      |   276 +
 import_export_datasources.html                     |   317 +
 index.html                                         |   436 +
 installation.html                                  |  1731 ++
 issue_code_reference.html                          |   226 +
 misc.html                                          |   228 +
 objects.inv                                        |   Bin 0 -> 718 bytes
 package-lock.json                                  | 22965 -------------------
 package.json                                       |    57 -
 prettier.config.js                                 |     7 -
 search.html                                        |   212 +
 searchindex.js                                     |     1 +
 security.html                                      |   363 +
 sql                                                |    13 +
 sqllab.html                                        |   529 +
 src/components/footer.tsx                          |   123 -
 src/components/image.tsx                           |    79 -
 src/components/layout.css                          |    16 -
 src/components/layout.tsx                          |   176 -
 src/components/menu.tsx                            |    39 -
 src/components/next.tsx                            |    28 -
 src/components/select.tsx                          |    31 -
 src/components/seo.js                              |    81 -
 src/gatsby-theme-docz/index.tsx                    |    23 -
 src/images/apache-drill.png                        |   Bin 40173 -> 0 bytes
 src/images/apache-druid.jpeg                       |   Bin 214904 -> 0 bytes
 src/images/apache-druid.png                        |   Bin 12839 -> 0 bytes
 src/images/apache-hive.svg                         |    51 -
 src/images/apache-impala.png                       |   Bin 5216 -> 0 bytes
 src/images/apache-kylin.png                        |   Bin 13694 -> 0 bytes
 src/images/aws-redshift.png                        |   Bin 9168 -> 0 bytes
 src/images/clickhouse.png                          |   Bin 7651 -> 0 bytes
 src/images/docker.png                              |   Bin 24928 -> 0 bytes
 src/images/exasol.png                              |   Bin 8582 -> 0 bytes
 src/images/firebird.png                            |   Bin 10895 -> 0 bytes
 src/images/gatsby-astronaut.png                    |   Bin 167273 -> 0 bytes
 src/images/gatsby-icon.png                         |   Bin 21212 -> 0 bytes
 src/images/googleBQ.png                            |   Bin 16418 -> 0 bytes
 src/images/greenplum.jpeg                          |   Bin 7559 -> 0 bytes
 src/images/greenplum.png                           |   Bin 17811 -> 0 bytes
 src/images/ibmdb2.png                              |   Bin 14127 -> 0 bytes
 src/images/monet.png                               |   Bin 21830 -> 0 bytes
 src/images/msql.png                                |   Bin 21970 -> 0 bytes
 src/images/mysql.html                              |   573 -
 src/images/mysql.png                               |   Bin 14453 -> 0 bytes
 src/images/oracle-logo.png                         |   Bin 10347 -> 0 bytes
 src/images/oracle.png                              |   Bin 8231 -> 0 bytes
 src/images/oraclelogo.png                          |   Bin 29864 -> 0 bytes
 src/images/postgresql.jpg                          |   Bin 19019 -> 0 bytes
 src/images/postsql.png                             |   Bin 44334 -> 0 bytes
 src/images/preset.png                              |   Bin 39030 -> 0 bytes
 src/images/preset.svg                              |    15 -
 src/images/presto-og.png                           |   Bin 18505 -> 0 bytes
 src/images/snowflake.png                           |   Bin 21654 -> 0 bytes
 src/images/sqllite.jpg                             |   Bin 13006 -> 0 bytes
 src/images/sqllite.png                             |   Bin 38063 -> 0 bytes
 src/images/stack_overflow.png                      |   Bin 30065 -> 0 bytes
 src/images/superset-logo-horiz-apache.png          |   Bin 121779 -> 0 bytes
 src/images/vertica.png                             |   Bin 6800 -> 0 bytes
 src/pages/404.js                                   |    14 -
 src/pages/community.tsx                            |   128 -
 src/pages/docs/Best Practices/index.mdx            |    21 -
 .../docs/Build Your Own Viz Plugins/index.mdx      |   168 -
 src/pages/docs/Database Connectors/dremio.mdx      |    19 -
 src/pages/docs/Database Connectors/drill.mdx       |    34 -
 src/pages/docs/Database Connectors/druid.mdx       |    19 -
 .../docs/Database Connectors/elasticsearch.mdx     |    46 -
 .../docs/Database Connectors/google-bigquery.mdx   |    46 -
 src/pages/docs/Database Connectors/index.mdx       |    24 -
 src/pages/docs/Database Connectors/mysql.mdx       |    25 -
 src/pages/docs/Database Connectors/postgres.mdx    |    33 -
 src/pages/docs/Database Connectors/presto.mdx      |    18 -
 src/pages/docs/Database Connectors/redshift.mdx    |    21 -
 src/pages/docs/Database Connectors/snowflake.mdx   |    24 -
 src/pages/docs/Database Connectors/teradata.mdx    |    24 -
 src/pages/docs/Database Connectors/vertica.mdx     |    30 -
 src/pages/docs/contributing.mdx                    |     8 -
 .../docs/installation/create-your-first-chart.mdx  |   100 -
 .../docs/installation/docker-local-deploy.mdx      |    77 -
 .../docs/installation/explore-data-sql-lab.mdx     |    47 -
 src/pages/docs/installation/index.mdx              |    13 -
 .../docs/installation/install-database-drivers.mdx |    91 -
 .../docs/installation/share-chart-dashboard.mdx    |    65 -
 src/pages/docs/learning-resources.mdx              |    27 -
 src/pages/docs/roadmap.mdx                         |     8 -
 src/pages/docs/security.mdx                        |    13 -
 src/pages/index.tsx                                |   316 -
 src/pages/resources.tsx                            |   158 -
 src/resources/data.js                              |   200 -
 src/utils.js                                       |    78 -
 static/images/data-point.jpg                       |   Bin 1795832 -> 0 bytes
 static/images/first-chart-barComplete.png          |   Bin 84478 -> 0 bytes
 static/images/first-chart-chartOption.png          |   Bin 199672 -> 0 bytes
 static/images/first-chart-customizeChart.png       |   Bin 76650 -> 0 bytes
 static/images/first-chart-dataSource.png           |   Bin 55959 -> 0 bytes
 static/images/first-chart-fields.png               |   Bin 45905 -> 0 bytes
 static/images/first-chart-newChart.png             |   Bin 14066 -> 0 bytes
 static/images/first-chart-pie.png                  |   Bin 169029 -> 0 bytes
 static/images/first-chart-plainChart.png           |   Bin 118177 -> 0 bytes
 static/images/first-chart-save.png                 |   Bin 67249 -> 0 bytes
 static/images/first-chart-table.png                |   Bin 77292 -> 0 bytes
 static/images/first-chart-tree.png                 |   Bin 70530 -> 0 bytes
 static/images/google-analytics.png                 |   Bin 689661 -> 0 bytes
 static/images/ip-address-example.png               |   Bin 20260 -> 0 bytes
 static/images/pie-chart.png                        |   Bin 1320888 -> 0 bytes
 static/images/plugin-1-yeoman-select.png           |   Bin 73428 -> 0 bytes
 static/images/plugin-10-hello-thumbnail.png        |   Bin 82000 -> 0 bytes
 static/images/plugin-11-explore-view.png           |   Bin 260757 -> 0 bytes
 static/images/plugin-12-console-logs.png           |   Bin 87031 -> 0 bytes
 static/images/plugin-2-yeoman-package-name.png     |   Bin 22666 -> 0 bytes
 static/images/plugin-3-yeoman-description.png      |   Bin 29219 -> 0 bytes
 static/images/plugin-4-yeoman-component-type.png   |   Bin 26113 -> 0 bytes
 static/images/plugin-5-yeoman-timeseries.png       |   Bin 21856 -> 0 bytes
 static/images/plugin-6-yeoman-badges.png           |   Bin 14572 -> 0 bytes
 static/images/plugin-7-yeoman-files.png            |   Bin 162570 -> 0 bytes
 static/images/plugin-8-package-json.png            |   Bin 52915 -> 0 bytes
 static/images/plugin-9-mainpreset-import.png       |   Bin 123027 -> 0 bytes
 static/images/plugin-9-mainpreset-register.png     |   Bin 112575 -> 0 bytes
 static/images/root-cert-example.png                |   Bin 41232 -> 0 bytes
 static/images/share-dashboard-1.png                |   Bin 66488 -> 0 bytes
 static/images/share-dashboard-2.png                |   Bin 16236 -> 0 bytes
 static/images/share-dashboard-3.png                |   Bin 30386 -> 0 bytes
 static/images/share-dashboard-4.png                |   Bin 113537 -> 0 bytes
 static/images/share-dashboard-5.png                |   Bin 54869 -> 0 bytes
 static/images/so-icon.svg                          |     1 -
 static/images/sql-lab-1.png                        |   Bin 13733 -> 0 bytes
 static/images/sql-lab-2.png                        |   Bin 307841 -> 0 bytes
 static/images/sql-lab-3.png                        |   Bin 317301 -> 0 bytes
 static/images/sql-lab-4.png                        |   Bin 272081 -> 0 bytes
 static/images/sqllab.png                           |   Bin 393766 -> 0 bytes
 static/images/tip-trick-dayIntervalGrouping.png    |   Bin 42883 -> 0 bytes
 static/images/tip-trick-filterOption.png           |   Bin 28014 -> 0 bytes
 static/images/tip-trick-filterOption2.png          |   Bin 59356 -> 0 bytes
 static/images/tip-trick-metrics.png                |   Bin 107466 -> 0 bytes
 static/images/youtube.png                          |   Bin 20462 -> 0 bytes
 tutorial.html                                      |   415 +
 tutorials.html                                     |   236 +
 usertutorial.html                                  |   625 +
 videos.html                                        |   217 +
 visualization.html                                 |  3272 +++
 752 files changed, 68935 insertions(+), 26324 deletions(-)

diff --git a/.buildinfo b/.buildinfo
new file mode 100644
index 0000000..79eea93
--- /dev/null
+++ b/.buildinfo
@@ -0,0 +1,4 @@
+# Sphinx build info version 1
+# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
+config: 57f4258c8ecbc6bcd73251593f94511a
+tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index 6cf5918..0000000
--- a/.gitignore
+++ /dev/null
@@ -1,76 +0,0 @@
-# Logs
-logs
-*.log
-npm-debug.log*
-yarn-debug.log*
-yarn-error.log*
-
-# Runtime data
-pids
-*.pid
-*.seed
-*.pid.lock
-
-# Node.js, webpack artifacts
-*.entry.js
-*.js.map
-node_modules
-npm-debug.log*
-yarn-error.log
-
-# Directory for instrumented libs generated by jscoverage/JSCover
-lib-cov
-
-# Coverage directory used by tools like istanbul
-coverage
-
-# nyc test coverage
-.nyc_output
-
-# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
-.grunt
-
-# Bower dependency directory (https://bower.io/)
-bower_components
-
-# node-waf configuration
-.lock-wscript
-
-# Compiled binary addons (http://nodejs.org/api/addons.html)
-build/Release
-
-# Dependency directories
-node_modules/
-jspm_packages/
-
-# Typescript v1 declaration files
-typings/
-
-# Optional npm cache directory
-.npm
-
-# Optional eslint cache
-.eslintcache
-
-# Optional REPL history
-.node_repl_history
-
-# Output of 'npm pack'
-*.tgz
-
-# dotenv environment variable files
-.env*
-
-# gatsby files
-.cache/
-public
-
-# Mac files
-.DS_Store
-
-# Yarn
-yarn-error.log
-.pnp/
-.pnp.js
-# Yarn Integrity file
-.yarn-integrity
diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index 7e964c1..0000000
--- a/LICENSE
+++ /dev/null
@@ -1,14 +0,0 @@
-The BSD Zero Clause License (0BSD)
-
-Copyright (c) 2020 Gatsby Inc.
-
-Permission to use, copy, modify, and/or distribute this software for any
-purpose with or without fee is hereby granted.
-
-THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
-REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
-AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
-INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
-LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
-OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
-PERFORMANCE OF THIS SOFTWARE.
diff --git a/README.md b/README.md
index 35893ca..bcba4d6 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1 @@
-##Getting Started
-
-1. Clone the repo.
-2. Cd inside the repo.
-3. `npm install` inside root directory.
-3. `npm run develop` to run the server.
+Folder containing the sphinx-generated documentation
diff --git a/_images/add_db.png b/_images/add_db.png
new file mode 100644
index 0000000..7282343
Binary files /dev/null and b/_images/add_db.png differ
diff --git a/_images/add_new_chart.png b/_images/add_new_chart.png
new file mode 100644
index 0000000..356a2ad
Binary files /dev/null and b/_images/add_new_chart.png differ
diff --git a/_images/advanced_analytics_base.png b/_images/advanced_analytics_base.png
new file mode 100644
index 0000000..c93bb28
Binary files /dev/null and b/_images/advanced_analytics_base.png differ
diff --git a/_images/annotation.png b/_images/annotation.png
new file mode 100644
index 0000000..8e0dda3
Binary files /dev/null and b/_images/annotation.png differ
diff --git a/_images/annotation_settings.png b/_images/annotation_settings.png
new file mode 100644
index 0000000..76e2230
Binary files /dev/null and b/_images/annotation_settings.png differ
diff --git a/_images/apache_feather.png b/_images/apache_feather.png
new file mode 100644
index 0000000..744b8d7
Binary files /dev/null and b/_images/apache_feather.png differ
diff --git a/_images/area.png b/_images/area.png
new file mode 100644
index 0000000..6b2fb75
Binary files /dev/null and b/_images/area.png differ
diff --git a/_images/average_aggregate_for_cost.png b/_images/average_aggregate_for_cost.png
new file mode 100644
index 0000000..4a2ae09
Binary files /dev/null and b/_images/average_aggregate_for_cost.png differ
diff --git a/_images/bank_dash.png b/_images/bank_dash.png
new file mode 100644
index 0000000..cbe38e5
Binary files /dev/null and b/_images/bank_dash.png differ
diff --git a/_images/bar.png b/_images/bar.png
new file mode 100644
index 0000000..1ef2633
Binary files /dev/null and b/_images/bar.png differ
diff --git a/_images/big_number.png b/_images/big_number.png
new file mode 100644
index 0000000..90ac5a5
Binary files /dev/null and b/_images/big_number.png differ
diff --git a/_images/big_number_total.png b/_images/big_number_total.png
new file mode 100644
index 0000000..350d5a1
Binary files /dev/null and b/_images/big_number_total.png differ
diff --git a/_images/blue_bar_insert_component.png b/_images/blue_bar_insert_component.png
new file mode 100644
index 0000000..d8f1b87
Binary files /dev/null and b/_images/blue_bar_insert_component.png differ
diff --git a/_images/box_plot.png b/_images/box_plot.png
new file mode 100644
index 0000000..8925e50
Binary files /dev/null and b/_images/box_plot.png differ
diff --git a/_images/bubble.png b/_images/bubble.png
new file mode 100644
index 0000000..4533881
Binary files /dev/null and b/_images/bubble.png differ
diff --git a/_images/bullet.png b/_images/bullet.png
new file mode 100644
index 0000000..f98c70f
Binary files /dev/null and b/_images/bullet.png differ
diff --git a/_images/cal_heatmap.png b/_images/cal_heatmap.png
new file mode 100644
index 0000000..c83db08
Binary files /dev/null and b/_images/cal_heatmap.png differ
diff --git a/_images/chord.png b/_images/chord.png
new file mode 100644
index 0000000..18df6a7
Binary files /dev/null and b/_images/chord.png differ
diff --git a/_images/chose_a_datasource.png b/_images/chose_a_datasource.png
new file mode 100644
index 0000000..885c551
Binary files /dev/null and b/_images/chose_a_datasource.png differ
diff --git a/_images/compare.png b/_images/compare.png
new file mode 100644
index 0000000..c17af52
Binary files /dev/null and b/_images/compare.png differ
diff --git a/_images/country_map.png b/_images/country_map.png
new file mode 100644
index 0000000..52acbfa
Binary files /dev/null and b/_images/country_map.png differ
diff --git a/_images/create_role.png b/_images/create_role.png
new file mode 100644
index 0000000..0914a58
Binary files /dev/null and b/_images/create_role.png differ
diff --git a/_images/csv_to_database_configuration.png b/_images/csv_to_database_configuration.png
new file mode 100644
index 0000000..b2b6d39
Binary files /dev/null and b/_images/csv_to_database_configuration.png differ
diff --git a/_images/deck_arc.png b/_images/deck_arc.png
new file mode 100644
index 0000000..02b84b1
Binary files /dev/null and b/_images/deck_arc.png differ
diff --git a/_images/deck_geojson.png b/_images/deck_geojson.png
new file mode 100644
index 0000000..9c1a732
Binary files /dev/null and b/_images/deck_geojson.png differ
diff --git a/_images/deck_grid.png b/_images/deck_grid.png
new file mode 100644
index 0000000..2710d9f
Binary files /dev/null and b/_images/deck_grid.png differ
diff --git a/_images/deck_hex.png b/_images/deck_hex.png
new file mode 100644
index 0000000..99149db
Binary files /dev/null and b/_images/deck_hex.png differ
diff --git a/_images/deck_multi.png b/_images/deck_multi.png
new file mode 100644
index 0000000..acedd5b
Binary files /dev/null and b/_images/deck_multi.png differ
diff --git a/_images/deck_path.png b/_images/deck_path.png
new file mode 100644
index 0000000..d783a14
Binary files /dev/null and b/_images/deck_path.png differ
diff --git a/_images/deck_polygon.png b/_images/deck_polygon.png
new file mode 100644
index 0000000..b32c540
Binary files /dev/null and b/_images/deck_polygon.png differ
diff --git a/_images/deck_scatter.png b/_images/deck_scatter.png
new file mode 100644
index 0000000..a111a15
Binary files /dev/null and b/_images/deck_scatter.png differ
diff --git a/_images/deck_screengrid.png b/_images/deck_screengrid.png
new file mode 100644
index 0000000..78a26e6
Binary files /dev/null and b/_images/deck_screengrid.png differ
diff --git a/_images/deckgl_dash.png b/_images/deckgl_dash.png
new file mode 100644
index 0000000..6ba049c
Binary files /dev/null and b/_images/deckgl_dash.png differ
diff --git a/_images/directed_force.png b/_images/directed_force.png
new file mode 100644
index 0000000..15e0edb
Binary files /dev/null and b/_images/directed_force.png differ
diff --git a/_images/dist_bar.png b/_images/dist_bar.png
new file mode 100644
index 0000000..cdd5120
Binary files /dev/null and b/_images/dist_bar.png differ
diff --git a/_images/druid_agg.png b/_images/druid_agg.png
new file mode 100644
index 0000000..2d14e1e
Binary files /dev/null and b/_images/druid_agg.png differ
diff --git a/_images/dual_line.png b/_images/dual_line.png
new file mode 100644
index 0000000..a5f723c
Binary files /dev/null and b/_images/dual_line.png differ
diff --git a/_images/edit-record.png b/_images/edit-record.png
new file mode 100644
index 0000000..129efb1
Binary files /dev/null and b/_images/edit-record.png differ
diff --git a/_images/edit_annotation.png b/_images/edit_annotation.png
new file mode 100644
index 0000000..9ef34f8
Binary files /dev/null and b/_images/edit_annotation.png differ
diff --git a/_images/event_flow.png b/_images/event_flow.png
new file mode 100644
index 0000000..a24c1c5
Binary files /dev/null and b/_images/event_flow.png differ
diff --git a/_images/explore.png b/_images/explore.png
new file mode 100644
index 0000000..1e08272
Binary files /dev/null and b/_images/explore.png differ
diff --git a/_images/filter_box.png b/_images/filter_box.png
new file mode 100644
index 0000000..be08f68
Binary files /dev/null and b/_images/filter_box.png differ
diff --git a/_images/filter_on_origin_country.png b/_images/filter_on_origin_country.png
new file mode 100644
index 0000000..961d41a
Binary files /dev/null and b/_images/filter_on_origin_country.png differ
diff --git a/_images/heatmap.png b/_images/heatmap.png
new file mode 100644
index 0000000..8d5f8da
Binary files /dev/null and b/_images/heatmap.png differ
diff --git a/_images/histogram.png b/_images/histogram.png
new file mode 100644
index 0000000..c6f8fdc
Binary files /dev/null and b/_images/histogram.png differ
diff --git a/_images/horizon.png b/_images/horizon.png
new file mode 100644
index 0000000..f927b76
Binary files /dev/null and b/_images/horizon.png differ
diff --git a/_images/iframe.png b/_images/iframe.png
new file mode 100644
index 0000000..5c6524a
Binary files /dev/null and b/_images/iframe.png differ
diff --git a/_images/line.png b/_images/line.png
new file mode 100644
index 0000000..7df5084
Binary files /dev/null and b/_images/line.png differ
diff --git a/_images/mapbox.png b/_images/mapbox.png
new file mode 100644
index 0000000..2132df5
Binary files /dev/null and b/_images/mapbox.png differ
diff --git a/_images/markdown.png b/_images/markdown.png
new file mode 100644
index 0000000..f0345ae
Binary files /dev/null and b/_images/markdown.png differ
diff --git a/_images/markup.png b/_images/markup.png
new file mode 100644
index 0000000..5878e15
Binary files /dev/null and b/_images/markup.png differ
diff --git a/_images/no_filter_on_time_filter.png b/_images/no_filter_on_time_filter.png
new file mode 100644
index 0000000..ea564d6
Binary files /dev/null and b/_images/no_filter_on_time_filter.png differ
diff --git a/_images/paired_ttest.png b/_images/paired_ttest.png
new file mode 100644
index 0000000..4a3b225
Binary files /dev/null and b/_images/paired_ttest.png differ
diff --git a/_images/para.png b/_images/para.png
new file mode 100644
index 0000000..5401034
Binary files /dev/null and b/_images/para.png differ
diff --git a/_images/parse_dates_column.png b/_images/parse_dates_column.png
new file mode 100644
index 0000000..69982f4
Binary files /dev/null and b/_images/parse_dates_column.png differ
diff --git a/_images/partition.png b/_images/partition.png
new file mode 100644
index 0000000..f49ee88
Binary files /dev/null and b/_images/partition.png differ
diff --git a/_images/pie.png b/_images/pie.png
new file mode 100644
index 0000000..1c93bf5
Binary files /dev/null and b/_images/pie.png differ
diff --git a/_images/pivot_table.png b/_images/pivot_table.png
new file mode 100644
index 0000000..a22794b
Binary files /dev/null and b/_images/pivot_table.png differ
diff --git a/_images/publish_dashboard.png b/_images/publish_dashboard.png
new file mode 100644
index 0000000..74fcb28
Binary files /dev/null and b/_images/publish_dashboard.png differ
diff --git a/_images/resample.png b/_images/resample.png
new file mode 100644
index 0000000..04f78a0
Binary files /dev/null and b/_images/resample.png differ
diff --git a/_images/resize_tutorial_table_on_dashboard.png b/_images/resize_tutorial_table_on_dashboard.png
new file mode 100644
index 0000000..c547521
Binary files /dev/null and b/_images/resize_tutorial_table_on_dashboard.png differ
diff --git a/_images/rolling_mean.png b/_images/rolling_mean.png
new file mode 100644
index 0000000..505fe44
Binary files /dev/null and b/_images/rolling_mean.png differ
diff --git a/_images/rose.png b/_images/rose.png
new file mode 100644
index 0000000..2006746
Binary files /dev/null and b/_images/rose.png differ
diff --git a/src/images/s.png b/_images/s.png
similarity index 100%
copy from src/images/s.png
copy to _images/s.png
diff --git a/_images/sankey.png b/_images/sankey.png
new file mode 100644
index 0000000..93d73f6
Binary files /dev/null and b/_images/sankey.png differ
diff --git a/_images/save_tutorial_table.png b/_images/save_tutorial_table.png
new file mode 100644
index 0000000..8f5ee21
Binary files /dev/null and b/_images/save_tutorial_table.png differ
diff --git a/_images/select_dates_pivot_table.png b/_images/select_dates_pivot_table.png
new file mode 100644
index 0000000..f206476
Binary files /dev/null and b/_images/select_dates_pivot_table.png differ
diff --git a/_images/select_table_visualization_type.png b/_images/select_table_visualization_type.png
new file mode 100644
index 0000000..ab238fd
Binary files /dev/null and b/_images/select_table_visualization_type.png differ
diff --git a/_images/separator.png b/_images/separator.png
new file mode 100644
index 0000000..0533413
Binary files /dev/null and b/_images/separator.png differ
diff --git a/_images/sqllab.png b/_images/sqllab.png
new file mode 100644
index 0000000..8d199e1
Binary files /dev/null and b/_images/sqllab.png differ
diff --git a/_images/sum_cost_column.png b/_images/sum_cost_column.png
new file mode 100644
index 0000000..5dbd7c4
Binary files /dev/null and b/_images/sum_cost_column.png differ
diff --git a/_images/sunburst.png b/_images/sunburst.png
new file mode 100644
index 0000000..ec6b607
Binary files /dev/null and b/_images/sunburst.png differ
diff --git a/_images/table.png b/_images/table.png
new file mode 100644
index 0000000..0561210
Binary files /dev/null and b/_images/table.png differ
diff --git a/_images/time_comparison_absolute_difference.png b/_images/time_comparison_absolute_difference.png
new file mode 100644
index 0000000..691d0c8
Binary files /dev/null and b/_images/time_comparison_absolute_difference.png differ
diff --git a/_images/time_comparison_two_series.png b/_images/time_comparison_two_series.png
new file mode 100644
index 0000000..282b3dd
Binary files /dev/null and b/_images/time_comparison_two_series.png differ
diff --git a/_images/time_pivot.png b/_images/time_pivot.png
new file mode 100644
index 0000000..83201c2
Binary files /dev/null and b/_images/time_pivot.png differ
diff --git a/_images/time_table.png b/_images/time_table.png
new file mode 100644
index 0000000..fe11d8e
Binary files /dev/null and b/_images/time_table.png differ
diff --git a/_images/treemap.png b/_images/treemap.png
new file mode 100644
index 0000000..27c6c5c
Binary files /dev/null and b/_images/treemap.png differ
diff --git a/_images/tutorial_01_sources_database.png b/_images/tutorial_01_sources_database.png
new file mode 100644
index 0000000..ad92723
Binary files /dev/null and b/_images/tutorial_01_sources_database.png differ
diff --git a/_images/tutorial_02_add_database.png b/_images/tutorial_02_add_database.png
new file mode 100644
index 0000000..7eb671a
Binary files /dev/null and b/_images/tutorial_02_add_database.png differ
diff --git a/_images/tutorial_03_database_name.png b/_images/tutorial_03_database_name.png
new file mode 100644
index 0000000..68f15cd
Binary files /dev/null and b/_images/tutorial_03_database_name.png differ
diff --git a/_images/tutorial_04_sqlalchemy_connection_string.png b/_images/tutorial_04_sqlalchemy_connection_string.png
new file mode 100644
index 0000000..b7d0c43
Binary files /dev/null and b/_images/tutorial_04_sqlalchemy_connection_string.png differ
diff --git a/_images/tutorial_05_connection_popup.png b/_images/tutorial_05_connection_popup.png
new file mode 100644
index 0000000..d5c49af
Binary files /dev/null and b/_images/tutorial_05_connection_popup.png differ
diff --git a/_images/tutorial_06_list_of_tables.png b/_images/tutorial_06_list_of_tables.png
new file mode 100644
index 0000000..849f4cc
Binary files /dev/null and b/_images/tutorial_06_list_of_tables.png differ
diff --git a/_images/tutorial_07_save_button.png b/_images/tutorial_07_save_button.png
new file mode 100644
index 0000000..976c619
Binary files /dev/null and b/_images/tutorial_07_save_button.png differ
diff --git a/_images/tutorial_08_sources_tables.png b/_images/tutorial_08_sources_tables.png
new file mode 100644
index 0000000..08eb79f
Binary files /dev/null and b/_images/tutorial_08_sources_tables.png differ
diff --git a/_images/tutorial_09_add_new_table.png b/_images/tutorial_09_add_new_table.png
new file mode 100644
index 0000000..fca2b51
Binary files /dev/null and b/_images/tutorial_09_add_new_table.png differ
diff --git a/_images/tutorial_10_table_name.png b/_images/tutorial_10_table_name.png
new file mode 100644
index 0000000..97838a3
Binary files /dev/null and b/_images/tutorial_10_table_name.png differ
diff --git a/_images/tutorial_11_choose_db.png b/_images/tutorial_11_choose_db.png
new file mode 100644
index 0000000..c7fec3d
Binary files /dev/null and b/_images/tutorial_11_choose_db.png differ
diff --git a/_images/tutorial_12_table_creation_success_msg.png b/_images/tutorial_12_table_creation_success_msg.png
new file mode 100644
index 0000000..085c211
Binary files /dev/null and b/_images/tutorial_12_table_creation_success_msg.png differ
diff --git a/_images/tutorial_13_edit_table_config.png b/_images/tutorial_13_edit_table_config.png
new file mode 100644
index 0000000..54b0062
Binary files /dev/null and b/_images/tutorial_13_edit_table_config.png differ
diff --git a/_images/tutorial_14_field_config.png b/_images/tutorial_14_field_config.png
new file mode 100644
index 0000000..245e436
Binary files /dev/null and b/_images/tutorial_14_field_config.png differ
diff --git a/_images/tutorial_15_click_table_name.png b/_images/tutorial_15_click_table_name.png
new file mode 100644
index 0000000..d6fc628
Binary files /dev/null and b/_images/tutorial_15_click_table_name.png differ
diff --git a/_images/tutorial_16_datasource_chart_type.png b/_images/tutorial_16_datasource_chart_type.png
new file mode 100644
index 0000000..0dae19a
Binary files /dev/null and b/_images/tutorial_16_datasource_chart_type.png differ
diff --git a/_images/tutorial_17_choose_time_range.png b/_images/tutorial_17_choose_time_range.png
new file mode 100644
index 0000000..f54b074
Binary files /dev/null and b/_images/tutorial_17_choose_time_range.png differ
diff --git a/_images/tutorial_18_choose_metric.png b/_images/tutorial_18_choose_metric.png
new file mode 100644
index 0000000..8cc62b9
Binary files /dev/null and b/_images/tutorial_18_choose_metric.png differ
diff --git a/_images/tutorial_19_click_query.png b/_images/tutorial_19_click_query.png
new file mode 100644
index 0000000..9ff8bba
Binary files /dev/null and b/_images/tutorial_19_click_query.png differ
diff --git a/_images/tutorial_20_count_star_result.png b/_images/tutorial_20_count_star_result.png
new file mode 100644
index 0000000..a50ca54
Binary files /dev/null and b/_images/tutorial_20_count_star_result.png differ
diff --git a/_images/tutorial_21_group_by.png b/_images/tutorial_21_group_by.png
new file mode 100644
index 0000000..b4ea5d5
Binary files /dev/null and b/_images/tutorial_21_group_by.png differ
diff --git a/_images/tutorial_22_group_by_result.png b/_images/tutorial_22_group_by_result.png
new file mode 100644
index 0000000..fb3205d
Binary files /dev/null and b/_images/tutorial_22_group_by_result.png differ
diff --git a/_images/tutorial_23_group_by_more_dimensions.png b/_images/tutorial_23_group_by_more_dimensions.png
new file mode 100644
index 0000000..a004703
Binary files /dev/null and b/_images/tutorial_23_group_by_more_dimensions.png differ
diff --git a/_images/tutorial_24_max_metric.png b/_images/tutorial_24_max_metric.png
new file mode 100644
index 0000000..b1ccc89
Binary files /dev/null and b/_images/tutorial_24_max_metric.png differ
diff --git a/_images/tutorial_25_max_temp_filter.png b/_images/tutorial_25_max_temp_filter.png
new file mode 100644
index 0000000..e57efb1
Binary files /dev/null and b/_images/tutorial_25_max_temp_filter.png differ
diff --git a/_images/tutorial_26_row_limit.png b/_images/tutorial_26_row_limit.png
new file mode 100644
index 0000000..3d01983
Binary files /dev/null and b/_images/tutorial_26_row_limit.png differ
diff --git a/_images/tutorial_27_top_10_max_temps.png b/_images/tutorial_27_top_10_max_temps.png
new file mode 100644
index 0000000..bc65243
Binary files /dev/null and b/_images/tutorial_27_top_10_max_temps.png differ
diff --git a/_images/tutorial_28_bar_chart.png b/_images/tutorial_28_bar_chart.png
new file mode 100644
index 0000000..936b008
Binary files /dev/null and b/_images/tutorial_28_bar_chart.png differ
diff --git a/_images/tutorial_29_bar_chart_series_metrics.png b/_images/tutorial_29_bar_chart_series_metrics.png
new file mode 100644
index 0000000..7c3758b
Binary files /dev/null and b/_images/tutorial_29_bar_chart_series_metrics.png differ
diff --git a/_images/tutorial_30_bar_chart_results.png b/_images/tutorial_30_bar_chart_results.png
new file mode 100644
index 0000000..77afceb
Binary files /dev/null and b/_images/tutorial_30_bar_chart_results.png differ
diff --git a/_images/tutorial_31_save_slice_to_dashboard.png b/_images/tutorial_31_save_slice_to_dashboard.png
new file mode 100644
index 0000000..6019c00
Binary files /dev/null and b/_images/tutorial_31_save_slice_to_dashboard.png differ
diff --git a/_images/tutorial_32_save_slice_confirmation.png b/_images/tutorial_32_save_slice_confirmation.png
new file mode 100644
index 0000000..027d3bb
Binary files /dev/null and b/_images/tutorial_32_save_slice_confirmation.png differ
diff --git a/_images/tutorial_33_dashboard.png b/_images/tutorial_33_dashboard.png
new file mode 100644
index 0000000..7f332a5
Binary files /dev/null and b/_images/tutorial_33_dashboard.png differ
diff --git a/_images/tutorial_34_weather_dashboard.png b/_images/tutorial_34_weather_dashboard.png
new file mode 100644
index 0000000..1dd6776
Binary files /dev/null and b/_images/tutorial_34_weather_dashboard.png differ
diff --git a/_images/tutorial_35_slice_on_dashboard.png b/_images/tutorial_35_slice_on_dashboard.png
new file mode 100644
index 0000000..dc7d7e4
Binary files /dev/null and b/_images/tutorial_35_slice_on_dashboard.png differ
diff --git a/_images/tutorial_36_adjust_dimensions.gif b/_images/tutorial_36_adjust_dimensions.gif
new file mode 100644
index 0000000..01347e1
Binary files /dev/null and b/_images/tutorial_36_adjust_dimensions.gif differ
diff --git a/_images/tutorial_line_chart.png b/_images/tutorial_line_chart.png
new file mode 100644
index 0000000..5cf5235
Binary files /dev/null and b/_images/tutorial_line_chart.png differ
diff --git a/_images/tutorial_pivot_table.png b/_images/tutorial_pivot_table.png
new file mode 100644
index 0000000..50253a0
Binary files /dev/null and b/_images/tutorial_pivot_table.png differ
diff --git a/_images/tutorial_table.png b/_images/tutorial_table.png
new file mode 100644
index 0000000..a94fdaf
Binary files /dev/null and b/_images/tutorial_table.png differ
diff --git a/_images/upload_a_csv.png b/_images/upload_a_csv.png
new file mode 100644
index 0000000..91f0e55
Binary files /dev/null and b/_images/upload_a_csv.png differ
diff --git a/_images/word_cloud.png b/_images/word_cloud.png
new file mode 100644
index 0000000..1829a2f
Binary files /dev/null and b/_images/word_cloud.png differ
diff --git a/_images/world_map.png b/_images/world_map.png
new file mode 100644
index 0000000..4b3fe0b
Binary files /dev/null and b/_images/world_map.png differ
diff --git a/_modules/index.html b/_modules/index.html
new file mode 100644
index 0000000..994f044
--- /dev/null
+++ b/_modules/index.html
@@ -0,0 +1,192 @@
+
+
+<!DOCTYPE html>
+<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
+<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
+<head>
+  <meta charset="utf-8">
+  
+  <meta name="viewport" content="width=device-width, initial-scale=1.0">
+  
+  <title>Overview: module code &mdash; Apache Superset  documentation</title>
+  
+
+  
+  
+  
+  
+
+  
+  <script type="text/javascript" src="../_static/js/modernizr.min.js"></script>
+  
+    
+      <script type="text/javascript" id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
+        <script src="../_static/jquery.js"></script>
+        <script src="../_static/underscore.js"></script>
+        <script src="../_static/doctools.js"></script>
+        <script src="../_static/language_data.js"></script>
+    
+    <script type="text/javascript" src="../_static/js/theme.js"></script>
+
+    
+
+  
+  <link rel="stylesheet" href="../_static/css/theme.css" type="text/css" />
+  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
+    <link rel="index" title="Index" href="../genindex.html" />
+    <link rel="search" title="Search" href="../search.html" /> 
+</head>
+
+<body class="wy-body-for-nav">
+
+   
+  <div class="wy-grid-for-nav">
+    
+    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
+      <div class="wy-side-scroll">
+        <div class="wy-side-nav-search" >
+          
+
+          
+            <a href="../index.html" class="icon icon-home"> Apache Superset
+          
+
+          
+          </a>
+
+          
+            
+            
+          
+
+          
+<div role="search">
+  <form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
+    <input type="text" name="q" placeholder="Search docs" />
+    <input type="hidden" name="check_keywords" value="yes" />
+    <input type="hidden" name="area" value="default" />
+  </form>
+</div>
+
+          
+        </div>
+
+        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
+          
+            
+            
+              
+            
+            
+              <ul>
+<li class="toctree-l1"><a class="reference internal" href="../installation.html">Installation &amp; Configuration</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials.html">Tutorials</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../security.html">Security</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../sqllab.html">SQL Lab</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../gallery.html">Visualizations Gallery</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../druid.html">Druid</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../misc.html">Misc</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../faq.html">FAQ</a></li>
+</ul>
+
+            
+          
+        </div>
+      </div>
+    </nav>
+
+    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
+
+      
+      <nav class="wy-nav-top" aria-label="top navigation">
+        
+          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
+          <a href="../index.html">Apache Superset</a>
+        
+      </nav>
+
+
+      <div class="wy-nav-content">
+        
+        <div class="rst-content">
+        
+          
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+<div role="navigation" aria-label="breadcrumbs navigation">
+
+  <ul class="wy-breadcrumbs">
+    
+      <li><a href="../index.html">Docs</a> &raquo;</li>
+        
+      <li>Overview: module code</li>
+    
+    
+      <li class="wy-breadcrumbs-aside">
+        
+      </li>
+    
+  </ul>
+
+  
+  <hr/>
+</div>
+          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
+           <div itemprop="articleBody">
+            
+  <h1>All modules for which code is available</h1>
+<ul><li><a href="superset/jinja_context.html">superset.jinja_context</a></li>
+</ul>
+
+           </div>
+           
+          </div>
+          <footer>
+  
+
+  <hr/>
+
+  <div role="contentinfo">
+    <p>
+        &copy; Copyright 2020 The Apache Software Foundation, Licensed under the Apache License, Version 2.0.
+
+    </p>
+  </div> 
+
+</footer>
+
+        </div>
+      </div>
+
+    </section>
+
+  </div>
+  
+
+
+  <script type="text/javascript">
+      jQuery(function () {
+          SphinxRtdTheme.Navigation.enable(true);
+      });
+  </script>
+
+  
+  
+    
+   
+
+</body>
+</html>
\ No newline at end of file
diff --git a/_modules/superset/jinja_context.html b/_modules/superset/jinja_context.html
new file mode 100644
index 0000000..f14b07b
--- /dev/null
+++ b/_modules/superset/jinja_context.html
@@ -0,0 +1,525 @@
+
+
+<!DOCTYPE html>
+<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
+<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
+<head>
+  <meta charset="utf-8">
+  
+  <meta name="viewport" content="width=device-width, initial-scale=1.0">
+  
+  <title>superset.jinja_context &mdash; Apache Superset  documentation</title>
+  
+
+  
+  
+  
+  
+
+  
+  <script type="text/javascript" src="../../_static/js/modernizr.min.js"></script>
+  
+    
+      <script type="text/javascript" id="documentation_options" data-url_root="../../" src="../../_static/documentation_options.js"></script>
+        <script src="../../_static/jquery.js"></script>
+        <script src="../../_static/underscore.js"></script>
+        <script src="../../_static/doctools.js"></script>
+        <script src="../../_static/language_data.js"></script>
+    
+    <script type="text/javascript" src="../../_static/js/theme.js"></script>
+
+    
+
+  
+  <link rel="stylesheet" href="../../_static/css/theme.css" type="text/css" />
+  <link rel="stylesheet" href="../../_static/pygments.css" type="text/css" />
+    <link rel="index" title="Index" href="../../genindex.html" />
+    <link rel="search" title="Search" href="../../search.html" /> 
+</head>
+
+<body class="wy-body-for-nav">
+
+   
+  <div class="wy-grid-for-nav">
+    
+    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
+      <div class="wy-side-scroll">
+        <div class="wy-side-nav-search" >
+          
+
+          
+            <a href="../../index.html" class="icon icon-home"> Apache Superset
+          
+
+          
+          </a>
+
+          
+            
+            
+          
+
+          
+<div role="search">
+  <form id="rtd-search-form" class="wy-form" action="../../search.html" method="get">
+    <input type="text" name="q" placeholder="Search docs" />
+    <input type="hidden" name="check_keywords" value="yes" />
+    <input type="hidden" name="area" value="default" />
+  </form>
+</div>
+
+          
+        </div>
+
+        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
+          
+            
+            
+              
+            
+            
+              <ul>
+<li class="toctree-l1"><a class="reference internal" href="../../installation.html">Installation &amp; Configuration</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../../tutorials.html">Tutorials</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../../security.html">Security</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../../sqllab.html">SQL Lab</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../../gallery.html">Visualizations Gallery</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../../druid.html">Druid</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../../misc.html">Misc</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../../faq.html">FAQ</a></li>
+</ul>
+
+            
+          
+        </div>
+      </div>
+    </nav>
+
+    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
+
+      
+      <nav class="wy-nav-top" aria-label="top navigation">
+        
+          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
+          <a href="../../index.html">Apache Superset</a>
+        
+      </nav>
+
+
+      <div class="wy-nav-content">
+        
+        <div class="rst-content">
+        
+          
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+<div role="navigation" aria-label="breadcrumbs navigation">
+
+  <ul class="wy-breadcrumbs">
+    
+      <li><a href="../../index.html">Docs</a> &raquo;</li>
+        
+          <li><a href="../index.html">Module code</a> &raquo;</li>
+        
+      <li>superset.jinja_context</li>
+    
+    
+      <li class="wy-breadcrumbs-aside">
+        
+      </li>
+    
+  </ul>
+
+  
+  <hr/>
+</div>
+          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
+           <div itemprop="articleBody">
+            
+  <h1>Source code for superset.jinja_context</h1><div class="highlight"><pre>
+<span></span><span class="c1"># Licensed to the Apache Software Foundation (ASF) under one</span>
+<span class="c1"># or more contributor license agreements.  See the NOTICE file</span>
+<span class="c1"># distributed with this work for additional information</span>
+<span class="c1"># regarding copyright ownership.  The ASF licenses this file</span>
+<span class="c1"># to you under the Apache License, Version 2.0 (the</span>
+<span class="c1"># &quot;License&quot;); you may not use this file except in compliance</span>
+<span class="c1"># with the License.  You may obtain a copy of the License at</span>
+<span class="c1">#</span>
+<span class="c1">#   http://www.apache.org/licenses/LICENSE-2.0</span>
+<span class="c1">#</span>
+<span class="c1"># Unless required by applicable law or agreed to in writing,</span>
+<span class="c1"># software distributed under the License is distributed on an</span>
+<span class="c1"># &quot;AS IS&quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY</span>
+<span class="c1"># KIND, either express or implied.  See the License for the</span>
+<span class="c1"># specific language governing permissions and limitations</span>
+<span class="c1"># under the License.</span>
+<span class="sd">&quot;&quot;&quot;Defines the templating context for SQL Lab&quot;&quot;&quot;</span>
+<span class="kn">import</span> <span class="nn">inspect</span>
+<span class="kn">import</span> <span class="nn">re</span>
+<span class="kn">from</span> <span class="nn">typing</span> <span class="kn">import</span> <span class="n">Any</span><span class="p">,</span> <span class="n">cast</span><span class="p">,</span> <span class="n">List</span><span class="p">,</span> <span class="n">Optional</span><span class="p">,</span> <span class="n">Tuple</span><span class="p">,</span> <span class="n">TYPE_CHECKING</span>
+
+<span class="kn">from</span> <span class="nn">flask</span> <span class="kn">import</span> <span class="n">g</span><span class="p">,</span> <span class="n">request</span>
+<span class="kn">from</span> <span class="nn">jinja2.sandbox</span> <span class="kn">import</span> <span class="n">SandboxedEnvironment</span>
+
+<span class="kn">from</span> <span class="nn">superset</span> <span class="kn">import</span> <span class="n">jinja_base_context</span>
+<span class="kn">from</span> <span class="nn">superset.extensions</span> <span class="kn">import</span> <span class="n">jinja_context_manager</span>
+<span class="kn">from</span> <span class="nn">superset.utils.core</span> <span class="kn">import</span> <span class="n">convert_legacy_filters_into_adhoc</span><span class="p">,</span> <span class="n">merge_extra_filters</span>
+
+<span class="k">if</span> <span class="n">TYPE_CHECKING</span><span class="p">:</span>
+    <span class="kn">from</span> <span class="nn">superset.connectors.sqla.models</span> <span class="kn">import</span> <span class="p">(</span>  <span class="c1"># pylint: disable=unused-import</span>
+        <span class="n">SqlaTable</span><span class="p">,</span>
+    <span class="p">)</span>
+    <span class="kn">from</span> <span class="nn">superset.models.core</span> <span class="kn">import</span> <span class="n">Database</span>  <span class="c1"># pylint: disable=unused-import</span>
+    <span class="kn">from</span> <span class="nn">superset.models.sql_lab</span> <span class="kn">import</span> <span class="n">Query</span>  <span class="c1"># pylint: disable=unused-import</span>
+
+
+<div class="viewcode-block" id="filter_values"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.filter_values">[docs]</a><span class="k">def</span> <span class="nf">filter_values</span><span class="p">(</span><span class="n">column</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="n">default</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">str</span><span class="p">] [...]
+    <span class="sd">&quot;&quot;&quot; Gets a values for a particular filter as a list</span>
+
+<span class="sd">    This is useful if:</span>
+<span class="sd">        - you want to use a filter box to filter a query where the name of filter box</span>
+<span class="sd">          column doesn&#39;t match the one in the select statement</span>
+<span class="sd">        - you want to have the ability to filter inside the main query for speed</span>
+<span class="sd">          purposes</span>
+
+<span class="sd">    Usage example::</span>
+
+<span class="sd">        SELECT action, count(*) as times</span>
+<span class="sd">        FROM logs</span>
+<span class="sd">        WHERE action in ( {{ &quot;&#39;&quot; + &quot;&#39;,&#39;&quot;.join(filter_values(&#39;action_type&#39;)) + &quot;&#39;&quot; }} )</span>
+<span class="sd">        GROUP BY action</span>
+
+<span class="sd">    :param column: column/filter name to lookup</span>
+<span class="sd">    :param default: default value to return if there are no matching columns</span>
+<span class="sd">    :return: returns a list of filter values</span>
+<span class="sd">    &quot;&quot;&quot;</span>
+
+    <span class="kn">from</span> <span class="nn">superset.views.utils</span> <span class="kn">import</span> <span class="n">get_form_data</span>
+
+    <span class="n">form_data</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">get_form_data</span><span class="p">()</span>
+    <span class="n">convert_legacy_filters_into_adhoc</span><span class="p">(</span><span class="n">form_data</span><span class="p">)</span>
+    <span class="n">merge_extra_filters</span><span class="p">(</span><span class="n">form_data</span><span class="p">)</span>
+
+    <span class="n">return_val</span> <span class="o">=</span> <span class="p">[</span>
+        <span class="n">comparator</span>
+        <span class="k">for</span> <span class="nb">filter</span> <span class="ow">in</span> <span class="n">form_data</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">&quot;adhoc_filters&quot;</span><span class="p">,</span> <span class="p">[])</span>
+        <span class="k">for</span> <span class="n">comparator</span> <span class="ow">in</span> <span class="p">(</span>
+            <span class="nb">filter</span><span class="p">[</span><span class="s2">&quot;comparator&quot;</span><span class="p">]</span>
+            <span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="nb">filter</span><span class="p">[</span><span class="s2">&quot;comparator&quot;</span><span class="p">],</span> <span class="nb">list</span><span class="p">)</span>
+            <span class="k">else</span> <span class="p">[</span><span class="nb">filter</span><span class="p">[</span><span class="s2">&quot;comparator&quot;</span><span class="p">]]</span>
+        <span class="p">)</span>
+        <span class="k">if</span> <span class="p">(</span>
+            <span class="nb">filter</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">&quot;expressionType&quot;</span><span class="p">)</span> <span class="o">==</span> <span class="s2">&quot;SIMPLE&quot;</span>
+            <span class="ow">and</span> <span class="nb">filter</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">&quot;clause&quot;</span><span class="p">)</span> <span class="o">==</span> <span class="s2">&quot;WHERE&quot;</span>
+            <span class="ow">and</span> <span class="nb">filter</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">&quot;subject&quot;</span><span class="p">)</span> <span class="o">==</span> <span class="n">column</span>
+            <span class="ow">and</span> <span class="nb">filter</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">&quot;comparator&quot;</span><span class="p">)</span>
+        <span class="p">)</span>
+    <span class="p">]</span>
+
+    <span class="k">if</span> <span class="n">return_val</span><span class="p">:</span>
+        <span class="k">return</span> <span class="n">return_val</span>
+
+    <span class="k">if</span> <span class="n">default</span><span class="p">:</span>
+        <span class="k">return</span> <span class="p">[</span><span class="n">default</span><span class="p">]</span>
+
+    <span class="k">return</span> <span class="p">[]</span></div>
+
+
+<div class="viewcode-block" id="ExtraCache"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.ExtraCache">[docs]</a><span class="k">class</span> <span class="nc">ExtraCache</span><span class="p">:</span>
+    <span class="sd">&quot;&quot;&quot;</span>
+<span class="sd">    Dummy class exposing methods that store additional values used in the</span>
+<span class="sd">    calculation of query object cache keys.</span>
+<span class="sd">    &quot;&quot;&quot;</span>
+
+    <span class="c1"># Regular expression for detecting the presence of templated methods which could</span>
+    <span class="c1"># be added to the cache key.</span>
+    <span class="n">regex</span> <span class="o">=</span> <span class="n">re</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span>
+        <span class="sa">r</span><span class="s2">&quot;\{\{.*(&quot;</span>
+        <span class="sa">r</span><span class="s2">&quot;current_user_id\(.*\)|&quot;</span>
+        <span class="sa">r</span><span class="s2">&quot;current_username\(.*\)|&quot;</span>
+        <span class="sa">r</span><span class="s2">&quot;cache_key_wrapper\(.*\)|&quot;</span>
+        <span class="sa">r</span><span class="s2">&quot;url_param\(.*\)&quot;</span>
+        <span class="sa">r</span><span class="s2">&quot;).*\}\}&quot;</span>
+    <span class="p">)</span>
+
+    <span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">extra_cache_keys</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="n">List</span><span class="p">[</span><span class="n">Any</span><span class="p">]]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">):</span>
+        <span class="bp">self</span><span class="o">.</span><span class="n">extra_cache_keys</span> <span class="o">=</span> <span class="n">extra_cache_keys</span>
+
+<div class="viewcode-block" id="ExtraCache.current_user_id"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.ExtraCache.current_user_id">[docs]</a>    <span class="k">def</span> <span class="nf">current_user_id</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">add_to_cache_keys</span><span class="p">:</span> <span class="nb">bool</span> <span class="o">=</span> <span class="kc">True</span><span class="p">)</span> <sp [...]
+        <span class="sd">&quot;&quot;&quot;</span>
+<span class="sd">        Return the user ID of the user who is currently logged in.</span>
+
+<span class="sd">        :param add_to_cache_keys: Whether the value should be included in the cache key</span>
+<span class="sd">        :returns: The user ID</span>
+<span class="sd">        &quot;&quot;&quot;</span>
+
+        <span class="k">if</span> <span class="nb">hasattr</span><span class="p">(</span><span class="n">g</span><span class="p">,</span> <span class="s2">&quot;user&quot;</span><span class="p">)</span> <span class="ow">and</span> <span class="n">g</span><span class="o">.</span><span class="n">user</span><span class="p">:</span>
+            <span class="k">if</span> <span class="n">add_to_cache_keys</span><span class="p">:</span>
+                <span class="bp">self</span><span class="o">.</span><span class="n">cache_key_wrapper</span><span class="p">(</span><span class="n">g</span><span class="o">.</span><span class="n">user</span><span class="o">.</span><span class="n">id</span><span class="p">)</span>
+            <span class="k">return</span> <span class="n">g</span><span class="o">.</span><span class="n">user</span><span class="o">.</span><span class="n">id</span>
+        <span class="k">return</span> <span class="kc">None</span></div>
+
+<div class="viewcode-block" id="ExtraCache.current_username"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.ExtraCache.current_username">[docs]</a>    <span class="k">def</span> <span class="nf">current_username</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">add_to_cache_keys</span><span class="p">:</span> <span class="nb">bool</span> <span class="o">=</span> <span class="kc">True</span><span class="p">)</span>  [...]
+        <span class="sd">&quot;&quot;&quot;</span>
+<span class="sd">        Return the username of the user who is currently logged in.</span>
+
+<span class="sd">        :param add_to_cache_keys: Whether the value should be included in the cache key</span>
+<span class="sd">        :returns: The username</span>
+<span class="sd">        &quot;&quot;&quot;</span>
+
+        <span class="k">if</span> <span class="n">g</span><span class="o">.</span><span class="n">user</span><span class="p">:</span>
+            <span class="k">if</span> <span class="n">add_to_cache_keys</span><span class="p">:</span>
+                <span class="bp">self</span><span class="o">.</span><span class="n">cache_key_wrapper</span><span class="p">(</span><span class="n">g</span><span class="o">.</span><span class="n">user</span><span class="o">.</span><span class="n">username</span><span class="p">)</span>
+            <span class="k">return</span> <span class="n">g</span><span class="o">.</span><span class="n">user</span><span class="o">.</span><span class="n">username</span>
+        <span class="k">return</span> <span class="kc">None</span></div>
+
+<div class="viewcode-block" id="ExtraCache.cache_key_wrapper"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.ExtraCache.cache_key_wrapper">[docs]</a>    <span class="k">def</span> <span class="nf">cache_key_wrapper</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">key</span><span class="p">:</span> <span class="n">Any</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">Any</span><span class= [...]
+        <span class="sd">&quot;&quot;&quot;</span>
+<span class="sd">        Adds values to a list that is added to the query object used for calculating a</span>
+<span class="sd">        cache key.</span>
+
+<span class="sd">        This is needed if the following applies:</span>
+<span class="sd">            - Caching is enabled</span>
+<span class="sd">            - The query is dynamically generated using a jinja template</span>
+<span class="sd">            - A `JINJA_CONTEXT_ADDONS` or similar is used as a filter in the query</span>
+
+<span class="sd">        :param key: Any value that should be considered when calculating the cache key</span>
+<span class="sd">        :return: the original value ``key`` passed to the function</span>
+<span class="sd">        &quot;&quot;&quot;</span>
+        <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">extra_cache_keys</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
+            <span class="bp">self</span><span class="o">.</span><span class="n">extra_cache_keys</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">key</span><span class="p">)</span>
+        <span class="k">return</span> <span class="n">key</span></div>
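The cache-key mechanism is just an optional shared list: every templated value that should influence the cache key is appended to it, and the value is returned unchanged so it can still be used inline in the template. A stripped-down sketch (the class name mirrors the one above, but this is not the Superset class itself):

```python
from typing import Any, List, Optional


class ExtraCacheSketch:
    def __init__(self, extra_cache_keys: Optional[List[Any]] = None) -> None:
        self.extra_cache_keys = extra_cache_keys

    def cache_key_wrapper(self, key: Any) -> Any:
        # Record the value for cache-key calculation, then hand it back
        # unchanged so it can be interpolated into the rendered SQL.
        if self.extra_cache_keys is not None:
            self.extra_cache_keys.append(key)
        return key


keys: List[Any] = []
cache = ExtraCacheSketch(keys)
print(cache.cache_key_wrapper("2020-08-31"))   # 2020-08-31
print(keys)                                    # ['2020-08-31']

# With no list supplied, values pass through without being recorded.
print(ExtraCacheSketch().cache_key_wrapper(42))  # 42
```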
+
+<div class="viewcode-block" id="ExtraCache.url_param"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.ExtraCache.url_param">[docs]</a>    <span class="k">def</span> <span class="nf">url_param</span><span class="p">(</span>
+        <span class="bp">self</span><span class="p">,</span> <span class="n">param</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="n">default</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">str</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span> <span class="n">add_to_cache_keys</span><span class="p">:</span> <span class="nb">bool [...]
+    <span class="p">)</span> <span class="o">-&gt;</span> <span class="n">Optional</span><span class="p">[</span><span class="n">Any</span><span class="p">]:</span>
+        <span class="sd">&quot;&quot;&quot;</span>
+<span class="sd">        Read a url or post parameter and use it in your SQL Lab query.</span>
+
+<span class="sd">        When in SQL Lab, it&#39;s possible to add arbitrary URL &quot;query string&quot; parameters,</span>
+<span class="sd">        and use those in your SQL code. For instance, you can alter your URL and add</span>
+<span class="sd">        `?foo=bar`, as in `{domain}/superset/sqllab?foo=bar`. Then if your query is</span>
+<span class="sd">        something like SELECT * FROM foo WHERE foo = &#39;{{ url_param(&#39;foo&#39;) }}&#39;,</span>
+<span class="sd">        it will be parsed at runtime and replaced by the value in the URL.</span>
+
+<span class="sd">        As you create a visualization from this SQL Lab query, you can pass parameters</span>
+<span class="sd">        in the explore view as well as from the dashboard, and it should carry through</span>
+<span class="sd">        to your queries.</span>
+
+<span class="sd">        Default values for URL parameters can be defined in chart metadata by adding the</span>
+<span class="sd">        key-value pair `url_params: {&#39;foo&#39;: &#39;bar&#39;}`</span>
+
+<span class="sd">        :param param: the parameter to lookup</span>
+<span class="sd">        :param default: the value to return in the absence of the parameter</span>
+<span class="sd">        :param add_to_cache_keys: Whether the value should be included in the cache key</span>
+<span class="sd">        :returns: The URL parameter value</span>
+<span class="sd">        &quot;&quot;&quot;</span>
+
+        <span class="kn">from</span> <span class="nn">superset.views.utils</span> <span class="kn">import</span> <span class="n">get_form_data</span>
+
+        <span class="k">if</span> <span class="n">request</span><span class="o">.</span><span class="n">args</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">param</span><span class="p">):</span>
+            <span class="k">return</span> <span class="n">request</span><span class="o">.</span><span class="n">args</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">param</span><span class="p">,</span> <span class="n">default</span><span class="p">)</span>
+        <span class="n">form_data</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">get_form_data</span><span class="p">()</span>
+        <span class="n">url_params</span> <span class="o">=</span> <span class="n">form_data</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">&quot;url_params&quot;</span><span class="p">)</span> <span class="ow">or</span> <span class="p">{}</span>
+        <span class="n">result</span> <span class="o">=</span> <span class="n">url_params</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">param</span><span class="p">,</span> <span class="n">default</span><span class="p">)</span>
+        <span class="k">if</span> <span class="n">add_to_cache_keys</span><span class="p">:</span>
+            <span class="bp">self</span><span class="o">.</span><span class="n">cache_key_wrapper</span><span class="p">(</span><span class="n">result</span><span class="p">)</span>
+        <span class="k">return</span> <span class="n">result</span></div></div>
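``url_param`` resolves a value in a fixed order: the live request's query string first, then the chart's saved ``url_params`` metadata, then the caller's default. That precedence can be sketched with plain dicts standing in for the Flask request and the form data (names are illustrative):

```python
from typing import Any, Dict, Optional


def url_param_sketch(
    request_args: Dict[str, str],
    form_data_url_params: Optional[Dict[str, str]],
    param: str,
    default: Optional[str] = None,
) -> Optional[Any]:
    # 1. A value in the live request's query string wins.
    if request_args.get(param):
        return request_args[param]
    # 2. Fall back to defaults saved in the chart metadata.
    url_params = form_data_url_params or {}
    # 3. Finally the caller-supplied default (or None).
    return url_params.get(param, default)


print(url_param_sketch({"foo": "bar"}, {"foo": "baz"}, "foo"))   # bar
print(url_param_sketch({}, {"foo": "baz"}, "foo"))               # baz
print(url_param_sketch({}, None, "foo", default="fallback"))     # fallback
```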
+
+
+<span class="k">class</span> <span class="nc">BaseTemplateProcessor</span><span class="p">:</span>  <span class="c1"># pylint: disable=too-few-public-methods</span>
+    <span class="sd">&quot;&quot;&quot;Base class for database-specific jinja context</span>
+
+<span class="sd">    There&#39;s this bit of magic in ``process_template`` that instantiates only</span>
+<span class="sd">    the database context for the active database as a ``models.Database``</span>
+<span class="sd">    object and binds it to the context object, so that object methods</span>
+<span class="sd">    have access to that context.</span>
+<span class="sd">    This way, {{ hive.latest_partition(&#39;mytable&#39;) }} just</span>
+<span class="sd">    knows about the database it is operating in.</span>
+
+<span class="sd">    This means that object methods are only available for the active database</span>
+<span class="sd">    and are given access to the ``models.Database`` object and schema</span>
+<span class="sd">    name. For globally available methods use ``@classmethod``.</span>
+<span class="sd">    &quot;&quot;&quot;</span>
+
+    <span class="n">engine</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">str</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span>
+
+    <span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span>
+        <span class="bp">self</span><span class="p">,</span>
+        <span class="n">database</span><span class="p">:</span> <span class="s2">&quot;Database&quot;</span><span class="p">,</span>
+        <span class="n">query</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="s2">&quot;Query&quot;</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
+        <span class="n">table</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="s2">&quot;SqlaTable&quot;</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
+        <span class="n">extra_cache_keys</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="n">List</span><span class="p">[</span><span class="n">Any</span><span class="p">]]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
+        <span class="o">**</span><span class="n">kwargs</span><span class="p">:</span> <span class="n">Any</span><span class="p">,</span>
+    <span class="p">)</span> <span class="o">-&gt;</span> <span class="kc">None</span><span class="p">:</span>
+        <span class="bp">self</span><span class="o">.</span><span class="n">database</span> <span class="o">=</span> <span class="n">database</span>
+        <span class="bp">self</span><span class="o">.</span><span class="n">query</span> <span class="o">=</span> <span class="n">query</span>
+        <span class="bp">self</span><span class="o">.</span><span class="n">schema</span> <span class="o">=</span> <span class="kc">None</span>
+        <span class="k">if</span> <span class="n">query</span> <span class="ow">and</span> <span class="n">query</span><span class="o">.</span><span class="n">schema</span><span class="p">:</span>
+            <span class="bp">self</span><span class="o">.</span><span class="n">schema</span> <span class="o">=</span> <span class="n">query</span><span class="o">.</span><span class="n">schema</span>
+        <span class="k">elif</span> <span class="n">table</span><span class="p">:</span>
+            <span class="bp">self</span><span class="o">.</span><span class="n">schema</span> <span class="o">=</span> <span class="n">table</span><span class="o">.</span><span class="n">schema</span>
+
+        <span class="n">extra_cache</span> <span class="o">=</span> <span class="n">ExtraCache</span><span class="p">(</span><span class="n">extra_cache_keys</span><span class="p">)</span>
+
+        <span class="bp">self</span><span class="o">.</span><span class="n">context</span> <span class="o">=</span> <span class="p">{</span>
+            <span class="s2">&quot;url_param&quot;</span><span class="p">:</span> <span class="n">extra_cache</span><span class="o">.</span><span class="n">url_param</span><span class="p">,</span>
+            <span class="s2">&quot;current_user_id&quot;</span><span class="p">:</span> <span class="n">extra_cache</span><span class="o">.</span><span class="n">current_user_id</span><span class="p">,</span>
+            <span class="s2">&quot;current_username&quot;</span><span class="p">:</span> <span class="n">extra_cache</span><span class="o">.</span><span class="n">current_username</span><span class="p">,</span>
+            <span class="s2">&quot;cache_key_wrapper&quot;</span><span class="p">:</span> <span class="n">extra_cache</span><span class="o">.</span><span class="n">cache_key_wrapper</span><span class="p">,</span>
+            <span class="s2">&quot;filter_values&quot;</span><span class="p">:</span> <span class="n">filter_values</span><span class="p">,</span>
+            <span class="s2">&quot;form_data&quot;</span><span class="p">:</span> <span class="p">{},</span>
+        <span class="p">}</span>
+        <span class="bp">self</span><span class="o">.</span><span class="n">context</span><span class="o">.</span><span class="n">update</span><span class="p">(</span><span class="n">kwargs</span><span class="p">)</span>
+        <span class="bp">self</span><span class="o">.</span><span class="n">context</span><span class="o">.</span><span class="n">update</span><span class="p">(</span><span class="n">jinja_base_context</span><span class="p">)</span>
+        <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">engine</span><span class="p">:</span>
+            <span class="bp">self</span><span class="o">.</span><span class="n">context</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">engine</span><span class="p">]</span> <span class="o">=</span> <span class="bp">self</span>
+        <span class="bp">self</span><span class="o">.</span><span class="n">env</span> <span class="o">=</span> <span class="n">SandboxedEnvironment</span><span class="p">()</span>
+
+    <span class="k">def</span> <span class="nf">process_template</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">sql</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">:</span> <span class="n">Any</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="nb">str</span><span class="p">:</span>
+        <span class="sd">&quot;&quot;&quot;Processes a sql template</span>
+
+<span class="sd">        &gt;&gt;&gt; sql = &quot;SELECT &#39;{{ datetime(2017, 1, 1).isoformat() }}&#39;&quot;</span>
+<span class="sd">        &gt;&gt;&gt; process_template(sql)</span>
+<span class="sd">        &quot;SELECT &#39;2017-01-01T00:00:00&#39;&quot;</span>
+<span class="sd">        &quot;&quot;&quot;</span>
+        <span class="n">template</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">env</span><span class="o">.</span><span class="n">from_string</span><span class="p">(</span><span class="n">sql</span><span class="p">)</span>
+        <span class="n">kwargs</span><span class="o">.</span><span class="n">update</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">context</span><span class="p">)</span>
+        <span class="k">return</span> <span class="n">template</span><span class="o">.</span><span class="n">render</span><span class="p">(</span><span class="n">kwargs</span><span class="p">)</span>
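Note the merge order in ``process_template``: ``kwargs.update(self.context)`` means the processor's built-in context silently overrides any caller-supplied keyword of the same name. A small illustration of that precedence (plain dicts, no Jinja involved; the sample names are hypothetical):

```python
# The processor's own context (url_param, filter_values, ...) is applied
# on top of the caller's kwargs, so built-ins win on name collisions
# while caller-only extras survive.
context = {"current_username": lambda: "admin", "form_data": {}}
kwargs = {"current_username": lambda: "spoofed", "extra": 1}

kwargs.update(context)  # same order as in process_template

print(kwargs["current_username"]())  # admin  (built-in wins)
print(kwargs["extra"])               # 1      (caller extra survives)
```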
+
+
+<div class="viewcode-block" id="PrestoTemplateProcessor"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.PrestoTemplateProcessor">[docs]</a><span class="k">class</span> <span class="nc">PrestoTemplateProcessor</span><span class="p">(</span><span class="n">BaseTemplateProcessor</span><span class="p">):</span>
+    <span class="sd">&quot;&quot;&quot;Presto Jinja context</span>
+
+<span class="sd">    The methods described here are namespaced under ``presto`` in the</span>
+<span class="sd">    jinja context as in ``SELECT &#39;{{ presto.some_macro_call() }}&#39;``</span>
+<span class="sd">    &quot;&quot;&quot;</span>
+
+    <span class="n">engine</span> <span class="o">=</span> <span class="s2">&quot;presto&quot;</span>
+
+    <span class="nd">@staticmethod</span>
+    <span class="k">def</span> <span class="nf">_schema_table</span><span class="p">(</span>
+        <span class="n">table_name</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="n">schema</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">str</span><span class="p">]</span>
+    <span class="p">)</span> <span class="o">-&gt;</span> <span class="n">Tuple</span><span class="p">[</span><span class="nb">str</span><span class="p">,</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">str</span><span class="p">]]:</span>
+        <span class="k">if</span> <span class="s2">&quot;.&quot;</span> <span class="ow">in</span> <span class="n">table_name</span><span class="p">:</span>
+            <span class="n">schema</span><span class="p">,</span> <span class="n">table_name</span> <span class="o">=</span> <span class="n">table_name</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s2">&quot;.&quot;</span><span class="p">)</span>
+        <span class="k">return</span> <span class="n">table_name</span><span class="p">,</span> <span class="n">schema</span>
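``_schema_table`` lets an explicit ``schema.table`` macro argument override the processor's ambient schema. Its behavior restated as a standalone sketch (a single-dot name is assumed, as in the original):

```python
from typing import Optional, Tuple


def schema_table(table_name: str, schema: Optional[str]) -> Tuple[str, Optional[str]]:
    # An explicit "schema.table" argument overrides the ambient schema.
    if "." in table_name:
        schema, table_name = table_name.split(".")
    return table_name, schema


print(schema_table("logs", "analytics"))      # ('logs', 'analytics')
print(schema_table("raw.logs", "analytics"))  # ('logs', 'raw')
```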
+
+<div class="viewcode-block" id="PrestoTemplateProcessor.first_latest_partition"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.PrestoTemplateProcessor.first_latest_partition">[docs]</a>    <span class="k">def</span> <span class="nf">first_latest_partition</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">table_name</span><span class="p">:</span> <span class="nb">str</span><span class="p">)</span> <span class="o">-& [...]
+        <span class="sd">&quot;&quot;&quot;</span>
+<span class="sd">        Gets the first value in the array of all latest partitions</span>
+
+<span class="sd">        :param table_name: table name in the format `schema.table`</span>
+<span class="sd">        :return: the first (or only) value in the latest partition array</span>
+<span class="sd">        :raises IndexError: If no partition exists</span>
+<span class="sd">        &quot;&quot;&quot;</span>
+
+        <span class="n">latest_partitions</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">latest_partitions</span><span class="p">(</span><span class="n">table_name</span><span class="p">)</span>
+        <span class="k">return</span> <span class="n">latest_partitions</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="k">if</span> <span class="n">latest_partitions</span> <span class="k">else</span> <span class="kc">None</span></div>
+
+<div class="viewcode-block" id="PrestoTemplateProcessor.latest_partitions"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.PrestoTemplateProcessor.latest_partitions">[docs]</a>    <span class="k">def</span> <span class="nf">latest_partitions</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">table_name</span><span class="p">:</span> <span class="nb">str</span><span class="p">)</span> <span class="o">-&gt;</span> <spa [...]
+        <span class="sd">&quot;&quot;&quot;</span>
+<span class="sd">        Gets the array of all latest partitions</span>
+
+<span class="sd">        :param table_name: table name in the format `schema.table`</span>
+<span class="sd">        :return: the latest partition array</span>
+<span class="sd">        &quot;&quot;&quot;</span>
+
+        <span class="kn">from</span> <span class="nn">superset.db_engine_specs.presto</span> <span class="kn">import</span> <span class="n">PrestoEngineSpec</span>
+
+        <span class="n">table_name</span><span class="p">,</span> <span class="n">schema</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_schema_table</span><span class="p">(</span><span class="n">table_name</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">schema</span><span class="p">)</span>
+        <span class="k">return</span> <span class="n">cast</span><span class="p">(</span><span class="n">PrestoEngineSpec</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">database</span><span class="o">.</span><span class="n">db_engine_spec</span><span class="p">)</span><span class="o">.</span><span class="n">latest_partition</span><span class="p">(</span>
+            <span class="n">table_name</span><span class="p">,</span> <span class="n">schema</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">database</span>
+        <span class="p">)[</span><span class="mi">1</span><span class="p">]</span></div>
+
+    <span class="k">def</span> <span class="nf">latest_sub_partition</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">table_name</span><span class="p">:</span> <span class="nb">str</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">:</span> <span class="n">Any</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="n">Any</span><span class="p">:</span>
+        <span class="n">table_name</span><span class="p">,</span> <span class="n">schema</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_schema_table</span><span class="p">(</span><span class="n">table_name</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">schema</span><span class="p">)</span>
+
+        <span class="kn">from</span> <span class="nn">superset.db_engine_specs.presto</span> <span class="kn">import</span> <span class="n">PrestoEngineSpec</span>
+
+        <span class="k">return</span> <span class="n">cast</span><span class="p">(</span>
+            <span class="n">PrestoEngineSpec</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">database</span><span class="o">.</span><span class="n">db_engine_spec</span>
+        <span class="p">)</span><span class="o">.</span><span class="n">latest_sub_partition</span><span class="p">(</span>
+            <span class="n">table_name</span><span class="o">=</span><span class="n">table_name</span><span class="p">,</span> <span class="n">schema</span><span class="o">=</span><span class="n">schema</span><span class="p">,</span> <span class="n">database</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">database</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span>
+        <span class="p">)</span>
+
+    <span class="n">latest_partition</span> <span class="o">=</span> <span class="n">first_latest_partition</span></div>
+
+
+<div class="viewcode-block" id="HiveTemplateProcessor"><a class="viewcode-back" href="../../sqllab.html#superset.jinja_context.HiveTemplateProcessor">[docs]</a><span class="k">class</span> <span class="nc">HiveTemplateProcessor</span><span class="p">(</span><span class="n">PrestoTemplateProcessor</span><span class="p">):</span>
+    <span class="n">engine</span> <span class="o">=</span> <span class="s2">&quot;hive&quot;</span></div>
+
+
+<span class="c1"># The global template processors from Jinja context manager.</span>
+<span class="n">template_processors</span> <span class="o">=</span> <span class="n">jinja_context_manager</span><span class="o">.</span><span class="n">template_processors</span>
+<span class="n">keys</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="nb">globals</span><span class="p">()</span><span class="o">.</span><span class="n">keys</span><span class="p">())</span>
+<span class="k">for</span> <span class="n">k</span> <span class="ow">in</span> <span class="n">keys</span><span class="p">:</span>
+    <span class="n">o</span> <span class="o">=</span> <span class="nb">globals</span><span class="p">()[</span><span class="n">k</span><span class="p">]</span>
+    <span class="k">if</span> <span class="n">o</span> <span class="ow">and</span> <span class="n">inspect</span><span class="o">.</span><span class="n">isclass</span><span class="p">(</span><span class="n">o</span><span class="p">)</span> <span class="ow">and</span> <span class="nb">issubclass</span><span class="p">(</span><span class="n">o</span><span class="p">,</span> <span class="n">BaseTemplateProcessor</span><span class="p">):</span>
+        <span class="n">template_processors</span><span class="p">[</span><span class="n">o</span><span class="o">.</span><span class="n">engine</span><span class="p">]</span> <span class="o">=</span> <span class="n">o</span>
+
+
+<span class="k">def</span> <span class="nf">get_template_processor</span><span class="p">(</span>
+    <span class="n">database</span><span class="p">:</span> <span class="s2">&quot;Database&quot;</span><span class="p">,</span>
+    <span class="n">table</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="s2">&quot;SqlaTable&quot;</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
+    <span class="n">query</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="s2">&quot;Query&quot;</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
+    <span class="o">**</span><span class="n">kwargs</span><span class="p">:</span> <span class="n">Any</span><span class="p">,</span>
+<span class="p">)</span> <span class="o">-&gt;</span> <span class="n">BaseTemplateProcessor</span><span class="p">:</span>
+    <span class="n">template_processor</span> <span class="o">=</span> <span class="n">template_processors</span><span class="o">.</span><span class="n">get</span><span class="p">(</span>
+        <span class="n">database</span><span class="o">.</span><span class="n">backend</span><span class="p">,</span> <span class="n">BaseTemplateProcessor</span>
+    <span class="p">)</span>
+    <span class="k">return</span> <span class="n">template_processor</span><span class="p">(</span><span class="n">database</span><span class="o">=</span><span class="n">database</span><span class="p">,</span> <span class="n">table</span><span class="o">=</span><span class="n">table</span><span class="p">,</span> <span class="n">query</span><span class="o">=</span><span class="n">query</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
+</pre></div>
+
+           </div>
+           
+          </div>
+          <footer>
+  
+
+  <hr/>
+
+  <div role="contentinfo">
+    <p>
+        &copy; Copyright 2020 The Apache Software Foundation, Licensed under the Apache License, Version 2.0.
+
+    </p>
+  </div> 
+
+</footer>
+
+        </div>
+      </div>
+
+    </section>
+
+  </div>
+  
+
+
+  <script type="text/javascript">
+      jQuery(function () {
+          SphinxRtdTheme.Navigation.enable(true);
+      });
+  </script>
+
+  
+  
+    
+   
+
+</body>
+</html>
\ No newline at end of file
diff --git a/_sources/admintutorial.rst.txt b/_sources/admintutorial.rst.txt
new file mode 100644
index 0000000..87490b4
--- /dev/null
+++ b/_sources/admintutorial.rst.txt
@@ -0,0 +1,325 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Creating your first dashboard
+=============================
+
+This tutorial targets someone who wants to create charts and dashboards
+in Superset. We'll show you how to connect Superset
+to a new database and configure a table in that database for analysis. You'll
+also explore the data you've exposed and add a visualization to a dashboard
+so that you get a feel for the end-to-end user experience.
+
+Connecting to a new database
+----------------------------
+
+We assume you already have a database configured and can connect to it from the
+instance on which you’re running Superset. If you’re just testing Superset and
+want to explore sample data, you can load some
+`sample PostgreSQL datasets <https://wiki.postgresql.org/wiki/Sample_Databases>`_
+into a fresh DB, or configure the
+`example weather data <https://github.com/dylburger/noaa-ghcn-weather-data>`_
+we use here.
+
+Under the **Sources** menu, select the *Databases* option:
+
+.. image:: _static/images/tutorial/tutorial_01_sources_database.png
+   :scale: 70%
+
+On the resulting page, click on the green plus sign, near the top right:
+
+.. image:: _static/images/tutorial/tutorial_02_add_database.png
+   :scale: 70%
+
+You can configure a number of advanced options on this page, but for
+this walkthrough, you’ll only need to do **two things**:
+
+1. Name your database connection:
+
+.. image:: _static/images/tutorial/tutorial_03_database_name.png
+   :scale: 70%
+
+2. Provide the SQLAlchemy Connection URI and test the connection:
+
+.. image:: _static/images/tutorial/tutorial_04_sqlalchemy_connection_string.png
+   :scale: 70%
+
+This example shows the connection string for our test weather database.
+As noted in the text below the URI, you should refer to the SQLAlchemy
+documentation on
+`creating new connection URIs <https://docs.sqlalchemy.org/en/rel_1_2/core/engines.html#database-urls>`_
+for your target database.
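
As a quick illustration, SQLAlchemy URIs follow the shape ``dialect+driver://user:password@host:port/database``. The sketch below builds a few example URIs in plain Python; the credentials, host, and database names are placeholders for illustration only, not values from this tutorial:

```python
# Hypothetical connection details -- substitute your own.
user, password, host, port, db = "superset", "secret", "localhost", 5432, "weather"

# General SQLAlchemy URI shape: dialect+driver://user:password@host:port/database
postgres_uri = f"postgresql://{user}:{password}@{host}:{port}/{db}"
mysql_uri = f"mysql://{user}:{password}@{host}:{port}/{db}"

# SQLite connects to a file path instead of a server, hence the
# four slashes when the path is absolute.
sqlite_uri = "sqlite:////path/to/superset.db"

print(postgres_uri)  # postgresql://superset:secret@localhost:5432/weather
```

The exact dialect and driver names depend on the Python DB driver you have installed; the SQLAlchemy documentation linked above is the authoritative reference.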
+
+Click the **Test Connection** button to confirm things work end to end.
+Once Superset can successfully connect and authenticate, you should see
+a popup like this:
+
+.. image:: _static/images/tutorial/tutorial_05_connection_popup.png
+   :scale: 50%
+
+Moreover, you should also see the list of tables Superset can read from
+the schema you’re connected to, at the bottom of the page:
+
+.. image:: _static/images/tutorial/tutorial_06_list_of_tables.png
+   :scale: 70%
+
+If the connection looks good, save the configuration by clicking the **Save**
+button at the bottom of the page:
+
+.. image:: _static/images/tutorial/tutorial_07_save_button.png
+   :scale: 70%
+
+Adding a new table
+------------------
+
+Now that you’ve configured a database, you’ll need to add specific tables
+to Superset that you’d like to query.
+
+Under the **Sources** menu, select the *Tables* option:
+
+.. image:: _static/images/tutorial/tutorial_08_sources_tables.png
+   :scale: 70%
+
+On the resulting page, click on the green plus sign, near the top left:
+
+.. image:: _static/images/tutorial/tutorial_09_add_new_table.png
+   :scale: 70%
+
+You only need a few pieces of information to add a new table to Superset:
+
+* The name of the table
+
+.. image:: _static/images/tutorial/tutorial_10_table_name.png
+   :scale: 70%
+
+* The target database from the **Database** drop-down menu (i.e. the one
+  you just added above)
+
+.. image:: _static/images/tutorial/tutorial_11_choose_db.png
+   :scale: 70%
+
+* Optionally, the database schema. If the table exists in the “default” schema
+  (e.g. the *public* schema in PostgreSQL or Redshift), you can leave the schema
+  field blank.
+
+Click on the **Save** button to save the configuration:
+
+.. image:: _static/images/tutorial/tutorial_07_save_button.png
+   :scale: 70%
+
+When redirected back to the list of tables, you should see a message indicating
+that your table was created:
+
+.. image:: _static/images/tutorial/tutorial_12_table_creation_success_msg.png
+   :scale: 70%
+
+This message also directs you to edit the table configuration. We’ll edit a limited
+portion of the configuration now - just to get you started - and leave the rest for
+a more advanced tutorial.
+
+Click on the edit button next to the table you’ve created:
+
+.. image:: _static/images/tutorial/tutorial_13_edit_table_config.png
+   :scale: 70%
+
+On the resulting page, click on the **List Table Column** tab. Here, you’ll define the
+way you can use specific columns of your table when exploring your data. We’ll run
+through these options to describe their purpose:
+
+* If you want users to group metrics by a specific field, mark it as **Groupable**.
+* If you need to filter on a specific field, mark it as **Filterable**.
+* Is this field something you’d like to get the distinct count of? Check the **Count
+  Distinct** box.
+* Is this a metric you want to sum, or get basic summary statistics for? The **Sum**,
+  **Min**, and **Max** columns will help.
+* The **is temporal** field should be checked for any date or time fields. We’ll cover
+  how this manifests itself in analyses in a moment.
+
+Here’s how we’ve configured fields for the weather data. Even for measures like
+the weather measurements (precipitation, snowfall, etc.), it’s useful to be able
+to group and filter by these values:
+
+.. image:: _static/images/tutorial/tutorial_14_field_config.png
+
+As with the configurations above, click the **Save** button to save these settings.
+
+Exploring your data
+-------------------
+
+To start exploring your data, simply click on the table name you just created in
+the list of available tables:
+
+.. image:: _static/images/tutorial/tutorial_15_click_table_name.png
+
+By default, you’ll be presented with a Table View:
+
+.. image:: _static/images/tutorial/tutorial_16_datasource_chart_type.png
+
+Let’s walk through a basic query to get the count of all records in our table.
+First, we’ll need to change the **Since** filter to capture the range of our data.
+You can use simple phrases to apply these filters, like "3 years ago":
+
+.. image:: _static/images/tutorial/tutorial_17_choose_time_range.png
+
+The upper limit for time, the **Until** filter, defaults to "now", which may or may
+not be what you want.
+
+Look for the Metrics section under the **GROUP BY** header, and start typing
+"Count" - you’ll see a list of metrics matching what you type:
+
+.. image:: _static/images/tutorial/tutorial_18_choose_metric.png
+
+Select the *COUNT(\*)* metric, then click the green **Query** button near the top
+of the explore:
+
+.. image:: _static/images/tutorial/tutorial_19_click_query.png
+
+You’ll see your results in the table:
+
+.. image:: _static/images/tutorial/tutorial_20_count_star_result.png
+
+Let’s group this by the *weather_description* field to get the count of records by
+the type of weather recorded by adding it to the *Group by* section:
+
+.. image:: _static/images/tutorial/tutorial_21_group_by.png
+
+and run the query:
+
+.. image:: _static/images/tutorial/tutorial_22_group_by_result.png
+
+Let’s find a more useful data point: the top 10 times and places that recorded the
+highest temperature in 2015.
+
+We replace *weather_description* with *latitude*, *longitude* and *measurement_date* in the
+*Group by* section:
+
+.. image:: _static/images/tutorial/tutorial_23_group_by_more_dimensions.png
+
+And replace *COUNT(\*)* with *max__measurement_flag*:
+
+.. image:: _static/images/tutorial/tutorial_24_max_metric.png
+
+The *max__measurement_flag* metric was created when we checked the box under **Max** and
+next to the *measurement_flag* field, indicating that this field was numeric and that
+we wanted to find its maximum value when grouped by specific fields.
+
+In our case, *measurement_flag* is the value of the measurement taken, which clearly
+depends on the type of measurement (the researchers recorded different values for
+precipitation and temperature). Therefore, we must filter our query only on records
+where the *weather_description* is equal to "Maximum temperature", which we do in
+the **Filters** section at the bottom of the explore:
+
+.. image:: _static/images/tutorial/tutorial_25_max_temp_filter.png
+
+Finally, since we only care about the top 10 measurements, we limit our results to
+10 records using the *Row limit* option under the **Options** header:
+
+.. image:: _static/images/tutorial/tutorial_26_row_limit.png
+
+We click **Query** and get the following results:
+
+.. image:: _static/images/tutorial/tutorial_27_top_10_max_temps.png
+
+In this dataset, the maximum temperature is recorded in tenths of a degree Celsius.
+The top value of 1370, measured in the middle of Nevada, is equal to 137 C, or roughly
+278 degrees F. It’s unlikely this value was correctly recorded. We’ve already been able
+to investigate some outliers with Superset, but this just scratches the surface of what
+we can do.
+
+You may want to do a couple more things with this measure:
+
+* The default formatting shows values like 1.37k, which may be difficult for some
+  users to read. It’s likely you may want to see the full, comma-separated value.
+  You can change the formatting of any measure by editing its config (*Edit Table
+  Config > List Sql Metric > Edit Metric > D3Format*)
+* Moreover, you may want to see the temperature measurements in plain degrees C,
+  not tenths of a degree. Or you may want to convert the temperature to degrees
+  Fahrenheit. You can change the SQL that gets executed against the database, baking
+  the logic into the measure itself (*Edit Table Config > List Sql Metric > Edit
+  Metric > SQL Expression*)
+
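To make those two adjustments concrete, here is a small sketch (ordinary Python, not Superset code) of what a ``,d``-style D3 format and the tenths-of-a-degree conversion would do to the outlier value above:

```python
raw_value = 1370  # maximum temperature, recorded in tenths of a degree Celsius

# A ",d" D3 format string renders the full comma-separated integer;
# Python's format mini-language happens to accept the same spec.
formatted = format(raw_value, ",d")

# The SQL Expression route would bake the conversion into the metric itself
# (e.g. something like MAX(measurement_flag) / 10); here we just do the
# arithmetic inline to show the expected values.
celsius = raw_value / 10
fahrenheit = round(celsius * 9 / 5 + 32, 1)

print(formatted, celsius, fahrenheit)  # 1,370 137.0 278.6
```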
+For now, though, let’s create a better visualization of these data and add it to
+a dashboard.
+
+We change the Chart Type to "Distribution - Bar Chart":
+
+.. image:: _static/images/tutorial/tutorial_28_bar_chart.png
+
+Our filter on Maximum temperature measurements was retained, but the query and
+formatting options are dependent on the chart type, so you’ll have to set the
+values again:
+
+.. image:: _static/images/tutorial/tutorial_29_bar_chart_series_metrics.png
+
+You should note the extensive formatting options for this chart: the ability to
+set axis labels, margins, ticks, etc. To make the data presentable to a broad
+audience, you’ll want to apply many of these to slices that end up in dashboards.
+For now, though, we run our query and get the following chart:
+
+.. image:: _static/images/tutorial/tutorial_30_bar_chart_results.png
+   :scale: 70%
+
+Creating a slice and dashboard
+------------------------------
+
+This view might be interesting to researchers, so let’s save it. In Superset,
+a saved query is called a **Slice**.
+
+To create a slice, click the **Save as** button near the top-left of the
+explore:
+
+.. image:: _static/images/tutorial/tutorial_19_click_query.png
+
+A popup should appear, asking you to name the slice, and optionally add it to a
+dashboard. Since we haven’t yet created any dashboards, we can create one and
+immediately add our slice to it. Let’s do it:
+
+.. image:: _static/images/tutorial/tutorial_31_save_slice_to_dashboard.png
+   :scale: 70%
+
+Click Save, which will direct you back to your original query. We see that
+our slice and dashboard were successfully created:
+
+.. image:: _static/images/tutorial/tutorial_32_save_slice_confirmation.png
+   :scale: 70%
+
+Let’s check out our new dashboard. We click on the **Dashboards** menu:
+
+.. image:: _static/images/tutorial/tutorial_33_dashboard.png
+
+and find the dashboard we just created:
+
+.. image:: _static/images/tutorial/tutorial_34_weather_dashboard.png
+
+Things seem to have worked - our slice is here!
+
+.. image:: _static/images/tutorial/tutorial_35_slice_on_dashboard.png
+   :scale: 70%
+
+But it’s a bit smaller than we might like. Luckily, you can adjust the size
+of slices in a dashboard by clicking, holding and dragging the bottom-right
+corner to your desired dimensions:
+
+.. image:: _static/images/tutorial/tutorial_36_adjust_dimensions.gif
+   :scale: 120%
+
+After adjusting the size, you’ll be asked to click on the icon near the
+top-right of the dashboard to save the new configuration.
+
+Congrats! You’ve successfully linked, analyzed, and visualized data in Superset.
+There is a wealth of other table configuration and visualization options, so
+please start exploring and creating slices and dashboards of your own.
diff --git a/_sources/druid.rst.txt b/_sources/druid.rst.txt
new file mode 100644
index 0000000..cfbb67f
--- /dev/null
+++ b/_sources/druid.rst.txt
@@ -0,0 +1,64 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Druid
+=====
+
+Superset has a native connector to Druid and a majority of Druid's
+features are accessible through Superset.
+
+.. note ::
+    Druid now supports SQL and can be accessed through Superset's
+    SQLAlchemy connector. The long-term vision is to deprecate
+    the Druid native REST connector and query Druid exclusively through
+    the SQL interface.
+
+Aggregations
+------------
+
+Common aggregations or Druid metrics can be defined and used in Superset.
+The first and simplest use case is to use the checkbox matrix exposed in your
+datasource's edit view (``Sources -> Druid Datasources ->
+[your datasource] -> Edit -> [tab] List Druid Column``).
+Clicking the ``GroupBy`` and ``Filterable`` checkboxes will make the column
+appear in the related dropdowns while in explore view. Checking
+``Count Distinct``, ``Min``, ``Max`` or ``Sum`` will result in creating
+new metrics that will appear in the ``List Druid Metric`` tab upon saving the
+datasource. By editing these metrics, you'll notice that their ``json``
+element corresponds to a Druid aggregation definition. You can create your
+own aggregations manually from the ``List Druid Metric`` tab by following
+the Druid documentation.
+
+.. image:: _static/images/druid_agg.png
+   :scale: 50 %
+
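As an illustration, the ``json`` element of a metric auto-created by the ``Sum`` checkbox on a numeric column might look something like the following. The column and metric names are hypothetical; see the Druid aggregations documentation for the authoritative schema:

```python
import json

# A hypothetical Druid aggregator definition, as it might appear in the
# "json" field of an auto-created metric (names are illustrative only).
sum_agg = {
    "type": "doubleSum",
    "name": "sum__price",
    "fieldName": "price",
}

print(json.dumps(sum_agg, sort_keys=True))
```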
+Post-Aggregations
+-----------------
+
+Druid supports post aggregation and this works in Superset. All you have to
+do is create a metric, much like you would create an aggregation manually,
+but specify ``postagg`` as a ``Metric Type``. You then have to provide a valid
+json post-aggregation definition (as specified in the Druid docs) in the
+Json field.
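
As a sketch of what such a definition might look like, here is a hypothetical arithmetic post-aggregation that divides one pre-existing aggregator by another. All field and metric names are made up for illustration; the Druid docs define the authoritative schema:

```python
import json

# Hypothetical "postagg" metric body: an average price computed from two
# pre-existing aggregators. Names are illustrative, not from a real datasource.
avg_price_postagg = {
    "type": "arithmetic",
    "name": "avg__price",
    "fn": "/",
    "fields": [
        {"type": "fieldAccess", "fieldName": "sum__price"},
        {"type": "fieldAccess", "fieldName": "count"},
    ],
}

# This JSON string is the kind of thing you would paste into the Json field.
print(json.dumps(avg_price_postagg, indent=2))
```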
+
+
+Unsupported Features
+--------------------
+
+.. note ::
+    Unclear at this point, this section of the documentation could use
+    some input.
diff --git a/_sources/druid.txt b/_sources/druid.txt
new file mode 100644
index 0000000..af956d9
--- /dev/null
+++ b/_sources/druid.txt
@@ -0,0 +1,48 @@
+Druid
+=====
+
+Superset works well with Druid, though currently not all
+advanced features out of Druid are covered. This page clarifies what is
+covered and what isn't and explains how to use some of the features.
+
+.. note ::
+    Currently Airbnb runs against Druid ``0.8.x``; earlier and later
+    versions are not tested.
+
+Supported
+'''''''''
+
+Aggregations
+------------
+
+Common aggregations, or Druid metrics, can be defined and used in Superset.
+The first and simplest use case is to use the checkbox matrix exposed in your
+datasource's edit view (``Sources -> Druid Datasources ->
+[your datasource] -> Edit -> [tab] List Druid Column``).
+Clicking the ``GroupBy`` and ``Filterable`` checkboxes will make the column
+appear in the related dropdowns while in explore view. Checking
+``Count Distinct``, ``Min``, ``Max`` or ``Sum`` will result in creating
+new metrics that will appear in the ``List Druid Metric`` tab upon saving the
+datasource. By editing these metrics, you'll notice that their ``json``
+element corresponds to a Druid aggregation definition. You can create your
+own aggregations manually from the ``List Druid Metric`` tab by following
+the Druid documentation.
+
+.. image:: _static/img/druid_agg.png
+   :scale: 50 %
+
+Post-Aggregations
+-----------------
+
+Druid supports post aggregation and this works in Superset. All you have to
+do is create a metric, much like you would create an aggregation manually,
+but specify ``postagg`` as a ``Metric Type``. You then have to provide a valid
+json post-aggregation definition (as specified in the Druid docs) in the
+Json field.
+
+
+Not yet supported
+'''''''''''''''''
+
+- Regex filters
+- Lookups / joins
diff --git a/_sources/faq.rst.txt b/_sources/faq.rst.txt
new file mode 100644
index 0000000..07dda0c
--- /dev/null
+++ b/_sources/faq.rst.txt
@@ -0,0 +1,339 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+FAQ
+===
+
+
+Can I query/join multiple tables at one time?
+---------------------------------------------
+Not directly, no. A Superset SQLAlchemy datasource can only be a single table
+or a view.
+
+When working with tables, the solution would be to materialize
+a table that contains all the fields needed for your analysis, most likely
+through some scheduled batch process.
+
+A view is a simple logical layer that abstracts an arbitrary SQL query as
+a virtual table. This can allow you to join and union multiple tables, and
+to apply transformations using arbitrary SQL expressions. The limitation
+there is your database performance, as Superset effectively runs a query
+on top of your query (view). A good practice may be to limit yourself to
+joining your main large table to one or many small tables only, and to avoid
+using ``GROUP BY`` where possible, as Superset will do its own ``GROUP BY``
+and doing the work twice might slow down performance.
+
+Whether you use a table or a view, the important factor is whether your
+database is fast enough to serve it in an interactive fashion to provide
+a good user experience in Superset.
+
+
+How BIG can my data source be?
+------------------------------
+
+It can be gigantic! As mentioned above, the main criteria is whether your
+database can execute queries and return results in a time frame that is
+acceptable to your users. Many distributed databases out there can execute
+queries that scan through terabytes in an interactive fashion.
+
+
+How do I create my own visualization?
+-------------------------------------
+
+We are planning on making it easier to add new visualizations to the
+framework, in the meantime, we've tagged a few pull requests as
+``example`` to give people examples of how to contribute new
+visualizations.
+
+https://github.com/airbnb/superset/issues?q=label%3Aexample+is%3Aclosed
+
+
+Can I upload and visualize csv data?
+------------------------------------
+
+Yes, using the ``Upload a CSV`` button under the ``Sources`` menu item.
+This brings up a form that allows you to specify the required information.
+After creating the table from CSV, it can then be loaded like any
+other on the ``Sources -> Tables`` page.
+
+
+Why are my queries timing out?
+------------------------------
+
+There are many reasons that may cause a long query to time out.
+
+
+- For long-running queries from SQL Lab, by default Superset allows a query to run as long as 6 hours before it is killed by Celery. If you want to increase the time allowed for a running query, you can specify the timeout in the configuration. For example:
+
+  ``SQLLAB_ASYNC_TIME_LIMIT_SEC = 60 * 60 * 6``
+
+
+- Superset runs on the Gunicorn web server, which may time out web requests. If you want to increase the default timeout (50), you can specify it when starting the web server with the ``-t`` flag, which is expressed in seconds.
+
+  ``superset runserver -t 300``
+
+- If you are seeing timeouts (504 Gateway Time-out) when loading a dashboard or exploring a slice, you are probably behind a gateway or proxy server (such as Nginx). If the proxy did not receive a timely response from the Superset server (which is processing long queries), it will send a 504 status code to clients directly. Superset has a client-side timeout limit to address this issue. If the query doesn't come back within the client-side timeout (60 seconds by default), Superset will display a warning m [...]
+
+  ``SUPERSET_WEBSERVER_TIMEOUT = 60``
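
Taken together, the configuration-file knobs above could be sketched in a single ``superset_config.py`` as follows. The values are the defaults quoted above, used purely as illustrations, not recommendations:

```python
# Illustrative timeout settings for a superset_config.py.

# Async SQL Lab queries are killed by Celery after this many seconds.
SQLLAB_ASYNC_TIME_LIMIT_SEC = 60 * 60 * 6  # 6 hours

# Client-side warning threshold for dashboard/explore requests, in seconds.
SUPERSET_WEBSERVER_TIMEOUT = 60

print(SQLLAB_ASYNC_TIME_LIMIT_SEC)  # 21600
```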
+
+
+Why is the map not visible in the mapbox visualization?
+-------------------------------------------------------
+
+You need to register at mapbox.com, get an API key, and configure it as
+``MAPBOX_API_KEY`` in ``superset_config.py``.
+
+
+How to add dynamic filters to a dashboard?
+------------------------------------------
+
+It's easy: use the ``Filter Box`` widget, build a slice, and add it to your
+dashboard.
+
+The ``Filter Box`` widget allows you to define a query to populate dropdowns
+that can be used for filtering. To build the list of distinct values, we
+run a query and sort the result by the metric you provide, in descending
+order.
+
+The widget also has a ``Date Filter`` checkbox, which adds time filtering
+capabilities to your dashboard. After checking the box and refreshing, you'll
+see a ``from`` and a ``to`` dropdown show up.
+
+By default, the filtering will be applied to all the slices that are built
+on top of a datasource that shares the column name that the filter is based
+on. It's also a requirement for that column to be checked as "filterable"
+in the column tab of the table editor.
+
+But what if you don't want certain widgets to get filtered on your
+dashboard? You can do that by editing your dashboard and, in the form,
+editing the ``JSON Metadata`` field, more specifically the
+``filter_immune_slices`` key, which receives an array of slice ids that
+should never be affected by any dashboard-level filtering.
+
+
+.. code-block:: json
+
+    {
+        "filter_immune_slices": [324, 65, 92],
+        "expanded_slices": {},
+        "filter_immune_slice_fields": {
+            "177": ["country_name", "__time_range"],
+            "32": ["__time_range"]
+        },
+        "timed_refresh_immune_slices": [324]
+    }
+
+In the json blob above, slices 324, 65 and 92 won't be affected by any
+dashboard-level filtering.
+
+Now note the ``filter_immune_slice_fields`` key. This one allows you to
+be more specific and define, for a specific slice_id, which filter fields
+should be disregarded.
+
+Note the use of the ``__time_range`` keyword, which is reserved for dealing
+with the time boundary filtering mentioned above.
+
+But what happens with filtering when dealing with slices coming from
+different tables or databases? If the column name is shared, the filter will
+be applied; it's as simple as that.
+
+
+How to limit the timed refresh on a dashboard?
+----------------------------------------------
+By default, the dashboard timed refresh feature allows you to automatically re-query every slice
+on a dashboard according to a set schedule. Sometimes, however, you won't want all of the slices
+to be refreshed - especially if some data is slow moving, or some slices run heavy queries. To exclude specific
+slices from the timed refresh process, add the ``timed_refresh_immune_slices`` key to the dashboard
+``JSON Metadata`` field:
+
+.. code-block:: json
+
+    {
+        "filter_immune_slices": [],
+        "expanded_slices": {},
+        "filter_immune_slice_fields": {},
+        "timed_refresh_immune_slices": [324]
+    }
+
+In the example above, if a timed refresh is set for the dashboard, then every slice except 324 will
+be automatically re-queried on schedule.
+
+Slice refresh will also be staggered over the specified period. You can turn off this staggering
+by setting ``stagger_refresh`` to ``false``, and modify the stagger period by
+setting ``stagger_time`` to a value in milliseconds in the ``JSON Metadata`` field:
+
+.. code-block:: json
+
+    {
+        "stagger_refresh": false,
+        "stagger_time": 2500
+    }
+
+Here, the entire dashboard will refresh at once if periodic refresh is on. The stagger time of
+2.5 seconds is ignored.
+
+Why does 'flask fab' or superset freeze/hang/stop responding when started (my home directory is NFS mounted)?
+-------------------------------------------------------------------------------------------------------------
+By default, Superset creates and uses an SQLite database at ``~/.superset/superset.db``. SQLite is known to `not work well if used on NFS`__ due to a broken file locking implementation on NFS.
+
+__ https://www.sqlite.org/lockingv3.html
+
+You can override this path using the ``SUPERSET_HOME`` environment variable.
+
+Another workaround is to change where Superset stores the sqlite database by adding ``SQLALCHEMY_DATABASE_URI = 'sqlite:////new/location/superset.db'`` in superset_config.py (create the file if needed), then adding the directory where superset_config.py lives to the PYTHONPATH environment variable (e.g. ``export PYTHONPATH=/opt/logs/sandbox/airbnb/``).
+
+What if the table schema changed?
+---------------------------------
+
+Table schemas evolve, and Superset needs to reflect that. It's pretty common
+in the life cycle of a dashboard to want to add a new dimension or metric.
+To get Superset to discover your new columns, all you have to do is
+go to ``Menu -> Sources -> Tables``, click the ``edit`` icon next to the
+table whose schema has changed, and hit ``Save`` from the ``Detail`` tab.
+Behind the scenes, the new columns will get merged in. Following this,
+you may want to re-edit the table to configure the ``Column`` tab, check the
+appropriate boxes and save again.
+
+How do I go about developing a new visualization type?
+------------------------------------------------------
+Here's an example, as a GitHub PR with comments that describe what the
+different sections of the code do:
+https://github.com/airbnb/superset/pull/3013
+
+What database engine can I use as a backend for Superset?
+---------------------------------------------------------
+
+To clarify, the *database backend* is an OLTP database used by Superset to store its internal
+information like your list of users, slices and dashboard definitions.
+
+Superset is tested using MySQL, PostgreSQL and SQLite for its backend. It's
+recommended you install Superset on one of these database servers for production.
+
+Using column-store, non-OLTP databases like Vertica, Redshift or Presto as a database backend simply won't work, as these databases are not designed for this type of workload. Installation on Oracle, Microsoft SQL Server, or other OLTP databases may work but isn't tested.
+
+Please note that pretty much any database that has a SQLAlchemy integration should work perfectly fine as a datasource for Superset, just not as the OLTP backend.
+
+How can I configure OAuth authentication and authorization?
+-----------------------------------------------------------
+
+You can take a look at this Flask-AppBuilder `configuration example
+<https://github.com/dpgaspar/Flask-AppBuilder/blob/master/examples/oauth/config.py>`_.
+
+How can I set a default filter on my dashboard?
+-----------------------------------------------
+
+Easy. Simply apply the filter and save the dashboard while the filter
+is active.
+
+How do I get Superset to refresh the schema of my table?
+--------------------------------------------------------
+
+When adding columns to a table, you can have Superset detect and merge the
+new columns in by using the "Refresh Metadata" action in the
+``Source -> Tables`` page. Simply check the box next to the tables
+you want the schema refreshed, and click ``Actions -> Refresh Metadata``.
+
+Is there a way to force the use specific colors?
+------------------------------------------------
+
+It is possible on a per-dashboard basis by providing a mapping of
+labels to colors in the ``JSON Metadata`` attribute using the
+``label_colors`` key.
+
+.. code-block:: json
+
+    {
+        "label_colors": {
+            "Girls": "#FF69B4",
+            "Boys": "#ADD8E6"
+        }
+    }
+
+Does Superset work with [insert database engine here]?
+------------------------------------------------------
+
+The community over time has curated a list of databases that work well with
+Superset in the :ref:`ref_database_deps` section of the docs. Database
+engines not listed on this page may work too. We rely on the
+community to contribute to this knowledge base.
+
+.. _SQLAlchemy dialect: https://docs.sqlalchemy.org/en/latest/dialects/
+.. _DBAPI driver: https://www.python.org/dev/peps/pep-0249/
+
+For a database engine to be supported in Superset through the
+SQLAlchemy connector, it requires having a compliant Python
+`SQLAlchemy dialect`_ as well as a
+`DBAPI driver`_ defined.
+Databases that have limited SQL support may
+work as well. For instance it's possible to connect
+to Druid through the SQLAlchemy connector even though Druid does not support
+joins and subqueries. Another key element of database support is
+the Superset `Database Engine Specification
+<https://github.com/apache/incubator-superset/blob/master/superset/db_engine_specs.py>`_
+interface. This interface allows for defining database-specific configurations
+and logic that go beyond the SQLAlchemy and DBAPI scope. This includes features like:
+
+* date-related SQL functions that allow Superset to fetch different
+  time granularities when running time-series queries
+* whether the engine supports subqueries. If false, Superset may run 2-phase
+  queries to compensate for the limitation
+* methods around processing logs and inferring the percentage of completion
+  of a query
+* technicalities as to how to handle cursors and connections if the driver
+  is not standard DBAPI
+* and more; read the code for details
+
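To make the first of those features concrete, an engine spec essentially maps Superset's time grains to engine-specific SQL expressions. The snippet below is a standalone sketch of that idea; the names ``TIME_GRAIN_EXPRESSIONS`` and ``timestamp_expr`` are made up for this illustration and are not the real ``db_engine_specs`` interface.

```python
# Standalone illustration of the time-grain mapping idea behind
# db_engine_specs; the names here are hypothetical, not the real interface.
TIME_GRAIN_EXPRESSIONS = {
    None: "{col}",                        # no truncation
    "PT1H": "DATE_TRUNC('hour', {col})",  # hourly buckets
    "P1D": "DATE_TRUNC('day', {col})",    # daily buckets
    "P1W": "DATE_TRUNC('week', {col})",   # weekly buckets
}

def timestamp_expr(col, grain=None):
    """Render the SQL expression for a time column at a given grain."""
    return TIME_GRAIN_EXPRESSIONS[grain].format(col=col)
```

A real engine spec supplies whatever SQL its engine understands here, e.g. ``DATE_TRUNC`` on Postgres-family engines versus a function like ``toStartOfDay`` on ClickHouse.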
+Beyond the SQLAlchemy connector, it's also possible, though much more
+involved, to extend Superset and write
+your own connector. The only example of this at the moment is the Druid
+connector, which is getting superseded by Druid's growing SQL support and
+the recent availability of a DBAPI and SQLAlchemy driver. If the database
+you are considering integrating has any kind of SQL support, it's probably
+preferable to go the SQLAlchemy route. Note that for a native connector to
+be possible the database needs to have support for running OLAP-type queries
+and should be able to do things that are typical in basic SQL:
+
+- aggregate data
+- apply filters (==, !=, >, <, >=, <=, IN, ...)
+- apply HAVING-type filters
+- be schema-aware, expose columns and types
+
+
+Does Superset offer a public API?
+---------------------------------
+
+Yes, a public REST API, and the surface of that API
+is expanding steadily. Some of the original vision for the collection
+of endpoints under ``/api/v1`` was specified in
+`SIP-17 <https://github.com/apache/incubator-superset/issues/7259>`_ and
+constant progress has been made to cover more and more use cases.
+
+The API available is documented using `Swagger <https://swagger.io/>`_
+and the documentation
+can be made available under ``/swagger/v1`` by enabling
+the ``FAB_API_SWAGGER_UI = True`` configuration flag.
+
+There are other undocumented [private] ways to interact with Superset
+programmatically that offer no guarantees and are not recommended but
+may fit your use case temporarily:
+
+- using the ORM (SQLAlchemy) directly
+- using the internal FAB ModelView API (to be deprecated in Superset)
+- altering the source code in your fork
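As an example of interacting with the public API, logging in is a matter of POSTing credentials to the security endpoint and sending the returned token back as a bearer header. The helpers below only build the payload and header; the endpoint path and field names come from the FAB-based security API, so verify them against your instance's Swagger page.

```python
import json

# FAB security API login route; check your instance's Swagger docs.
LOGIN_ENDPOINT = "/api/v1/security/login"

def login_payload(username, password):
    """JSON body for the login endpoint using database ("db") auth."""
    return json.dumps(
        {"username": username, "password": password,
         "provider": "db", "refresh": True}
    )

def auth_header(access_token):
    """Authorization header for subsequent /api/v1 calls."""
    return {"Authorization": f"Bearer {access_token}"}
```

You would POST ``login_payload(...)`` to ``http://<host>/api/v1/security/login`` (``<host>`` being your instance) and pass ``auth_header(token)`` on subsequent calls.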
diff --git a/_sources/faq.txt b/_sources/faq.txt
new file mode 100644
index 0000000..82280ed
--- /dev/null
+++ b/_sources/faq.txt
@@ -0,0 +1,198 @@
+FAQ
+===
+
+
+Can I query/join multiple tables at one time?
+---------------------------------------------
+Not directly, no. A Superset SQLAlchemy datasource can only be a single table
+or a view.
+
+When working with tables, the solution would be to materialize
+a table that contains all the fields needed for your analysis, most likely
+through some scheduled batch process.
+
+A view is a simple logical layer that abstracts an arbitrary SQL query as
+a virtual table. This can allow you to join and union multiple tables, and
+to apply some transformations using arbitrary SQL expressions. The limitation
+there is your database performance as Superset effectively will run a query
+on top of your query (view). A good practice may be to limit yourself to
+joining your main large table to one or many small tables only, and avoid
+using ``GROUP BY`` where possible as Superset will do its own ``GROUP BY`` and
+doing the work twice might slow down performance.
+
+Whether you use a table or a view, the important factor is whether your
+database is fast enough to serve it in an interactive fashion to provide
+a good user experience in Superset.
+
+
+How BIG can my data source be?
+------------------------------
+
+It can be gigantic! As mentioned above, the main criterion is whether your
+database can execute queries and return results in a time frame that is
+acceptable to your users. Many distributed databases out there can execute
+queries that scan through terabytes in an interactive fashion.
+
+
+How do I create my own visualization?
+-------------------------------------
+
+We are planning on making it easier to add new visualizations to the
+framework; in the meantime, we've tagged a few pull requests as
+``example`` to give people examples of how to contribute new
+visualizations.
+
+https://github.com/airbnb/superset/issues?q=label%3Aexample+is%3Aclosed
+
+
+Why are my queries timing out?
+------------------------------
+
+There are many reasons that may cause a long query to time out.
+
+
+- For running a long query from SQL Lab, by default Superset allows it to run as long as 6 hours before it is killed by celery. If you want to increase the time for running queries, you can specify the timeout in configuration. For example:
+
+  ``SQLLAB_ASYNC_TIME_LIMIT_SEC = 60 * 60 * 6``
+
+
+- Superset runs on the gunicorn web server, which may time out web requests. If you want to increase the default (50), you can specify the timeout when starting the web server with the ``-t`` flag, which is expressed in seconds.
+
+  ``superset runserver -t 300``
+
+- If you are seeing timeouts (504 Gateway Time-out) when loading a dashboard or exploring a slice, you are probably behind a gateway or proxy server (such as Nginx). If they do not receive a timely response from the Superset server (which is processing long queries), these web servers will send a 504 status code to clients directly. Superset has a client-side timeout limit to address this issue. If a query doesn't come back within the client-side timeout (45 seconds by default), Superset will display a warning m [...]
+
+  ``export const QUERY_TIMEOUT_THRESHOLD = 45000;``
+
+
+Why is the map not visible in the mapbox visualization?
+-------------------------------------------------------
+
+You need to register at mapbox.com, get an API key, and configure it as
+``MAPBOX_API_KEY`` in ``superset_config.py``.
+
+
+How to add dynamic filters to a dashboard?
+------------------------------------------
+
+It's easy: use the ``Filter Box`` widget, build a slice, and add it to your
+dashboard.
+
+The ``Filter Box`` widget allows you to define a query to populate dropdowns
+that can be used for filtering. To build the list of distinct values, we
+run a query, and sort the result by the metric you provide, sorting
+descending.
+
+The widget also has a checkbox ``Date Filter``, which enables time filtering
+capabilities to your dashboard. After checking the box and refreshing, you'll
+see a ``from`` and a ``to`` dropdown show up.
+
+By default, the filtering will be applied to all the slices that are built
+on top of a datasource that shares the column name that the filter is based
+on. It's also a requirement for that column to be checked as "filterable"
+in the column tab of the table editor.
+
+But what if you don't want certain widgets to get filtered on your
+dashboard? You can do that by editing your dashboard, and in the form,
+edit the ``JSON Metadata`` field, more specifically the
+``filter_immune_slices`` key, which receives an array of slice ids that should
+never be affected by any dashboard level filtering.
+
+
+.. code-block:: json
+
+    {
+        "filter_immune_slices": [324, 65, 92],
+        "expanded_slices": {},
+        "filter_immune_slice_fields": {
+            "177": ["country_name", "__from", "__to"],
+            "32": ["__from", "__to"]
+        },
+        "timed_refresh_immune_slices": [324]
+    }
+
+In the JSON blob above, slices 324, 65 and 92 won't be affected by any
+dashboard level filtering.
+
+Now note the ``filter_immune_slice_fields`` key. This one allows you to
+be more specific and define, for a specific slice_id, which filter fields
+should be disregarded.
+
+Note the use of the ``__from`` and ``__to`` keywords, those are reserved
+for dealing with the time boundary filtering mentioned above.
+
+But what happens with filtering when dealing with slices coming from
+different tables or databases? If the column name is shared, the filter will
+be applied; it's as simple as that.
+
+
+How to limit the timed refresh on a dashboard?
+----------------------------------------------
+By default, the dashboard timed refresh feature allows you to automatically requery every slice on a dashboard according to a set schedule. Sometimes, however, you won't want all of the slices to be refreshed - especially if some slices have slow-moving data or run heavy queries.
+To exclude specific slices from the timed refresh process, add the ``timed_refresh_immune_slices`` key to the dashboard ``JSON Metadata`` field:
+
+.. code-block:: json
+
+    {
+        "filter_immune_slices": [],
+        "expanded_slices": {},
+        "filter_immune_slice_fields": {},
+        "timed_refresh_immune_slices": [324]
+    }
+
+In the example above, if a timed refresh is set for the dashboard, then every slice except 324 will be automatically requeried on schedule.
+
+
+Why does fabmanager or superset freeze/hang/stop responding when started (my home directory is NFS mounted)?
+--------------------------------------------------------------------------------------------------------------
+By default, superset creates and uses an SQLite database at ``~/.superset/superset.db``. SQLite is known to `not work well if used on NFS`__ due to a broken file locking implementation on NFS.
+
+__ https://www.sqlite.org/lockingv3.html
+
+You can override this path using the ``SUPERSET_HOME`` environment variable.
+
+Another workaround is to change where superset stores the sqlite database by adding ``SQLALCHEMY_DATABASE_URI = 'sqlite:////new/location/superset.db'`` in ``superset_config.py`` (create the file if needed), then adding the directory where ``superset_config.py`` lives to the ``PYTHONPATH`` environment variable (e.g. ``export PYTHONPATH=/opt/logs/sandbox/airbnb/``).
+
+How do I add new columns to an existing table?
+----------------------------------------------
+
+Table schemas evolve, and Superset needs to reflect that. It's pretty common
+in the life cycle of a dashboard to want to add a new dimension or metric.
+To get Superset to discover your new columns, all you have to do is
+go to ``Menu -> Sources -> Tables``, click the ``edit`` icon next to the
+table whose schema has changed, and hit ``Save`` from the ``Detail`` tab.
+Behind the scenes, the new columns will get merged in. Following this,
+you may want to re-edit the table to configure the ``Column`` tab, check the
+appropriate boxes and save again.
+
+How do I go about developing a new visualization type?
+------------------------------------------------------
+Here's an example, as a GitHub PR with comments that describe what the
+different sections of the code do:
+https://github.com/airbnb/superset/pull/3013
+
+What database engine can I use as a backend for Superset?
+---------------------------------------------------------
+
+To clarify, the *database backend* is an OLTP database used by Superset to store its internal
+information like your list of users, slices and dashboard definitions.
+
+Superset is tested using MySQL, PostgreSQL and SQLite for its backend. It's recommended you
+install Superset on one of these database servers for production.
+
+Using a column-store, non-OLTP database like Vertica, Redshift or Presto as a database backend simply won't work, as these databases are not designed for this type of workload. Installation on Oracle, Microsoft SQL Server, or other OLTP databases may work but isn't tested.
+
+Please note that pretty much any database that has a SQLAlchemy integration should work perfectly fine as a datasource for Superset, just not as the OLTP backend.
+
+How can I configure OAuth authentication and authorization?
+-----------------------------------------------------------
+
+You can take a look at this Flask-AppBuilder `configuration example 
+<https://github.com/dpgaspar/Flask-AppBuilder/blob/master/examples/oauth/config.py>`_.
+
+How can I set a default filter on my dashboard?
+-----------------------------------------------
+
+Easy. Simply apply the filter and save the dashboard while the filter
+is active.
diff --git a/_sources/gallery.rst.txt b/_sources/gallery.rst.txt
new file mode 100644
index 0000000..4009af4
--- /dev/null
+++ b/_sources/gallery.rst.txt
@@ -0,0 +1,206 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Visualizations Gallery
+======================
+
+.. image:: _static/images/viz_thumbnails/area.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/bar.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/big_number.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/big_number_total.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/box_plot.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/bubble.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/bullet.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/cal_heatmap.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/chord.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/compare.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/country_map.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/deck_arc.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/deck_geojson.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/deck_grid.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/deck_hex.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/deck_multi.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/deck_path.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/deck_polygon.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/deck_scatter.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/deck_screengrid.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/directed_force.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/dist_bar.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/dual_line.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/event_flow.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/filter_box.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/heatmap.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/histogram.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/horizon.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/iframe.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/line.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/mapbox.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/markup.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/paired_ttest.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/para.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/partition.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/pie.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/pivot_table.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/rose.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/sankey.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/separator.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/sunburst.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/table.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/time_pivot.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/time_table.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/treemap.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/word_cloud.png
+   :scale: 25 %
+
+
+.. image:: _static/images/viz_thumbnails/world_map.png
+   :scale: 25 %
diff --git a/_sources/gallery.txt b/_sources/gallery.txt
new file mode 100644
index 0000000..f0c7dfa
--- /dev/null
+++ b/_sources/gallery.txt
@@ -0,0 +1,89 @@
+Gallery
+=======
+
+.. image:: _static/img/viz_thumbnails/line.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/bubble.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/table.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/pie.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/bar.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/world_map.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/sankey.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/word_cloud.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/filter_box.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/pivot_table.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/directed_force.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/compare.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/sunburst.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/area.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/big_number.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/big_number_total.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/bullet.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/dist_bar.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/heatmap.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/markup.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/para.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/iframe.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/box_plot.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/treemap.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/cal_heatmap.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/horizon.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/mapbox.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/separator.png
+   :scale: 25 %
+
+.. image:: _static/img/viz_thumbnails/histogram.png
+   :scale: 25 %
diff --git a/_sources/import_export_datasources.rst.txt b/_sources/import_export_datasources.rst.txt
new file mode 100644
index 0000000..9c786ce
--- /dev/null
+++ b/_sources/import_export_datasources.rst.txt
@@ -0,0 +1,125 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Importing and Exporting Datasources
+===================================
+
+The Superset CLI allows you to import and export datasources from and to YAML.
+Datasources include both databases and Druid clusters. The data is expected to be organized in the following hierarchy: ::
+
+    .
+    ├──databases
+    |  ├──database_1
+    |  |  ├──table_1
+    |  |  |  ├──columns
+    |  |  |  |  ├──column_1
+    |  |  |  |  ├──column_2
+    |  |  |  |  └──... (more columns)
+    |  |  |  └──metrics
+    |  |  |     ├──metric_1
+    |  |  |     ├──metric_2
+    |  |  |     └──... (more metrics)
+    |  |  └── ... (more tables)
+    |  └── ... (more databases)
+    └──druid_clusters
+       ├──cluster_1
+       |  ├──datasource_1
+       |  |  ├──columns
+       |  |  |  ├──column_1
+       |  |  |  ├──column_2
+       |  |  |  └──... (more columns)
+       |  |  └──metrics
+       |  |     ├──metric_1
+       |  |     ├──metric_2
+       |  |     └──... (more metrics)
+       |  └── ... (more datasources)
+       └── ... (more clusters)
+
+
+Exporting Datasources to YAML
+-----------------------------
+You can print your current datasources to stdout by running: ::
+
+    superset export_datasources
+
+
+To save your datasources to a file run: ::
+
+    superset export_datasources -f <filename>
+
+
+By default, default (null) values will be omitted. Use the ``-d`` flag to include them.
+If you want back references to be included (e.g. for a column to include the id of the
+table it belongs to), use the ``-b`` flag.
+
+Alternatively, you can export datasources using the UI:
+
+1. Open **Sources** -> **Databases** to export all tables associated with
+   one or more databases. (**Tables** for one or more tables,
+   **Druid Clusters** for clusters, **Druid Datasources** for datasources)
+#. Select the items you would like to export
+#. Click **Actions** -> **Export to YAML**
+#. If you want to import an item that you exported through the UI, you
+   will need to nest it inside its parent element, e.g. a ``database``
+   needs to be nested under ``databases``; a ``table`` needs to be
+   nested inside a ``database`` element.
+
+Exporting the complete supported YAML schema
+--------------------------------------------
+In order to obtain an exhaustive list of all fields you can import using the YAML import, run: ::
+
+    superset export_datasource_schema
+
+Again, you can use the ``-b`` flag to include back references.
+
+
+Importing Datasources from YAML
+-------------------------------
+In order to import datasources from one or more YAML files, run: ::
+
+    superset import_datasources -p <path or filename>
+
+If you supply a path, all files ending with ``*.yaml`` or ``*.yml`` will be parsed.
+You can apply additional flags, e.g.: ::
+
+    superset import_datasources -p <path> -r
+
+This will search the supplied path recursively.
+
+The sync flag ``-s`` takes parameters in order to sync the supplied elements with
+your file. Be careful, this can delete the contents of your meta database. Example: ::
+
+    superset import_datasources -p <path / filename> -s columns,metrics
+
+This will sync all ``metrics`` and ``columns`` for all datasources found in
+``<path / filename>`` in the Superset meta database. This means columns and metrics
+not specified in YAML will be deleted. If you were to add ``tables`` to ``columns,metrics``,
+those would be synchronized as well.
+
+
+If you don't supply the sync flag (``-s``), importing will only add and update (override) fields.
+E.g. you can add a ``verbose_name`` to the column ``ds`` in the table ``random_time_series`` from the example datasets
+by saving the following YAML to a file and then running the ``import_datasources`` command. ::
+
+    databases:
+    - database_name: main
+      tables:
+      - table_name: random_time_series
+        columns:
+        - column_name: ds
+          verbose_name: datetime
+
diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt
new file mode 100644
index 0000000..6a592fe
--- /dev/null
+++ b/_sources/index.rst.txt
@@ -0,0 +1,175 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+|apache_img| |superset_img|
+
+.. |apache_img| image:: _static/images/apache_feather.png
+   :width: 7%
+   :target: http://www.apache.org/
+   :alt: The Apache Software Foundation
+
+.. |superset_img| image:: _static/images/s.png
+   :width: 25%
+
+Apache Superset (incubating)
+''''''''''''''''''''''''''''
+
+Apache Superset (incubating) is a modern, enterprise-ready business
+intelligence web application.
+
+
+----------------
+
+.. important::
+
+    **Disclaimer**: Apache Superset is an effort undergoing incubation at The
+    Apache Software Foundation (ASF), sponsored by the Apache Incubator.
+    Incubation is required of all newly accepted projects until a further
+    review indicates that the infrastructure, communications, and
+    decision making process have stabilized in a manner consistent with
+    other successful ASF projects. While incubation status is not
+    necessarily a reflection of the completeness or stability of
+    the code, it does indicate that the project has yet to be fully
+    endorsed by the ASF.
+
+.. note:: Apache Superset, Superset, Apache, the Apache feather logo, and
+    the Apache Superset project logo are either registered trademarks or
+    trademarks of The Apache Software Foundation in the United States
+    and other countries.
+
+Superset Resources
+==================
+- Versioned releases of this documentation: https://readthedocs.org/projects/apache-superset/
+- `Superset's GitHub <https://github.com/apache/incubator-superset>`_, note
+  that `we use GitHub for issue tracking <https://github.com/apache/incubator-superset/issues>`_
+- Superset's
+  `contribution guidelines <https://github.com/apache/incubator-superset/blob/master/CONTRIBUTING.md>`_
+  and
+  `code of conduct <https://github.com/apache/incubator-superset/blob/master/CODE_OF_CONDUCT.md>`_
+  on Github.
+- Our `mailing list archives <ht...@superset.apache.org>`_.
+  To subscribe, send an email to ``dev-subscribe@superset.apache.org``
+- `Join our Slack <https://join.slack.com/t/apache-superset/shared_invite/enQtNDMxMDY5NjM4MDU0LWJmOTcxYjlhZTRhYmEyYTMzOWYxOWEwMjcwZDZiNWRiNDY2NDUwNzcwMDFhNzE1ZmMxZTZlZWY0ZTQ2MzMyNTU>`_
+
+Apache Software Foundation Resources
+====================================
+- `The Apache Software Foundation Website <http://www.apache.org>`_
+- `Current Events <http://www.apache.org/events/current-event>`_
+- `License <https://www.apache.org/licenses/>`_
+- `Thanks <https://www.apache.org/foundation/thanks.html>`_ to the ASF's sponsors
+- `Sponsor Apache! <http://www.apache.org/foundation/sponsorship.html>`_
+
+Overview
+========
+
+Features
+--------
+
+- A rich set of data visualizations
+- An easy-to-use interface for exploring and visualizing data
+- Create and share dashboards
+- Enterprise-ready authentication with integration with major authentication
+  providers (database, OpenID, LDAP, OAuth & REMOTE_USER through
+  Flask AppBuilder)
+- An extensible, high-granularity security/permission model allowing
+  intricate rules on who can access individual features and the dataset
+- A simple semantic layer, allowing users to control how data sources are
+  displayed in the UI by defining which fields should show up in which
+  drop-down and which aggregation and function metrics are made available
+  to the user
+- Integration with most SQL-speaking RDBMS through SQLAlchemy
+- Deep integration with Druid.io
+
+Databases
+---------
+
+The following RDBMS are currently supported:
+
+- `Amazon Athena <https://aws.amazon.com/athena/>`_
+- `Amazon Redshift <https://aws.amazon.com/redshift/>`_
+- `Apache Drill <https://drill.apache.org/>`_
+- `Apache Druid <http://druid.io/>`_
+- `Apache Hive <https://hive.apache.org/>`_
+- `Apache Impala <https://impala.apache.org/>`_
+- `Apache Kylin <http://kylin.apache.org/>`_
+- `Apache Pinot <https://pinot.incubator.apache.org/>`_
+- `Apache Spark SQL <https://spark.apache.org/sql/>`_
+- `BigQuery <https://cloud.google.com/bigquery/>`_
+- `ClickHouse <https://clickhouse.tech/>`_
+- `CockroachDB <https://www.cockroachlabs.com/>`_
+- `Dremio <https://dremio.com/>`_
+- `Elasticsearch <https://www.elastic.co/elasticsearch/>`_
+- `Exasol <https://www.exasol.com/>`_
+- `Google Sheets <https://www.google.com/sheets/about/>`_
+- `Greenplum <https://greenplum.org/>`_
+- `IBM Db2 <https://www.ibm.com/analytics/db2/>`_
+- `MySQL <https://www.mysql.com/>`_
+- `Oracle <https://www.oracle.com/database/>`_
+- `PostgreSQL <https://www.postgresql.org/>`_
+- `Presto <http://prestodb.github.io/>`_
+- `Snowflake <https://www.snowflake.com/>`_
+- `SQLite <https://www.sqlite.org/>`_
+- `SQL Server <https://www.microsoft.com/en-us/sql-server/>`_
+- `Teradata <https://www.teradata.com/>`_
+- `Vertica <https://www.vertica.com/>`_
+- `Hana <https://www.sap.com/products/hana.html>`_
+
+Other database engines with a proper DB-API driver and SQLAlchemy dialect should
+be supported as well.
+
+Screenshots
+-----------
+
+.. image:: _static/images/screenshots/bank_dash.png
+
+------
+
+.. image:: _static/images/screenshots/explore.png
+
+------
+
+.. image:: _static/images/screenshots/sqllab.png
+
+------
+
+.. image:: _static/images/screenshots/deckgl_dash.png
+
+------
+
+
+Contents
+--------
+
+.. toctree::
+    :maxdepth: 2
+
+    installation
+    tutorials
+    security
+    sqllab
+    gallery
+    druid
+    misc
+    faq
+
+
+Indices and tables
+------------------
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/_sources/index.txt b/_sources/index.txt
new file mode 100644
index 0000000..eba2e94
--- /dev/null
+++ b/_sources/index.txt
@@ -0,0 +1,86 @@
+.. image:: _static/img/s.png
+
+Apache Superset (incubating)
+''''''''''''''''''''''''''''
+
+Apache Superset (incubating) is a modern, enterprise-ready business
+intelligence web application.
+
+
+----------------
+
+.. warning:: This project was originally named Panoramix, was renamed to
+    Caravel in March 2016, and is currently named Superset as of November 2016.
+
+.. important::
+
+    **Disclaimer**: Apache Superset is an effort undergoing incubation at The
+    Apache Software Foundation (ASF), sponsored by the Apache Incubator.
+    Incubation is required of all newly accepted projects until a further
+    review indicates that the infrastructure, communications, and
+    decision making process have stabilized in a manner consistent with
+    other successful ASF projects. While incubation status is not
+    necessarily a reflection of the completeness or stability of
+    the code, it does indicate that the project has yet to be fully
+    endorsed by the ASF.
+
+Overview
+=======================================
+
+Features
+---------
+
+- A rich set of data visualizations
+- An easy-to-use interface for exploring and visualizing data
+- Create and share dashboards
+- Enterprise-ready authentication with integration with major authentication
+  providers (database, OpenID, LDAP, OAuth & REMOTE_USER through
+  Flask AppBuilder)
+- An extensible, high-granularity security/permission model allowing
+  intricate rules on who can access individual features and the dataset
+- A simple semantic layer, allowing users to control how data sources are
+  displayed in the UI by defining which fields should show up in which
+  drop-down and which aggregation and function metrics are made available
+  to the user
+- Integration with most SQL-speaking RDBMS through SQLAlchemy
+- Deep integration with Druid.io
+
+------
+
+.. image:: https://camo.githubusercontent.com/82e264ef777ba06e1858766fe3b8817ee108eb7e/687474703a2f2f672e7265636f726469742e636f2f784658537661475574732e676966
+
+------
+
+.. image:: https://camo.githubusercontent.com/4991ff37a0005ea4e4267919a52786fda82d2d21/687474703a2f2f672e7265636f726469742e636f2f755a6767594f645235672e676966
+
+------
+
+.. image:: https://camo.githubusercontent.com/a389af15ac1e32a3d0fee941b4c62c850b1d583b/687474703a2f2f672e7265636f726469742e636f2f55373046574c704c76682e676966
+
+------
+
+
+Contents
+---------
+
+.. toctree::
+    :maxdepth: 2
+
+    installation
+    tutorial
+    security
+    sqllab
+    visualization
+    videos
+    gallery
+    druid
+    faq
+
+
+Indices and tables
+------------------
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
+
diff --git a/_sources/installation.rst.txt b/_sources/installation.rst.txt
new file mode 100644
index 0000000..623c4ef
--- /dev/null
+++ b/_sources/installation.rst.txt
@@ -0,0 +1,1581 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Installation & Configuration
+============================
+
+Getting Started
+---------------
+
+Superset has deprecated support for Python ``2.*`` and supports
+only ``~=3.6`` to take advantage of the newer Python features and reduce
+the burden of supporting previous versions. We run our test suite
+against ``3.6``, but ``3.7`` is fully supported as well.
+
+Cloud-native!
+-------------
+
+Superset is designed to be highly available. It is
+"cloud-native" in that it is designed to scale out in large,
+distributed environments and works well inside containers.
+While you can easily test drive Superset on a modest setup or
+simply on your laptop, there's virtually no limit to scaling
+out the platform.
+Superset is also cloud-native in the sense that it is flexible:
+you can choose your web server (Gunicorn, Nginx, Apache),
+your metadata database engine (MySQL, Postgres, MariaDB, ...),
+your message queue (Redis, RabbitMQ, SQS, ...),
+your results backend (S3, Redis, Memcached, ...), and your caching layer
+(Memcached, Redis, ...). It works well with services like NewRelic, StatsD,
+and DataDog, and can run analytic workloads against
+most popular database technologies.
+
+Superset is battle-tested in large environments with hundreds
+of concurrent users. Airbnb's production environment runs inside
+Kubernetes and serves 600+ daily active users viewing over 100K charts a
+day.
+
+The Superset web server and the Superset Celery workers (optional)
+are stateless, so you can scale out by running on as many servers
+as needed.
+
+Start with Docker
+-----------------
+
+.. note::
+    The Docker-related files and documentation are actively maintained and
+    managed by the core committers working on the project. Help and contributions
+    around Docker are welcomed!
+
+If you know Docker, then you're in luck: there is a shortcut for
+initializing a development environment: ::
+
+    git clone https://github.com/apache/incubator-superset/
+    cd incubator-superset
+    # you can run this command every time you need to start Superset:
+    docker-compose up
+
+After several minutes, once Superset initialization has finished, you can open
+a browser and view `http://localhost:8088` to start your journey. By default,
+the system configures an admin user with the username `admin` and the password
+`admin` - if you are in a non-local environment, it is highly recommended that
+you change this username and password at your earliest convenience.
+
+From there, the container server will reload on modification of the Superset Python
+and JavaScript source code.
+Don't forget to reload the page to pick up the new frontend, though.
+
+See also `CONTRIBUTING.md#building <https://github.com/apache/incubator-superset/blob/master/CONTRIBUTING.md#building>`_
+for an alternative way of serving the frontend.
+
+It is currently not recommended to run docker-compose in production.
+
+If you are attempting to build on a Mac and the build exits with code 137, you need to increase your Docker memory resources.
+macOS instructions: https://docs.docker.com/docker-for-mac/#advanced (search for "memory")
+
+Or, if you're curious and want to install Superset from the bottom up, read on.
+
+See also `docker/README.md <https://github.com/apache/incubator-superset/blob/master/docker/README.md>`_
+
+OS dependencies
+---------------
+
+Superset stores database connection information in its metadata database.
+For that purpose, we use the ``cryptography`` Python library to encrypt
+connection passwords. Unfortunately, this library has OS level dependencies.
+
+You may want to attempt the next step
+("Superset installation and initialization") and come back to this step if
+you encounter an error.
+
+Here's how to install them:
+
+For **Debian** and **Ubuntu**, the following command will ensure that
+the required dependencies are installed: ::
+
+    sudo apt-get install build-essential libssl-dev libffi-dev python-dev python-pip libsasl2-dev libldap2-dev
+
+**Ubuntu 18.04** If you have Python 3.6 installed alongside Python 2.7, as is the default on **Ubuntu 18.04 LTS**, also run this command: ::
+
+    sudo apt-get install build-essential libssl-dev libffi-dev python3.6-dev python-pip libsasl2-dev libldap2-dev
+
+otherwise the build for ``cryptography`` fails.
+
+For **Fedora** and **RHEL-derivatives**, the following command will ensure
+that the required dependencies are installed: ::
+
+    sudo yum upgrade python-setuptools
+    sudo yum install gcc gcc-c++ libffi-devel python-devel python-pip python-wheel openssl-devel cyrus-sasl-devel openldap-devel
+
+**Mac OS X** If possible, you should upgrade to the latest version of OS X as issues are more likely to be resolved for that version.
+You *will likely need* the latest version of XCode available for your installed version of OS X. You should also install
+the XCode command line tools: ::
+
+    xcode-select --install
+
+The system Python is not recommended; Homebrew's Python also ships with pip: ::
+
+    brew install pkg-config libffi openssl python
+    env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography==2.4.2
+
+**Windows** isn't officially supported at this point, but if you want to
+attempt it, download `get-pip.py <https://bootstrap.pypa.io/get-pip.py>`_, and run ``python get-pip.py`` which may need admin access. Then run the following: ::
+
+    C:\> pip install cryptography
+
+    # You may also have to create C:\Temp
+    C:\> md C:\Temp
+
+Python virtualenv
+-----------------
+It is recommended to install Superset inside a virtualenv. Python 3 ships with virtualenv
+(as the ``venv`` module). If it's not available in your environment for some reason, you can
+install it via your operating system's package manager, or install it from pip: ::
+
+    pip install virtualenv
+
+You can create and activate a virtualenv by: ::
+
+    # virtualenv is shipped in Python 3.6+ as venv instead of pyvenv.
+    # See https://docs.python.org/3.6/library/venv.html
+    python3 -m venv venv
+    . venv/bin/activate
+
+On Windows the syntax for activating it is a bit different: ::
+
+    venv\Scripts\activate
+
+Once you have activated your virtualenv, everything you do is confined to it.
+To exit the virtualenv, just type ``deactivate``.
+
+Python's setup tools and pip
+----------------------------
+Put the odds in your favor by getting the very latest ``pip``
+and ``setuptools`` libraries: ::
+
+    pip install --upgrade setuptools pip
+
+Superset installation and initialization
+----------------------------------------
+Follow these few simple steps to install Superset: ::
+
+    # Install superset
+    pip install apache-superset
+
+    # Initialize the database
+    superset db upgrade
+
+    # Create an admin user (you will be prompted to set a username, first and last name before setting a password)
+    export FLASK_APP=superset
+    superset fab create-admin
+
+    # Load some data to play with
+    superset load_examples
+
+    # Create default roles and permissions
+    superset init
+
+    # To start a development web server on port 8088, use -p to bind to another port
+    superset run -p 8088 --with-threads --reload --debugger
+
+After installation, you should be able to point your browser to the right
+hostname:port `http://localhost:8088 <http://localhost:8088>`_, log in using
+the credentials you entered while creating the admin account, and navigate to
+`Menu -> Admin -> Refresh Metadata`. This action should bring in all of
+your datasources for Superset to be aware of, and they should show up in
+`Menu -> Datasources`, from where you can start playing with your data!
+
+A proper WSGI HTTP Server
+-------------------------
+
+While you can set up Superset to run on Nginx or Apache, many use
+Gunicorn, preferably in **async mode**, which allows for impressive
+concurrency and is fairly easy to install and configure. Please
+refer to the documentation of your preferred technology to set up this
+Flask WSGI application in a way that works well in your environment.
+Here's an **async** setup known to work well in production: ::
+
+  gunicorn \
+        -w 10 \
+        -k gevent \
+        --timeout 120 \
+        -b  0.0.0.0:6666 \
+        --limit-request-line 0 \
+        --limit-request-field_size 0 \
+        --statsd-host localhost:8125 \
+        "superset.app:create_app()"
+
+Refer to the
+`Gunicorn documentation <https://docs.gunicorn.org/en/stable/design.html>`_
+for more information.
+
+Note that the development web
+server (`superset run` or `flask run`) is not intended for production use.
+
+If you are not using Gunicorn, you may want to disable the use of flask-compress
+by setting `COMPRESS_REGISTER = False` in your `superset_config.py`.
+
+Flask-AppBuilder Permissions
+----------------------------
+
+By default, every time the Flask-AppBuilder (FAB) app is initialized the
+permissions and views are added automatically to the backend and associated with
+the ‘Admin’ role. The issue, however, is when you are running multiple concurrent
+workers this creates a lot of contention and race conditions when defining
+permissions and views.
+
+To alleviate this issue, the automatic updating of permissions can be disabled
+by setting `FAB_UPDATE_PERMS = False` (defaults to True).
+
+In a production environment, initialization could take the following form: ::
+
+  superset init
+  gunicorn -w 10 ... superset:app
+
+Configuration behind a load balancer
+------------------------------------
+
+If you are running Superset behind a load balancer or reverse proxy (e.g. NGINX
+or ELB on AWS), you may need to use a healthcheck endpoint so that your
+load balancer knows if your Superset instance is running. This is provided
+at ``/health``, which will return a 200 response containing "OK" if
+the webserver is running.
+
+If the load balancer is inserting X-Forwarded-For/X-Forwarded-Proto headers, you
+should set `ENABLE_PROXY_FIX = True` in the superset config file to extract and use
+the headers.
+
+If the reverse proxy provides SSL encryption,
+an explicit definition of the `X-Forwarded-Proto` header may be required.
+For the Apache webserver this can be set as follows: ::
+
+    RequestHeader set X-Forwarded-Proto "https"
+
+Configuration
+-------------
+
+To configure your application, you need to create a file (module)
+``superset_config.py`` and make sure it is in your PYTHONPATH. Here are some
+of the parameters you can copy / paste in that configuration module: ::
+
+    #---------------------------------------------------------
+    # Superset specific config
+    #---------------------------------------------------------
+    ROW_LIMIT = 5000
+
+    SUPERSET_WEBSERVER_PORT = 8088
+    #---------------------------------------------------------
+
+    #---------------------------------------------------------
+    # Flask App Builder configuration
+    #---------------------------------------------------------
+    # Your App secret key
+    SECRET_KEY = '\2\1thisismyscretkey\1\2\e\y\y\h'
+
+    # The SQLAlchemy connection string to your database backend
+    # This connection defines the path to the database that stores your
+    # superset metadata (slices, connections, tables, dashboards, ...).
+    # Note that the connection information to connect to the datasources
+    # you want to explore are managed directly in the web UI
+    SQLALCHEMY_DATABASE_URI = 'sqlite:////path/to/superset.db'
+
+    # Flask-WTF flag for CSRF
+    WTF_CSRF_ENABLED = True
+    # Add endpoints that need to be exempt from CSRF protection
+    WTF_CSRF_EXEMPT_LIST = []
+    # A CSRF token that expires in 1 year
+    WTF_CSRF_TIME_LIMIT = 60 * 60 * 24 * 365
+
+    # Set this API key to enable Mapbox visualizations
+    MAPBOX_API_KEY = ''
+
+All the parameters and default values defined in
+https://github.com/apache/incubator-superset/blob/master/superset/config.py
+can be altered in your local ``superset_config.py`` .
+Administrators will want to
+read through the file to understand what can be configured locally
+as well as the default values in place.
+
+Since ``superset_config.py`` acts as a Flask configuration module, it
+can be used to alter the settings of Flask itself,
+as well as Flask extensions like ``flask-wtf``, ``flask-cache``,
+``flask-migrate``, and ``flask-appbuilder``. Flask App Builder, the web
+framework used by Superset, offers many configuration settings. Please consult
+the `Flask App Builder Documentation
+<https://flask-appbuilder.readthedocs.org/en/latest/config.html>`_
+for more information on how to configure it.
+
+Make sure to change:
+
+* *SQLALCHEMY_DATABASE_URI*, by default it is stored at *~/.superset/superset.db*
+* *SECRET_KEY*, to a long random string
+
+In case you need to exempt endpoints from CSRF protection, e.g. if you are running a custom
+auth postback endpoint, you can add them to *WTF_CSRF_EXEMPT_LIST*: ::
+
+     WTF_CSRF_EXEMPT_LIST = ['']
+
+
+.. _ref_database_deps:
+
+Caching
+-------
+
+Superset uses `Flask-Cache <https://pythonhosted.org/Flask-Cache/>`_ for
+caching purposes. Configuring your caching backend is as easy as providing
+a ``CACHE_CONFIG`` constant in your ``superset_config.py`` that
+complies with the Flask-Cache specifications.
+
+Flask-Cache supports multiple caching backends (Redis, Memcached,
+SimpleCache (in-memory), or the local filesystem). If you are going to use
+Memcached please use the `pylibmc` client library as `python-memcached` does
+not handle storing binary data correctly. If you use Redis, please install
+the `redis <https://pypi.python.org/pypi/redis>`_ Python package: ::
+
+    pip install redis
+
+Timeouts are set in the Superset metadata and resolved by going
+up the "timeout searchpath": from your slice configuration, to your
+data source's configuration, to your database's, and ultimately falling back
+to the global default defined in ``CACHE_CONFIG``.
+
+.. code-block:: python
+
+    CACHE_CONFIG = {
+        'CACHE_TYPE': 'redis',
+        'CACHE_DEFAULT_TIMEOUT': 60 * 60 * 24, # 1 day default (in secs)
+        'CACHE_KEY_PREFIX': 'superset_results',
+        'CACHE_REDIS_URL': 'redis://localhost:6379/0',
+    }
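
The "timeout searchpath" fallback described above can be sketched as a small helper; all names here are illustrative, not Superset's actual API:

```python
# Illustrative sketch of the "timeout searchpath": the first
# explicitly-set timeout wins, ultimately falling back to the
# global default. Hypothetical names, not Superset's internals.
from typing import Optional

GLOBAL_DEFAULT_TIMEOUT = 60 * 60 * 24  # CACHE_DEFAULT_TIMEOUT from CACHE_CONFIG

def resolve_cache_timeout(
    slice_timeout: Optional[int] = None,
    datasource_timeout: Optional[int] = None,
    database_timeout: Optional[int] = None,
) -> int:
    # Walk the searchpath from most to least specific.
    for timeout in (slice_timeout, datasource_timeout, database_timeout):
        if timeout is not None:
            return timeout
    return GLOBAL_DEFAULT_TIMEOUT
```

For example, a slice-level timeout would win over any datasource- or database-level setting.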
+
+It is also possible to pass a custom cache initialization function in the
+config to handle additional caching use cases. The function must return an
+object that is compatible with the `Flask-Cache <https://pythonhosted.org/Flask-Cache/>`_ API.
+
+.. code-block:: python
+
+    from custom_caching import CustomCache
+
+    def init_cache(app):
+        """Takes an app instance and returns a custom cache backend"""
+        config = {
+            'CACHE_DEFAULT_TIMEOUT': 60 * 60 * 24, # 1 day default (in secs)
+            'CACHE_KEY_PREFIX': 'superset_results',
+        }
+        return CustomCache(app, config)
+
+    CACHE_CONFIG = init_cache
+
+Superset has a Celery task that will periodically warm up the cache based on
+different strategies. To use it, add the following to the `CELERYBEAT_SCHEDULE`
+section in `config.py`:
+
+.. code-block:: python
+
+    CELERYBEAT_SCHEDULE = {
+        'cache-warmup-hourly': {
+            'task': 'cache-warmup',
+            'schedule': crontab(minute=0, hour='*'),  # hourly
+            'kwargs': {
+                'strategy_name': 'top_n_dashboards',
+                'top_n': 5,
+                'since': '7 days ago',
+            },
+        },
+    }
+
+This will cache all the charts in the top 5 most popular dashboards every hour.
+For other strategies, check the `superset/tasks/cache.py` file.
+
+Caching Thumbnails
+------------------
+
+This is an optional feature that can be turned on by activating its feature flags in the config:
+
+.. code-block:: python
+
+    FEATURE_FLAGS = {
+        "THUMBNAILS": True,
+        "THUMBNAILS_SQLA_LISTENERS": True,
+    }
+
+
+For this feature you will need a cache system and Celery workers. All thumbnails are stored in the cache and are processed
+asynchronously by the workers.
+
+An example config where images are stored on S3 could be:
+
+.. code-block:: python
+
+    from flask import Flask
+    from s3cache.s3cache import S3Cache
+
+    ...
+
+    class CeleryConfig(object):
+        BROKER_URL = "redis://localhost:6379/0"
+        CELERY_IMPORTS = ("superset.sql_lab", "superset.tasks", "superset.tasks.thumbnails")
+        CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
+        CELERYD_PREFETCH_MULTIPLIER = 10
+        CELERY_ACKS_LATE = True
+
+
+    CELERY_CONFIG = CeleryConfig
+
+    def init_thumbnail_cache(app: Flask) -> S3Cache:
+        return S3Cache("bucket_name", 'thumbs_cache/')
+
+
+    THUMBNAIL_CACHE_CONFIG = init_thumbnail_cache
+    # Async selenium thumbnail task will use the following user
+    THUMBNAIL_SELENIUM_USER = "Admin"
+
+Using the above example, cache keys for dashboards will be `superset_thumb__dashboard__{ID}`.
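
As a hypothetical illustration of that key format (not Superset's actual implementation):

```python
# Illustrative helper mirroring the thumbnail cache-key format
# noted above; the function name is hypothetical.
def thumbnail_cache_key(object_type: str, object_id: int) -> str:
    return f"superset_thumb__{object_type}__{object_id}"
```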
+
+You can override the base URL for selenium using:
+
+.. code-block:: python
+
+    WEBDRIVER_BASEURL = "https://superset.company.com"
+
+
+Additional Selenium web driver config can be set using `WEBDRIVER_CONFIGURATION`.
+
+You can implement a custom function to authenticate Selenium; the default uses the flask-login session cookie.
+An example of a custom function signature:
+
+.. code-block:: python
+
+    def auth_driver(driver: WebDriver, user: "User") -> WebDriver:
+        pass
+
+
+Then on config:
+
+.. code-block:: python
+
+    WEBDRIVER_AUTH_FUNC = auth_driver
+
+Database dependencies
+---------------------
+
+Superset does not ship bundled with connectivity to databases, except
+for SQLite, which is part of the Python standard library.
+You'll need to install the required packages for the database you
+want to use as your metadata database as well as the packages needed to
+connect to the databases you want to access through Superset.
+
+Here's a list of some of the recommended packages.
+
++------------------+---------------------------------------+-------------------------------------------------+
+| database         | pypi package                          | SQLAlchemy URI prefix                           |
++==================+=======================================+=================================================+
+| Amazon Athena    | ``pip install "PyAthenaJDBC>1.0.9"``  | ``awsathena+jdbc://``                           |
++------------------+---------------------------------------+-------------------------------------------------+
+| Amazon Athena    | ``pip install "PyAthena>1.2.0"``      | ``awsathena+rest://``                           |
++------------------+---------------------------------------+-------------------------------------------------+
+| Amazon Redshift  | ``pip install sqlalchemy-redshift``   | ``redshift+psycopg2://``                        |
++------------------+---------------------------------------+-------------------------------------------------+
+| Apache Drill     | ``pip install sqlalchemy-drill``      | For the REST API:                               |
+|                  |                                       | ``drill+sadrill://``                            |
+|                  |                                       | For JDBC:                                       |
+|                  |                                       | ``drill+jdbc://``                               |
++------------------+---------------------------------------+-------------------------------------------------+
+| Apache Druid     | ``pip install pydruid``               | ``druid://``                                    |
++------------------+---------------------------------------+-------------------------------------------------+
+| Apache Hive      | ``pip install pyhive``                | ``hive://``                                     |
++------------------+---------------------------------------+-------------------------------------------------+
+| Apache Impala    | ``pip install impyla``                | ``impala://``                                   |
++------------------+---------------------------------------+-------------------------------------------------+
+| Apache Kylin     | ``pip install kylinpy``               | ``kylin://``                                    |
++------------------+---------------------------------------+-------------------------------------------------+
+| Apache Pinot     | ``pip install pinotdb``               | ``pinot+http://CONTROLLER:5436/``               |
+|                  |                                       | ``query?server=http://CONTROLLER:5983/``        |
++------------------+---------------------------------------+-------------------------------------------------+
+| Apache Spark SQL | ``pip install pyhive``                | ``jdbc+hive://``                                |
++------------------+---------------------------------------+-------------------------------------------------+
+| BigQuery         | ``pip install pybigquery``            | ``bigquery://``                                 |
++------------------+---------------------------------------+-------------------------------------------------+
+| ClickHouse       | ``pip install sqlalchemy-clickhouse`` |                                                 |
++------------------+---------------------------------------+-------------------------------------------------+
+| CockroachDB      | ``pip install cockroachdb``           | ``cockroachdb://``                              |
++------------------+---------------------------------------+-------------------------------------------------+
+| Dremio           | ``pip install sqlalchemy_dremio``     | ``dremio://``                                   |
++------------------+---------------------------------------+-------------------------------------------------+
+| Elasticsearch    | ``pip install elasticsearch-dbapi``   | ``elasticsearch+http://``                       |
++------------------+---------------------------------------+-------------------------------------------------+
+| Exasol           | ``pip install sqlalchemy-exasol``     | ``exa+pyodbc://``                               |
++------------------+---------------------------------------+-------------------------------------------------+
+| Google Sheets    | ``pip install gsheetsdb``             | ``gsheets://``                                  |
++------------------+---------------------------------------+-------------------------------------------------+
+| IBM Db2          | ``pip install ibm_db_sa``             | ``db2+ibm_db://``                               |
++------------------+---------------------------------------+-------------------------------------------------+
+| MySQL            | ``pip install mysqlclient``           | ``mysql://``                                    |
++------------------+---------------------------------------+-------------------------------------------------+
+| Oracle           | ``pip install cx_Oracle``             | ``oracle://``                                   |
++------------------+---------------------------------------+-------------------------------------------------+
+| PostgreSQL       | ``pip install psycopg2``              | ``postgresql+psycopg2://``                      |
++------------------+---------------------------------------+-------------------------------------------------+
+| Presto           | ``pip install pyhive``                | ``presto://``                                   |
++------------------+---------------------------------------+-------------------------------------------------+
+| Snowflake        | ``pip install snowflake-sqlalchemy``  | ``snowflake://``                                |
++------------------+---------------------------------------+-------------------------------------------------+
+| SQLite           |                                       | ``sqlite://``                                   |
++------------------+---------------------------------------+-------------------------------------------------+
+| SQL Server       | ``pip install pymssql``               | ``mssql://``                                    |
++------------------+---------------------------------------+-------------------------------------------------+
+| Teradata         | ``pip install sqlalchemy-teradata``   | ``teradata://``                                 |
++------------------+---------------------------------------+-------------------------------------------------+
+| Vertica          | ``pip install                         |  ``vertica+vertica_python://``                  |
+|                  | sqlalchemy-vertica-python``           |                                                 |
++------------------+---------------------------------------+-------------------------------------------------+
+| Hana             | ``pip install hdbcli sqlalchemy-hana``|  ``hana://``                                    |
+|                  | or                                    |                                                 |
+|                  | ``pip install apache-superset[hana]`` |                                                 |
++------------------+---------------------------------------+-------------------------------------------------+
+
+
+Note that many other databases are supported, the main criterion being the
+existence of a functional SQLAlchemy dialect and Python driver. Googling
+the keyword ``sqlalchemy`` in addition to a keyword that describes the
+database you want to connect to should get you to the right place.
+
+PostgreSQL
+------------
+
+The connection string for PostgreSQL looks like this ::
+
+    postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}
+
+Additional parameters may be configured via the ``extra`` field under ``engine_params``.
+If you would like to require SSL, here is a sample configuration:
+
+.. code-block:: json
+
+    {
+        "metadata_params": {},
+        "engine_params": {
+              "connect_args":{
+                    "sslmode": "require",
+                    "sslrootcert": "/path/to/root_cert"
+            }
+         }
+    }
+
+If the key ``sslrootcert`` is present, the server's certificate will be verified as signed by the given Certificate Authority (CA).
+
+If you would like to enable mutual SSL here is a sample configuration:
+
+.. code-block:: json
+
+    {
+        "metadata_params": {},
+        "engine_params": {
+              "connect_args":{
+                    "sslmode": "require",
+                    "sslcert": "/path/to/client_cert",
+                    "sslkey": "/path/to/client_key",
+                    "sslrootcert": "/path/to/root_cert"
+            }
+         }
+    }
+
+See `psycopg2 SQLAlchemy <https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2>`_.
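
Because usernames and passwords may contain URL-reserved characters, it can be safer to assemble the connection string programmatically. A minimal sketch using only the standard library (all values are placeholders):

```python
from urllib.parse import quote_plus

def pg_uri(username: str, password: str, host: str, port: int, database: str) -> str:
    # quote_plus escapes characters like '@' and '/' that would
    # otherwise break the SQLAlchemy URI.
    return (
        f"postgresql+psycopg2://{quote_plus(username)}:{quote_plus(password)}"
        f"@{host}:{port}/{database}"
    )
```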
+
+Hana
+------------
+
+The connection string for Hana looks like this ::
+
+    hana://{username}:{password}@{host}:{port}
+
+
+(AWS) Athena
+------------
+
+The connection string for Athena looks like this ::
+
+    awsathena+jdbc://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...
+
+Where you need to escape/encode at least the s3_staging_dir, i.e., ::
+
+    s3://... -> s3%3A//...
+
+You can also use the `PyAthena` library (no Java required) like this ::
+
+    awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...
+
+See `PyAthena <https://github.com/laughingman7743/PyAthena#sqlalchemy>`_.
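
A minimal sketch of that encoding using the standard library (bucket, region, and schema are placeholders):

```python
from urllib.parse import quote_plus

# Encode the s3_staging_dir before interpolating it into the query
# string of the Athena SQLAlchemy URI; quote_plus escapes the ':' and
# '/' characters, which covers the s3:// -> s3%3A// requirement above.
staging_dir = quote_plus("s3://my-bucket/staging/")
uri = (
    "awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}"
    "@athena.us-east-1.amazonaws.com/default?s3_staging_dir=" + staging_dir
)
```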
+
+(Google) BigQuery
+-----------------
+
+The connection string for BigQuery looks like this ::
+
+    bigquery://{project_id}
+
+Additionally, you will need to configure authentication via a
+Service Account. Create your Service Account via the Google
+Cloud Platform control panel, provide it access to the appropriate
+BigQuery datasets, and download the JSON configuration file
+for the service account. In Superset, add a JSON blob to
+the "Secure Extra" field in the database configuration page
+with the following format ::
+
+    {
+        "credentials_info": <contents of credentials JSON file>
+    }
+
+The resulting file should have this structure ::
+
+    {
+        "credentials_info": {
+            "type": "service_account",
+            "project_id": "...",
+            "private_key_id": "...",
+            "private_key": "...",
+            "client_email": "...",
+            "client_id": "...",
+            "auth_uri": "...",
+            "token_uri": "...",
+            "auth_provider_x509_cert_url": "...",
+            "client_x509_cert_url": "..."
+        }
+    }
+
+You should then be able to connect to your BigQuery datasets.
+
+To be able to upload data, e.g. sample data, the python library `pandas_gbq` is required.
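As a convenience, the "Secure Extra" blob can be generated from the downloaded key file with a few lines of Python. A sketch (the field values below are placeholders, not real credentials; in practice you would load the JSON key file itself):

```python
import json

# Placeholder service-account fields; real values come from the JSON key
# file downloaded from the Google Cloud console.
credentials_info = {
    "type": "service_account",
    "project_id": "my-project",
    "private_key_id": "...",
    "private_key": "...",
    "client_email": "svc@my-project.iam.gserviceaccount.com",
}

# The value to paste into the "Secure Extra" field.
secure_extra = json.dumps({"credentials_info": credentials_info}, indent=4)
print(secure_extra)
```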
+
+
+Elasticsearch
+-------------
+
+The connection string for Elasticsearch looks like this ::
+
+    elasticsearch+http://{user}:{password}@{host}:9200/
+
+Using HTTPS ::
+
+    elasticsearch+https://{user}:{password}@{host}:9200/
+
+
+Elasticsearch has a default limit of 10000 rows, so you can increase this limit on your cluster
+or set Superset's row limit in the config ::
+
+    ROW_LIMIT = 10000
+
+You can query multiple indices in SQL Lab, for example ::
+
+    select timestamp, agent from "logstash-*"
+
+But, to use visualizations for multiple indices you need to create an alias index on your cluster ::
+
+    POST /_aliases
+    {
+        "actions" : [
+            { "add" : { "index" : "logstash-**", "alias" : "logstash_all" } }
+        ]
+    }
+
+Then register your table with the ``alias`` name ``logstash_all``.
+
+Snowflake
+---------
+
+The connection string for Snowflake looks like this ::
+
+    snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}
+
+The schema is not necessary in the connection string, as it is defined per table/query.
+The role and warehouse can be omitted if defaults are defined for the user, i.e. ::
+
+    snowflake://{user}:{password}@{account}.{region}/{database}
+
+Make sure the user has privileges to access and use all required
+databases/schemas/tables/views/warehouses, as the Snowflake SQLAlchemy engine does
+not test for user/role rights during engine creation by default. However, when
+pressing the "Test Connection" button in the Create or Edit Database dialog,
+user/role credentials are validated by passing `"validate_default_parameters": True`
+to the `connect()` method during engine creation. If the user/role is not authorized
+to access the database, an error is recorded in the Superset logs.
+
+See `Snowflake SQLAlchemy <https://github.com/snowflakedb/snowflake-sqlalchemy>`_.
+
+Teradata
+---------
+
+The connection string for Teradata looks like this ::
+
+    teradata://{user}:{password}@{host}
+
+*Note*: It's required to have Teradata ODBC drivers installed and environment variables configured for the SQLAlchemy dialect to work properly. Teradata ODBC drivers are available here: https://downloads.teradata.com/download/connectivity/odbc-driver/linux
+
+Required environment variables: ::
+
+    export ODBCINI=/.../teradata/client/ODBC_64/odbc.ini
+    export ODBCINST=/.../teradata/client/ODBC_64/odbcinst.ini
+
+See `Teradata SQLAlchemy <https://github.com/Teradata/sqlalchemy-teradata>`_.
+
+Apache Drill
+------------
+At the time of writing, the SQLAlchemy dialect is not available on PyPI and must be downloaded here:
+`SQLAlchemy Drill <https://github.com/JohnOmernik/sqlalchemy-drill>`_
+
+Alternatively, you can install it completely from the command line as follows: ::
+
+    git clone https://github.com/JohnOmernik/sqlalchemy-drill
+    cd sqlalchemy-drill
+    python3 setup.py install
+
+Once that is done, you can connect to Drill in two ways, either via the REST interface or by JDBC.  If you are connecting via JDBC, you must have the
+Drill JDBC Driver installed.
+
+The basic connection string for Drill looks like this ::
+
+    drill+sadrill://{username}:{password}@{host}:{port}/{storage_plugin}?use_ssl=True
+
+If you are using JDBC to connect to Drill, the connection string looks like this: ::
+
+    drill+jdbc://{username}:{password}@{host}:{port}/{storage_plugin}
+
+For a complete tutorial about how to use Apache Drill with Superset, see this tutorial:
+`Visualize Anything with Superset and Drill <http://thedataist.com/visualize-anything-with-superset-and-drill/>`_
+
+Deeper SQLAlchemy integration
+-----------------------------
+
+It is possible to tweak the database connection information using the
+parameters exposed by SQLAlchemy. In the ``Database`` edit view, you will
+find an ``extra`` field as a ``JSON`` blob.
+
+.. image:: _static/images/tutorial/add_db.png
+   :scale: 30 %
+
+This JSON string contains extra configuration elements. The ``engine_params``
+object gets unpacked into the
+`sqlalchemy.create_engine <https://docs.sqlalchemy.org/en/latest/core/engines.html#sqlalchemy.create_engine>`_ call,
+while the ``metadata_params`` get unpacked into the
+`sqlalchemy.MetaData <https://docs.sqlalchemy.org/en/rel_1_2/core/metadata.html#sqlalchemy.schema.MetaData>`_ call. Refer to the SQLAlchemy docs for more information.
+
+.. note:: If you're using CTAS in SQL Lab with PostgreSQL,
+    take a look at :ref:`ref_ctas_engine_config` for specific ``engine_params``.
+
+
+Schemas (Postgres & Redshift)
+-----------------------------
+
+Postgres and Redshift, as well as other databases,
+use the concept of **schema** as a logical entity
+on top of the **database**. For Superset to connect to a specific schema,
+there's a **schema** parameter you can set in the table form.
+
+
+External Password store for SQLAlchemy connections
+--------------------------------------------------
+It is possible to use an external store for your database passwords. This is
+useful if you are running a custom secret distribution framework and do not wish
+to store secrets in Superset's meta database.
+
+Example:
+Write a function that takes a single argument of type ``sqla.engine.url`` and returns
+the password for the given connection string. Then set ``SQLALCHEMY_CUSTOM_PASSWORD_STORE``
+in your config file to point to that function. ::
+
+    def example_lookup_password(url):
+        secret = <<get password from external framework>>
+        return secret
+
+    SQLALCHEMY_CUSTOM_PASSWORD_STORE = example_lookup_password
+
+A common pattern is to use environment variables to make secrets available.
+``SQLALCHEMY_CUSTOM_PASSWORD_STORE`` can also be used for that purpose. ::
+
+    def example_password_as_env_var(url):
+        # assuming the uri looks like
+        # mysql://localhost?superset_user:{SUPERSET_PASSWORD}
+        return url.password.format(**os.environ)
+
+    SQLALCHEMY_CUSTOM_PASSWORD_STORE = example_password_as_env_var
+
+
+SSL Access to databases
+-----------------------
+This example worked with a MySQL database that requires SSL. The configuration
+may differ with other backends. This is what was put in the ``extra``
+parameter ::
+
+    {
+        "metadata_params": {},
+        "engine_params": {
+              "connect_args":{
+                  "sslmode":"require",
+                  "sslrootcert": "/path/to/my/pem"
+            }
+         }
+    }
+
+
+Druid
+-----
+
+The native Druid connector (behind the ``DRUID_IS_ACTIVE`` feature flag)
+is slowly getting deprecated in favor of the SQLAlchemy/DBAPI connector made
+available in the ``pydruid`` library.
+
+To use a custom SSL certificate to validate HTTPS requests, the certificate
+contents can be entered in the ``Root Certificate`` field in the Database
+dialog. When using a custom certificate, ``pydruid`` will automatically use
+``https`` scheme. To disable SSL verification add the following to extras:
+``"engine_params": {"connect_args": {"scheme": "https", "ssl_verify_cert": false}}``
+
+Dremio
+------
+
+Install the following dependencies to connect to Dremio:
+
+* Dremio SQLAlchemy: ``pip install sqlalchemy_dremio``
+
+  * If you receive any errors during the installation of ``sqlalchemy_dremio``, make sure to install the prerequisites for PyODBC properly by following the instructions for your OS here: https://github.com/narendrans/sqlalchemy_dremio#installation
+* Dremio's ODBC driver: https://www.dremio.com/drivers/
+
+Example SQLAlchemy URI: ``dremio://dremio:dremio123@localhost:31010/dremio``
+
+Presto
+------
+
+By default Superset assumes the most recent version of Presto is being used when
+querying the datasource. If you're using an older version of Presto, you can configure
+it in the ``extra`` parameter::
+
+    {
+        "version": "0.123"
+    }
+
+
+Exasol
+---------
+
+The connection string for Exasol looks like this ::
+
+    exa+pyodbc://{user}:{password}@{host}
+
+*Note*: It's required to have Exasol ODBC drivers installed for the SQLAlchemy dialect to work properly. Exasol ODBC drivers are available here: https://www.exasol.com/portal/display/DOWNLOAD/Exasol+Download+Section
+
+Example config (odbcinst.ini can be left empty) ::
+
+    $ cat $/.../path/to/odbc.ini
+    [EXAODBC]
+    DRIVER = /.../path/to/driver/EXASOL_driver.so
+    EXAHOST = host:8563
+    EXASCHEMA = main
+
+See `SQLAlchemy for Exasol <https://github.com/blue-yonder/sqlalchemy_exasol>`_.
+
+CORS
+----
+
+The extra CORS Dependency must be installed:
+
+.. code-block:: text
+
+    pip install apache-superset[cors]
+
+The following keys in `superset_config.py` can be specified to configure CORS:
+
+
+* ``ENABLE_CORS``: Must be set to True in order to enable CORS
+* ``CORS_OPTIONS``: options passed to Flask-CORS (`documentation <https://flask-cors.corydolphin.com/en/latest/api.html#extension>`_)
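A minimal sketch of what this could look like in `superset_config.py` (the allowed origin below is a hypothetical example):

```python
ENABLE_CORS = True

# Options are forwarded to Flask-CORS; 'origins' restricts which sites
# may issue cross-origin requests against the Superset API.
CORS_OPTIONS = {
    "supports_credentials": True,
    "origins": ["https://dashboards.example.com"],
}
```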
+
+
+Domain Sharding
+---------------
+
+Chrome allows up to 6 open connections per domain at a time. When a dashboard
+contains more than 6 slices, fetch requests are often queued up waiting for the
+next available socket. `PR 5039 <https://github.com/apache/incubator-superset/pull/5039>`_ adds domain sharding to Superset.
+This feature is enabled by configuration only (by default Superset
+doesn't allow cross-domain requests).
+
+* ``SUPERSET_WEBSERVER_DOMAINS``: list of allowed hostnames for the domain sharding feature. Defaults to `None`.
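For example, a sketch of the setting in `superset_config.py` (the hostnames below are hypothetical; each one must resolve to the same Superset deployment):

```python
# Hypothetical shard hostnames pointing at the same Superset instance,
# allowing the browser to open more parallel connections.
SUPERSET_WEBSERVER_DOMAINS = [
    "superset-1.example.com",
    "superset-2.example.com",
]
```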
+
+
+Middleware
+----------
+
+Superset allows you to add your own middleware. To add your own middleware, update the ``ADDITIONAL_MIDDLEWARE`` key in
+your `superset_config.py`. ``ADDITIONAL_MIDDLEWARE`` should be a list of your additional middleware classes.
+
+For example, to use AUTH_REMOTE_USER from behind a proxy server like nginx, you have to add a simple middleware class to
+add the value of ``HTTP_X_PROXY_REMOTE_USER`` (or any other custom header from the proxy) to Gunicorn's ``REMOTE_USER``
+environment variable: ::
+
+    class RemoteUserMiddleware(object):
+        def __init__(self, app):
+            self.app = app
+        def __call__(self, environ, start_response):
+            user = environ.pop('HTTP_X_PROXY_REMOTE_USER', None)
+            environ['REMOTE_USER'] = user
+            return self.app(environ, start_response)
+
+    ADDITIONAL_MIDDLEWARE = [RemoteUserMiddleware, ]
+
+*Adapted from http://flask.pocoo.org/snippets/69/*
+
+Event Logging
+-------------
+
+Superset by default logs special action events in its database. These logs can be accessed in the UI by navigating to
+"Security" -> "Action Log". You can freely customize these logs by implementing your own event log class.
+
+Example of a simple JSON to Stdout class::
+
+    class JSONStdOutEventLogger(AbstractEventLogger):
+
+        def log(self, user_id, action, *args, **kwargs):
+            records = kwargs.get('records', list())
+            dashboard_id = kwargs.get('dashboard_id')
+            slice_id = kwargs.get('slice_id')
+            duration_ms = kwargs.get('duration_ms')
+            referrer = kwargs.get('referrer')
+
+            for record in records:
+                log = dict(
+                    action=action,
+                    json=record,
+                    dashboard_id=dashboard_id,
+                    slice_id=slice_id,
+                    duration_ms=duration_ms,
+                    referrer=referrer,
+                    user_id=user_id
+                )
+                print(json.dumps(log))
+
+
+Then, in Superset's config, pass an instance of the logger type you want to use: ::
+
+    EVENT_LOGGER = JSONStdOutEventLogger()
+
+
+Upgrading
+---------
+
+Upgrading should be as straightforward as running::
+
+    pip install apache-superset --upgrade
+    superset db upgrade
+    superset init
+
+We recommend following standard best practices when upgrading Superset, such
+as taking a database backup prior to the upgrade, upgrading a staging
+environment prior to upgrading production, and upgrading production while fewer
+users are active on the platform.
+
+.. note ::
+   Some upgrades may contain backward-incompatible changes or require
+   scheduling downtime. When that is the case, contributors attach notes in
+   ``UPDATING.md`` in the repository. It's recommended to review this
+   file prior to running an upgrade.
+
+
+Celery Tasks
+------------
+
+On large analytic databases, it's common to run queries that
+execute for minutes or hours.
+To enable support for long running queries that
+execute beyond the typical web request's timeout (30-60 seconds), it is
+necessary to configure an asynchronous backend for Superset which consists of:
+
+* one or many Superset workers (each implemented as a Celery worker), which
+  can be started with the ``celery worker`` command; run
+  ``celery worker --help`` to view the related options.
+* a celery broker (message queue) for which we recommend using Redis
+  or RabbitMQ
+* a results backend that defines where the worker will persist the query
+  results
+
+Configuring Celery requires defining a ``CELERY_CONFIG`` in your
+``superset_config.py``. Both the worker and web server processes should
+have the same configuration.
+
+.. code-block:: python
+
+    class CeleryConfig(object):
+        BROKER_URL = 'redis://localhost:6379/0'
+        CELERY_IMPORTS = (
+            'superset.sql_lab',
+            'superset.tasks',
+        )
+        CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
+        CELERYD_LOG_LEVEL = 'DEBUG'
+        CELERYD_PREFETCH_MULTIPLIER = 10
+        CELERY_ACKS_LATE = True
+        CELERY_ANNOTATIONS = {
+            'sql_lab.get_sql_results': {
+                'rate_limit': '100/s',
+            },
+            'email_reports.send': {
+                'rate_limit': '1/s',
+                'time_limit': 120,
+                'soft_time_limit': 150,
+                'ignore_result': True,
+            },
+        }
+        CELERYBEAT_SCHEDULE = {
+            'email_reports.schedule_hourly': {
+                'task': 'email_reports.schedule_hourly',
+                'schedule': crontab(minute=1, hour='*'),
+            },
+        }
+
+    CELERY_CONFIG = CeleryConfig
+
+* To start a Celery worker to leverage the configuration run: ::
+
+    celery worker --app=superset.tasks.celery_app:app --pool=prefork -O fair -c 4
+
+* To start a job which schedules periodic background jobs, run ::
+
+    celery beat --app=superset.tasks.celery_app:app
+
+To set up a results backend, you need to pass an instance of a derivative
+of ``cachelib.base.BaseCache`` to the ``RESULTS_BACKEND``
+configuration key in your ``superset_config.py``. It's possible to use
+Memcached, Redis, S3 (https://pypi.python.org/pypi/s3werkzeugcache),
+memory or the file system (in a single server-type setup or for testing),
+or to write your own caching interface. Your ``superset_config.py`` may
+look something like:
+
+.. code-block:: python
+
+    # On S3
+    from s3cache.s3cache import S3Cache
+    S3_CACHE_BUCKET = 'foobar-superset'
+    S3_CACHE_KEY_PREFIX = 'sql_lab_result'
+    RESULTS_BACKEND = S3Cache(S3_CACHE_BUCKET, S3_CACHE_KEY_PREFIX)
+
+    # On Redis
+    from cachelib.redis import RedisCache
+    RESULTS_BACKEND = RedisCache(
+        host='localhost', port=6379, key_prefix='superset_results')
+
+For performance gains, `MessagePack <https://github.com/msgpack/msgpack-python>`_
+and `PyArrow <https://arrow.apache.org/docs/python/>`_ are now used for results
+serialization. This can be disabled by setting ``RESULTS_BACKEND_USE_MSGPACK = False``
+in your configuration, should any issues arise. Please clear your existing results
+cache store when upgrading an existing environment.
+
+**Important notes**
+
+* It is important that all the worker nodes and web servers in
+  the Superset cluster share a common metadata database.
+  This means that SQLite will not work in this context since it has
+  limited support for concurrency and
+  typically lives on the local file system.
+
+* There should only be one instance of ``celery beat`` running in your
+  entire setup. If not, background jobs can get scheduled multiple times
+  resulting in weird behaviors like duplicate delivery of reports,
+  higher than expected load / traffic etc.
+
+* SQL Lab will only run your queries asynchronously if you enable
+  "Asynchronous Query Execution" in your database settings.
+
+
+Email Reports
+-------------
+Email reports allow users to schedule email reports for
+
+* chart and dashboard visualization (Attachment or inline)
+* chart data (CSV attachment or inline table)
+
+**Setup**
+
+Make sure you enable email reports in your configuration file
+
+.. code-block:: python
+
+    ENABLE_SCHEDULED_EMAIL_REPORTS = True
+
+Now you will find two new items in the navigation bar that allow you to schedule email
+reports
+
+* Manage -> Dashboard Emails
+* Manage -> Chart Email Schedules
+
+Schedules are defined in crontab format and each schedule
+can have a list of recipients (all of them can receive a single mail,
+or separate mails). For audit purposes, all outgoing mails can have a
+mandatory bcc.
+
+In order for schedules to get picked up, you need to configure a Celery worker and a Celery beat
+(see the "Celery Tasks" section above). Your Celery configuration also
+needs an entry ``email_reports.schedule_hourly`` in ``CELERYBEAT_SCHEDULE``.
+
+To send emails you need to configure SMTP settings in your configuration file, e.g.
+
+.. code-block:: python
+
+    EMAIL_NOTIFICATIONS = True
+
+    SMTP_HOST = "email-smtp.eu-west-1.amazonaws.com"
+    SMTP_STARTTLS = True
+    SMTP_SSL = False
+    SMTP_USER = "smtp_username"
+    SMTP_PORT = 25
+    SMTP_PASSWORD = os.environ.get("SMTP_PASSWORD")
+    SMTP_MAIL_FROM = "insights@komoot.com"
+
+
+To render dashboards you need to install a local browser on your superset instance
+
+  * `geckodriver <https://github.com/mozilla/geckodriver>`_ and Firefox is preferred
+  * `chromedriver <http://chromedriver.chromium.org/>`_ is a good option too
+
+You need to adjust the ``EMAIL_REPORTS_WEBDRIVER`` accordingly in your configuration.
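For example, a sketch of the setting when geckodriver/Firefox is the installed browser:

```python
# Which webdriver Superset's report worker should drive to render
# screenshots; "firefox" assumes geckodriver is on the PATH.
EMAIL_REPORTS_WEBDRIVER = "firefox"
```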
+
+You also need to specify on behalf of which username to render the dashboards. In general,
+dashboards and charts are not accessible to unauthorized requests, which is why the
+worker needs to take over the credentials of an existing user to take a snapshot. ::
+
+    EMAIL_REPORTS_USER = 'username_with_permission_to_access_dashboards'
+
+
+**Important notes**
+
+* Be mindful of the concurrency setting for celery (using ``-c 4``).
+  Selenium/webdriver instances can consume a lot of CPU / memory on your servers.
+
+* In some cases, if you notice a lot of leaked ``geckodriver`` processes, try running
+  your celery processes with ::
+
+    celery worker --pool=prefork --max-tasks-per-child=128 ...
+
+* It is recommended to run separate workers for ``sql_lab`` and
+  ``email_reports`` tasks. This can be done using the ``queue`` field in ``CELERY_ANNOTATIONS``.
+
+* Adjust ``WEBDRIVER_BASEURL`` in your config if Celery workers can't access Superset via its
+  default value ``http://0.0.0.0:8080/`` (note the port number 8080; many other setups use
+  port 8088).
+
+SQL Lab
+-------
+SQL Lab is a powerful SQL IDE that works with all SQLAlchemy compatible
+databases. By default, queries are executed in the scope of a web
+request so they may eventually time out as queries exceed the maximum duration of a web
+request in your environment, whether it be a reverse proxy or the Superset
+server itself. In such cases, it is preferred to use ``celery`` to run the queries
+in the background. Please follow the examples/notes mentioned above to get your
+celery setup working.
+
+Also note that SQL Lab supports Jinja templating in queries and that it's
+possible to overload
+the default Jinja context in your environment by defining the
+``JINJA_CONTEXT_ADDONS`` in your superset configuration. Objects referenced
+in this dictionary are made available for users to use in their SQL.
+
+.. code-block:: python
+
+    JINJA_CONTEXT_ADDONS = {
+        'my_crazy_macro': lambda x: x*2,
+    }
+
+Besides the default Jinja templating, SQL Lab also supports self-defined template
+processors, configured by setting ``CUSTOM_TEMPLATE_PROCESSORS`` in your superset configuration.
+The values in this dictionary overwrite the default Jinja template processors of the
+specified database engine.
+The example below configures a custom Presto template processor which implements
+its own logic for processing macro templates with regex parsing. It uses ``$`` style
+macros instead of the ``{{ }}`` style of Jinja templating. By configuring it with
+``CUSTOM_TEMPLATE_PROCESSORS``, SQL templates on Presto databases are processed
+by the custom processor rather than the default one.
+
+.. code-block:: python
+
+    def DATE(
+        ts: datetime, day_offset: SupportsInt = 0, hour_offset: SupportsInt = 0
+    ) -> str:
+        """Current day as a string."""
+        day_offset, hour_offset = int(day_offset), int(hour_offset)
+        offset_day = (ts + timedelta(days=day_offset, hours=hour_offset)).date()
+        return str(offset_day)
+
+    class CustomPrestoTemplateProcessor(PrestoTemplateProcessor):
+        """A custom presto template processor."""
+
+        engine = "presto"
+
+        def process_template(self, sql: str, **kwargs) -> str:
+            """Processes a sql template with $ style macro using regex."""
+            # Add custom macros functions.
+            macros = {
+                "DATE": partial(DATE, datetime.utcnow())
+            }  # type: Dict[str, Any]
+            # Update with macros defined in context and kwargs.
+            macros.update(self.context)
+            macros.update(kwargs)
+
+            def replacer(match):
+                """Expand $ style macros with corresponding function calls."""
+                macro_name, args_str = match.groups()
+                args = [a.strip() for a in args_str.split(",")]
+                if args == [""]:
+                    args = []
+                f = macros[macro_name[1:]]
+                return f(*args)
+
+            macro_names = ["$" + name for name in macros.keys()]
+            pattern = r"(%s)\s*\(([^()]*)\)" % "|".join(map(re.escape, macro_names))
+            return re.sub(pattern, replacer, sql)
+
+    CUSTOM_TEMPLATE_PROCESSORS = {
+        CustomPrestoTemplateProcessor.engine: CustomPrestoTemplateProcessor
+    }
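The regex expansion itself can be exercised outside of Superset. A self-contained sketch of the same ``$`` style macro mechanism, using a fixed timestamp so the output is deterministic:

```python
import re
from datetime import datetime, timedelta
from functools import partial


def DATE(ts, day_offset=0, hour_offset=0):
    """Day at the given offset from ts, as a string."""
    offset = timedelta(days=int(day_offset), hours=int(hour_offset))
    return str((ts + offset).date())


# Fixed timestamp so the expansion below is deterministic.
macros = {"DATE": partial(DATE, datetime(2020, 8, 31))}


def process_template(sql):
    """Expand $ style macros with corresponding function calls."""

    def replacer(match):
        macro_name, args_str = match.groups()
        args = [a.strip() for a in args_str.split(",")]
        if args == [""]:
            args = []
        return macros[macro_name[1:]](*args)

    macro_names = ["$" + name for name in macros]
    pattern = r"(%s)\s*\(([^()]*)\)" % "|".join(map(re.escape, macro_names))
    return re.sub(pattern, replacer, sql)


print(process_template("SELECT * FROM logs WHERE ds = '$DATE(-1)'"))
# SELECT * FROM logs WHERE ds = '2020-08-30'
```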
+
+
+SQL Lab also includes a live query validation feature with pluggable backends.
+You can configure which validation implementation is used with which database
+engine by adding a block like the following to your config.py:
+
+.. code-block:: python
+
+     FEATURE_FLAGS = {
+         'SQL_VALIDATORS_BY_ENGINE': {
+             'presto': 'PrestoDBSQLValidator',
+         }
+     }
+
+The available validators and names can be found in `sql_validators/`.
+
+**Scheduling queries**
+
+You can optionally allow your users to schedule queries directly in SQL Lab.
+This is done by adding extra metadata to saved queries, which are then picked
+up by an external scheduler (like `Apache Airflow <https://airflow.apache.org/>`_).
+
+To allow scheduled queries, add the following to your `config.py`:
+
+.. code-block:: python
+
+    FEATURE_FLAGS = {
+        # Configuration for scheduling queries from SQL Lab. This information is
+        # collected when the user clicks "Schedule query", and saved into the `extra`
+        # field of saved queries.
+        # See: https://github.com/mozilla-services/react-jsonschema-form
+        'SCHEDULED_QUERIES': {
+            'JSONSCHEMA': {
+                'title': 'Schedule',
+                'description': (
+                    'In order to schedule a query, you need to specify when it '
+                    'should start running, when it should stop running, and how '
+                    'often it should run. You can also optionally specify '
+                    'dependencies that should be met before the query is '
+                    'executed. Please read the documentation for best practices '
+                    'and more information on how to specify dependencies.'
+                ),
+                'type': 'object',
+                'properties': {
+                    'output_table': {
+                        'type': 'string',
+                        'title': 'Output table name',
+                    },
+                    'start_date': {
+                        'type': 'string',
+                        'title': 'Start date',
+                        # date-time is parsed using the chrono library, see
+                        # https://www.npmjs.com/package/chrono-node#usage
+                        'format': 'date-time',
+                        'default': 'tomorrow at 9am',
+                    },
+                    'end_date': {
+                        'type': 'string',
+                        'title': 'End date',
+                        # date-time is parsed using the chrono library, see
+                        # https://www.npmjs.com/package/chrono-node#usage
+                        'format': 'date-time',
+                        'default': '9am in 30 days',
+                    },
+                    'schedule_interval': {
+                        'type': 'string',
+                        'title': 'Schedule interval',
+                    },
+                    'dependencies': {
+                        'type': 'array',
+                        'title': 'Dependencies',
+                        'items': {
+                            'type': 'string',
+                        },
+                    },
+                },
+            },
+            'UISCHEMA': {
+                'schedule_interval': {
+                    'ui:placeholder': '@daily, @weekly, etc.',
+                },
+                'dependencies': {
+                    'ui:help': (
+                        'Check the documentation for the correct format when '
+                        'defining dependencies.'
+                    ),
+                },
+            },
+            'VALIDATION': [
+                # ensure that start_date <= end_date
+                {
+                    'name': 'less_equal',
+                    'arguments': ['start_date', 'end_date'],
+                    'message': 'End date cannot be before start date',
+                    # this is where the error message is shown
+                    'container': 'end_date',
+                },
+            ],
+            # link to the scheduler; this example links to an Airflow pipeline
+            # that uses the query id and the output table as its name
+            'linkback': (
+                'https://airflow.example.com/admin/airflow/tree?'
+                'dag_id=query_${id}_${extra_json.schedule_info.output_table}'
+            ),
+        },
+    }
+
+This feature flag is based on `react-jsonschema-form <https://github.com/mozilla-services/react-jsonschema-form>`_,
+and will add a button called "Schedule Query" to SQL Lab. When the button is
+clicked, a modal will show up where the user can add the metadata required for
+scheduling the query.
+
+This information can then be retrieved from the endpoint `/savedqueryviewapi/api/read`
+and used to schedule the queries that have `scheduled_queries` in their JSON
+metadata. For schedulers other than Airflow, additional fields can be easily
+added to the configuration file above.
+
+Celery Flower
+-------------
+Flower is a web-based tool for monitoring the Celery cluster which you can
+install from pip: ::
+
+    pip install flower
+
+and run via: ::
+
+    celery flower --app=superset.tasks.celery_app:app
+
+Building from source
+---------------------
+
+More advanced users may want to build Superset from sources. That
+would be the case if you fork the project to add features specific to
+your environment. See `CONTRIBUTING.md#setup-local-environment-for-development <https://github.com/apache/incubator-superset/blob/master/CONTRIBUTING.md#setup-local-environment-for-development>`_.
+
+Blueprints
+----------
+
+`Blueprints are Flask's reusable apps <https://flask.palletsprojects.com/en/1.0.x/tutorial/views/>`_.
+Superset allows you to specify an array of Blueprints
+in your ``superset_config`` module. Here's
+an example of how this can work with a simple Blueprint. By doing
+so, you can expect Superset to serve a page that says "OK"
+at the ``/simple_page`` url. This can allow you to run other things such
+as custom data visualization applications alongside Superset, on the
+same server.
+
+.. code-block:: python
+
+    from flask import Blueprint
+    simple_page = Blueprint('simple_page', __name__,
+                                    template_folder='templates')
+    @simple_page.route('/', defaults={'page': 'index'})
+    @simple_page.route('/<page>')
+    def show(page):
+        return "Ok"
+
+    BLUEPRINTS = [simple_page]
+
+StatsD logging
+--------------
+
+Superset is instrumented to log events to StatsD if desired. Most endpoints hit
+are logged as well as key events like query start and end in SQL Lab.
+
+To set up StatsD logging, it's a matter of configuring the logger in your
+``superset_config.py``.
+
+.. code-block:: python
+
+    from superset.stats_logger import StatsdStatsLogger
+    STATS_LOGGER = StatsdStatsLogger(host='localhost', port=8125, prefix='superset')
+
+Note that it's also possible to implement your own logger by deriving
+``superset.stats_logger.BaseStatsLogger``.
+
+
+Install Superset with helm in Kubernetes
+----------------------------------------
+
+You can install Superset into Kubernetes with `Helm <https://helm.sh/>`_. The chart is
+located in ``install/helm``.
+
+To install Superset into your Kubernetes:
+
+.. code-block:: bash
+
+    helm upgrade --install superset ./install/helm/superset
+
+Note that the above command will install Superset into the ``default`` namespace of your Kubernetes cluster.
+
+Custom OAuth2 configuration
+---------------------------
+
+Beyond the FAB-supported providers (GitHub, Twitter, LinkedIn, Google, Azure), it's easy to connect Superset with other OAuth2 Authorization Server implementations that support "code" authorization.
+
+The first step: Configure authorization in Superset ``superset_config.py``.
+
+.. code-block:: python
+
+    AUTH_TYPE = AUTH_OAUTH
+    OAUTH_PROVIDERS = [
+        {   'name':'egaSSO',
+            'token_key':'access_token', # Name of the token in the response of access_token_url
+            'icon':'fa-address-card',   # Icon for the provider
+            'remote_app': {
+                'consumer_key':'myClientId',  # Client Id (Identify Superset application)
+                'consumer_secret':'MySecret', # Secret for this Client Id (Identify Superset application)
+                'request_token_params':{
+                    'scope': 'read'               # Scope for the Authorization
+                },
+                'access_token_method':'POST',    # HTTP Method to call access_token_url
+                'access_token_params':{        # Additional parameters for calls to access_token_url
+                    'client_id':'myClientId'
+                },
+                'access_token_headers':{    # Additional headers for calls to access_token_url
+                    'Authorization': 'Basic Base64EncodedClientIdAndSecret'
+                },
+                'base_url':'https://myAuthorizationServer/oauth2AuthorizationServer/',
+                'access_token_url':'https://myAuthorizationServer/oauth2AuthorizationServer/token',
+                'authorize_url':'https://myAuthorizationServer/oauth2AuthorizationServer/authorize'
+            }
+        }
+    ]
+
+    # Allow user self-registration, creating Flask users from the authorized user
+    AUTH_USER_REGISTRATION = True
+
+    # The default user self registration role
+    AUTH_USER_REGISTRATION_ROLE = "Public"
+
+Second step: Create a ``CustomSsoSecurityManager`` that extends ``SupersetSecurityManager`` and overrides ``oauth_user_info``:
+
+.. code-block:: python
+
+    import logging
+
+    from superset.security import SupersetSecurityManager
+
+    class CustomSsoSecurityManager(SupersetSecurityManager):
+
+        def oauth_user_info(self, provider, response=None):
+            logging.debug("Oauth2 provider: {0}.".format(provider))
+            if provider == 'egaSSO':
+                # As an example, this line issues a GET to base_url + 'userDetails' with Bearer Authentication,
+                # and expects the authorization server to check the token and respond with the user details
+                me = self.appbuilder.sm.oauth_remotes[provider].get('userDetails').data
+                logging.debug("user_data: {0}".format(me))
+                return {'name': me['name'], 'email': me['email'], 'id': me['user_name'], 'username': me['user_name'], 'first_name': '', 'last_name': ''}
+        ...
+
+This file must be located in the same directory as ``superset_config.py``, with the name ``custom_sso_security_manager.py``.
+
+Then add these two lines to ``superset_config.py``:
+
+.. code-block:: python
+
+  from custom_sso_security_manager import CustomSsoSecurityManager
+  CUSTOM_SECURITY_MANAGER = CustomSsoSecurityManager
+
+Feature Flags
+-------------
+
+Superset serves a wide variety of users, and some features are therefore not enabled by default. For example, some deployments have stronger security restrictions than others, so Superset allows features to be enabled or disabled through configuration. Feature owners can add optional functionality to Superset that only affects a subset of users.
+
+You can enable or disable feature flags in ``superset_config.py``:
+
+.. code-block:: python
+
+     DEFAULT_FEATURE_FLAGS = {
+         'CLIENT_CACHE': False,
+         'ENABLE_EXPLORE_JSON_CSRF_PROTECTION': False,
+         'PRESTO_EXPAND_DATA': False,
+     }
+
+Here is a list of flags and descriptions:
+
+* ENABLE_EXPLORE_JSON_CSRF_PROTECTION
+
+  * For security reasons, you may need to enforce CSRF protection on all requests to the explore_json endpoint. Superset uses `flask-csrf <https://sjl.bitbucket.io/flask-csrf/>`_ to add CSRF protection for all POST requests, but this protection doesn't apply to the GET method.
+
+  * When ENABLE_EXPLORE_JSON_CSRF_PROTECTION is set to true, your users cannot make GET requests to explore_json. The default value for this feature is False (the current behavior): explore_json accepts both GET and POST requests. See `PR 7935 <https://github.com/apache/incubator-superset/pull/7935>`_ for more details.
+
+* PRESTO_EXPAND_DATA
+
+  * When this feature is enabled, nested types in Presto will be expanded into extra columns and/or arrays. This is experimental, and doesn't work with all nested types.
+
+
+SIP-15
+------
+
+`SIP-15 <https://github.com/apache/incubator-superset/issues/6360>`_ aims to ensure that time intervals are handled in a consistent and transparent manner for both the Druid and SQLAlchemy connectors.
+
+Prior to SIP-15, SQLAlchemy used inclusive endpoints; however, these may behave as exclusive for string columns (due to lexicographical ordering) if no formatting was defined and the column formatting did not conform to an ISO 8601 date-time (refer to the SIP for details).
+
+To remedy this, rather than having to define the date/time format for every non-ISO 8601 date-time column, one can define a default column mapping at the per-database level via the ``extra`` parameter ::
+
+    {
+        "python_date_format_by_column_name": {
+            "ds": "%Y-%m-%d"
+        }
+    }
+
+**New deployments**
+
+All new Superset deployments should enable SIP-15 via,
+
+.. code-block:: python
+
+    SIP_15_ENABLED = True
+
+**Existing deployments**
+
+Given that it is not apparent whether the chart creator was aware of the time range inconsistencies (and adjusted the endpoints accordingly), changing the behavior of all charts is overly aggressive. Instead SIP-15 provides a soft transition, allowing producers (chart owners) to see the impact of the proposed change and adjust their charts accordingly.
+
+Prior to enabling SIP-15 existing deployments should communicate to their users the impact of the change and define a grace period end date (exclusive of course) after which all charts will conform to the [start, end) interval, i.e.,
+
+.. code-block:: python
+
+    from datetime import date
+
+    SIP_15_ENABLED = True
+    SIP_15_GRACE_PERIOD_END = date(<YYYY>, <MM>, <DD>)
+
+To aid with transparency the current endpoint behavior is explicitly called out in the chart time range (post SIP-15 this will be [start, end) for all connectors and databases). One can override the defaults on a per database level via the ``extra``
+parameter ::
+
+    {
+        "time_range_endpoints": ["inclusive", "inclusive"]
+    }
+
+
+Note that in a future release the interim SIP-15 logic will be removed (including the ``time_range_endpoints`` form-data field) via a code change and Alembic migration.
diff --git a/_sources/installation.txt b/_sources/installation.txt
new file mode 100644
index 0000000..76f9c00
--- /dev/null
+++ b/_sources/installation.txt
@@ -0,0 +1,552 @@
+Installation & Configuration
+============================
+
+Getting Started
+---------------
+
+Superset is tested against Python ``2.7`` and Python ``3.4``.
+Airbnb currently uses 2.7.* in production. We do not plan on supporting
+Python ``2.6``.
+
+Cloud-native!
+-------------
+
+Superset is designed to be highly available. It is
+"cloud-native" as it has been designed to scale out in large,
+distributed environments, and works well inside containers.
+While you can easily
+test drive Superset on a modest setup or simply on your laptop,
+there's virtually no limit around scaling out the platform.
+Superset is also cloud-native in the sense that it is
+flexible and lets you choose your web server (Gunicorn, Nginx, Apache),
+your metadata database engine (MySQL, Postgres, MariaDB, ...),
+your message queue (Redis, RabbitMQ, SQS, ...),
+your results backend (S3, Redis, Memcached, ...), your caching layer
+(memcached, Redis, ...), works well with services like NewRelic, StatsD and
+DataDog, and has the ability to run analytic workloads against
+most popular database technologies.
+
+Superset is battle tested in large environments with hundreds
+of concurrent users. Airbnb's production environment runs inside
+Kubernetes and serves 600+ daily active users viewing over 100K charts a
+day.
+
+The Superset web server and the Superset Celery workers (optional)
+are stateless, so you can scale out by running on as many servers
+as needed.
+
+OS dependencies
+---------------
+
+Superset stores database connection information in its metadata database.
+For that purpose, we use the ``cryptography`` Python library to encrypt
+connection passwords. Unfortunately this library has OS level dependencies.
+
+You may want to attempt the next step
+("Superset installation and initialization") and come back to this step if
+you encounter an error.
+
+Here's how to install them:
+
+For **Debian** and **Ubuntu**, the following command will ensure that
+the required dependencies are installed: ::
+
+    sudo apt-get install build-essential libssl-dev libffi-dev python-dev python-pip libsasl2-dev libldap2-dev
+
+For **Fedora** and **RHEL-derivatives**, the following command will ensure
+that the required dependencies are installed: ::
+
+    sudo yum upgrade python-setuptools
+    sudo yum install gcc gcc-c++ libffi-devel python-devel python-pip python-wheel openssl-devel libsasl2-devel openldap-devel
+
+On **OSX**, the system Python is not recommended. Homebrew's Python also ships with pip: ::
+
+    brew install pkg-config libffi openssl python
+    env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography==1.7.2
+
+**Windows** isn't officially supported at this point, but if you want to
+attempt it, download `get-pip.py <https://bootstrap.pypa.io/get-pip.py>`_, and run ``python get-pip.py`` which may need admin access. Then run the following: ::
+
+    C:\> pip install cryptography
+
+    # You may also have to create C:\Temp
+    C:\> md C:\Temp
+
+Python virtualenv
+-----------------
+It is recommended to install Superset inside a virtualenv. Python 3 already ships with virtualenv; for
+Python 2 you need to install it. If it's packaged for your operating system, install it from there;
+otherwise you can install it from pip: ::
+
+    pip install virtualenv
+
+You can create and activate a virtualenv by: ::
+
+    # virtualenv is shipped in Python 3 as pyvenv
+    virtualenv venv
+    . ./venv/bin/activate
+
+On Windows the syntax for activating it is a bit different: ::
+
+    venv\Scripts\activate
+
+Once you have activated your virtualenv, everything you do is confined inside it.
+To exit a virtualenv just type ``deactivate``.
+
+Python's setup tools and pip
+----------------------------
+Improve your odds of a smooth installation by getting the very latest ``pip``
+and ``setuptools`` libraries: ::
+
+    pip install --upgrade setuptools pip
+
+Superset installation and initialization
+----------------------------------------
+Follow these few simple steps to install Superset.::
+
+    # Install superset
+    pip install superset
+
+    # Create an admin user (you will be prompted to set username, first and last name before setting a password)
+    fabmanager create-admin --app superset
+
+    # Initialize the database
+    superset db upgrade
+
+    # Load some data to play with
+    superset load_examples
+
+    # Create default roles and permissions
+    superset init
+
+    # Start the web server on port 8088, use -p to bind to another port
+    superset runserver
+
+    # To start a development web server, use the -d switch
+    # superset runserver -d
+
+
+After installation, you should be able to point your browser to the right
+hostname:port `http://localhost:8088 <http://localhost:8088>`_, login using
+the credential you entered while creating the admin account, and navigate to
+`Menu -> Admin -> Refresh Metadata`. This action should bring in all of
+your datasources for Superset to be aware of, and they should show up in
+`Menu -> Datasources`, from where you can start playing with your data!
+
+A proper WSGI HTTP Server
+-------------------------
+
+While you can set up Superset to run on Nginx or Apache, many use
+Gunicorn, preferably in **async mode**, which allows for impressive
+concurrency and is fairly easy to install and configure. Please
+refer to the
+documentation of your preferred technology to set up this Flask WSGI
+application in a way that works well in your environment.
+
+While the ``superset runserver`` command acts as a quick wrapper
+around ``gunicorn``, it doesn't expose all the options you may need,
+so you'll want to craft your own ``gunicorn`` command in your production
+environment. Here's an **async** setup known to work well: ::
+
+	gunicorn \
+		-w 10 \
+		-k gevent \
+		--timeout 120 \
+		-b  0.0.0.0:6666 \
+		--limit-request-line 0 \
+		--limit-request-field_size 0 \
+		--statsd-host localhost:8125 \
+		superset:app
+
+Refer to the
+`Gunicorn documentation <http://docs.gunicorn.org/en/stable/design.html>`_
+for more information.
+
+Note that *gunicorn* does not
+work on Windows, so the ``superset runserver`` command is not expected to work
+in that context. Also note that the development web
+server (``superset runserver -d``) is not intended for production use.
+
+
+Configuration behind a load balancer
+------------------------------------
+
+If you are running superset behind a load balancer or reverse proxy (e.g. NGINX
+or ELB on AWS), you may need to utilise a healthcheck endpoint so that your
+load balancer knows if your superset instance is running. This is provided
+at ``/health`` which will return a 200 response containing "OK" if the
+webserver is running.
+
+If the load balancer is inserting X-Forwarded-For/X-Forwarded-Proto headers, you
+should set ``ENABLE_PROXY_FIX = True`` in the superset config file to extract and use
+the headers.
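+A minimal configuration sketch for this setup (the curl command in the comment is just one way to probe the healthcheck):

```python
# superset_config.py fragment for running behind a reverse proxy.
# ENABLE_PROXY_FIX makes Superset honor X-Forwarded-For/X-Forwarded-Proto
# headers when building URLs and resolving client addresses.
ENABLE_PROXY_FIX = True

# The /health endpoint itself needs no configuration; a load balancer
# healthcheck simply expects an HTTP 200 response with the body "OK", e.g.:
#   curl -fsS http://superset-host:8088/health
```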
+
+
+Configuration
+-------------
+
+To configure your application, you need to create a file (module)
+``superset_config.py`` and make sure it is in your PYTHONPATH. Here are some
+of the parameters you can copy / paste in that configuration module: ::
+
+    #---------------------------------------------------------
+    # Superset specific config
+    #---------------------------------------------------------
+    ROW_LIMIT = 5000
+    SUPERSET_WORKERS = 4
+
+    SUPERSET_WEBSERVER_PORT = 8088
+    #---------------------------------------------------------
+
+    #---------------------------------------------------------
+    # Flask App Builder configuration
+    #---------------------------------------------------------
+    # Your App secret key
+    SECRET_KEY = '\2\1thisismyscretkey\1\2\e\y\y\h'
+
+    # The SQLAlchemy connection string to your database backend
+    # This connection defines the path to the database that stores your
+    # superset metadata (slices, connections, tables, dashboards, ...).
+    # Note that the connection information to connect to the datasources
+    # you want to explore are managed directly in the web UI
+    SQLALCHEMY_DATABASE_URI = 'sqlite:////path/to/superset.db'
+
+    # Flask-WTF flag for CSRF
+    WTF_CSRF_ENABLED = True
+    # Add endpoints that need to be exempt from CSRF protection
+    WTF_CSRF_EXEMPT_LIST = []
+
+    # Set this API key to enable Mapbox visualizations
+    MAPBOX_API_KEY = ''
+
+This file also allows you to define configuration parameters used by
+Flask App Builder, the web framework used by Superset. Please consult
+the `Flask App Builder Documentation
+<http://flask-appbuilder.readthedocs.org/en/latest/config.html>`_
+for more information on how to configure Superset.
+
+Please make sure to change:
+
+* *SQLALCHEMY_DATABASE_URI*, by default it is stored at *~/.superset/superset.db*
+* *SECRET_KEY*, to a long random string
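+One simple way to generate a suitably long random ``SECRET_KEY`` (any long random string works; ``secrets.token_urlsafe`` is just one option):

```python
# Generate a long random secret suitable for SECRET_KEY.
import secrets

SECRET_KEY = secrets.token_urlsafe(42)  # ~56 URL-safe characters
print(SECRET_KEY)
```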
+
+In case you need to exempt endpoints from CSRF, e.g. you are running a custom
+auth postback endpoint, you can add them to *WTF_CSRF_EXEMPT_LIST*: ::
+
+    WTF_CSRF_EXEMPT_LIST = ['']
+
+Database dependencies
+---------------------
+
+Superset does not ship bundled with connectivity to databases, except
+for Sqlite, which is part of the Python standard library.
+You'll need to install the required packages for the database you
+want to use as your metadata database as well as the packages needed to
+connect to the databases you want to access through Superset.
+
+Here's a list of some of the recommended packages.
+
++---------------+-------------------------------------+-------------------------------------------------+
+| database      | pypi package                        | SQLAlchemy URI prefix                           |
++===============+=====================================+=================================================+
+|  MySQL        | ``pip install mysqlclient``         | ``mysql://``                                    |
++---------------+-------------------------------------+-------------------------------------------------+
+|  Postgres     | ``pip install psycopg2``            | ``postgresql+psycopg2://``                      |
++---------------+-------------------------------------+-------------------------------------------------+
+|  Presto       | ``pip install pyhive``              | ``presto://``                                   |
++---------------+-------------------------------------+-------------------------------------------------+
+|  Oracle       | ``pip install cx_Oracle``           | ``oracle://``                                   |
++---------------+-------------------------------------+-------------------------------------------------+
+|  sqlite       |                                     | ``sqlite://``                                   |
++---------------+-------------------------------------+-------------------------------------------------+
+|  Redshift     | ``pip install sqlalchemy-redshift`` | ``postgresql+psycopg2://``                      |
++---------------+-------------------------------------+-------------------------------------------------+
+|  MSSQL        | ``pip install pymssql``             | ``mssql://``                                    |
++---------------+-------------------------------------+-------------------------------------------------+
+|  Impala       | ``pip install impyla``              | ``impala://``                                   |
++---------------+-------------------------------------+-------------------------------------------------+
+|  SparkSQL     | ``pip install pyhive``              | ``jdbc+hive://``                                |
++---------------+-------------------------------------+-------------------------------------------------+
+|  Greenplum    | ``pip install psycopg2``            | ``postgresql+psycopg2://``                      |
++---------------+-------------------------------------+-------------------------------------------------+
+|  Athena       | ``pip install "PyAthenaJDBC>1.0.9"``| ``awsathena+jdbc://``                           |
++---------------+-------------------------------------+-------------------------------------------------+
+|  Vertica      | ``pip install                       |  ``vertica+vertica_python://``                  |
+|               | sqlalchemy-vertica-python``         |                                                 |
++---------------+-------------------------------------+-------------------------------------------------+
+|  ClickHouse   | ``pip install                       | ``clickhouse://``                               |
+|               | sqlalchemy-clickhouse``             |                                                 |
++---------------+-------------------------------------+-------------------------------------------------+
+
+Note that many other databases are supported, the main criteria being the
+existence of a functional SQLAlchemy dialect and Python driver. Googling
+the keyword ``sqlalchemy`` in addition to a keyword that describes the
+database you want to connect to should get you to the right place.
+
+(AWS) Athena
+------------
+
+The connection string for Athena looks like this ::
+
+    awsathena+jdbc://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...
+
+Where you need to escape/encode at least the s3_staging_dir, i.e., ::
+
+    s3://... -> s3%3A//...
+
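+As a sketch, the connection string can be built in Python with the ``s3_staging_dir`` percent-encoded via ``urllib.parse.quote_plus`` (the bucket, region, schema and credentials below are placeholders):

```python
# Build an Athena SQLAlchemy URI with the s3_staging_dir percent-encoded.
from urllib.parse import quote_plus

s3_staging_dir = "s3://my-bucket/athena-results/"  # placeholder bucket
uri = (
    "awsathena+jdbc://{key}:{secret}@athena.{region}.amazonaws.com/{schema}"
    "?s3_staging_dir={staging}"
).format(
    key="AKIA...",        # placeholder access key id
    secret="...",         # placeholder secret access key
    region="us-east-1",   # placeholder region
    schema="default",     # placeholder schema
    staging=quote_plus(s3_staging_dir),  # s3:// -> s3%3A%2F%2F...
)
print(uri)
```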
+
+Caching
+-------
+
+Superset uses `Flask-Cache <https://pythonhosted.org/Flask-Cache/>`_ for
+caching purposes. Configuring your caching backend is as easy as providing
+a ``CACHE_CONFIG`` constant in your ``superset_config.py`` that
+complies with the Flask-Cache specifications.
+
+Flask-Cache supports multiple caching backends (Redis, Memcached,
+SimpleCache (in-memory), or the local filesystem). If you are going to use
+Memcached, please use the ``pylibmc`` client library, as ``python-memcached`` does
+not handle storing binary data correctly. If you use Redis, please install
+the `redis <https://pypi.python.org/pypi/redis>`_ Python package: ::
+
+    pip install redis
+
+Timeouts are set in the Superset metadata and go
+up the "timeout searchpath": from your slice configuration, to your
+data source's configuration, to your database's, and ultimately fall back
+to the global default defined in ``CACHE_CONFIG``.
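+For illustration, a ``CACHE_CONFIG`` for a Redis backend might look like the following, using the standard Flask-Cache configuration keys (the host, port, and timeout values are examples, not recommendations):

```python
# Example CACHE_CONFIG for a Redis backend (Flask-Cache configuration keys).
CACHE_CONFIG = {
    'CACHE_TYPE': 'redis',
    'CACHE_DEFAULT_TIMEOUT': 60 * 60 * 24,  # global fallback timeout: 1 day
    'CACHE_KEY_PREFIX': 'superset_',
    'CACHE_REDIS_HOST': 'localhost',
    'CACHE_REDIS_PORT': 6379,
    'CACHE_REDIS_DB': 1,
}
```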
+
+
+Deeper SQLAlchemy integration
+-----------------------------
+
+It is possible to tweak the database connection information using the
+parameters exposed by SQLAlchemy. In the ``Database`` edit view, you will
+find an ``extra`` field as a ``JSON`` blob.
+
+.. image:: _static/img/tutorial/add_db.png
+   :scale: 30 %
+
+This JSON string contains extra configuration elements. The ``engine_params``
+object gets unpacked into the
+`sqlalchemy.create_engine <http://docs.sqlalchemy.org/en/latest/core/engines.html#sqlalchemy.create_engine>`_ call,
+while the ``metadata_params`` get unpacked into the
+`sqlalchemy.MetaData <http://docs.sqlalchemy.org/en/rel_1_0/core/metadata.html#sqlalchemy.schema.MetaData>`_ call. Refer to the SQLAlchemy docs for more information.
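+An illustrative ``extra`` blob showing how ``engine_params`` map onto ``create_engine`` keyword arguments (the pool settings below are examples only):

```python
# Illustrative ``extra`` blob: keys under "engine_params" are forwarded to
# sqlalchemy.create_engine as keyword arguments.
import json

extra = {
    "engine_params": {
        "pool_size": 10,      # becomes create_engine(..., pool_size=10)
        "pool_recycle": 3600, # becomes create_engine(..., pool_recycle=3600)
    },
    "metadata_params": {},    # forwarded to sqlalchemy.MetaData
}
print(json.dumps(extra, indent=4))
```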
+
+
+Schemas (Postgres & Redshift)
+-----------------------------
+
+Postgres and Redshift, as well as other databases,
+use the concept of **schema** as a logical entity
+on top of the **database**. For Superset to connect to a specific schema,
+there's a **schema** parameter you can set in the table form.
+
+
+External Password store for SQLAlchemy connections
+--------------------------------------------------
+It is possible to use an external store for your database passwords. This is
+useful if you are running a custom secret distribution framework and do not wish
+to store secrets in Superset's meta database.
+
+Example:
+Write a function that takes a single argument of type ``sqla.engine.url`` and returns
+the password for the given connection string. Then set ``SQLALCHEMY_CUSTOM_PASSWORD_STORE``
+in your config file to point to that function. ::
+
+    def example_lookup_password(url):
+        secret = <<get password from external framework>>
+        return secret
+
+    SQLALCHEMY_CUSTOM_PASSWORD_STORE = example_lookup_password
+
+
+SSL Access to databases
+-----------------------
+This example worked with a MySQL database that requires SSL. The configuration
+may differ with other backends. This is what was put in the ``extra``
+parameter ::
+
+    {
+        "metadata_params": {},
+        "engine_params": {
+              "connect_args":{
+                  "sslmode":"require",
+                  "sslrootcert": "/path/to/my/pem"
+            }
+         }
+    }
+
+
+Druid
+-----
+
+* From the UI, enter the information about your clusters in the
+  ``Admin->Clusters`` menu by hitting the + sign.
+
+* Once the Druid cluster connection information is entered, hit the
+  ``Admin->Refresh Metadata`` menu item to populate your Druid datasources
+
+* Navigate to your datasources
+
+Note that you can run the ``superset refresh_druid`` command to refresh the
+metadata from your Druid cluster(s).
+
+
+CORS
+-----
+
+The extra CORS dependency must be installed: ::
+
+    superset[cors]
+
+
+The following keys in ``superset_config.py`` can be specified to configure CORS:
+
+
+* ``ENABLE_CORS``: Must be set to True in order to enable CORS
+* ``CORS_OPTIONS``: options passed to Flask-CORS (`documentation <http://flask-cors.corydolphin.com/en/latest/api.html#extension>`_)
+
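+For example, a sketch of what these settings might look like in ``superset_config.py`` (the origin below is a placeholder, and the ``CORS_OPTIONS`` keys are passed straight through to Flask-CORS):

```python
# Example CORS settings for superset_config.py.
ENABLE_CORS = True
CORS_OPTIONS = {
    'supports_credentials': True,              # allow cookies on CORS requests
    'origins': ['http://myappdomain:9000'],    # placeholder allowed origin
}
```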
+
+MIDDLEWARE
+----------
+
+Superset allows you to add your own middleware. To add your own middleware, update the ``ADDITIONAL_MIDDLEWARE`` key in
+your ``superset_config.py``. ``ADDITIONAL_MIDDLEWARE`` should be a list of your additional middleware classes.
+
+For example, to use AUTH_REMOTE_USER from behind a proxy server like nginx, you have to add a simple middleware class to
+add the value of ``HTTP_X_PROXY_REMOTE_USER`` (or any other custom header from the proxy) to Gunicorn's ``REMOTE_USER``
+environment variable: ::
+
+    class RemoteUserMiddleware(object):
+        def __init__(self, app):
+            self.app = app
+        def __call__(self, environ, start_response):
+            user = environ.pop('HTTP_X_PROXY_REMOTE_USER', None)
+            environ['REMOTE_USER'] = user
+            return self.app(environ, start_response)
+
+    ADDITIONAL_MIDDLEWARE = [RemoteUserMiddleware, ]
+
+*Adapted from http://flask.pocoo.org/snippets/69/*
+
+
+Upgrading
+---------
+
+Upgrading should be as straightforward as running::
+
+    pip install superset --upgrade
+    superset db upgrade
+    superset init
+
+SQL Lab
+-------
+SQL Lab is a powerful SQL IDE that works with all SQLAlchemy-compatible
+databases. By default, queries are executed in the scope of a web
+request, so they
+may eventually time out as they exceed the maximum duration of a web
+request in your environment, whether that limit comes from a reverse proxy or the Superset
+server itself.
+
+On large analytic databases, it's common to run queries that
+execute for minutes or hours.
+To enable support for long-running queries that
+execute beyond the typical web request's timeout (30-60 seconds), it is
+necessary to configure an asynchronous backend for Superset, which consists of:
+
+* one or many Superset workers (implemented as Celery workers), which
+  can be started with the ``superset worker`` command; run
+  ``superset worker --help`` to view the related options
+* a Celery broker (message queue), for which we recommend using Redis
+  or RabbitMQ
+* a results backend that defines where the worker will persist the query
+  results
+
+Configuring Celery requires defining a ``CELERY_CONFIG`` in your
+``superset_config.py``. Both the worker and web server processes should
+have the same configuration.
+
+.. code-block:: python
+
+    class CeleryConfig(object):
+        BROKER_URL = 'redis://localhost:6379/0'
+        CELERY_IMPORTS = ('superset.sql_lab', )
+        CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
+        CELERY_ANNOTATIONS = {'tasks.add': {'rate_limit': '10/s'}}
+
+    CELERY_CONFIG = CeleryConfig
+
+To setup a result backend, you need to pass an instance of a derivative
+of ``werkzeug.contrib.cache.BaseCache`` to the ``RESULTS_BACKEND``
+configuration key in your ``superset_config.py``. It's possible to use
+Memcached, Redis, S3 (https://pypi.python.org/pypi/s3werkzeugcache),
+memory or the file system (in a single server-type setup or for testing),
+or to write your own caching interface. Your ``superset_config.py`` may
+look something like:
+
+.. code-block:: python
+
+    # On S3
+    from s3cache.s3cache import S3Cache
+    S3_CACHE_BUCKET = 'foobar-superset'
+    S3_CACHE_KEY_PREFIX = 'sql_lab_result'
+    RESULTS_BACKEND = S3Cache(S3_CACHE_BUCKET, S3_CACHE_KEY_PREFIX)
+
+    # On Redis
+    from werkzeug.contrib.cache import RedisCache
+    RESULTS_BACKEND = RedisCache(
+        host='localhost', port=6379, key_prefix='superset_results')
+
+
+Also note that SQL Lab supports Jinja templating in queries, and that it's
+possible to overload
+the default Jinja context in your environment by defining the
+``JINJA_CONTEXT_ADDONS`` in your superset configuration. Objects referenced
+in this dictionary are made available for users to use in their SQL.
+
+.. code-block:: python
+
+    JINJA_CONTEXT_ADDONS = {
+        'my_crazy_macro': lambda x: x*2,
+    }
+
+
+Making your own build
+---------------------
+
+For more advanced users, you may want to build Superset from source. That
+would be the case if you fork the project to add features specific to
+your environment.::
+
+    # assuming $SUPERSET_HOME as the root of the repo
+    cd $SUPERSET_HOME/superset/assets
+    yarn
+    yarn run build
+    cd $SUPERSET_HOME
+    python setup.py install
+
+
+Blueprints
+----------
+
+`Blueprints are Flask's reusable apps <http://flask.pocoo.org/docs/0.12/blueprints/>`_.
+Superset allows you to specify an array of Blueprints
+in your ``superset_config`` module. Here's
+an example of how this can work with a simple Blueprint. By doing
+so, you can expect Superset to serve a page that says "OK"
+at the ``/simple_page`` url. This can allow you to run other things such
+as custom data visualization applications alongside Superset, on the
+same server.
+
+.. code-block:: python
+
+    from flask import Blueprint
+    simple_page = Blueprint('simple_page', __name__,
+                                    template_folder='templates')
+    @simple_page.route('/', defaults={'page': 'index'})
+    @simple_page.route('/<page>')
+    def show(page):
+        return "OK"
+
+    BLUEPRINTS = [simple_page]
diff --git a/_sources/issue_code_reference.rst.txt b/_sources/issue_code_reference.rst.txt
new file mode 100644
index 0000000..ef89d1e
--- /dev/null
+++ b/_sources/issue_code_reference.rst.txt
@@ -0,0 +1,39 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Issue Code Reference
+====================
+
+This page lists issue codes that may be displayed in Superset and provides additional context.
+
+Issue 1000
+""""""""""
+
+.. code-block:: text
+
+    The datasource is too large to query.
+
+It's likely your datasource has grown too large to run the current query, and is timing out. You can resolve this by reducing the size of your datasource or by modifying your query to only process a subset of your data.
+
+Issue 1001
+""""""""""
+
+.. code-block:: text
+
+    The database is under an unusual load.
+
+Your query may have timed out because of unusually high load on the database engine. You can make your query simpler, or wait until the database is under less load and try again.
diff --git a/_sources/misc.rst.txt b/_sources/misc.rst.txt
new file mode 100644
index 0000000..840f17b
--- /dev/null
+++ b/_sources/misc.rst.txt
@@ -0,0 +1,27 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+
+Misc
+----
+
+.. toctree::
+    :maxdepth: 2
+
+    visualization
+    videos
+    import_export_datasources
diff --git a/_sources/security.rst.txt b/_sources/security.rst.txt
new file mode 100644
index 0000000..911aabe
--- /dev/null
+++ b/_sources/security.rst.txt
@@ -0,0 +1,178 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Security
+========
+Security in Superset is handled by Flask AppBuilder (FAB). FAB is a
+"Simple and rapid application development framework, built on top of Flask".
+FAB provides authentication, user management, permissions and roles.
+Please read its `Security documentation
+<https://flask-appbuilder.readthedocs.io/en/latest/security.html>`_.
+
+Provided Roles
+--------------
+Superset ships with a set of roles that are handled by Superset itself.
+You can assume that these roles will stay up-to-date as Superset evolves.
+Even though it's possible for ``Admin`` users to do so, it is not recommended
+that you alter these roles in any way by removing
+or adding permissions to them as these roles will be re-synchronized to
+their original values as you run your next ``superset init`` command.
+
+Since it's not recommended to alter the roles described here, your security
+strategy should be to compose user access from these base roles and the
+roles that you create. For instance, you could
+create a role ``Financial Analyst`` that would be made of a set of permissions
+to a set of data sources (tables) and/or databases. Users would then be
+granted ``Gamma``, ``Financial Analyst``, and perhaps ``sql_lab``.
+
+Admin
+"""""
+Admins have all possible rights, including granting or revoking rights from
+other users and altering other people's slices and dashboards.
+
+Alpha
+"""""
+Alpha users have access to all data sources, but they cannot grant or revoke access
+from other users. They are also limited to altering the objects that they
+own. Alpha users can add and alter data sources.
+
+Gamma
+"""""
+Gamma users have limited access. They can only consume data coming from data sources
+they have been given access to through another complementary role.
+They only have access to view the slices and
+dashboards made from data sources that they have access to. Currently Gamma
+users are not able to alter or add data sources. We assume that they are
+mostly content consumers, though they can create slices and dashboards.
+
+Also note that when Gamma users look at the dashboards and slices list view,
+they will only see the objects that they have access to.
+
+sql_lab
+"""""""
+The ``sql_lab`` role grants access to SQL Lab. Note that while ``Admin``
+users have access to all databases by default, both ``Alpha`` and ``Gamma``
+users need to be given access on a per database basis.
+
+Public
+""""""
+It's possible to allow logged out users to access some Superset features.
+
+By setting ``PUBLIC_ROLE_LIKE_GAMMA = True`` in your ``superset_config.py``,
+you grant the public role the same set of permissions as the GAMMA role.
+This is useful if you want to enable anonymous users to view
+dashboards. Explicit grant on specific datasets is still required, meaning
+that you need to edit the ``Public`` role and add the Public data sources
+to the role manually.
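In ``superset_config.py``, this is a one-line setting (a minimal sketch of the option named above):

```python
# superset_config.py -- grant logged-out (public) users Gamma-like permissions.
# Explicit grants on specific datasets are still required (edit the Public role).
PUBLIC_ROLE_LIKE_GAMMA = True
```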
+
+
+Managing Gamma per data source access
+-------------------------------------
+Here's how to provide users access to only specific datasets. First make
+sure the users with limited access have [only] the Gamma role assigned to
+them. Second, create a new role (``Menu -> Security -> List Roles``) and
+click the ``+`` sign.
+
+.. image:: _static/images/create_role.png
+   :scale: 50 %
+
+This new window allows you to give this new role a name, attribute it to users
+and select the tables in the ``Permissions`` dropdown. To select the data
+sources you want to associate with this role, simply click on the dropdown
+and use the typeahead to search for your table names.
+
+You can then confirm with your Gamma users that they see the objects
+(dashboards and slices) associated with the tables related to their roles.
+
+
+Customizing
+-----------
+
+The permissions exposed by FAB are very granular and allow for a great level
+of customization. FAB creates many permissions automagically for each model
+that is created (can_add, can_delete, can_show, can_edit, ...) as well as for
+each view. On top of that, Superset can expose more granular permissions like
+``all_datasource_access``.
+
+We do not recommend altering the 3 base roles as there
+are a set of assumptions that Superset is built upon. It is possible though for
+you to create your own roles, and union them to existing ones.
+
+Permissions
+"""""""""""
+
+Roles are composed of a set of permissions, and Superset has many categories
+of permissions. Here are the different categories of permissions:
+
+- **Model & action**: models are entities like ``Dashboard``,
+  ``Slice``, or ``User``. Each model has a fixed set of permissions, like
+  ``can_edit``, ``can_show``, ``can_delete``, ``can_list``, ``can_add``, and
+  so on. By adding ``can_delete on Dashboard`` to a role, and granting that
+  role to a user, this user will be able to delete dashboards.
+- **Views**: views are individual web pages, like the ``explore`` view or the
+  ``SQL Lab`` view. When a view is granted to a user, they will see it in their menu items, and be able to load that page.
+- **Data source**: For each data source, a permission is created. If the user
+  does not have the ``all_datasource_access`` permission granted, the user
+  will only be able to see Slices or explore the data sources that are granted
+  to them.
+- **Database**: Granting access to a database allows for the user to access
+  all data sources within that database, and will enable the user to query
+  that database in SQL Lab, provided that the SQL Lab specific permissions
+  have been granted to the user.
+
+
+Restricting access to a subset of data sources
+""""""""""""""""""""""""""""""""""""""""""""""
+
+The best way to go is probably to give users ``Gamma`` plus one or more other
+roles that add access to specific data sources. We recommend that you
+create individual roles for each access profile. Say people in your finance
+department have access to a set of databases and data sources; these
+permissions can be consolidated in a single role. Users with this
+profile then need to be granted ``Gamma`` as a foundation to the models
+and views they can access, plus a ``Finance`` role that is a collection
+of permissions on data objects.
+
+One user can have many roles, so a finance executive could be granted
+``Gamma``, ``Finance``, and perhaps another ``Executive`` role that gathers
+a set of data sources that power dashboards only made available to executives.
+When looking at their dashboard list, such users will only see the
+dashboards they have access to, based on the roles and
+permissions that were attributed to them.
+
+
+Restricting access to a subset of a particular table
+""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+Using ``Row level security filters`` (under the ``Security`` menu) you can create 
+filters that are assigned to a particular table, as well as a set of roles. 
+Say people in your finance department should only have access to rows where 
+``department = "finance"``.  You could create a ``Row level security filter`` 
+with that clause, and assign it to your ``Finance`` role, as well as the 
+applicable table.
+
+The ``clause`` field can contain arbitrary text which is then added to the generated 
+SQL statement's ``WHERE`` clause.  So you could even do something like create a 
+filter for the last 30 days and apply it to a specific role, with a clause like 
+``date_field > DATE_SUB(NOW(), INTERVAL 30 DAY)``.  It can also support multiple 
+conditions: ``client_id = 6 AND advertiser="foo"``, etc. 
+
+All relevant ``Row level security filters`` will be ANDed together, so it's 
+possible to create a situation where two roles conflict in such a way as to 
+limit a table subset to empty.  For example, the filters ``client_id=4``
+and ``client_id=5``, applied to a role, will result in users of that role having
+``client_id=4 AND client_id=5`` added to their query, which can never be true.
\ No newline at end of file
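A quick sketch illustrates the effect of such conflicting filters, using an in-memory SQLite table as a stand-in for a real data source (the table and column names are made up for illustration):

```python
import sqlite3

# Stand-in table for a data source guarded by row level security filters.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (client_id INTEGER, revenue REAL)")
conn.executemany(
    "INSERT INTO clients VALUES (?, ?)",
    [(4, 100.0), (5, 200.0)],
)

# A single filter clause behaves as expected: one matching row.
rows = conn.execute("SELECT * FROM clients WHERE client_id = 4").fetchall()
assert len(rows) == 1

# Two conflicting filters ANDed together can never both be true,
# so the query matches nothing.
rows = conn.execute(
    "SELECT * FROM clients WHERE client_id = 4 AND client_id = 5"
).fetchall()
assert rows == []  # the contradictory clause yields an empty result
```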
diff --git a/_sources/security.txt b/_sources/security.txt
new file mode 100644
index 0000000..afc00cb
--- /dev/null
+++ b/_sources/security.txt
@@ -0,0 +1,162 @@
+Security
+========
+Security in Superset is handled by Flask AppBuilder (FAB). FAB is a
+"Simple and rapid application development framework, built on top of Flask".
+FAB provides authentication, user management, permissions and roles.
+Please read its `Security documentation 
+<http://flask-appbuilder.readthedocs.io/en/latest/security.html>`_.
+
+Provided Roles
+--------------
+Superset ships with a set of roles that are handled by Superset itself.
+You can assume that these roles will stay up-to-date as Superset evolves.
+Even though it's possible for ``Admin`` users to do so, it is not recommended
+that you alter these roles in any way by removing
+or adding permissions to them as these roles will be re-synchronized to
+their original values as you run your next ``superset init`` command.
+
+Since it's not recommended to alter the roles described here, your security
+strategy should be to compose user access from these base roles and the
+roles that you create. For instance, you could
+create a role ``Financial Analyst`` that would be made of a set of permissions
+to a set of data sources (tables) and/or databases. Users would then be
+granted ``Gamma``, ``Financial Analyst``, and perhaps ``sql_lab``.
+
+Admin
+"""""
+Admins have all possible rights, including granting or revoking rights from
+other users and altering other people's slices and dashboards.
+
+Alpha
+"""""
+Alpha users have access to all data sources, but they cannot grant or revoke access
+from other users. They are also limited to altering the objects that they
+own. Alpha users can add and alter data sources.
+
+Gamma
+"""""
+Gamma users have limited access. They can only consume data coming from data sources
+they have been given access to through another complementary role.
+They only have access to view the slices and
+dashboards made from data sources that they have access to. Currently Gamma
+users are not able to alter or add data sources. We assume that they are
+mostly content consumers, though they can create slices and dashboards.
+
+Also note that when Gamma users look at the dashboards and slices list view,
+they will only see the objects that they have access to.
+
+sql_lab
+"""""""
+The ``sql_lab`` role grants access to SQL Lab. Note that while ``Admin``
+users have access to all databases by default, both ``Alpha`` and ``Gamma``
+users need to be given access on a per database basis.
+
+Public
+""""""
+It's possible to allow logged out users to access some Superset features.
+
+By setting ``PUBLIC_ROLE_LIKE_GAMMA = True`` in your ``superset_config.py``,
+you grant the public role the same set of permissions as the GAMMA role.
+This is useful if you want to enable anonymous users to view
+dashboards. Explicit grant on specific datasets is still required, meaning
+that you need to edit the ``Public`` role and add the Public data sources
+to the role manually.
+
+
+Managing Gamma per data source access
+-------------------------------------
+Here's how to provide users access to only specific datasets. First make
+sure the users with limited access have [only] the Gamma role assigned to
+them. Second, create a new role (``Menu -> Security -> List Roles``) and
+click the ``+`` sign.
+
+.. image:: _static/img/create_role.png
+   :scale: 50 %
+
+This new window allows you to give this new role a name, attribute it to users
+and select the tables in the ``Permissions`` dropdown. To select the data
+sources you want to associate with this role, simply click on the dropdown
+and use the typeahead to search for your table names.
+
+You can then confirm with your Gamma users that they see the objects
+(dashboards and slices) associated with the tables related to their roles.
+
+
+Customizing
+-----------
+
+The permissions exposed by FAB are very granular and allow for a great level
+of customization. FAB creates many permissions automagically for each model
+that is created (can_add, can_delete, can_show, can_edit, ...) as well as for
+each view. On top of that, Superset can expose more granular permissions like
+``all_datasource_access``.
+
+We do not recommend altering the 3 base roles as there
+are a set of assumptions that Superset is built upon. It is possible though for
+you to create your own roles, and union them to existing ones.
+
+Permissions
+"""""""""""
+
+Roles are composed of a set of permissions, and Superset has many categories
+of permissions. Here are the different categories of permissions:
+
+- **Model & action**: models are entities like ``Dashboard``,
+  ``Slice``, or ``User``. Each model has a fixed set of permissions, like
+  ``can_edit``, ``can_show``, ``can_delete``, ``can_list``, ``can_add``, and
+  so on. By adding ``can_delete on Dashboard`` to a role, and granting that
+  role to a user, this user will be able to delete dashboards.
+- **Views**: views are individual web pages, like the ``explore`` view or the
+  ``SQL Lab`` view. When a view is granted to a user, they will see it in
+  their menu items, and be able to load that page.
+- **Data source**: For each data source, a permission is created. If the user
+  does not have the ``all_datasource_access`` permission granted, the user
+  will only be able to see Slices or explore the data sources that are granted
+  to them.
+- **Database**: Granting access to a database allows for the user to access
+  all data sources within that database, and will enable the user to query
+  that database in SQL Lab, provided that the SQL Lab specific permissions
+  have been granted to the user.
+
+
+Restricting access to a subset of data sources
+""""""""""""""""""""""""""""""""""""""""""""""
+
+The best way to go is probably to give users ``Gamma`` plus one or more other
+roles that add access to specific data sources. We recommend that you
+create individual roles for each access profile. Say people in your finance
+department have access to a set of databases and data sources; these
+permissions can be consolidated in a single role. Users with this
+profile then need to be granted ``Gamma`` as a foundation to the models
+and views they can access, plus a ``Finance`` role that is a collection
+of permissions on data objects.
+
+One user can have many roles, so a finance executive could be granted
+``Gamma``, ``Finance``, and perhaps another ``Executive`` role that gathers
+a set of data sources that power dashboards only made available to executives.
+When looking at their dashboard list, such users will only see the
+dashboards they have access to, based on the roles and
+permissions that were attributed to them.
+
+
+Restricting the access to some metrics
+""""""""""""""""""""""""""""""""""""""
+
+Sometimes some metrics are relatively sensitive (e.g. revenue).
+We may want to restrict those metrics to only a few roles.
+For example, assume there is a metric ``[cluster1].[datasource1].[revenue]``
+and only Admin users are allowed to see it. Here’s how to restrict access.
+
+1. Edit the datasource (``Menu -> Source -> Druid datasources -> edit the
+   record "datasource1"``) and go to the tab ``List Druid Metric``. Check
+   the checkbox ``Is Restricted`` in the row of the metric ``revenue``.
+
+2. Edit the role (``Menu -> Security -> List Roles -> edit the record
+   “Admin”``), in the permissions field, type-and-search the permission
+   ``metric access on [cluster1].[datasource1].[revenue] (id: 1)``, then
+   click the Save button on the bottom of the page.
+
+Any users without the permission will see the error message
+*Access to the metrics denied: revenue (Status: 500)* in the slices.
+It also happens when the user wants to access a post-aggregation metric that
+is dependent on revenue.
diff --git a/_sources/sqllab.rst.txt b/_sources/sqllab.rst.txt
new file mode 100644
index 0000000..b582c53
--- /dev/null
+++ b/_sources/sqllab.rst.txt
@@ -0,0 +1,177 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+SQL Lab
+=======
+
+SQL Lab is a modern, feature-rich SQL IDE written in
+`React <https://facebook.github.io/react/>`_.
+
+------
+
+.. image:: _static/images/screenshots/sqllab.png
+
+------
+
+Feature Overview
+----------------
+- Connects to just about any database backend
+- A multi-tab environment to work on multiple queries at a time
+- A smooth flow to visualize your query results using Superset's rich
+  visualization capabilities
+- Browse database metadata: tables, columns, indexes, partitions
+- Support for long-running queries
+
+  - uses the `Celery distributed queue <http://www.celeryproject.org/>`_
+    to dispatch query handling to workers
+  - supports defining a "results backend" to persist query results
+
+- A search engine to find queries executed in the past
+- Supports templating using the
+  `Jinja templating language <http://jinja.pocoo.org/docs/dev/>`_
+  which allows for using macros in your SQL code
+
+Extra features
+--------------
+- Hit ``alt + enter`` as a keyboard shortcut to run your query
+
+Templating with Jinja
+---------------------
+
+.. code-block:: sql
+
+    SELECT *
+    FROM some_table
+    WHERE partition_key = '{{ presto.first_latest_partition('some_table') }}'
+
+Templating unleashes the power and capabilities of a
+programming language within your SQL code.
+
+Templates can also be used to write generic queries that are
+parameterized so they can be re-used easily.
+
+
+Available macros
+''''''''''''''''
+
+We expose certain modules from Python's standard library in
+Superset's Jinja context:
+
+- ``time``: ``time``
+- ``datetime``: ``datetime.datetime``
+- ``uuid``: ``uuid``
+- ``random``: ``random``
+- ``relativedelta``: ``dateutil.relativedelta.relativedelta``
+
+`Jinja's builtin filters <http://jinja.pocoo.org/docs/dev/templates/>`_ can also be applied where needed.
+
+.. autoclass:: superset.jinja_context.ExtraCache
+    :members:
+
+.. autofunction:: superset.jinja_context.filter_values
+
+.. autoclass:: superset.jinja_context.PrestoTemplateProcessor
+    :members:
+
+.. autoclass:: superset.jinja_context.HiveTemplateProcessor
+    :members:
+
+Extending macros
+''''''''''''''''
+
+As mentioned in the `Installation & Configuration <https://superset.incubator.apache.org/installation.html#installation-configuration>`_ documentation,
+it's possible for administrators to expose more macros in their
+environment using the configuration variable ``JINJA_CONTEXT_ADDONS``.
+All objects referenced in this dictionary will become available for users
+to integrate in their queries in **SQL Lab**.
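As a sketch, the configuration could look like the following; ``JINJA_CONTEXT_ADDONS`` is the real configuration key, but the ``days_ago`` helper is a hypothetical example:

```python
# superset_config.py -- expose a hypothetical macro to SQL Lab users.
from datetime import date, timedelta


def days_ago(n: int) -> str:
    """Return the ISO date n days before today (hypothetical helper)."""
    return (date.today() - timedelta(days=n)).isoformat()


# Every key in this dict becomes callable from Jinja templates in SQL Lab,
# e.g. SELECT * FROM logs WHERE ds > '{{ days_ago(7) }}'
JINJA_CONTEXT_ADDONS = {
    "days_ago": days_ago,
}
```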
+
+Customize templating
+''''''''''''''''''''
+
+As mentioned in the `Installation & Configuration <https://superset.incubator.apache.org/installation.html#sql-lab>`__ documentation,
+it's possible for administrators to overwrite Jinja templating with your customized
+template processor using the configuration variable ``CUSTOM_TEMPLATE_PROCESSORS``.
+The template processors referenced in the dictionary will overwrite default Jinja template processors
+of the specified database engines.
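As a rough sketch of the shape of this mapping: a real processor would subclass Superset's base template processor class, while the class below is a simplified, self-contained stand-in that uses ``$``-style substitution in place of Jinja:

```python
# superset_config.py -- sketch only; a real processor subclasses Superset's
# base template processor. This stand-in just illustrates the
# engine-name -> processor-class mapping that CUSTOM_TEMPLATE_PROCESSORS expects.
import string


class DollarTemplateProcessor:
    """Hypothetical processor using $-style substitution instead of Jinja."""

    engine = "presto"  # the database engine whose templating this overrides

    def process_template(self, sql: str, **kwargs) -> str:
        # Render $placeholders from the supplied keyword arguments.
        return string.Template(sql).safe_substitute(**kwargs)


CUSTOM_TEMPLATE_PROCESSORS = {
    DollarTemplateProcessor.engine: DollarTemplateProcessor,
}
```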
+
+Query cost estimation
+'''''''''''''''''''''
+
+Some databases support ``EXPLAIN`` queries that allow users to estimate the cost
+of queries before executing them. Currently, Presto is supported in SQL Lab. To
+enable query cost estimation, add the following keys to the "Extra" field in the
+database configuration:
+
+.. code-block:: text
+
+    {
+        "version": "0.319",
+        "cost_estimate_enabled": true,
+        ...
+    }
+
+Here, "version" should be the version of your Presto cluster. Support for this
+functionality was introduced in Presto 0.319.
+
+You also need to enable the feature flag in your ``superset_config.py``, and you
+can optionally specify a custom formatter. Eg:
+
+.. code-block:: python
+
+    def presto_query_cost_formatter(cost_estimate: List[Dict[str, float]]) -> List[Dict[str, str]]:
+        """
+        Format cost estimate returned by Presto.
+
+        :param cost_estimate: JSON estimate from Presto
+        :return: Human readable cost estimate
+        """
+        # Convert cost to dollars based on CPU and network cost. These coefficients are just
+        # examples, they need to be estimated based on your infrastructure.
+        cpu_coefficient = 2e-12
+        network_coefficient = 1e-12
+
+        cost = 0
+        for row in cost_estimate:
+            cost += row.get("cpuCost", 0) * cpu_coefficient
+            cost += row.get("networkCost", 0) * network_coefficient
+
+        return [{"Cost": f"US$ {cost:.2f}"}]
+
+
+    DEFAULT_FEATURE_FLAGS = {
+        "ESTIMATE_QUERY_COST": True,
+        "QUERY_COST_FORMATTERS_BY_ENGINE": {"presto": presto_query_cost_formatter},
+    }
+
+.. _ref_ctas_engine_config:
+
+Create Table As (CTAS)
+''''''''''''''''''''''
+
+You can use ``CREATE TABLE AS SELECT ...`` statements in SQL Lab. This feature can be toggled on
+and off at the database configuration level.
+
+Note that ``CREATE TABLE ..`` belongs to the SQL DDL category, and on PostgreSQL, DDL is transactional.
+This means that to properly use this feature you have to set ``autocommit`` to true on your engine parameters:
+
+.. code-block:: text
+
+    {
+        ...
+        "engine_params": {"isolation_level":"AUTOCOMMIT"},
+        ...
+    }
diff --git a/_sources/sqllab.txt b/_sources/sqllab.txt
new file mode 100644
index 0000000..a1da6c7
--- /dev/null
+++ b/_sources/sqllab.txt
@@ -0,0 +1,64 @@
+SQL Lab
+=======
+
+SQL Lab is a modern, feature-rich SQL IDE written in
+`React <https://facebook.github.io/react/>`_.
+
+
+Feature Overview
+----------------
+- Connects to just about any database backend
+- A multi-tab environment to work on multiple queries at a time
+- A smooth flow to visualize your query results using Superset's rich
+  visualization capabilities
+- Browse database metadata: tables, columns, indexes, partitions
+- Support for long-running queries
+
+  - uses the `Celery distributed queue <http://www.celeryproject.org/>`_
+    to dispatch query handling to workers
+  - supports defining a "results backend" to persist query results
+
+- A search engine to find queries executed in the past
+- Supports templating using the
+  `Jinja templating language <http://jinja.pocoo.org/docs/dev/>`_
+  which allows for using macros in your SQL code
+
+Extra features
+--------------
+- Hit ``alt + enter`` as a keyboard shortcut to run your query
+
+Templating with Jinja
+---------------------
+
+.. code-block:: sql
+
+    SELECT *
+    FROM some_table
+    WHERE partition_key = '{{ presto.latest_partition('some_table') }}'
+
+Templating unleashes the power and capabilities of a
+programming language within your SQL code.
+
+Templates can also be used to write generic queries that are
+parameterized so they can be re-used easily.
+
+
+Available macros
+''''''''''''''''
+
+We expose certain modules from Python's standard library in
+Superset's Jinja context:
+
+- ``time``: ``time``
+- ``datetime``: ``datetime.datetime``
+- ``uuid``: ``uuid``
+- ``random``: ``random``
+- ``relativedelta``: ``dateutil.relativedelta.relativedelta``
+- more to come!
+
+`Jinja's builtin filters <http://jinja.pocoo.org/docs/dev/templates/>`_ can also be applied where needed.
+
+
+.. autoclass:: superset.jinja_context.PrestoTemplateProcessor
+    :members:
+
+.. autofunction:: superset.jinja_context.url_param
diff --git a/_sources/tutorial.rst.txt b/_sources/tutorial.rst.txt
new file mode 100644
index 0000000..8273398
--- /dev/null
+++ b/_sources/tutorial.rst.txt
@@ -0,0 +1,325 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Tutorial - Creating your first dashboard
+========================================
+
+This tutorial targets someone who wants to create charts and dashboards
+in Superset. We'll show you how to connect Superset
+to a new database and configure a table in that database for analysis. You'll
+also explore the data you've exposed and add a visualization to a dashboard
+so that you get a feel for the end-to-end user experience.
+
+Connecting to a new database
+----------------------------
+
+We assume you already have a database configured and can connect to it from the 
+instance on which you’re running Superset. If you’re just testing Superset and
+want to explore sample data, you can load some
+`sample PostgreSQL datasets <https://wiki.postgresql.org/wiki/Sample_Databases>`_
+into a fresh DB, or configure the
+`example weather data <https://github.com/dylburger/noaa-ghcn-weather-data>`_
+we use here.
+
+Under the **Sources** menu, select the *Databases* option:
+
+.. image:: images/tutorial/tutorial_01_sources_database.png
+   :scale: 70%
+
+On the resulting page, click on the green plus sign, near the top right:
+
+.. image:: images/tutorial/tutorial_02_add_database.png
+   :scale: 70%
+
+You can configure a number of advanced options on this page, but for 
+this walkthrough, you’ll only need to do **two things**:
+
+1. Name your database connection:
+
+.. image:: images/tutorial/tutorial_03_database_name.png
+   :scale: 70%
+
+2. Provide the SQLAlchemy Connection URI and test the connection:
+
+.. image:: images/tutorial/tutorial_04_sqlalchemy_connection_string.png
+   :scale: 70%
+
+This example shows the connection string for our test weather database. 
+As noted in the text below the URI, you should refer to the SQLAlchemy 
+documentation on 
+`creating new connection URIs <https://docs.sqlalchemy.org/en/rel_1_2/core/engines.html#database-urls>`_
+for your target database.
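The general shape of a SQLAlchemy URI is ``dialect+driver://username:password@host:port/database``. For example, a hypothetical PostgreSQL URI (credentials, host, and database name are placeholders, not values from this tutorial) could be assembled as:

```python
# A hypothetical PostgreSQL connection URI, assembled for illustration only.
user, password = "superset", "secret"
host, port, database = "localhost", 5432, "weather"

uri = f"postgresql://{user}:{password}@{host}:{port}/{database}"
# -> postgresql://superset:secret@localhost:5432/weather
```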
+
+Click the **Test Connection** button to confirm things work end to end. 
+Once Superset can successfully connect and authenticate, you should see 
+a popup like this:
+
+.. image:: images/tutorial/tutorial_05_connection_popup.png
+   :scale: 50%
+
+Moreover, you should also see the list of tables Superset can read from 
+the schema you’re connected to, at the bottom of the page:
+
+.. image:: images/tutorial/tutorial_06_list_of_tables.png
+   :scale: 70%
+
+If the connection looks good, save the configuration by clicking the **Save** 
+button at the bottom of the page:
+
+.. image:: images/tutorial/tutorial_07_save_button.png
+   :scale: 70%
+
+Adding a new table
+------------------
+
+Now that you’ve configured a database, you’ll need to add specific tables 
+to Superset that you’d like to query.
+
+Under the **Sources** menu, select the *Tables* option:
+
+.. image:: images/tutorial/tutorial_08_sources_tables.png
+   :scale: 70%
+
+On the resulting page, click on the green plus sign, near the top left:
+
+.. image:: images/tutorial/tutorial_09_add_new_table.png
+   :scale: 70%
+
+You only need a few pieces of information to add a new table to Superset:
+
+* The name of the table
+
+.. image:: images/tutorial/tutorial_10_table_name.png
+   :scale: 70%
+
+* The target database from the **Database** drop-down menu (i.e. the one 
+  you just added above)
+
+.. image:: images/tutorial/tutorial_11_choose_db.png
+   :scale: 70%
+
+* Optionally, the database schema. If the table exists in the “default” schema 
+  (e.g. the *public* schema in PostgreSQL or Redshift), you can leave the schema 
+  field blank.
+
+Click on the **Save** button to save the configuration:
+
+.. image:: images/tutorial/tutorial_07_save_button.png
+   :scale: 70%
+
+When redirected back to the list of tables, you should see a message indicating 
+that your table was created:
+
+.. image:: images/tutorial/tutorial_12_table_creation_success_msg.png
+   :scale: 70%
+
+This message also directs you to edit the table configuration. We’ll edit a limited 
+portion of the configuration now - just to get you started - and leave the rest for 
+a more advanced tutorial.
+
+Click on the edit button next to the table you’ve created:
+
+.. image:: images/tutorial/tutorial_13_edit_table_config.png
+   :scale: 70%
+
+On the resulting page, click on the **List Table Column** tab. Here, you’ll define the 
+way you can use specific columns of your table when exploring your data. We’ll run 
+through these options to describe their purpose:
+
+* If you want users to group metrics by a specific field, mark it as **Groupable**.
+* If you need to filter on a specific field, mark it as **Filterable**.
+* Is this field something you’d like to get the distinct count of? Check the **Count 
+  Distinct** box.
+* Is this a metric you want to sum, or get basic summary statistics for? The **Sum**, 
+  **Min**, and **Max** columns will help.
+* The **is temporal** field should be checked for any date or time fields. We’ll cover 
+  how this manifests itself in analyses in a moment.
+
+Here’s how we’ve configured fields for the weather data. Even for measures like the 
+weather measurements (precipitation, snowfall, etc.), it’s ideal to group and filter 
+by these values:
+
+.. image:: images/tutorial/tutorial_14_field_config.png
+
+As with the configurations above, click the **Save** button to save these settings.
+
+Exploring your data
+-------------------
+
+To start exploring your data, simply click on the table name you just created in 
+the list of available tables:
+
+.. image:: images/tutorial/tutorial_15_click_table_name.png
+
+By default, you’ll be presented with a Table View:
+
+.. image:: images/tutorial/tutorial_16_datasource_chart_type.png
+
+Let’s walk through a basic query to get the count of all records in our table. 
+First, we’ll need to change the **Since** filter to capture the range of our data. 
+You can use simple phrases to apply these filters, like "3 years ago":
+
+.. image:: images/tutorial/tutorial_17_choose_time_range.png
+
+The upper limit for time, the **Until** filter, defaults to "now", which may or may 
+not be what you want.
+
+Look for the Metrics section under the **GROUP BY** header, and start typing "Count" 
+- you’ll see a list of metrics matching what you type:
+
+.. image:: images/tutorial/tutorial_18_choose_metric.png
+
+Select the *COUNT(\*)* metric, then click the green **Query** button near the top 
+of the explore:
+
+.. image:: images/tutorial/tutorial_19_click_query.png
+
+You’ll see your results in the table:
+
+.. image:: images/tutorial/tutorial_20_count_star_result.png
+
+Let’s group this by the *weather_description* field to get the count of records by 
+the type of weather recorded by adding it to the *Group by* section:
+
+.. image:: images/tutorial/tutorial_21_group_by.png
+
+and run the query:
+
+.. image:: images/tutorial/tutorial_22_group_by_result.png
+
+Let’s find a more useful data point: the top 10 times and places that recorded the 
+highest temperature in 2015.
+
+We replace *weather_description* with *latitude*, *longitude* and *measurement_date* in the 
+*Group by* section:
+
+.. image:: images/tutorial/tutorial_23_group_by_more_dimensions.png
+
+And replace *COUNT(\*)* with *max__measurement_flag*:
+
+.. image:: images/tutorial/tutorial_24_max_metric.png
+
+The *max__measurement_flag* metric was created when we checked the box under **Max** and 
+next to the *measurement_flag* field, indicating that this field was numeric and that 
+we wanted to find its maximum value when grouped by specific fields.
+
+In our case, *measurement_flag* is the value of the measurement taken, which clearly 
+depends on the type of measurement (the researchers recorded different values for 
+precipitation and temperature). Therefore, we must filter our query only on records 
+where the *weather_description* is equal to "Maximum temperature", which we do in 
+the **Filters** section at the bottom of the explore:
+
+.. image:: images/tutorial/tutorial_25_max_temp_filter.png
+
+Finally, since we only care about the top 10 measurements, we limit our results to 
+10 records using the *Row limit* option under the **Options** header:
+
+.. image:: images/tutorial/tutorial_26_row_limit.png
+
+We click **Query** and get the following results:
+
+.. image:: images/tutorial/tutorial_27_top_10_max_temps.png
+
+In this dataset, the maximum temperature is recorded in tenths of a degree Celsius. 
+The top value of 1370, measured in the middle of Nevada, is equal to 137 C, or roughly 
+278 degrees F. It’s unlikely this value was correctly recorded. We’ve already been able 
+to investigate some outliers with Superset, but this just scratches the surface of what 
+we can do.
+
+You may want to do a couple more things with this measure:
+
+* The default formatting shows values like 1.37k, which may be difficult for some 
+  users to read. It’s likely you may want to see the full, comma-separated value. 
+  You can change the formatting of any measure by editing its config (*Edit Table 
+  Config > List Sql Metric > Edit Metric > D3Format*)
+* Moreover, you may want to see the temperature measurements in plain degrees C, 
+  not tenths of a degree. Or you may want to convert the temperature to degrees 
+  Fahrenheit. You can change the SQL that gets executed against the database, baking 
+  the logic into the measure itself (*Edit Table Config > List Sql Metric > Edit 
+  Metric > SQL Expression*)
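+
+For instance, the Fahrenheit conversion you might bake into such a SQL 
+expression is simple arithmetic. A Python sketch of it, assuming raw 
+values in tenths of a degree Celsius as in this dataset:

```python
def tenths_c_to_fahrenheit(value):
    """Convert tenths of a degree Celsius to degrees Fahrenheit."""
    celsius = value / 10.0
    return celsius * 9.0 / 5.0 + 32.0

# The suspicious Nevada reading: 1370 tenths = 137 C, roughly 278.6 F
print(tenths_c_to_fahrenheit(1370))
```

+The corresponding SQL Expression would be along the lines of 
+``(measurement_flag / 10) * 9 / 5 + 32``, adjusted to your column 
+names and SQL dialect.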
+
+For now, though, let’s create a better visualization of these data and add it to 
+a dashboard.
+
+We change the Chart Type to "Distribution - Bar Chart":
+
+.. image:: images/tutorial/tutorial_28_bar_chart.png
+
+Our filter on Maximum temperature measurements was retained, but the query and 
+formatting options are dependent on the chart type, so you’ll have to set the 
+values again:
+
+.. image:: images/tutorial/tutorial_29_bar_chart_series_metrics.png
+
+You should note the extensive formatting options for this chart: the ability to 
+set axis labels, margins, ticks, etc. To make the data presentable to a broad 
+audience, you’ll want to apply many of these to slices that end up in dashboards. 
+For now, though, we run our query and get the following chart:
+
+.. image:: images/tutorial/tutorial_30_bar_chart_results.png
+   :scale: 70%
+
+Creating a slice and dashboard
+------------------------------
+
+This view might be interesting to researchers, so let’s save it. In Superset, 
+a saved query is called a **Slice**. 
+
+To create a slice, click the **Save as** button near the top-left of the 
+explore:
+
+.. image:: images/tutorial/tutorial_19_click_query.png
+
+A popup should appear, asking you to name the slice, and optionally add it to a 
+dashboard. Since we haven’t yet created any dashboards, we can create one and 
+immediately add our slice to it. Let’s do it:
+
+.. image:: images/tutorial/tutorial_31_save_slice_to_dashboard.png
+   :scale: 70%
+
+Click Save, which will direct you back to your original query. We see that 
+our slice and dashboard were successfully created:
+
+.. image:: images/tutorial/tutorial_32_save_slice_confirmation.png
+   :scale: 70%
+
+Let’s check out our new dashboard. We click on the **Dashboards** menu:
+
+.. image:: images/tutorial/tutorial_33_dashboard.png
+
+and find the dashboard we just created:
+
+.. image:: images/tutorial/tutorial_34_weather_dashboard.png
+
+Things seem to have worked - our slice is here!
+
+.. image:: images/tutorial/tutorial_35_slice_on_dashboard.png
+   :scale: 70%
+
+But it’s a bit smaller than we might like. Luckily, you can adjust the size 
+of slices in a dashboard by clicking, holding and dragging the bottom-right 
+corner to your desired dimensions:
+
+.. image:: images/tutorial/tutorial_36_adjust_dimensions.gif
+   :scale: 120%
+
+After adjusting the size, you’ll be asked to click on the icon near the 
+top-right of the dashboard to save the new configuration.
+
+Congrats! You’ve successfully linked, analyzed, and visualized data in Superset. 
+There are a wealth of other table configuration and visualization options, so 
+please start exploring and creating slices and dashboards of your own.
diff --git a/_sources/tutorial.txt b/_sources/tutorial.txt
new file mode 100644
index 0000000..695057c
--- /dev/null
+++ b/_sources/tutorial.txt
@@ -0,0 +1,308 @@
+Tutorial for Superset Administrators
+====================================
+
+This tutorial targets a Superset administrator: someone configuring Superset 
+for an organization on behalf of users. We'll show you how to connect Superset 
+to a new database and configure a table in that database for analysis. You'll 
+also explore the data you've exposed and add a visualization to a dashboard 
+so that you get a feel for the end-to-end user experience.
+
+Connecting to a new database
+----------------------------
+
+We assume you already have a database configured and can connect to it from the 
+instance on which you’re running Superset. If you’re just testing Superset and 
+want to explore sample data, you can load some 
+`sample PostgreSQL datasets <https://wiki.postgresql.org/wiki/Sample_Databases>`_
+into a fresh DB, or configure the 
+`example weather data <https://github.com/dylburger/noaa-ghcn-weather-data>`_
+we use here.
+
+Under the **Sources** menu, select the *Databases* option:
+
+.. image:: _static/img/tutorial/tutorial_01_sources_database.png
+   :scale: 70%
+
+On the resulting page, click on the green plus sign, near the top left:
+
+.. image:: _static/img/tutorial/tutorial_02_add_database.png
+   :scale: 70%
+
+You can configure a number of advanced options on this page, but for 
+this walkthrough, you’ll only need to do **two things**:
+
+1. Name your database connection:
+
+.. image:: _static/img/tutorial/tutorial_03_database_name.png
+   :scale: 70%
+
+2. Provide the SQLAlchemy Connection URI and test the connection:
+
+.. image:: _static/img/tutorial/tutorial_04_sqlalchemy_connection_string.png
+   :scale: 70%
+
+This example shows the connection string for our test weather database. 
+As noted in the text below the URI, you should refer to the SQLAlchemy 
+documentation on 
+`creating new connection URIs <http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#database-urls>`_
+for your target database.
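+
+If you want to sanity-check a URI outside Superset first, you can 
+exercise it with SQLAlchemy directly. A sketch (the PostgreSQL URI in 
+the comment is a placeholder; the runnable line uses an in-memory 
+SQLite URI so it needs no server):

```python
from sqlalchemy import create_engine, text

# General shape: dialect+driver://username:password@host:port/database
# e.g. "postgresql://user:password@localhost:5432/weather" (placeholders)
engine = create_engine("sqlite://")  # in-memory SQLite for the demo

# Roughly what "Test Connection" does: open a connection, run a trivial query
with engine.connect() as conn:
    result = conn.execute(text("SELECT 1")).scalar()
print(result)  # 1
```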
+
+Click the **Test Connection** button to confirm things work end to end. 
+Once Superset can successfully connect and authenticate, you should see 
+a popup like this:
+
+.. image:: _static/img/tutorial/tutorial_05_connection_popup.png
+   :scale: 50%
+
+Moreover, you should also see the list of tables Superset can read from 
+the schema you’re connected to, at the bottom of the page:
+
+.. image:: _static/img/tutorial/tutorial_06_list_of_tables.png
+   :scale: 70%
+
+If the connection looks good, save the configuration by clicking the **Save** 
+button at the bottom of the page:
+
+.. image:: _static/img/tutorial/tutorial_07_save_button.png
+   :scale: 70%
+
+Adding a new table
+------------------
+
+Now that you’ve configured a database, you’ll need to add specific tables 
+to Superset that you’d like to query.
+
+Under the **Sources** menu, select the *Tables* option:
+
+.. image:: _static/img/tutorial/tutorial_08_sources_tables.png
+   :scale: 70%
+
+On the resulting page, click on the green plus sign, near the top left:
+
+.. image:: _static/img/tutorial/tutorial_09_add_new_table.png
+   :scale: 70%
+
+You only need a few pieces of information to add a new table to Superset:
+
+* The name of the table
+
+.. image:: _static/img/tutorial/tutorial_10_table_name.png
+   :scale: 70%
+
+* The target database from the **Database** drop-down menu (i.e. the one 
+  you just added above)
+
+.. image:: _static/img/tutorial/tutorial_11_choose_db.png
+   :scale: 70%
+
+* Optionally, the database schema. If the table exists in the “default” schema 
+  (e.g. the *public* schema in PostgreSQL or Redshift), you can leave the schema 
+  field blank.
+
+Click on the **Save** button to save the configuration:
+
+.. image:: _static/img/tutorial/tutorial_07_save_button.png
+   :scale: 70%
+
+When redirected back to the list of tables, you should see a message indicating 
+that your table was created:
+
+.. image:: _static/img/tutorial/tutorial_12_table_creation_success_msg.png
+   :scale: 70%
+
+This message also directs you to edit the table configuration. We’ll edit a limited 
+portion of the configuration now - just to get you started - and leave the rest for 
+a more advanced tutorial.
+
+Click on the edit button next to the table you’ve created:
+
+.. image:: _static/img/tutorial/tutorial_13_edit_table_config.png
+   :scale: 70%
+
+On the resulting page, click on the **List Table Column** tab. Here, you’ll define the 
+way you can use specific columns of your table when exploring your data. We’ll run 
+through these options to describe their purpose:
+
+* If you want users to group metrics by a specific field, mark it as **Groupable**.
+* If you need to filter on a specific field, mark it as **Filterable**.
+* Is this field something you’d like to get the distinct count of? Check the **Count 
+  Distinct** box.
+* Is this a metric you want to sum, or get basic summary statistics for? The **Sum**, 
+  **Min**, and **Max** columns will help.
+* The **is temporal** field should be checked for any date or time fields. We’ll cover 
+  how this manifests itself in analyses in a moment.
+
+Here’s how we’ve configured fields for the weather data. Even for measures like the 
+weather measurements (precipitation, snowfall, etc.), it’s ideal to group and filter 
+by these values:
+
+.. image:: _static/img/tutorial/tutorial_14_field_config.png
+
+As with the configurations above, click the **Save** button to save these settings.
+
+Exploring your data
+-------------------
+
+To start exploring your data, simply click on the table name you just created in 
+the list of available tables:
+
+.. image:: _static/img/tutorial/tutorial_15_click_table_name.png
+
+By default, you’ll be presented with a Table View:
+
+.. image:: _static/img/tutorial/tutorial_16_datasource_chart_type.png
+
+Let’s walk through a basic query to get the count of all records in our table. 
+First, we’ll need to change the **Since** filter to capture the range of our data. 
+You can use simple phrases to apply these filters, like "3 years ago":
+
+.. image:: _static/img/tutorial/tutorial_17_choose_time_range.png
+
+The upper limit for time, the **Until** filter, defaults to "now", which may or may 
+not be what you want.
+
+Look for the Metrics section under the **GROUP BY** header, and start typing "Count" 
+- you’ll see a list of metrics matching what you type:
+
+.. image:: _static/img/tutorial/tutorial_18_choose_metric.png
+
+Select the *COUNT(\*)* metric, then click the green **Query** button near the top 
+of the explore:
+
+.. image:: _static/img/tutorial/tutorial_19_click_query.png
+
+You’ll see your results in the table:
+
+.. image:: _static/img/tutorial/tutorial_20_count_star_result.png
+
+Let’s group this by the *weather_description* field to get the count of records by 
+the type of weather recorded by adding it to the *Group by* section:
+
+.. image:: _static/img/tutorial/tutorial_21_group_by.png
+
+and run the query:
+
+.. image:: _static/img/tutorial/tutorial_22_group_by_result.png
+
+Let’s find a more useful data point: the top 10 times and places that recorded the 
+highest temperature in 2015.
+
+We replace *weather_description* with *latitude*, *longitude* and *measurement_date* in the 
+*Group by* section:
+
+.. image:: _static/img/tutorial/tutorial_23_group_by_more_dimensions.png
+
+And replace *COUNT(\*)* with *max__measurement_flag*:
+
+.. image:: _static/img/tutorial/tutorial_24_max_metric.png
+
+The *max__measurement_flag* metric was created when we checked the box under **Max** and 
+next to the *measurement_flag* field, indicating that this field was numeric and that 
+we wanted to find its maximum value when grouped by specific fields.
+
+In our case, *measurement_flag* is the value of the measurement taken, which clearly 
+depends on the type of measurement (the researchers recorded different values for 
+precipitation and temperature). Therefore, we must filter our query only on records 
+where the *weather_description* is equal to "Maximum temperature", which we do in 
+the **Filters** section at the bottom of the explore:
+
+.. image:: _static/img/tutorial/tutorial_25_max_temp_filter.png
+
+Finally, since we only care about the top 10 measurements, we limit our results to 
+10 records using the *Row limit* option under the **Options** header:
+
+.. image:: _static/img/tutorial/tutorial_26_row_limit.png
+
+We click **Query** and get the following results:
+
+.. image:: _static/img/tutorial/tutorial_27_top_10_max_temps.png
+
+In this dataset, the maximum temperature is recorded in tenths of a degree Celsius. 
+The top value of 1370, measured in the middle of Nevada, is equal to 137 C, or roughly 
+278 degrees F. It’s unlikely this value was correctly recorded. We’ve already been able 
+to investigate some outliers with Superset, but this just scratches the surface of what 
+we can do.
+
+You may want to do a couple more things with this measure:
+
+* The default formatting shows values like 1.37k, which may be difficult for some 
+  users to read. It’s likely you may want to see the full, comma-separated value. 
+  You can change the formatting of any measure by editing its config (*Edit Table 
+  Config > List Sql Metric > Edit Metric > D3Format*)
+* Moreover, you may want to see the temperature measurements in plain degrees C, 
+  not tenths of a degree. Or you may want to convert the temperature to degrees 
+  Fahrenheit. You can change the SQL that gets executed against the database, baking 
+  the logic into the measure itself (*Edit Table Config > List Sql Metric > Edit 
+  Metric > SQL Expression*)
+
+For now, though, let’s create a better visualization of these data and add it to 
+a dashboard.
+
+We change the Chart Type to "Distribution - Bar Chart":
+
+.. image:: _static/img/tutorial/tutorial_28_bar_chart.png
+
+Our filter on Maximum temperature measurements was retained, but the query and 
+formatting options are dependent on the chart type, so you’ll have to set the 
+values again:
+
+.. image:: _static/img/tutorial/tutorial_29_bar_chart_series_metrics.png
+
+You should note the extensive formatting options for this chart: the ability to 
+set axis labels, margins, ticks, etc. To make the data presentable to a broad 
+audience, you’ll want to apply many of these to slices that end up in dashboards. 
+For now, though, we run our query and get the following chart:
+
+.. image:: _static/img/tutorial/tutorial_30_bar_chart_results.png
+   :scale: 70%
+
+Creating a slice and dashboard
+------------------------------
+
+This view might be interesting to researchers, so let’s save it. In Superset, 
+a saved query is called a **Slice**. 
+
+To create a slice, click the **Save as** button near the top-left of the 
+explore:
+
+.. image:: _static/img/tutorial/tutorial_19_click_query.png
+
+A popup should appear, asking you to name the slice, and optionally add it to a 
+dashboard. Since we haven’t yet created any dashboards, we can create one and 
+immediately add our slice to it. Let’s do it:
+
+.. image:: _static/img/tutorial/tutorial_31_save_slice_to_dashboard.png
+   :scale: 70%
+
+Click Save, which will direct you back to your original query. We see that 
+our slice and dashboard were successfully created:
+
+.. image:: _static/img/tutorial/tutorial_32_save_slice_confirmation.png
+   :scale: 70%
+
+Let’s check out our new dashboard. We click on the **Dashboards** menu:
+
+.. image:: _static/img/tutorial/tutorial_33_dashboard.png
+
+and find the dashboard we just created:
+
+.. image:: _static/img/tutorial/tutorial_34_weather_dashboard.png
+
+Things seem to have worked - our slice is here!
+
+.. image:: _static/img/tutorial/tutorial_35_slice_on_dashboard.png
+   :scale: 70%
+
+But it’s a bit smaller than we might like. Luckily, you can adjust the size 
+of slices in a dashboard by clicking, holding and dragging the bottom-right 
+corner to your desired dimensions:
+
+.. image:: _static/img/tutorial/tutorial_36_adjust_dimensions.gif
+   :scale: 120%
+
+After adjusting the size, you’ll be asked to click on the icon near the 
+top-right of the dashboard to save the new configuration.
+
+Congrats! You’ve successfully linked, analyzed, and visualized data in Superset. 
+There are a wealth of other table configuration and visualization options, so 
+please start exploring and creating slices and dashboards of your own.
diff --git a/_sources/tutorials.rst.txt b/_sources/tutorials.rst.txt
new file mode 100644
index 0000000..9edd148
--- /dev/null
+++ b/_sources/tutorials.rst.txt
@@ -0,0 +1,25 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Tutorials
+---------
+
+.. toctree::
+    :maxdepth: 2
+
+    admintutorial
+    usertutorial
diff --git a/_sources/usertutorial.rst.txt b/_sources/usertutorial.rst.txt
new file mode 100644
index 0000000..9c69262
--- /dev/null
+++ b/_sources/usertutorial.rst.txt
@@ -0,0 +1,507 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Exploring data with Apache Superset
+===================================
+
+In this tutorial, we will introduce key concepts in Apache Superset
+through the exploration of a real dataset which contains the flights
+made by employees of a UK-based organization in 2011. The following
+information about each flight is given:
+
+-  The traveller's department. For the purposes of this tutorial the
+   departments have been renamed Orange, Yellow and Purple.
+-  The cost of the ticket.
+-  The travel class (Economy, Premium Economy, Business and First
+   Class).
+-  Whether the ticket was a single or return.
+-  The date of travel.
+-  Information about the origin and destination.
+-  The distance between the origin and destination, in kilometers (km).
+
+Enabling Upload a CSV Functionality
+-----------------------------------
+
+You may need to enable the functionality to upload a CSV to your
+database. The following section explains how to enable this
+functionality for the examples database.
+
+In the top menu, select :menuselection:`Sources --> Databases`. Find the
+:guilabel:`examples` database in the list and select the edit record
+button.
+
+.. image:: _static/images/usertutorial/edit-record.png
+
+Within the :guilabel:`Edit Database` page, check the
+:guilabel:`Allow Csv Upload` checkbox.
+
+Finally, save by selecting :guilabel:`Save` at the bottom of the page.
+
+Obtaining and loading the data
+------------------------------
+
+Download the data for this tutorial to your computer from
+`Github <https://raw.githubusercontent.com/apache-superset/examples-data/master/tutorial_flights.csv>`__.
+
+In the top menu, select :menuselection:`Sources --> Upload a CSV`.
+
+.. image:: _static/images/usertutorial/upload_a_csv.png
+
+Then, enter the :guilabel:`Table name` as `tutorial_flights`
+and select the :guilabel:`CSV file` from your computer.
+
+.. image:: _static/images/usertutorial/csv_to_database_configuration.png
+
+Next enter the text `Travel Date` into the
+:guilabel:`Parse Dates` field.
+
+.. image:: _static/images/usertutorial/parse_dates_column.png
+
+Leaving all the other options in their default settings, select
+:guilabel:`Save` at the bottom of the page.
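+
+Conceptually, the upload parses the CSV much as pandas would; the 
+sketch below shows what the :guilabel:`Parse Dates` setting does, using 
+a tiny stand-in for the file (column names assumed from the tutorial 
+dataset):

```python
import io
import pandas as pd

# A tiny stand-in for tutorial_flights.csv (illustrative rows only)
csv = io.StringIO(
    "Department,Cost,Travel Class,Travel Date\n"
    "Orange,523.50,Economy,2011-02-14\n"
    "Purple,1024.00,Business,2011-03-02\n"
)

# "Parse Dates: Travel Date" tells the loader to read that column as timestamps
df = pd.read_csv(csv, parse_dates=["Travel Date"])
print(df.dtypes["Travel Date"])  # datetime64[ns]
```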
+
+Table Visualization
+-------------------
+
+In this section, we’ll create our first visualization: a table to show
+the number of flights and cost per travel class.
+
+To create a new chart, select :menuselection:`New --> Chart` in the top menu.
+
+.. image:: _static/images/usertutorial/add_new_chart.png
+
+Once in the :guilabel:`Create a new chart` dialogue, select
+:guilabel:`tutorial_flights` from the :guilabel:`Chose a datasource`
+dropdown.
+
+.. image:: _static/images/usertutorial/chose_a_datasource.png
+
+Next, select the visualization type as :guilabel:`Table`.
+
+.. image:: _static/images/usertutorial/select_table_visualization_type.png
+
+Then, select :guilabel:`Create new chart` to go into the chart view.
+
+By default, Apache Superset only shows the last week of data: in our
+example, we want to look at all the data in the dataset. No problem -
+within the :guilabel:`Time` section, remove the filter on
+:guilabel:`Time range` by selecting :guilabel:`Last week`, then 
+changing the selection to :guilabel:`No filter`, with a final 
+:guilabel:`OK` to confirm your selection.
+
+.. image:: _static/images/usertutorial/no_filter_on_time_filter.png
+
+Now, we want to specify the rows in our table by using the
+:guilabel:`Group by` option. Since in this example, we want to
+understand different Travel Classes, we select :guilabel:`Travel Class`
+in this menu.
+
+Next, we can specify the metrics we would like to see in our table with
+the :guilabel:`Metrics` option. :guilabel:`Count(*)`, which represents the number of
+rows in the table (in this case corresponding to the number of flights
+since we have a row per flight), is already there. To add cost, within
+:guilabel:`Metrics`, select :guilabel:`Cost` and :guilabel:`Save`, 
+keeping the default aggregation option, which is to sum the column.
+
+.. image:: _static/images/usertutorial/sum_cost_column.png
+
+Finally, select :guilabel:`Run Query` to see the results of the table.
+
+.. image:: _static/images/usertutorial/tutorial_table.png
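+
+The aggregation behind this table is just a count and a sum per travel 
+class. A pandas sketch of the same query, over made-up rows:

```python
import pandas as pd

# Made-up flight rows for illustration
df = pd.DataFrame({
    "Travel Class": ["Economy", "Economy", "Business"],
    "Cost": [120.0, 80.0, 900.0],
})

# Group by Travel Class with COUNT(*) and SUM(Cost), as in the chart
table = df.groupby("Travel Class").agg(
    count=("Cost", "size"),
    sum_cost=("Cost", "sum"),
)
print(table)
```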
+
+Congratulations, you have created your first visualization in Apache
+Superset!
+
+To save the visualization, click on :guilabel:`Save` in the top left of
+the screen. Select the :guilabel:`Save as` option, and enter the chart
+name as Tutorial Table (you will be able to find it again through the
+:guilabel:`Charts` screen, accessible in the top menu). Similarly,
+select :guilabel:`Add to new dashboard` and enter `Tutorial Dashboard`.
+Finally, select :guilabel:`Save & go to dashboard`.
+
+.. image:: _static/images/usertutorial/save_tutorial_table.png
+
+Dashboard basics
+----------------
+
+Next, we are going to explore the dashboard interface. If you’ve
+followed the previous section, you should already have the dashboard
+open. Otherwise, you can navigate to the dashboard by selecting
+:guilabel:`Dashboards` on the top menu, then :guilabel:`Tutorial dashboard`
+from the list of dashboards.
+
+On this dashboard you should see the table you created in the previous
+section. Select :guilabel:`Edit dashboard` and then hover over the
+table. By selecting the bottom right hand corner of the table (the
+cursor will change too), you can resize it by dragging and dropping.
+
+.. image:: _static/images/usertutorial/resize_tutorial_table_on_dashboard.png
+
+Finally, save your changes by selecting :guilabel:`Save changes` in the
+top right.
+
+Pivot Table
+-----------
+
+In this section, we will extend our analysis using a more complex
+visualization, Pivot Table. By the end of this section, you will have
+created a table that shows the monthly spend on flights for the first
+six months, by department, by travel class.
+
+As before, create a new visualization by selecting
+:menuselection:`New --> Chart` on the top menu. Choose tutorial_flights
+again as a datasource, then click on the visualization type to get to
+the visualization menu. Select the :guilabel:`Pivot Table` visualization
+(you can filter by entering text in the search box) and then
+:guilabel:`Create a new chart`.
+
+In the :guilabel:`Time` section, keep the Time Column as Travel Date
+(this is selected automatically as we only have one time column in our
+dataset). Then set the :guilabel:`Time Grain` to month, as daily data 
+would be too granular to show patterns. Next, set the time range to 
+the first six months of 2011: click on Last week in the 
+:guilabel:`Time Range` section, then under :guilabel:`Custom` select a 
+:guilabel:`Start / end` of 1\ :sup:`st` January 2011 and 30\ :sup:`th` 
+June 2011 respectively, either by entering the dates directly or by 
+using the calendar widget (by selecting the month name and then the 
+year, you can move more quickly to faraway dates).
+
+.. image:: _static/images/usertutorial/select_dates_pivot_table.png
+
+Next, within the :guilabel:`Query` section, remove the default COUNT(*)
+and add Cost, keeping the default SUM aggregate. Note that Apache
+Superset will indicate the type of the metric by the symbol on the left
+hand column of the list (ABC for string, # for number, a clock face for
+time, etc.).
+
+In :guilabel:`Group by` select :guilabel:`Time`: this will automatically
+use the Time Column and Time Grain selections we defined in the Time
+section.
+
+Within :guilabel:`Columns`, select first :guilabel:`Department` and then
+:guilabel:`Travel Class`. All set – let’s :guilabel:`Run Query` to see
+some data!
+
+.. image:: _static/images/usertutorial/tutorial_pivot_table.png
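+
+The same reshaping can be sketched in pandas as a pivot table over 
+hypothetical rows:

```python
import pandas as pd

# Hypothetical flight rows for illustration
df = pd.DataFrame({
    "Travel Date": pd.to_datetime(["2011-01-05", "2011-01-20", "2011-02-11"]),
    "Department": ["Orange", "Orange", "Purple"],
    "Travel Class": ["Economy", "Economy", "Business"],
    "Cost": [100.0, 150.0, 700.0],
})

# Month in the rows, Department / Travel Class in the columns, SUM(Cost) values
pivot = df.pivot_table(
    index=df["Travel Date"].dt.to_period("M"),
    columns=["Department", "Travel Class"],
    values="Cost",
    aggfunc="sum",
)
print(pivot)
```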
+
+You should see months in the rows and Department and Travel Class in the
+columns. To get this in our dashboard, select :guilabel:`Save`, name the
+chart Tutorial Pivot and using
+:guilabel:`Add chart to existing dashboard` select
+:guilabel:`Tutorial Dashboard`, and then finally
+:guilabel:`Save & go to dashboard`.
+
+Line Chart
+----------
+
+In this section, we are going to create a line chart to understand the
+average price of a ticket by month across the entire dataset. As before,
+select :menuselection:`New --> Chart`, and then
+:guilabel:`tutorial_flights` as the datasource and
+:guilabel:`Line Chart` as the visualization type.
+
+In the Time section, as before, keep the :guilabel:`Time Column` as
+Travel Date and :guilabel:`Time Grain` as month but this time for the
+:guilabel:`Time range` select :guilabel:`No filter`, as we want to look 
+at the entire dataset.
+
+Within :guilabel:`Metrics`, remove the default :guilabel:`COUNT(*)` and
+add :guilabel:`Cost`. This time, we want to change how this column is
+aggregated to show the mean value: we can do this by selecting
+:guilabel:`AVG` in the :guilabel:`aggregate` dropdown.
+
+.. image:: _static/images/usertutorial/average_aggregate_for_cost.png
+
+Next, select :guilabel:`Run Query` to show the data on the chart.
+
+How does this look? Well, we can see that the average cost goes up in
+December. However, perhaps it doesn’t make sense to combine both single
+and return tickets, but rather show two separate lines for each ticket
+type.
+
+Let’s do this by selecting :guilabel:`Ticket Single or Return` in the 
+:guilabel:`Group by` box, and then selecting :guilabel:`Run Query` again.
+Nice! We can see that on average single tickets are cheaper than returns
+and that the big spike in December is caused by return tickets.
+
+Our chart is looking pretty good already, but let’s customize some more
+by going to the :guilabel:`Customize` tab on the left hand pane. Within
+this pane, try changing the :guilabel:`Color Scheme`, removing the range
+filter by selecting No in the :guilabel:`Show Range Filter` drop down
+and adding some labels using :guilabel:`X Axis Label` and
+:guilabel:`Y Axis Label`.
+
+.. image:: _static/images/usertutorial/tutorial_line_chart.png
+
+Once you’re done, :guilabel:`Save` as Tutorial Line Chart, use
+:guilabel:`Add chart to existing dashboard` to add this chart to the 
+previous ones on the Tutorial Dashboard, and then 
+:guilabel:`Save & go to dashboard`.
+
+Markup
+------
+
+In this section, we will add some text to our dashboard. If you’re not 
+there already, you can navigate to the dashboard by selecting 
+:guilabel:`Dashboards` on the top menu, then 
+:guilabel:`Tutorial dashboard` from the list of dashboards. Get into 
+edit mode by selecting :guilabel:`Edit dashboard`.
+
+Within the Insert components pane, drag and drop a :guilabel:`Markdown`
+box on the dashboard. Look for the blue lines which indicate the anchor
+where the box will go.
+
+.. image:: _static/images/usertutorial/blue_bar_insert_component.png
+
+Now, to edit the text, select the box. You can enter text, in markdown
+format (see `this Markdown
+Cheatsheet <https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet>`__
+for more information about this format). You can toggle between
+:guilabel:`Edit` and :guilabel:`Preview` using the menu on the top of
+the box.
+
+.. image:: _static/images/usertutorial/markdown.png
+
+To exit, select any other part of the dashboard. Finally, don’t forget
+to save your changes using :guilabel:`Save changes`.
+
+Filter box
+----------
+
+In this section, you will learn how to add a filter to your dashboard.
+Specifically, we will create a filter that allows us to look at those
+flights that depart from a particular country.
+
+A filter box visualization can be created like any other visualization by
+selecting :menuselection:`New --> Chart`, then choosing
+:guilabel:`tutorial_flights` as the datasource and
+:guilabel:`Filter Box` as the visualization type.
+
+First of all, in the :guilabel:`Time` section, remove the filter from
+the :guilabel:`Time
+range` selection by selecting :guilabel:`No filter`.
+
+Next, in :guilabel:`Filters Configurations` first add a new filter by
+selecting the plus sign and then edit the newly created filter by
+selecting the pencil icon.
+
+For our use case, it makes the most sense to present a list of countries
+in alphabetical order. First, enter the column as
+:guilabel:`Origin Country`, keep all other options the same, and then
+select :guilabel:`Run Query`. This gives us a preview of our filter.
+
+Next, remove the date filter by unchecking the :guilabel:`Date Filter`
+checkbox.
+
+.. image:: _static/images/usertutorial/filter_on_origin_country.png
+
+Finally, select :guilabel:`Save`, name the chart Tutorial Filter, add
+the chart to our existing Tutorial Dashboard and then select
+:guilabel:`Save & go to dashboard`. Once on the dashboard, try using the
+filter to show only those flights that departed from the United Kingdom.
+You will see that the filter is applied to all of the other
+visualizations on the dashboard.
+
+Publishing your dashboard
+-------------------------
+
+If you have followed all of the steps outlined in the previous sections,
+you should have a dashboard that looks like the one below. If you would
+like, you can rearrange the elements of the dashboard by selecting
+:guilabel:`Edit dashboard` and dragging and dropping.
+
+If you would like to make your dashboard available to other users,
+simply select :guilabel:`Draft` next to the title of your dashboard on
+the top left to change your dashboard to be in :guilabel:`Published`
+state. You can also favorite this dashboard by selecting the star.
+
+.. image:: _static/images/usertutorial/publish_dashboard.png
+
+Taking your dashboard further
+-----------------------------
+
+In the following sections, we will look at more advanced Apache Superset
+topics.
+
+Annotations
+-----------
+
+Annotations allow you to add additional context to your chart. In this
+section, we will add an annotation to the Tutorial Line Chart we made in
+a previous section. Specifically, we will add the dates when some
+flights were cancelled by the UK's Civil Aviation Authority in response
+to the eruption of the Grímsvötn volcano in Iceland (23-25 May 2011).
+
+First, add an annotation layer by navigating to
+:menuselection:`Manage --> Annotation Layers` and selecting the green
+plus sign to add a new record. Enter the name Volcanic Eruptions and
+save. We can use this layer to refer to a number of different
+annotations.
+
+Next, add an annotation by navigating to
+:menuselection:`Manage --> Annotations` and then create a new annotation
+by selecting the green plus sign. Then, select the
+:guilabel:`Volcanic Eruptions` layer, add a short description (Grímsvötn),
+and enter the eruption dates (23-25 May 2011) before finally saving.
+
+.. image:: _static/images/usertutorial/edit_annotation.png
+
+Then, navigate to the line chart by going to :guilabel:`Charts` then
+selecting :guilabel:`Tutorial
+Line Chart` from the list. Next, go to the
+:guilabel:`Annotations and Layers` section and select
+:guilabel:`Add Annotation Layer`. Within this dialogue:
+
+- name the layer as `Volcanic Eruptions`
+- change the :guilabel:`Annotation Layer Type` to :guilabel:`Event`
+- set the :guilabel:`Annotation Source` as :guilabel:`Superset annotation`
+- specify the :guilabel:`Annotation Layer` as :guilabel:`Volcanic Eruptions`
+
+.. image:: _static/images/usertutorial/annotation_settings.png
+
+Select :guilabel:`Apply` to see your annotation shown on the chart.
+
+.. image:: _static/images/usertutorial/annotation.png
+
+If you wish, you can change how your annotation looks by changing the
+settings in the :guilabel:`Display configuration` section. Otherwise,
+select :guilabel:`OK` and finally :guilabel:`Save` to save your chart.
+If you keep the default selection to overwrite the chart, your
+annotation will be saved to the chart and also appear automatically in
+the Tutorial Dashboard.
+
+Advanced Analytics
+------------------
+
+In this section, we are going to explore the Advanced Analytics feature
+of Apache Superset that allows you to apply additional transformations
+to your data. The three types of transformation are:
+
+Moving Average
+  Select a rolling window [#f1]_, and then apply a calculation on it (mean,
+  sum or standard deviation). The fourth option, cumsum, calculates the
+  cumulative sum of the series [#f2]_.
+
+Time Comparison
+  Shift your data in time and, optionally, apply a calculation to compare the
+  shifted data with your actual data (e.g. calculate the absolute difference
+  between the two).
+
+Python Functions
+  Resample your data using one of a variety of methods [#f3]_.
+
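As the footnotes suggest, these transformations correspond to pandas operations. The sketch below contrasts a rolling calculation with cumsum on a toy series (illustrative values only, not the tutorial_flights data):

```python
import pandas as pd

# A toy series standing in for a daily metric (illustrative values only).
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

# A rolling calculation operates on a fixed-size window; with window=3
# the first two values are NaN because the window is not yet full.
rolling_sum = s.rolling(window=3).sum()

# cumsum instead accumulates over the entire series from the start.
cumulative = s.cumsum()

print(rolling_sum.tolist())  # [nan, nan, 6.0, 9.0, 12.0]
print(cumulative.tolist())   # [1.0, 3.0, 6.0, 10.0, 15.0]
```

Note the difference in shape: the rolling result keeps a bounded memory of the last few periods, while the cumulative sum never forgets earlier values.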
+Setting up the base chart
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this section, we're going to set up a base chart to which we can then
+apply the different Advanced Analytics features. Start off by
+creating a new chart using the same :guilabel:`tutorial_flights`
+datasource and the :guilabel:`Line Chart` visualization type. Within the
+Time section, set the :guilabel:`Time Range` from 1\ :sup:`st` October
+2011 to 31\ :sup:`st` October 2011.
+
+Next, in the query section, change the :guilabel:`Metrics` to the sum of
+:guilabel:`Cost`. Select :guilabel:`Run Query` to show the chart. You
+should see the total cost for each day in October 2011.
+
+.. image:: _static/images/usertutorial/advanced_analytics_base.png
+
+Finally, save the visualization as Tutorial Advanced Analytics Base,
+adding it to the Tutorial Dashboard.
+
+Rolling mean
+~~~~~~~~~~~~
+
+There is quite a lot of variation in the data, which makes it difficult
+to identify any trend. One approach we can take is to show instead a
+rolling average of the time series. To do this, in the
+:guilabel:`Moving Average` subsection of :guilabel:`Advanced Analytics`,
+select mean in the :guilabel:`Rolling` box and enter 7 into both
+:guilabel:`Periods` and :guilabel:`Min Periods`. The period is the
+length of the rolling window expressed as a multiple of the
+:guilabel:`Time Grain`. In our example, the :guilabel:`Time Grain` is
+day, so the rolling window is 7 days, such that on the 7th October 2011
+the value shown corresponds to the first seven days of October 2011.
+Lastly, by specifying :guilabel:`Min Periods` as 7, we ensure that our
+mean is always calculated over 7 days and we avoid any ramp-up period.
+
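Under the hood this maps onto pandas' rolling method (cited in the footnotes). A minimal sketch, using made-up daily costs rather than the real data:

```python
import pandas as pd

# Toy daily costs for early October 2011 (illustrative values only).
costs = pd.Series([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0],
                  index=pd.date_range("2011-10-01", periods=8, freq="D"))

# window=7 and min_periods=7 mirror Periods=7 / Min Periods=7 in the UI:
# the first six days come out as NaN because fewer than 7 points exist,
# which is exactly the excluded ramp-up period.
rolling_mean = costs.rolling(window=7, min_periods=7).mean()

print(rolling_mean.iloc[6])  # 40.0 (mean of the first seven days)
```

Lowering `min_periods` would fill in the ramp-up days with means over fewer points instead of NaN.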
+After displaying the chart by selecting :guilabel:`Run Query`, you will
+see that the data is less variable and that the series starts later, as
+the ramp-up period is excluded.
+
+.. image:: _static/images/usertutorial/rolling_mean.png
+
+Save the chart as Tutorial Rolling Mean and add it to the Tutorial
+Dashboard.
+
+Time Comparison
+~~~~~~~~~~~~~~~
+
+In this section, we will compare values in our time series to the value
+a week before. Start off by opening the Tutorial Advanced Analytics Base
+chart: go to :guilabel:`Charts` in the top menu and then select the
+visualization name in the list (alternatively, find the chart in the
+Tutorial Dashboard and select Explore chart from the menu for that
+visualization).
+
+Next, in the :guilabel:`Time Comparison` subsection of
+:guilabel:`Advanced Analytics`, enter the :guilabel:`Time Shift` by
+typing in "minus 1 week" (note that this box accepts input in natural
+language). Select :guilabel:`Run Query` to see the new chart, which has
+an additional series with the same values, shifted a week back in time.
+
+.. image:: _static/images/usertutorial/time_comparison_two_series.png
+
+Then, change the :guilabel:`Calculation type` to
+:guilabel:`Absolute difference` and select :guilabel:`Run
+Query`. We can now see only one series again, this time showing the
+difference between the two series we saw previously.
+
+.. image:: _static/images/usertutorial/time_comparison_absolute_difference.png
+
+Save the chart as Tutorial Time Comparison and add it to the Tutorial
+Dashboard.
+
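The shift-and-compare step above can be sketched in pandas (a conceptual illustration with made-up values, not Superset's exact implementation):

```python
import pandas as pd

# Two weeks of toy daily costs (illustrative values only).
costs = pd.Series([100.0, 110.0, 90.0, 120.0, 105.0, 95.0, 115.0,
                   130.0, 100.0, 90.0, 125.0, 110.0, 100.0, 120.0],
                  index=pd.date_range("2011-10-01", periods=14, freq="D"))

# Shifting the index forward by 7 days makes each date carry the value
# observed one week earlier, like the "minus 1 week" Time Shift.
week_ago = costs.shift(freq="7D")

# Absolute difference, defined only where the two series overlap.
abs_diff = (costs - week_ago).abs().dropna()

print(abs_diff.iloc[0])  # 30.0, i.e. |130.0 - 100.0| on 2011-10-08
```

The first week drops out of the comparison, just as the chart only shows the difference where both series have values.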
+Resampling the data
+~~~~~~~~~~~~~~~~~~~
+
+In this section, we'll resample the data so that rather than having
+daily data we have weekly data. As in the previous section, reopen the
+Tutorial Advanced Analytics Base chart.
+
+Next, in the :guilabel:`Python Functions` subsection of
+:guilabel:`Advanced Analytics`, enter 7D (corresponding to seven days)
+as the :guilabel:`Rule` and median as the :guilabel:`Method`, then show
+the chart by selecting :guilabel:`Run Query`.
+
+.. image:: _static/images/usertutorial/resample.png
+
+Note that now we have a single data point every 7 days. In our case, the
+value shown corresponds to the median of the seven daily data
+points. For more information on the meaning of the various options in
+this section, refer to the `Pandas
+documentation <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html>`__.
+
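The Rule and Method settings map onto pandas' resample call. A small sketch with fourteen made-up daily values:

```python
import pandas as pd

# Fourteen toy daily values (illustrative only, not the tutorial data).
daily = pd.Series([float(v) for v in range(14)],
                  index=pd.date_range("2011-10-01", periods=14, freq="D"))

# Rule="7D" bins the data into 7-day buckets starting from the first
# timestamp; Method="median" takes the median within each bucket.
weekly = daily.resample("7D").median()

print(weekly.tolist())  # [3.0, 10.0]
```

Each output point is the median of one 7-day bucket, matching the single data point every 7 days seen on the chart.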
+Lastly, save your chart as Tutorial Resample and add it to the Tutorial
+Dashboard. Go to the Tutorial Dashboard to see the four charts side by
+side and compare their different outputs.
+
+.. rubric:: Footnotes
+
+.. [#f1] See the Pandas `rolling method documentation <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html>`_ for more information.
+.. [#f2] See the Pandas `cumsum method documentation <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.cumsum.html>`_ for more information.
+.. [#f3] See the Pandas `resample method documentation <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html>`_ for more information.
diff --git a/_sources/videos.rst.txt b/_sources/videos.rst.txt
new file mode 100644
index 0000000..ba41fd2
--- /dev/null
+++ b/_sources/videos.rst.txt
@@ -0,0 +1,22 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Videos
+======
+
+.. note ::
+    This section of the documentation has yet to be filled in.
diff --git a/_sources/videos.txt b/_sources/videos.txt
new file mode 100644
index 0000000..15ef91d
--- /dev/null
+++ b/_sources/videos.txt
@@ -0,0 +1,54 @@
+Videos
+======
+
+Here is a collection of short videos showing different aspects
+of Superset.
+
+Quick Intro
+'''''''''''
+This video demonstrates how Superset works at a high level and shows how
+to navigate through datasets and dashboards that are already available.
+
+.. youtube:: https://www.youtube.com/watch?v=3Txm_nj_R7M
+
+Dashboard Creation
+''''''''''''''''''
+This video walks you through the creation of a simple dashboard as a
+collection of data slices.
+
+- Coming soon!
+
+Dashboard Filtering
+'''''''''''''''''''
+This video shows how to create dynamic filters on dashboards and how to
+exempt certain widgets from being affected by filters.
+
+- Coming soon!
+
+Customize CSS and dashboard themes
+''''''''''''''''''''''''''''''''''
+A quick walkthrough on how to apply existing CSS templates, alter them,
+and create new ones.
+
+- Coming soon!
+
+Slice Annotations
+'''''''''''''''''
+A short video on how to annotate your charts using the Markdown language
+and how to toggle annotations on dashboards.
+
+- Coming soon!
+
+Adding a Table
+''''''''''''''
+This video shows you how to expose a new table in Superset, and how to
+define the semantics of how it can be accessed by others in the ``Explore``
+and ``Dashboard`` views.
+
+- Coming soon!
+
+Define SQL Expressions
+''''''''''''''''''''''
+A walkthrough on how to create your own derived dimensions and metrics.
+
+- Coming soon!
diff --git a/_sources/visualization.rst.txt b/_sources/visualization.rst.txt
new file mode 100644
index 0000000..b56a979
--- /dev/null
+++ b/_sources/visualization.rst.txt
@@ -0,0 +1,2007 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Visualization Tools
+===================
+
+Data is visualized via slices: visual components built with D3.js. Some components accept optional or required inputs.
+
+Country Map Tools
+-----------------
+
+This tool is used in slices to visualize numeric or string values by
+region, province, or department of a country. To use it, you need the
+ISO 3166-2 code of each region, province, or department.
+
+ISO 3166-2 is part of the ISO 3166 standard published by the International Organization for Standardization (ISO), and defines codes for identifying the principal subdivisions (e.g., provinces or states) of all countries coded in ISO 3166-1.
+
+The purpose of ISO 3166-2 is to establish an international standard of short and unique alphanumeric codes to represent the relevant administrative divisions and dependent territories of all countries in a more convenient and less ambiguous form than their full names. Each complete ISO 3166-2 code consists of two parts, separated by a hyphen:
+
+- The first part is the ISO 3166-1 alpha-2 code of the country.
+- The second part is a string of up to three alphanumeric characters, which is usually obtained from national sources and stems from coding systems already in use in the country concerned, but may also be developed by ISO itself.
+
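The two-part shape described above can be checked with a short regular expression. This is a minimal format sketch: it validates only the code's shape, not whether a given code is actually assigned in the standard.

```python
import re

# ISO 3166-1 alpha-2 country code, a hyphen, then one to three
# alphanumeric subdivision characters (format check only).
ISO_3166_2_RE = re.compile(r"^[A-Z]{2}-[A-Z0-9]{1,3}$")

def looks_like_iso_3166_2(code: str) -> bool:
    """Return True if the string has the ISO 3166-2 code shape."""
    return bool(ISO_3166_2_RE.match(code))

print(looks_like_iso_3166_2("BE-BRU"))  # True
print(looks_like_iso_3166_2("FR-2A"))   # True
print(looks_like_iso_3166_2("Belgium")) # False
```

A shape check like this can catch typos before a value is matched against the country tables below.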
+List of Countries
+-----------------
+
+* Belgium
+
++---------+-------------------+
+|  ISO    | Name of region    |
++=========+===================+
+|  BE-BRU |  Bruxelles        |
++---------+-------------------+
+|  BE-VAN |  Antwerpen        |
++---------+-------------------+
+|  BE-VLI |  Limburg          |
++---------+-------------------+
+|  BE-VOV |  Oost-Vlaanderen  |
++---------+-------------------+
+|  BE-VBR |  Vlaams Brabant   |
++---------+-------------------+
+|  BE-VWV |  West-Vlaanderen  |
++---------+-------------------+
+|  BE-WBR |  Brabant Wallon   |
++---------+-------------------+
+|  BE-WHT |  Hainaut          |
++---------+-------------------+
+|  BE-WLG |  Liège            |
++---------+-------------------+
+|  BE-WLX |  Luxembourg       |
++---------+-------------------+
+|  BE-WNA |  Namur            |
++---------+-------------------+
+
+
+
+* Brazil
+
++----------+-----------------------+
+|  ISO     | Name of region        |
++==========+=======================+
+|  BR-AC   |  Acre                 |
++----------+-----------------------+
+|  BR-AL   | Alagoas               |
++----------+-----------------------+
+|  BR-AP   | Amapá                 |
++----------+-----------------------+
+|  BR-AM   | Amazonas              |
++----------+-----------------------+
+|  BR-BA   | Bahia                 |
++----------+-----------------------+
+|  BR-CE   | Ceará                 |
++----------+-----------------------+
+|  BR-DF   | Distrito Federal      |
++----------+-----------------------+
+|  BR-ES   | Espírito Santo        |
++----------+-----------------------+
+|  BR-GO   | Goiás                 |
++----------+-----------------------+
+|  BR-MA   | Maranhão              |
++----------+-----------------------+
+|  BR-MS   | Mato Grosso do Sul    |
++----------+-----------------------+
+|  BR-MT   | Mato Grosso           |
++----------+-----------------------+
+|  BR-MG   | Minas Gerais          |
++----------+-----------------------+
+|  BR-PA   | Pará                  |
++----------+-----------------------+
+|  BR-PB   | Paraíba               |
++----------+-----------------------+
+|  BR-PR   | Paraná                |
++----------+-----------------------+
+|  BR-PE   | Pernambuco            |
++----------+-----------------------+
+|  BR-PI   | Piauí                 |
++----------+-----------------------+
+|  BR-RJ   | Rio de Janeiro        |
++----------+-----------------------+
+|  BR-RN   | Rio Grande do Norte   |
++----------+-----------------------+
+|  BR-RS   | Rio Grande do Sul     |
++----------+-----------------------+
+|  BR-RO   | Rondônia              |
++----------+-----------------------+
+|  BR-RR   | Roraima               |
++----------+-----------------------+
+|  BR-SP   | São Paulo             |
++----------+-----------------------+
+|  BR-SC   | Santa Catarina        |
++----------+-----------------------+
+|  BR-SE   | Sergipe               |
++----------+-----------------------+
+|  BR-TO   | Tocantins             |
++----------+-----------------------+
+
+* China
+
++---------+--------------------+
+|   ISO   | Name of region     |
++=========+====================+
+|   CN-34 |              Anhui |
++---------+--------------------+
+|   CN-11 |            Beijing |
++---------+--------------------+
+|   CN-50 |          Chongqing |
++---------+--------------------+
+|   CN-35 |             Fujian |
++---------+--------------------+
+|   CN-62 |              Gansu |
++---------+--------------------+
+|   CN-44 |          Guangdong |
++---------+--------------------+
+|   CN-45 |            Guangxi |
++---------+--------------------+
+|   CN-52 |            Guizhou |
++---------+--------------------+
+|   CN-46 |             Hainan |
++---------+--------------------+
+|   CN-13 |              Hebei |
++---------+--------------------+
+|   CN-23 |       Heilongjiang |
++---------+--------------------+
+|   CN-41 |              Henan |
++---------+--------------------+
+|   CN-42 |              Hubei |
++---------+--------------------+
+|   CN-43 |              Hunan |
++---------+--------------------+
+|   CN-32 |            Jiangsu |
++---------+--------------------+
+|   CN-36 |            Jiangxi |
++---------+--------------------+
+|   CN-22 |              Jilin |
++---------+--------------------+
+|   CN-21 |           Liaoning |
++---------+--------------------+
+|   CN-15 |         Nei Mongol |
++---------+--------------------+
+|   CN-64 |        Ningxia Hui |
++---------+--------------------+
+|   CN-63 |            Qinghai |
++---------+--------------------+
+|   CN-61 |            Shaanxi |
++---------+--------------------+
+|   CN-37 |           Shandong |
++---------+--------------------+
+|   CN-31 |           Shanghai |
++---------+--------------------+
+|   CN-14 |             Shanxi |
++---------+--------------------+
+|   CN-51 |            Sichuan |
++---------+--------------------+
+|   CN-12 |            Tianjin |
++---------+--------------------+
+|   CN-65 |     Xinjiang Uygur |
++---------+--------------------+
+|   CN-54 |             Xizang |
++---------+--------------------+
+|   CN-53 |             Yunnan |
++---------+--------------------+
+|   CN-33 |           Zhejiang |
++---------+--------------------+
+|   CN-71 |             Taiwan |
++---------+--------------------+
+|   CN-91 |          Hong Kong |
++---------+--------------------+
+|   CN-92 |              Macao |
++---------+--------------------+
+
+* Egypt
+
++---------+--------------------+
+|   ISO   | Name of region     |
++=========+====================+
+|   EG-DK |      Ad Daqahliyah |
++---------+--------------------+
+|   EG-BA |   Al Bahr al Ahmar |
++---------+--------------------+
+|   EG-BH |        Al Buhayrah |
++---------+--------------------+
+|   EG-FYM|          Al Fayyum |
++---------+--------------------+
+|   EG-GH |       Al Gharbiyah |
++---------+--------------------+
+|   EG-ALX|    Al Iskandariyah |
++---------+--------------------+
+|   EG-IS |     Al Isma iliyah |
++---------+--------------------+
+|   EG-GZ |           Al Jizah |
++---------+--------------------+
+|   EG-MNF|       Al Minufiyah |
++---------+--------------------+
+|   EG-MN |           Al Minya |
++---------+--------------------+
+|   EG-C  |         Al Qahirah |
++---------+--------------------+
+|   EG-KB |      Al Qalyubiyah |
++---------+--------------------+
+|   EG-LX |           Al Uqsur |
++---------+--------------------+
+|   EG-WAD|   Al Wadi al Jadid |
++---------+--------------------+
+|   EG-SUZ|          As Suways |
++---------+--------------------+
+|   EG-SHR|      Ash Sharqiyah |
++---------+--------------------+
+|   EG-ASN|              Aswan |
++---------+--------------------+
+|   EG-AST|              Asyut |
++---------+--------------------+
+|   EG-BNS|        Bani Suwayf |
++---------+--------------------+
+|   EG-PTS|          Bur Sa id |
++---------+--------------------+
+|   EG-DT |             Dumyat |
++---------+--------------------+
+|   EG-JS |        Janub Sina' |
++---------+--------------------+
+|   EG-KFS|    Kafr ash Shaykh |
++---------+--------------------+
+|   EG-MT |            Matrouh |
++---------+--------------------+
+|   EG-KN |               Qina |
++---------+--------------------+
+|   EG-SIN|       Shamal Sina' |
++---------+--------------------+
+|   EG-SHG|              Suhaj |
++---------+--------------------+
+
+
+* France
+
++---------+------------------------------+
+|   ISO   | Name of region               |
++=========+==============================+
+|   FR-67 |                     Bas-Rhin |
++---------+------------------------------+
+|   FR-68 |                    Haut-Rhin |
++---------+------------------------------+
+|   FR-24 |                     Dordogne |
++---------+------------------------------+
+|   FR-33 |                      Gironde |
++---------+------------------------------+
+|   FR-40 |                       Landes |
++---------+------------------------------+
+|   FR-47 |               Lot-et-Garonne |
++---------+------------------------------+
+|   FR-64 |         Pyrénées-Atlantiques |
++---------+------------------------------+
+|   FR-03 |                       Allier |
++---------+------------------------------+
+|   FR-15 |                       Cantal |
++---------+------------------------------+
+|   FR-43 |                  Haute-Loire |
++---------+------------------------------+
+|   FR-63 |                  Puy-de-Dôme |
++---------+------------------------------+
+|   FR-91 |                      Essonne |
++---------+------------------------------+
+|   FR-92 |               Hauts-de-Seine |
++---------+------------------------------+
+|   FR-75 |                        Paris |
++---------+------------------------------+
+|   FR-77 |               Seine-et-Marne |
++---------+------------------------------+
+|   FR-93 |            Seine-Saint-Denis |
++---------+------------------------------+
+|   FR-95 |                   Val-d'Oise |
++---------+------------------------------+
+|   FR-94 |                 Val-de-Marne |
++---------+------------------------------+
+|   FR-78 |                     Yvelines |
++---------+------------------------------+
+|   FR-14 |                     Calvados |
++---------+------------------------------+
+|   FR-50 |                       Manche |
++---------+------------------------------+
+|   FR-61 |                         Orne |
++---------+------------------------------+
+|   FR-21 |                    Côte-d'Or |
++---------+------------------------------+
+|   FR-58 |                       Nièvre |
++---------+------------------------------+
+|   FR-71 |               Saône-et-Loire |
++---------+------------------------------+
+|   FR-89 |                        Yonne |
++---------+------------------------------+
+|   FR-22 |                Côtes-d'Armor |
++---------+------------------------------+
+|   FR-29 |                    Finistère |
++---------+------------------------------+
+|   FR-35 |              Ille-et-Vilaine |
++---------+------------------------------+
+|   FR-56 |                     Morbihan |
++---------+------------------------------+
+|   FR-18 |                         Cher |
++---------+------------------------------+
+|   FR-28 |                 Eure-et-Loir |
++---------+------------------------------+
+|   FR-37 |               Indre-et-Loire |
++---------+------------------------------+
+|   FR-36 |                        Indre |
++---------+------------------------------+
+|   FR-41 |                 Loir-et-Cher |
++---------+------------------------------+
+|   FR-45 |                       Loiret |
++---------+------------------------------+
+|   FR-08 |                     Ardennes |
++---------+------------------------------+
+|   FR-10 |                         Aube |
++---------+------------------------------+
+|   FR-52 |                  Haute-Marne |
++---------+------------------------------+
+|   FR-51 |                        Marne |
++---------+------------------------------+
+|   FR-2A |                 Corse-du-Sud |
++---------+------------------------------+
+|   FR-2B |                  Haute-Corse |
++---------+------------------------------+
+|   FR-25 |                        Doubs |
++---------+------------------------------+
+|   FR-70 |                  Haute-Saône |
++---------+------------------------------+
+|   FR-39 |                         Jura |
++---------+------------------------------+
+|   FR-90 |        Territoire de Belfort |
++---------+------------------------------+
+|   FR-27 |                         Eure |
++---------+------------------------------+
+|   FR-76 |               Seine-Maritime |
++---------+------------------------------+
+|   FR-11 |                         Aude |
++---------+------------------------------+
+|   FR-30 |                         Gard |
++---------+------------------------------+
+|   FR-34 |                      Hérault |
++---------+------------------------------+
+|   FR-48 |                       Lozère |
++---------+------------------------------+
+|   FR-66 |          Pyrénées-Orientales |
++---------+------------------------------+
+|   FR-19 |                      Corrèze |
++---------+------------------------------+
+|   FR-23 |                       Creuse |
++---------+------------------------------+
+|   FR-87 |                 Haute-Vienne |
++---------+------------------------------+
+|   FR-54 |           Meurthe-et-Moselle |
++---------+------------------------------+
+|   FR-55 |                        Meuse |
++---------+------------------------------+
+|   FR-57 |                      Moselle |
++---------+------------------------------+
+|   FR-88 |                       Vosges |
++---------+------------------------------+
+|   FR-09 |                       Ariège |
++---------+------------------------------+
+|   FR-12 |                      Aveyron |
++---------+------------------------------+
+|   FR-32 |                         Gers |
++---------+------------------------------+
+|   FR-31 |                Haute-Garonne |
++---------+------------------------------+
+|   FR-65 |              Hautes-Pyrénées |
++---------+------------------------------+
+|   FR-46 |                          Lot |
++---------+------------------------------+
+|   FR-82 |              Tarn-et-Garonne |
++---------+------------------------------+
+|   FR-81 |                         Tarn |
++---------+------------------------------+
+|   FR-59 |                         Nord |
++---------+------------------------------+
+|   FR-62 |                Pas-de-Calais |
++---------+------------------------------+
+|   FR-44 |             Loire-Atlantique |
++---------+------------------------------+
+|   FR-49 |               Maine-et-Loire |
++---------+------------------------------+
+|   FR-53 |                      Mayenne |
++---------+------------------------------+
+|   FR-72 |                       Sarthe |
++---------+------------------------------+
+|   FR-85 |                       Vendée |
++---------+------------------------------+
+|   FR-02 |                        Aisne |
++---------+------------------------------+
+|   FR-60 |                         Oise |
++---------+------------------------------+
+|   FR-80 |                        Somme |
++---------+------------------------------+
+|   FR-17 |            Charente-Maritime |
++---------+------------------------------+
+|   FR-16 |                     Charente |
++---------+------------------------------+
+|   FR-79 |                  Deux-Sèvres |
++---------+------------------------------+
+|   FR-86 |                       Vienne |
++---------+------------------------------+
+|   FR-04 |      Alpes-de-Haute-Provence |
++---------+------------------------------+
+|   FR-06 |              Alpes-Maritimes |
++---------+------------------------------+
+|   FR-13 |             Bouches-du-Rhône |
++---------+------------------------------+
+|   FR-05 |                 Hautes-Alpes |
++---------+------------------------------+
+|   FR-83 |                          Var |
++---------+------------------------------+
+|   FR-84 |                     Vaucluse |
++---------+------------------------------+
+|   FR-01 |                          Ain |
++---------+------------------------------+
+|   FR-07 |                      Ardèche |
++---------+------------------------------+
+|   FR-26 |                        Drôme |
++---------+------------------------------+
+|   FR-74 |                 Haute-Savoie |
++---------+------------------------------+
+|   FR-38 |                        Isère |
++---------+------------------------------+
+|   FR-42 |                        Loire |
++---------+------------------------------+
+|   FR-69 |                        Rhône |
++---------+------------------------------+
+|   FR-73 |                       Savoie |
++---------+------------------------------+
+
+
+* Germany
+
++---------+------------------------------+
+|   ISO   | Name of region               |
++=========+==============================+
+|   DE-BW |            Baden-Württemberg |
++---------+------------------------------+
+|   DE-BY |                       Bayern |
++---------+------------------------------+
+|   DE-BE |                       Berlin |
++---------+------------------------------+
+|   DE-BB |                  Brandenburg |
++---------+------------------------------+
+|   DE-HB |                       Bremen |
++---------+------------------------------+
+|   DE-HH |                      Hamburg |
++---------+------------------------------+
+|   DE-HE |                       Hessen |
++---------+------------------------------+
+|   DE-MV |       Mecklenburg-Vorpommern |
++---------+------------------------------+
+|   DE-NI |                Niedersachsen |
++---------+------------------------------+
+|   DE-NW |          Nordrhein-Westfalen |
++---------+------------------------------+
+|   DE-RP |              Rheinland-Pfalz |
++---------+------------------------------+
+|   DE-SL |                     Saarland |
++---------+------------------------------+
+|   DE-ST |               Sachsen-Anhalt |
++---------+------------------------------+
+|   DE-SN |                      Sachsen |
++---------+------------------------------+
+|   DE-SH |           Schleswig-Holstein |
++---------+------------------------------+
+|   DE-TH |                    Thüringen |
++---------+------------------------------+
+
+
+* Italy
+
+
++------+------------------------------------+
+|ISO   | Name of region                     |
++======+====================================+
+|IT-CH |Chieti                              |
++------+------------------------------------+
+|IT-AQ |L'Aquila                            |
++------+------------------------------------+
+|IT-PE |Pescara                             |
++------+------------------------------------+
+|IT-TE |Teramo                              |
++------+------------------------------------+
+|IT-BA |Bari                                |
++------+------------------------------------+
+|IT-BT |Barletta-Andria-Trani               |
++------+------------------------------------+
+|IT-BR |Brindisi                            |
++------+------------------------------------+
+|IT-FG |Foggia                              |
++------+------------------------------------+
+|IT-LE |Lecce                               |
++------+------------------------------------+
+|IT-TA |Taranto                             |
++------+------------------------------------+
+|IT-MT |Matera                              |
++------+------------------------------------+
+|IT-PZ |Potenza                             |
++------+------------------------------------+
+|IT-CZ |Catanzaro                           |
++------+------------------------------------+
+|IT-CS |Cosenza                             |
++------+------------------------------------+
+|IT-KR |Crotone                             |
++------+------------------------------------+
+|IT-RC |Reggio Di Calabria                  |
++------+------------------------------------+
+|IT-VV |Vibo Valentia                       |
++------+------------------------------------+
+|IT-AV |Avellino                            |
++------+------------------------------------+
+|IT-BN |Benevento                           |
++------+------------------------------------+
+|IT-CE |Caserta                             |
++------+------------------------------------+
+|IT-NA |Napoli                              |
++------+------------------------------------+
+|IT-SA |Salerno                             |
++------+------------------------------------+
+|IT-BO |Bologna                             |
++------+------------------------------------+
+|IT-FE |Ferrara                             |
++------+------------------------------------+
+|IT-FC |Forlì-Cesena                        |
++------+------------------------------------+
+|IT-MO |Modena                              |
++------+------------------------------------+
+|IT-PR |Parma                               |
++------+------------------------------------+
+|IT-PC |Piacenza                            |
++------+------------------------------------+
+|IT-RA |Ravenna                             |
++------+------------------------------------+
+|IT-RE |Reggio Nell'Emilia                  |
++------+------------------------------------+
+|IT-RN |Rimini                              |
++------+------------------------------------+
+|IT-GO |Gorizia                             |
++------+------------------------------------+
+|IT-PN |Pordenone                           |
++------+------------------------------------+
+|IT-TS |Trieste                             |
++------+------------------------------------+
+|IT-UD |Udine                               |
++------+------------------------------------+
+|IT-FR |Frosinone                           |
++------+------------------------------------+
+|IT-LT |Latina                              |
++------+------------------------------------+
+|IT-RI |Rieti                               |
++------+------------------------------------+
+|IT-RM |Roma                                |
++------+------------------------------------+
+|IT-VT |Viterbo                             |
++------+------------------------------------+
+|IT-GE |Genova                              |
++------+------------------------------------+
+|IT-IM |Imperia                             |
++------+------------------------------------+
+|IT-SP |La Spezia                           |
++------+------------------------------------+
+|IT-SV |Savona                              |
++------+------------------------------------+
+|IT-BG |Bergamo                             |
++------+------------------------------------+
+|IT-BS |Brescia                             |
++------+------------------------------------+
+|IT-CO |Como                                |
++------+------------------------------------+
+|IT-CR |Cremona                             |
++------+------------------------------------+
+|IT-LC |Lecco                               |
++------+------------------------------------+
+|IT-LO |Lodi                                |
++------+------------------------------------+
+|IT-MN |Mantua                              |
++------+------------------------------------+
+|IT-MI |Milano                              |
++------+------------------------------------+
+|IT-MB |Monza e Brianza                     |
++------+------------------------------------+
+|IT-PV |Pavia                               |
++------+------------------------------------+
+|IT-SO |Sondrio                             |
++------+------------------------------------+
+|IT-VA |Varese                              |
++------+------------------------------------+
+|IT-AN |Ancona                              |
++------+------------------------------------+
+|IT-AP |Ascoli Piceno                       |
++------+------------------------------------+
+|IT-FM |Fermo                               |
++------+------------------------------------+
+|IT-MC |Macerata                            |
++------+------------------------------------+
+|IT-PU |Pesaro E Urbino                     |
++------+------------------------------------+
+|IT-CB |Campobasso                          |
++------+------------------------------------+
+|IT-IS |Isernia                             |
++------+------------------------------------+
+|IT-AL |Alessandria                         |
++------+------------------------------------+
+|IT-AT |Asti                                |
++------+------------------------------------+
+|IT-BI |Biella                              |
++------+------------------------------------+
+|IT-CN |Cuneo                               |
++------+------------------------------------+
+|IT-NO |Novara                              |
++------+------------------------------------+
+|IT-TO |Torino                              |
++------+------------------------------------+
+|IT-VB |Verbano-Cusio-Ossola                |
++------+------------------------------------+
+|IT-VC |Vercelli                            |
++------+------------------------------------+
+|IT-CA |Cagliari                            |
++------+------------------------------------+
+|IT-CI |Carbonia-Iglesias                   |
++------+------------------------------------+
+|IT-VS |Medio Campidano                     |
++------+------------------------------------+
+|IT-NU |Nuoro                               |
++------+------------------------------------+
+|IT-OG |Ogliastra                           |
++------+------------------------------------+
+|IT-OT |Olbia-Tempio                        |
++------+------------------------------------+
+|IT-OR |Oristano                            |
++------+------------------------------------+
+|IT-SS |Sassari                             |
++------+------------------------------------+
+|IT-AG |Agrigento                           |
++------+------------------------------------+
+|IT-CL |Caltanissetta                       |
++------+------------------------------------+
+|IT-CT |Catania                             |
++------+------------------------------------+
+|IT-EN |Enna                                |
++------+------------------------------------+
+|IT-ME |Messina                             |
++------+------------------------------------+
+|IT-PA |Palermo                             |
++------+------------------------------------+
+|IT-RG |Ragusa                              |
++------+------------------------------------+
+|IT-SR |Syracuse                            |
++------+------------------------------------+
+|IT-TP |Trapani                             |
++------+------------------------------------+
+|IT-AR |Arezzo                              |
++------+------------------------------------+
+|IT-FI |Florence                            |
++------+------------------------------------+
+|IT-GR |Grosseto                            |
++------+------------------------------------+
+|IT-LI |Livorno                             |
++------+------------------------------------+
+|IT-LU |Lucca                               |
++------+------------------------------------+
+|IT-MS |Massa Carrara                       |
++------+------------------------------------+
+|IT-PI |Pisa                                |
++------+------------------------------------+
+|IT-PT |Pistoia                             |
++------+------------------------------------+
+|IT-PO |Prato                               |
++------+------------------------------------+
+|IT-SI |Siena                               |
++------+------------------------------------+
+|IT-BZ |Bolzano                             |
++------+------------------------------------+
+|IT-TN |Trento                              |
++------+------------------------------------+
+|IT-PG |Perugia                             |
++------+------------------------------------+
+|IT-TR |Terni                               |
++------+------------------------------------+
+|IT-AO |Aosta                               |
++------+------------------------------------+
+|IT-BL |Belluno                             |
++------+------------------------------------+
+|IT-PD |Padua                               |
++------+------------------------------------+
+|IT-RO |Rovigo                              |
++------+------------------------------------+
+|IT-TV |Treviso                             |
++------+------------------------------------+
+|IT-VE |Venezia                             |
++------+------------------------------------+
+|IT-VR |Verona                              |
++------+------------------------------------+
+|IT-VI |Vicenza                             |
++------+------------------------------------+
+
+
+* Japan
+
++-------+----------------+
+| ISO   | Name of region |
++=======+================+
+| JP-01 | Hokkaido       |
++-------+----------------+
+| JP-02 | Aomori         |
++-------+----------------+
+| JP-03 | Iwate          |
++-------+----------------+
+| JP-04 | Miyagi         |
++-------+----------------+
+| JP-05 | Akita          |
++-------+----------------+
+| JP-06 | Yamagata       |
++-------+----------------+
+| JP-07 | Fukushima      |
++-------+----------------+
+| JP-08 | Ibaraki        |
++-------+----------------+
+| JP-09 | Tochigi        |
++-------+----------------+
+| JP-10 | Gunma          |
++-------+----------------+
+| JP-11 | Saitama        |
++-------+----------------+
+| JP-12 | Chiba          |
++-------+----------------+
+| JP-13 | Tokyo          |
++-------+----------------+
+| JP-14 | Kanagawa       |
++-------+----------------+
+| JP-15 | Niigata        |
++-------+----------------+
+| JP-16 | Toyama         |
++-------+----------------+
+| JP-17 | Ishikawa       |
++-------+----------------+
+| JP-18 | Fukui          |
++-------+----------------+
+| JP-19 | Yamanashi      |
++-------+----------------+
+| JP-20 | Nagano         |
++-------+----------------+
+| JP-21 | Gifu           |
++-------+----------------+
+| JP-22 | Shizuoka       |
++-------+----------------+
+| JP-23 | Aichi          |
++-------+----------------+
+| JP-24 | Mie            |
++-------+----------------+
+| JP-25 | Shiga          |
++-------+----------------+
+| JP-26 | Kyoto          |
++-------+----------------+
+| JP-27 | Osaka          |
++-------+----------------+
+| JP-28 | Hyogo          |
++-------+----------------+
+| JP-29 | Nara           |
++-------+----------------+
+| JP-30 | Wakayama       |
++-------+----------------+
+| JP-31 | Tottori        |
++-------+----------------+
+| JP-32 | Shimane        |
++-------+----------------+
+| JP-33 | Okayama        |
++-------+----------------+
+| JP-34 | Hiroshima      |
++-------+----------------+
+| JP-35 | Yamaguchi      |
++-------+----------------+
+| JP-36 | Tokushima      |
++-------+----------------+
+| JP-37 | Kagawa         |
++-------+----------------+
+| JP-38 | Ehime          |
++-------+----------------+
+| JP-39 | Kochi          |
++-------+----------------+
+| JP-40 | Fukuoka        |
++-------+----------------+
+| JP-41 | Saga           |
++-------+----------------+
+| JP-42 | Nagasaki       |
++-------+----------------+
+| JP-43 | Kumamoto       |
++-------+----------------+
+| JP-44 | Oita           |
++-------+----------------+
+| JP-45 | Miyazaki       |
++-------+----------------+
+| JP-46 | Kagoshima      |
++-------+----------------+
+| JP-47 | Okinawa        |
++-------+----------------+
+
+* Korea
+
++-------+----------------+
+| ISO   | Name of region |
++=======+================+
+| KR-11 | Seoul          |
++-------+----------------+
+| KR-26 | Busan          |
++-------+----------------+
+| KR-27 | Daegu          |
++-------+----------------+
+| KR-28 | Incheon        |
++-------+----------------+
+| KR-29 | Gwangju        |
++-------+----------------+
+| KR-30 | Daejeon        |
++-------+----------------+
+| KR-31 | Ulsan          |
++-------+----------------+
+| KR-41 | Gyeonggi       |
++-------+----------------+
+| KR-42 | Gangwon        |
++-------+----------------+
+| KR-43 | Chungbuk       |
++-------+----------------+
+| KR-44 | Chungnam       |
++-------+----------------+
+| KR-45 | Jeonbuk        |
++-------+----------------+
+| KR-46 | Jeonnam        |
++-------+----------------+
+| KR-47 | Gyeongbuk      |
++-------+----------------+
+| KR-48 | Gyeongnam      |
++-------+----------------+
+| KR-49 | Jeju           |
++-------+----------------+
+| KR-50 | Sejong         |
++-------+----------------+
+
+* Liechtenstein
+
++-------+----------------+
+| ISO   | Name of region |
++=======+================+
+| LI-01 | Balzers        |
++-------+----------------+
+| LI-02 | Eschen         |
++-------+----------------+
+| LI-03 | Gamprin        |
++-------+----------------+
+| LI-04 | Mauren         |
++-------+----------------+
+| LI-05 | Planken        |
++-------+----------------+
+| LI-06 | Ruggell        |
++-------+----------------+
+| LI-07 | Schaan         |
++-------+----------------+
+| LI-08 | Schellenberg   |
++-------+----------------+
+| LI-09 | Triesen        |
++-------+----------------+
+| LI-10 | Triesenberg    |
++-------+----------------+
+| LI-11 | Vaduz          |
++-------+----------------+
+
+* Morocco
+
++-------+------------------------------+
+|ISO    | Name of region               |
++=======+==============================+
+|MA-BES |                  Ben Slimane |
++-------+------------------------------+
+|MA-KHO |                    Khouribga |
++-------+------------------------------+
+|MA-SET |                       Settat |
++-------+------------------------------+
+|MA-JDI |                    El Jadida |
++-------+------------------------------+
+|MA-SAF |                         Safi |
++-------+------------------------------+
+|MA-BOM |                    Boulemane |
++-------+------------------------------+
+|MA-FES |                          Fès |
++-------+------------------------------+
+|MA-SEF |                       Sefrou |
++-------+------------------------------+
+|MA-MOU |        Zouagha-Moulay Yacoub |
++-------+------------------------------+
+|MA-KEN |                      Kénitra |
++-------+------------------------------+
+|MA-SIK |                   Sidi Kacem |
++-------+------------------------------+
+|MA-CAS |                   Casablanca |
++-------+------------------------------+
+|MA-MOH |                   Mohammedia |
++-------+------------------------------+
+|MA-ASZ |                     Assa-Zag |
++-------+------------------------------+
+|MA-GUE |                      Guelmim |
++-------+------------------------------+
+|MA-TNT |                      Tan-Tan |
++-------+------------------------------+
+|MA-TAT |                         Tata |
++-------+------------------------------+
+|MA-LAA |                     Laâyoune |
++-------+------------------------------+
+|MA-HAO |                     Al Haouz |
++-------+------------------------------+
+|MA-CHI |                    Chichaoua |
++-------+------------------------------+
+|MA-KES |         El Kelaâ des Sraghna |
++-------+------------------------------+
+|MA-ESI |                    Essaouira |
++-------+------------------------------+
+|MA-MMD |                    Marrakech |
++-------+------------------------------+
+|MA-HAJ |                     El Hajeb |
++-------+------------------------------+
+|MA-ERR |                   Errachidia |
++-------+------------------------------+
+|MA-IFR |                       Ifrane |
++-------+------------------------------+
+|MA-KHN |                     Khénifra |
++-------+------------------------------+
+|MA-MEK |                       Meknès |
++-------+------------------------------+
+|MA-BER |             Berkane Taourirt |
++-------+------------------------------+
+|MA-FIG |                       Figuig |
++-------+------------------------------+
+|MA-JRA |                       Jerada |
++-------+------------------------------+
+|MA-NAD |                        Nador |
++-------+------------------------------+
+|MA-OUJ |                  Oujda Angad |
++-------+------------------------------+
+|MA-KHE |                    Khémisset |
++-------+------------------------------+
+|MA-RAB |                        Rabat |
++-------+------------------------------+
+|MA-SAL |                         Salé |
++-------+------------------------------+
+|MA-SKH |              Skhirate-Témara |
++-------+------------------------------+
+|MA-AGD |         Agadir-Ida ou Tanane |
++-------+------------------------------+
+|MA-CHT |             Chtouka-Aït Baha |
++-------+------------------------------+
+|MA-INE |         Inezgane-Aït Melloul |
++-------+------------------------------+
+|MA-OUA |                   Ouarzazate |
++-------+------------------------------+
+|MA-TAR |                   Taroudannt |
++-------+------------------------------+
+|MA-TIZ |                       Tiznit |
++-------+------------------------------+
+|MA-ZAG |                       Zagora |
++-------+------------------------------+
+|MA-AZI |                       Azilal |
++-------+------------------------------+
+|MA-BEM |                  Béni Mellal |
++-------+------------------------------+
+|MA-CHE |                  Chefchaouen |
++-------+------------------------------+
+|MA-FAH |                   Fahs Anjra |
++-------+------------------------------+
+|MA-LAR |                      Larache |
++-------+------------------------------+
+|MA-TET |                      Tétouan |
++-------+------------------------------+
+|MA-TNG |               Tanger-Assilah |
++-------+------------------------------+
+|MA-HOC |                   Al Hoceïma |
++-------+------------------------------+
+|MA-TAO |                     Taounate |
++-------+------------------------------+
+|MA-TAZ |                         Taza |
++-------+------------------------------+
+
+
+* Netherlands
+
++------+------------------------------+
+|ISO   | Name of region               |
++======+==============================+
+|NL-DR |                      Drenthe |
++------+------------------------------+
+|NL-FL |                    Flevoland |
++------+------------------------------+
+|NL-FR |                    Friesland |
++------+------------------------------+
+|NL-GE |                   Gelderland |
++------+------------------------------+
+|NL-GR |                    Groningen |
++------+------------------------------+
+|NL-YS |                   IJsselmeer |
++------+------------------------------+
+|NL-LI |                      Limburg |
++------+------------------------------+
+|NL-NB |                Noord-Brabant |
++------+------------------------------+
+|NL-NH |                Noord-Holland |
++------+------------------------------+
+|NL-OV |                   Overijssel |
++------+------------------------------+
+|NL-UT |                      Utrecht |
++------+------------------------------+
+|NL-ZE |                      Zeeland |
++------+------------------------------+
+|NL-ZM |                Zeeuwse meren |
++------+------------------------------+
+|NL-ZH |                 Zuid-Holland |
++------+------------------------------+
+
+* Russia
+
++-------+------------------------------+
+|ISO    | Name of region               |
++=======+==============================+
+|RU-AD  |                       Adygey |
++-------+------------------------------+
+|RU-ALT |                        Altay |
++-------+------------------------------+
+|RU-AMU |                         Amur |
++-------+------------------------------+
+|RU-ARK |                 Arkhangel'sk |
++-------+------------------------------+
+|RU-AST |                   Astrakhan' |
++-------+------------------------------+
+|RU-BA  |                Bashkortostan |
++-------+------------------------------+
+|RU-BEL |                     Belgorod |
++-------+------------------------------+
+|RU-BRY |                      Bryansk |
++-------+------------------------------+
+|RU-BU  |                       Buryat |
++-------+------------------------------+
+|RU-CE  |                     Chechnya |
++-------+------------------------------+
+|RU-CHE |                  Chelyabinsk |
++-------+------------------------------+
+|RU-CHU |                       Chukot |
++-------+------------------------------+
+|RU-CU  |                      Chuvash |
++-------+------------------------------+
+|RU-SPE |       City of St. Petersburg |
++-------+------------------------------+
+|RU-DA  |                     Dagestan |
++-------+------------------------------+
+|RU-AL  |                  Gorno-Altay |
++-------+------------------------------+
+|RU-IN  |                       Ingush |
++-------+------------------------------+
+|RU-IRK |                      Irkutsk |
++-------+------------------------------+
+|RU-IVA |                      Ivanovo |
++-------+------------------------------+
+|RU-KB  |              Kabardin-Balkar |
++-------+------------------------------+
+|RU-KGD |                  Kaliningrad |
++-------+------------------------------+
+|RU-KL  |                       Kalmyk |
++-------+------------------------------+
+|RU-KLU |                       Kaluga |
++-------+------------------------------+
+|RU-KAM |                    Kamchatka |
++-------+------------------------------+
+|RU-KC  |            Karachay-Cherkess |
++-------+------------------------------+
+|RU-KR  |                      Karelia |
++-------+------------------------------+
+|RU-KEM |                     Kemerovo |
++-------+------------------------------+
+|RU-KHA |                   Khabarovsk |
++-------+------------------------------+
+|RU-KK  |                      Khakass |
++-------+------------------------------+
+|RU-KHM |                Khanty-Mansiy |
++-------+------------------------------+
+|RU-KIR |                        Kirov |
++-------+------------------------------+
+|RU-KO  |                         Komi |
++-------+------------------------------+
+|RU-KOS |                     Kostroma |
++-------+------------------------------+
+|RU-KDA |                    Krasnodar |
++-------+------------------------------+
+|RU-KYA |                  Krasnoyarsk |
++-------+------------------------------+
+|RU-KGN |                       Kurgan |
++-------+------------------------------+
+|RU-KRS |                        Kursk |
++-------+------------------------------+
+|RU-LEN |                    Leningrad |
++-------+------------------------------+
+|RU-LIP |                      Lipetsk |
++-------+------------------------------+
+|RU-MAG |               Maga Buryatdan |
++-------+------------------------------+
+|RU-ME  |                     Mariy-El |
++-------+------------------------------+
+|RU-MO  |                     Mordovia |
++-------+------------------------------+
+|RU-MOW |                  Moscow City |
++-------+------------------------------+
+|RU-MOS |                       Moskva |
++-------+------------------------------+
+|RU-MUR |                     Murmansk |
++-------+------------------------------+
+|RU-NEN |                       Nenets |
++-------+------------------------------+
+|RU-NIZ |                   Nizhegorod |
++-------+------------------------------+
+|RU-SE  |                North Ossetia |
++-------+------------------------------+
+|RU-NGR |                     Novgorod |
++-------+------------------------------+
+|RU-NVS |                  Novosibirsk |
++-------+------------------------------+
+|RU-OMS |                         Omsk |
++-------+------------------------------+
+|RU-ORL |                         Orel |
++-------+------------------------------+
+|RU-ORE |                     Orenburg |
++-------+------------------------------+
+|RU-PNZ |                        Penza |
++-------+------------------------------+
+|RU-PER |                        Perm' |
++-------+------------------------------+
+|RU-PRI |                    Primor'ye |
++-------+------------------------------+
+|RU-PSK |                        Pskov |
++-------+------------------------------+
+|RU-ROS |                       Rostov |
++-------+------------------------------+
+|RU-RYA |                      Ryazan' |
++-------+------------------------------+
+|RU-SAK |                     Sakhalin |
++-------+------------------------------+
+|RU-SA  |                        Sakha |
++-------+------------------------------+
+|RU-SAM |                       Samara |
++-------+------------------------------+
+|RU-SAR |                      Saratov |
++-------+------------------------------+
+|RU-SMO |                     Smolensk |
++-------+------------------------------+
+|RU-STA |                   Stavropol' |
++-------+------------------------------+
+|RU-SVE |                   Sverdlovsk |
++-------+------------------------------+
+|RU-TAM |                       Tambov |
++-------+------------------------------+
+|RU-TA  |                    Tatarstan |
++-------+------------------------------+
+|RU-TOM |                        Tomsk |
++-------+------------------------------+
+|RU-TUL |                         Tula |
++-------+------------------------------+
+|RU-TY  |                         Tuva |
++-------+------------------------------+
+|RU-TVE |                        Tver' |
++-------+------------------------------+
+|RU-TYU |                      Tyumen' |
++-------+------------------------------+
+|RU-UD  |                       Udmurt |
++-------+------------------------------+
+|RU-ULY |                   Ul'yanovsk |
++-------+------------------------------+
+|RU-VLA |                     Vladimir |
++-------+------------------------------+
+|RU-VGG |                    Volgograd |
++-------+------------------------------+
+|RU-VLG |                      Vologda |
++-------+------------------------------+
+|RU-VOR |                     Voronezh |
++-------+------------------------------+
+|RU-YAN |                 Yamal-Nenets |
++-------+------------------------------+
+|RU-YAR |                   Yaroslavl' |
++-------+------------------------------+
+|RU-YEV |                       Yevrey |
++-------+------------------------------+
+|RU-ZAB |                  Zabaykal'ye |
++-------+------------------------------+
+
+* Singapore
+
++-----+------------------------------+
+| Id  | Name of region               |
++=====+==============================+
+|  205|                    Singapore |
++-----+------------------------------+
+
+* Spain
+
++-------+-----------------------------+
+|ISO    | Name of region              |
++=======+=============================+
+|ES-AL  |                     Almería |
++-------+-----------------------------+
+|ES-CA  |                       Cádiz |
++-------+-----------------------------+
+|ES-CO  |                     Córdoba |
++-------+-----------------------------+
+|ES-GR  |                     Granada |
++-------+-----------------------------+
+|ES-H   |                      Huelva |
++-------+-----------------------------+
+|ES-J   |                        Jaén |
++-------+-----------------------------+
+|ES-MA  |                      Málaga |
++-------+-----------------------------+
+|ES-SE  |                     Sevilla |
++-------+-----------------------------+
+|ES-HU  |                      Huesca |
++-------+-----------------------------+
+|ES-TE  |                      Teruel |
++-------+-----------------------------+
+|ES-Z   |                    Zaragoza |
++-------+-----------------------------+
+|ES-S3  |                   Cantabria |
++-------+-----------------------------+
+|ES-AB  |                    Albacete |
++-------+-----------------------------+
+|ES-CR  |                 Ciudad Real |
++-------+-----------------------------+
+|ES-CU  |                      Cuenca |
++-------+-----------------------------+
+|ES-GU  |                 Guadalajara |
++-------+-----------------------------+
+|ES-TO  |                      Toledo |
++-------+-----------------------------+
+|ES-AV  |                       Ávila |
++-------+-----------------------------+
+|ES-BU  |                      Burgos |
++-------+-----------------------------+
+|ES-LE  |                        León |
++-------+-----------------------------+
+|ES-P   |                    Palencia |
++-------+-----------------------------+
+|ES-SA  |                   Salamanca |
++-------+-----------------------------+
+|ES-SG  |                     Segovia |
++-------+-----------------------------+
+|ES-SO  |                       Soria |
++-------+-----------------------------+
+|ES-VA  |                  Valladolid |
++-------+-----------------------------+
+|ES-ZA  |                      Zamora |
++-------+-----------------------------+
+|ES-B   |                   Barcelona |
++-------+-----------------------------+
+|ES-GI  |                      Girona |
++-------+-----------------------------+
+|ES-L   |                      Lleida |
++-------+-----------------------------+
+|ES-T   |                   Tarragona |
++-------+-----------------------------+
+|ES-CE  |                       Ceuta |
++-------+-----------------------------+
+|ES-ML  |                     Melilla |
++-------+-----------------------------+
+|ES-M5  |                      Madrid |
++-------+-----------------------------+
+|ES-NA7 |                     Navarra |
++-------+-----------------------------+
+|ES-A   |                    Alicante |
++-------+-----------------------------+
+|ES-CS  |                   Castellón |
++-------+-----------------------------+
+|ES-V   |                    Valencia |
++-------+-----------------------------+
+|ES-BA  |                     Badajoz |
++-------+-----------------------------+
+|ES-CC  |                     Cáceres |
++-------+-----------------------------+
+|ES-C   |                    A Coruña |
++-------+-----------------------------+
+|ES-LU  |                        Lugo |
++-------+-----------------------------+
+|ES-OR  |                     Ourense |
++-------+-----------------------------+
+|ES-PO  |                  Pontevedra |
++-------+-----------------------------+
+|ES-PM  |                    Baleares |
++-------+-----------------------------+
+|ES-GC  |                  Las Palmas |
++-------+-----------------------------+
+|ES-TF  |      Santa Cruz de Tenerife |
++-------+-----------------------------+
+|ES-LO4 |                    La Rioja |
++-------+-----------------------------+
+|ES-VI  |                       Álava |
++-------+-----------------------------+
+|ES-SS  |                   Guipúzcoa |
++-------+-----------------------------+
+|ES-BI  |                     Vizcaya |
++-------+-----------------------------+
+|ES-O2  |                    Asturias |
++-------+-----------------------------+
+|ES-MU6 |                      Murcia |
++-------+-----------------------------+
+
+* Switzerland
+
++-------+-----------------------------+
+|ISO    | Name of region              |
++=======+=============================+
+|CH-AG  |                      Aargau |
++-------+-----------------------------+
+|CH-AR  |      Appenzell Ausserrhoden |
++-------+-----------------------------+
+|CH-AI  |       Appenzell Innerrhoden |
++-------+-----------------------------+
+|CH-BL  |            Basel-Landschaft |
++-------+-----------------------------+
+|CH-BS  |                 Basel-Stadt |
++-------+-----------------------------+
+|CH-BE  |                        Bern |
++-------+-----------------------------+
+|CH-FR  |                    Freiburg |
++-------+-----------------------------+
+|CH-GE  |                        Genf |
++-------+-----------------------------+
+|CH-GL  |                      Glarus |
++-------+-----------------------------+
+|CH-GR  |                  Graubünden |
++-------+-----------------------------+
+|CH-JU  |                        Jura |
++-------+-----------------------------+
+|CH-LU  |                      Luzern |
++-------+-----------------------------+
+|CH-NE  |                   Neuenburg |
++-------+-----------------------------+
+|CH-NW  |                   Nidwalden |
++-------+-----------------------------+
+|CH-OW  |                    Obwalden |
++-------+-----------------------------+
+|CH-SH  |                Schaffhausen |
++-------+-----------------------------+
+|CH-SZ  |                      Schwyz |
++-------+-----------------------------+
+|CH-SO  |                   Solothurn |
++-------+-----------------------------+
+|CH-SG  |                  St. Gallen |
++-------+-----------------------------+
+|CH-TI  |                      Tessin |
++-------+-----------------------------+
+|CH-TG  |                     Thurgau |
++-------+-----------------------------+
+|CH-UR  |                         Uri |
++-------+-----------------------------+
+|CH-VD  |                       Waadt |
++-------+-----------------------------+
+|CH-VS  |                      Wallis |
++-------+-----------------------------+
+|CH-ZG  |                         Zug |
++-------+-----------------------------+
+|CH-ZH  |                      Zürich |
++-------+-----------------------------+
+
+* Uk
+
++-------+------------------------------+
+|ISO    | Name of region               |
++=======+==============================+
+|GB-BDG |         Barking and Dagenham |
++-------+------------------------------+
+|GB-BAS | Bath and North East Somerset |
++-------+------------------------------+
+|GB-BDF |                 Bedfordshire |
++-------+------------------------------+
+|GB-WBK |                    Berkshire |
++-------+------------------------------+
+|GB-BEX |                       Bexley |
++-------+------------------------------+
+|GB-BBD |        Blackburn with Darwen |
++-------+------------------------------+
+|GB-BMH |                  Bournemouth |
++-------+------------------------------+
+|GB-BEN |                        Brent |
++-------+------------------------------+
+|GB-BNH |            Brighton and Hove |
++-------+------------------------------+
+|GB-BST |                      Bristol |
++-------+------------------------------+
+|GB-BRY |                      Bromley |
++-------+------------------------------+
+|GB-BKM |              Buckinghamshire |
++-------+------------------------------+
+|GB-CAM |               Cambridgeshire |
++-------+------------------------------+
+|GB-CMD |                       Camden |
++-------+------------------------------+
+|GB-CHS |                     Cheshire |
++-------+------------------------------+
+|GB-CON |                     Cornwall |
++-------+------------------------------+
+|GB-CRY |                      Croydon |
++-------+------------------------------+
+|GB-CMA |                      Cumbria |
++-------+------------------------------+
+|GB-DAL |                   Darlington |
++-------+------------------------------+
+|GB-DBY |                   Derbyshire |
++-------+------------------------------+
+|GB-DER |                        Derby |
++-------+------------------------------+
+|GB-DEV |                        Devon |
++-------+------------------------------+
+|GB-DOR |                       Dorset |
++-------+------------------------------+
+|GB-DUR |                       Durham |
++-------+------------------------------+
+|GB-EAL |                       Ealing |
++-------+------------------------------+
+|GB-ERY |     East Riding of Yorkshire |
++-------+------------------------------+
+|GB-ESX |                  East Sussex |
++-------+------------------------------+
+|GB-ENF |                      Enfield |
++-------+------------------------------+
+|GB-ESS |                        Essex |
++-------+------------------------------+
+|GB-GLS |              Gloucestershire |
++-------+------------------------------+
+|GB-GRE |                    Greenwich |
++-------+------------------------------+
+|GB-HCK |                      Hackney |
++-------+------------------------------+
+|GB-HAL |                       Halton |
++-------+------------------------------+
+|GB-HMF |       Hammersmith and Fulham |
++-------+------------------------------+
+|GB-HAM |                    Hampshire |
++-------+------------------------------+
+|GB-HRY |                     Haringey |
++-------+------------------------------+
+|GB-HRW |                       Harrow |
++-------+------------------------------+
+|GB-HPL |                   Hartlepool |
++-------+------------------------------+
+|GB-HAV |                     Havering |
++-------+------------------------------+
+|GB-HRT |                Herefordshire |
++-------+------------------------------+
+|GB-HEF |                Hertfordshire |
++-------+------------------------------+
+|GB-HIL |                   Hillingdon |
++-------+------------------------------+
+|GB-HNS |                     Hounslow |
++-------+------------------------------+
+|GB-IOW |                Isle of Wight |
++-------+------------------------------+
+|GB-ISL |                    Islington |
++-------+------------------------------+
+|GB-KEC |       Kensington and Chelsea |
++-------+------------------------------+
+|GB-KEN |                         Kent |
++-------+------------------------------+
+|GB-KHL |           Kingston upon Hull |
++-------+------------------------------+
+|GB-KTT |         Kingston upon Thames |
++-------+------------------------------+
+|GB-LBH |                      Lambeth |
++-------+------------------------------+
+|GB-LAN |                   Lancashire |
++-------+------------------------------+
+|GB-LEC |               Leicestershire |
++-------+------------------------------+
+|GB-LCE |                    Leicester |
++-------+------------------------------+
+|GB-LEW |                     Lewisham |
++-------+------------------------------+
+|GB-LIN |                 Lincolnshire |
++-------+------------------------------+
+|GB-LND |                       London |
++-------+------------------------------+
+|GB-LUT |                        Luton |
++-------+------------------------------+
+|GB-MAN |                   Manchester |
++-------+------------------------------+
+|GB-MDW |                       Medway |
++-------+------------------------------+
+|GB-MER |                   Merseyside |
++-------+------------------------------+
+|GB-MRT |                       Merton |
++-------+------------------------------+
+|GB-MDB |                Middlesbrough |
++-------+------------------------------+
+|GB-MIK |                Milton Keynes |
++-------+------------------------------+
+|GB-NWM |                       Newham |
++-------+------------------------------+
+|GB-NFK |                      Norfolk |
++-------+------------------------------+
+|GB-NEL |      North East Lincolnshire |
++-------+------------------------------+
+|GB-NLN |           North Lincolnshire |
++-------+------------------------------+
+|GB-NSM |               North Somerset |
++-------+------------------------------+
+|GB-NYK |              North Yorkshire |
++-------+------------------------------+
+|GB-NTH |             Northamptonshire |
++-------+------------------------------+
+|GB-NBL |               Northumberland |
++-------+------------------------------+
+|GB-NTT |              Nottinghamshire |
++-------+------------------------------+
+|GB-NGM |                   Nottingham |
++-------+------------------------------+
+|GB-OXF |                  Oxfordshire |
++-------+------------------------------+
+|GB-PTE |                 Peterborough |
++-------+------------------------------+
+|GB-PLY |                     Plymouth |
++-------+------------------------------+
+|GB-POL |                        Poole |
++-------+------------------------------+
+|GB-POR |                   Portsmouth |
++-------+------------------------------+
+|GB-RDB |                    Redbridge |
++-------+------------------------------+
+|GB-RCC |         Redcar and Cleveland |
++-------+------------------------------+
+|GB-RIC |         Richmond upon Thames |
++-------+------------------------------+
+|GB-RUT |                      Rutland |
++-------+------------------------------+
+|GB-SHR |                   Shropshire |
++-------+------------------------------+
+|GB-SOM |                     Somerset |
++-------+------------------------------+
+|GB-SGC |        South Gloucestershire |
++-------+------------------------------+
+|GB-SY  |              South Yorkshire |
++-------+------------------------------+
+|GB-STH |                  Southampton |
++-------+------------------------------+
+|GB-SOS |              Southend-on-Sea |
++-------+------------------------------+
+|GB-SWK |                    Southwark |
++-------+------------------------------+
+|GB-STS |                Staffordshire |
++-------+------------------------------+
+|GB-STT |             Stockton-on-Tees |
++-------+------------------------------+
+|GB-STE |               Stoke-on-Trent |
++-------+------------------------------+
+|GB-SFK |                      Suffolk |
++-------+------------------------------+
+|GB-SRY |                       Surrey |
++-------+------------------------------+
+|GB-STN |                       Sutton |
++-------+------------------------------+
+|GB-SWD |                      Swindon |
++-------+------------------------------+
+|GB-TFW |           Telford and Wrekin |
++-------+------------------------------+
+|GB-THR |                     Thurrock |
++-------+------------------------------+
+|GB-TOB |                       Torbay |
++-------+------------------------------+
+|GB-TWH |                Tower Hamlets |
++-------+------------------------------+
+|GB-TAW |                Tyne and Wear |
++-------+------------------------------+
+|GB-WFT |               Waltham Forest |
++-------+------------------------------+
+|GB-WND |                   Wandsworth |
++-------+------------------------------+
+|GB-WRT |                   Warrington |
++-------+------------------------------+
+|GB-WAR |                 Warwickshire |
++-------+------------------------------+
+|GB-WM  |                West Midlands |
++-------+------------------------------+
+|GB-WSX |                  West Sussex |
++-------+------------------------------+
+|GB-WY  |               West Yorkshire |
++-------+------------------------------+
+|GB-WSM |                  Westminster |
++-------+------------------------------+
+|GB-WIL |                    Wiltshire |
++-------+------------------------------+
+|GB-WOR |               Worcestershire |
++-------+------------------------------+
+|GB-YOR |                         York |
++-------+------------------------------+
+|GB-ANT |                       Antrim |
++-------+------------------------------+
+|GB-ARD |                         Ards |
++-------+------------------------------+
+|GB-ARM |                       Armagh |
++-------+------------------------------+
+|GB-BLA |                    Ballymena |
++-------+------------------------------+
+|GB-BLY |                   Ballymoney |
++-------+------------------------------+
+|GB-BNB |                    Banbridge |
++-------+------------------------------+
+|GB-BFS |                      Belfast |
++-------+------------------------------+
+|GB-CKF |                Carrickfergus |
++-------+------------------------------+
+|GB-CSR |                  Castlereagh |
++-------+------------------------------+
+|GB-CLR |                    Coleraine |
++-------+------------------------------+
+|GB-CKT |                    Cookstown |
++-------+------------------------------+
+|GB-CGV |                    Craigavon |
++-------+------------------------------+
+|GB-DRY |                        Derry |
++-------+------------------------------+
+|GB-DOW |                         Down |
++-------+------------------------------+
+|GB-DGN |                    Dungannon |
++-------+------------------------------+
+|GB-FER |                    Fermanagh |
++-------+------------------------------+
+|GB-LRN |                        Larne |
++-------+------------------------------+
+|GB-LMV |                     Limavady |
++-------+------------------------------+
+|GB-LSB |                      Lisburn |
++-------+------------------------------+
+|GB-MFT |                  Magherafelt |
++-------+------------------------------+
+|GB-MYL |                        Moyle |
++-------+------------------------------+
+|GB-NYM |             Newry and Mourne |
++-------+------------------------------+
+|GB-NTA |                 Newtownabbey |
++-------+------------------------------+
+|GB-NDN |                   North Down |
++-------+------------------------------+
+|GB-OMH |                        Omagh |
++-------+------------------------------+
+|GB-STB |                     Strabane |
++-------+------------------------------+
+|GB-ABD |                Aberdeenshire |
++-------+------------------------------+
+|GB-ABE |                     Aberdeen |
++-------+------------------------------+
+|GB-ANS |                        Angus |
++-------+------------------------------+
+|GB-AGB |              Argyll and Bute |
++-------+------------------------------+
+|GB-CLK |             Clackmannanshire |
++-------+------------------------------+
+|GB-DGY |        Dumfries and Galloway |
++-------+------------------------------+
+|GB-DND |                       Dundee |
++-------+------------------------------+
+|GB-EAY |                East Ayrshire |
++-------+------------------------------+
+|GB-EDU |          East Dunbartonshire |
++-------+------------------------------+
+|GB-ELN |                 East Lothian |
++-------+------------------------------+
+|GB-ERW |            East Renfrewshire |
++-------+------------------------------+
+|GB-EDH |                    Edinburgh |
++-------+------------------------------+
+|GB-ELS |                  Eilean Siar |
++-------+------------------------------+
+|GB-FAL |                      Falkirk |
++-------+------------------------------+
+|GB-FIF |                         Fife |
++-------+------------------------------+
+|GB-GLG |                      Glasgow |
++-------+------------------------------+
+|GB-HLD |                     Highland |
++-------+------------------------------+
+|GB-IVC |                   Inverclyde |
++-------+------------------------------+
+|GB-MLN |                   Midlothian |
++-------+------------------------------+
+|GB-MRY |                        Moray |
++-------+------------------------------+
+|GB-NAY |               North Ayrshire |
++-------+------------------------------+
+|GB-NLK |            North Lanarkshire |
++-------+------------------------------+
+|GB-ORK |               Orkney Islands |
++-------+------------------------------+
+|GB-PKN |       Perthshire and Kinross |
++-------+------------------------------+
+|GB-RFW |                 Renfrewshire |
++-------+------------------------------+
+|GB-SCB |             Scottish Borders |
++-------+------------------------------+
+|GB-ZET |             Shetland Islands |
++-------+------------------------------+
+|GB-SAY |               South Ayrshire |
++-------+------------------------------+
+|GB-SLK |            South Lanarkshire |
++-------+------------------------------+
+|GB-STG |                     Stirling |
++-------+------------------------------+
+|GB-WDU |          West Dunbartonshire |
++-------+------------------------------+
+|GB-WLN |                 West Lothian |
++-------+------------------------------+
+|GB-AGY |                     Anglesey |
++-------+------------------------------+
+|GB-BGW |                Blaenau Gwent |
++-------+------------------------------+
+|GB-BGE |                     Bridgend |
++-------+------------------------------+
+|GB-CAY |                   Caerphilly |
++-------+------------------------------+
+|GB-CRF |                      Cardiff |
++-------+------------------------------+
+|GB-CMN |              Carmarthenshire |
++-------+------------------------------+
+|GB-CGN |                   Ceredigion |
++-------+------------------------------+
+|GB-CWY |                        Conwy |
++-------+------------------------------+
+|GB-DEN |                 Denbighshire |
++-------+------------------------------+
+|GB-FLN |                   Flintshire |
++-------+------------------------------+
+|GB-GWN |                      Gwynedd |
++-------+------------------------------+
+|GB-MTY |               Merthyr Tydfil |
++-------+------------------------------+
+|GB-MON |                Monmouthshire |
++-------+------------------------------+
+|GB-NTL |            Neath Port Talbot |
++-------+------------------------------+
+|GB-NWP |                      Newport |
++-------+------------------------------+
+|GB-PEM |                Pembrokeshire |
++-------+------------------------------+
+|GB-POW |                        Powys |
++-------+------------------------------+
+|GB-RCT |                      Rhondda |
++-------+------------------------------+
+|GB-SWA |                      Swansea |
++-------+------------------------------+
+|GB-TOF |                      Torfaen |
++-------+------------------------------+
+|GB-VGL |            Vale of Glamorgan |
++-------+------------------------------+
+|GB-WRX |                      Wrexham |
++-------+------------------------------+
+
+* Ukraine
+
++------+------------------------------+
+|ISO   | Name of region               |
++======+==============================+
+|UA-71 |                     Cherkasy |
++------+------------------------------+
+|UA-74 |                    Chernihiv |
++------+------------------------------+
+|UA-77 |                   Chernivtsi |
++------+------------------------------+
+|UA-43 |                       Crimea |
++------+------------------------------+
+|UA-12 |              Dnipropetrovs'k |
++------+------------------------------+
+|UA-14 |                     Donets'k |
++------+------------------------------+
+|UA-26 |             Ivano-Frankivs'k |
++------+------------------------------+
+|UA-63 |                      Kharkiv |
++------+------------------------------+
+|UA-65 |                      Kherson |
++------+------------------------------+
+|UA-68 |               Khmel'nyts'kyy |
++------+------------------------------+
+|UA-30 |                    Kiev City |
++------+------------------------------+
+|UA-32 |                         Kiev |
++------+------------------------------+
+|UA-35 |                   Kirovohrad |
++------+------------------------------+
+|UA-46 |                        L'viv |
++------+------------------------------+
+|UA-09 |                     Luhans'k |
++------+------------------------------+
+|UA-48 |                    Mykolayiv |
++------+------------------------------+
+|UA-51 |                       Odessa |
++------+------------------------------+
+|UA-53 |                      Poltava |
++------+------------------------------+
+|UA-56 |                        Rivne |
++------+------------------------------+
+|UA-40 |                  Sevastopol' |
++------+------------------------------+
+|UA-59 |                         Sumy |
++------+------------------------------+
+|UA-61 |                    Ternopil' |
++------+------------------------------+
+|UA-21 |               Transcarpathia |
++------+------------------------------+
+|UA-05 |                    Vinnytsya |
++------+------------------------------+
+|UA-07 |                        Volyn |
++------+------------------------------+
+|UA-23 |                 Zaporizhzhya |
++------+------------------------------+
+|UA-18 |                     Zhytomyr |
++------+------------------------------+
+
+
+* Usa
+
++------+------------------------------+
+|ISO   | Name of region               |
++======+==============================+
+|US-AL |                      Alabama |
++------+------------------------------+
+|US-AK |                       Alaska |
++------+------------------------------+
+|US-AZ |                      Arizona |
++------+------------------------------+
+|US-AR |                     Arkansas |
++------+------------------------------+
+|US-CA |                   California |
++------+------------------------------+
+|US-CO |                     Colorado |
++------+------------------------------+
+|US-CT |                  Connecticut |
++------+------------------------------+
+|US-DE |                     Delaware |
++------+------------------------------+
+|US-DC |         District of Columbia |
++------+------------------------------+
+|US-FL |                      Florida |
++------+------------------------------+
+|US-GA |                      Georgia |
++------+------------------------------+
+|US-HI |                       Hawaii |
++------+------------------------------+
+|US-ID |                        Idaho |
++------+------------------------------+
+|US-IL |                     Illinois |
++------+------------------------------+
+|US-IN |                      Indiana |
++------+------------------------------+
+|US-IA |                         Iowa |
++------+------------------------------+
+|US-KS |                       Kansas |
++------+------------------------------+
+|US-KY |                     Kentucky |
++------+------------------------------+
+|US-LA |                    Louisiana |
++------+------------------------------+
+|US-ME |                        Maine |
++------+------------------------------+
+|US-MD |                     Maryland |
++------+------------------------------+
+|US-MA |                Massachusetts |
++------+------------------------------+
+|US-MI |                     Michigan |
++------+------------------------------+
+|US-MN |                    Minnesota |
++------+------------------------------+
+|US-MS |                  Mississippi |
++------+------------------------------+
+|US-MO |                     Missouri |
++------+------------------------------+
+|US-MT |                      Montana |
++------+------------------------------+
+|US-NE |                     Nebraska |
++------+------------------------------+
+|US-NV |                       Nevada |
++------+------------------------------+
+|US-NH |                New Hampshire |
++------+------------------------------+
+|US-NJ |                   New Jersey |
++------+------------------------------+
+|US-NM |                   New Mexico |
++------+------------------------------+
+|US-NY |                     New York |
++------+------------------------------+
+|US-NC |               North Carolina |
++------+------------------------------+
+|US-ND |                 North Dakota |
++------+------------------------------+
+|US-OH |                         Ohio |
++------+------------------------------+
+|US-OK |                     Oklahoma |
++------+------------------------------+
+|US-OR |                       Oregon |
++------+------------------------------+
+|US-PA |                 Pennsylvania |
++------+------------------------------+
+|US-RI |                 Rhode Island |
++------+------------------------------+
+|US-SC |               South Carolina |
++------+------------------------------+
+|US-SD |                 South Dakota |
++------+------------------------------+
+|US-TN |                    Tennessee |
++------+------------------------------+
+|US-TX |                        Texas |
++------+------------------------------+
+|US-UT |                         Utah |
++------+------------------------------+
+|US-VT |                      Vermont |
++------+------------------------------+
+|US-VA |                     Virginia |
++------+------------------------------+
+|US-WA |                   Washington |
++------+------------------------------+
+|US-WV |                West Virginia |
++------+------------------------------+
+|US-WI |                    Wisconsin |
++------+------------------------------+
+|US-WY |                      Wyoming |
++------+------------------------------+
+
+
+Need to add a new country?
+-------------------------------
+
+To add a new country to the Country Map tool, follow these steps:
+
+1. You need a shapefile containing the data for your map.
+   You can download one from this site: https://www.diva-gis.org/gdata
+
+2. You need to add the ISO 3166-2 code, in a column named ISO, for every record in your file.
+   This is important because that code is the standard used to match your data to the geojson file.
+
+3. You need to convert the shapefile to a geojson file.
+   This can be done with the ogr2ogr tool: https://www.gdal.org/ogr2ogr.html
+
+4. Put your geojson file in the folder superset-frontend/src/visualizations/CountryMap/countries, with the name nameofyourcountries.geojson
+
+5. You can reduce the size of the geojson file on this site: https://mapshaper.org/
+
+6. Open the file superset-frontend/src/explore/controls.jsx
+
+7. Add your country to the 'select_country' component.
+   Example:
+
+.. code-block:: javascript
+
+    select_country: {
+        type: 'SelectControl',
+        label: 'Country Name Type',
+        default: 'France',
+        choices: [
+        'Belgium',
+        'Brazil',
+        'China',
+        'Egypt',
+        'France',
+        'Germany',
+        'Italy',
+        'Japan',
+        'Korea',
+        'Morocco',
+        'Netherlands',
+        'Russia',
+        'Singapore',
+        'Spain',
+        'Uk',
+        'Usa',
+        ].map(s => [s, s]),
+        description: 'The name of country that Superset should display',
+    },
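The `choices` entry above uses `.map(s => [s, s])` to turn each plain country name into the `[value, label]` pair that `SelectControl` expects. A standalone sketch of that transform (using a shortened, hypothetical country list, not the full list above) behaves like this:

```javascript
// Shortened, hypothetical list standing in for the full choices above.
const countries = ['Belgium', 'Brazil', 'France'];

// Each name becomes a [value, label] pair for the select control;
// here value and label are the same string.
const choices = countries.map(s => [s, s]);
// choices is [['Belgium', 'Belgium'], ['Brazil', 'Brazil'], ['France', 'France']]
```

Because value and label are identical, the string you append to the list must match the geojson file name (step 4) apart from case.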
diff --git a/_sources/visualization.txt b/_sources/visualization.txt
new file mode 100644
index 0000000..05247ca
--- /dev/null
+++ b/_sources/visualization.txt
@@ -0,0 +1,1759 @@
+Visualization Tools
+===================
+
+The data is visualized via slices. Slices are visual components built with D3.js. Some components accept optional or required inputs.
+
+Country Map Tools
+-----------------
+
+This tool is used in slices to visualize a number or string by region, province, or department of a country.
+To use it, you need the ISO 3166-2 code of each region, province, or department.
+
+ISO 3166-2 is part of the ISO 3166 standard published by the International Organization for Standardization (ISO), and defines codes for identifying the principal subdivisions (e.g., provinces or states) of all countries coded in ISO 3166-1.
+
+The purpose of ISO 3166-2 is to establish an international standard of short and unique alphanumeric codes to represent the relevant administrative divisions and dependent territories of all countries in a more convenient and less ambiguous form than their full names. Each complete ISO 3166-2 code consists of two parts, separated by a hyphen:[1]
+
+* The first part is the ISO 3166-1 alpha-2 code of the country;
+* The second part is a string of up to three alphanumeric characters, which is usually obtained from national sources and stems from coding systems already in use in the country concerned, but may also be developed by the ISO itself.
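The two-part structure described above can be illustrated with a short snippet (this is not Superset code; the function name is hypothetical):

```javascript
// Split an ISO 3166-2 code into its two parts: the ISO 3166-1
// alpha-2 country code and the subdivision code.
function splitIso3166_2(code) {
  const [country, subdivision] = code.split('-');
  return { country, subdivision };
}

// 'BE-BRU' (Bruxelles, from the Belgium table below):
// country is 'BE', subdivision is 'BRU'.
const parts = splitIso3166_2('BE-BRU');
```

The subdivision part is what the Country Map tool matches against the ISO column of the tables below.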
+
+List of Countries
+-----------------
+
+* Belgium
+
++---------+-------------------+
+|  ISO    | Name of region    | 
++=========+===================+
+|  BE-BRU |  Bruxelles        |
++---------+-------------------+
+|  BE-VAN |  Antwerpen        |
++---------+-------------------+
+|  BE-VLI |  Limburg          |
++---------+-------------------+
+|  BE-VOV |  Oost-Vlaanderen  |
++---------+-------------------+
+|  BE-VBR |  Vlaams Brabant   |
++---------+-------------------+
+|  BE-VWV |  West-Vlaanderen  |
++---------+-------------------+
+|  BE-WBR |  Brabant Wallon   |
++---------+-------------------+
+|  BE-WHT |  Hainaut          |
++---------+-------------------+
+|  BE-WLG |  Liège            |
++---------+-------------------+
+|  BE-WLX |  Luxembourg       |
++---------+-------------------+
+|  BE-WNA |  Namur            |
++---------+-------------------+
+
+
+
+* Brazil
+
++----------+-----------------------+
+|  ISO     | Name of region        | 
++==========+=======================+
... 89369 lines suppressed ...