Posted to commits@pegasus.apache.org by wu...@apache.org on 2020/11/13 04:11:22 UTC

[incubator-pegasus] branch v2.1 updated (4f0aa95 -> 7686ece)

This is an automated email from the ASF dual-hosted git repository.

wutao pushed a change to branch v2.1
in repository https://gitbox.apache.org/repos/asf/incubator-pegasus.git.


    from 4f0aa95  chore: use asf license header (#633)
     new 03e02d3  chore: add 3rdparty licenses (#629)
     new ad2fffc  chore: remove unmaintained scripts (#634)
     new 7686ece  chore: sort out in-source 3rdparty licenses under rdsn (#637)

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
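The "new" revisions above are exactly the commits reachable from 7686ece but not from 4f0aa95, i.e. what `git log 4f0aa95..7686ece` would list in a clone of the repository. A minimal self-contained sketch of that double-dot range semantics, using a throwaway repository (all names here are illustrative, not from the Pegasus repo):

```shell
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" -c user.name=t -c user.email=t@t commit -q --allow-empty -m "base"
from=$(git -C "$tmp" rev-parse HEAD)   # plays the role of 4f0aa95
git -C "$tmp" -c user.name=t -c user.email=t@t commit -q --allow-empty -m "new 1"
git -C "$tmp" -c user.name=t -c user.email=t@t commit -q --allow-empty -m "new 2"
to=$(git -C "$tmp" rev-parse HEAD)     # plays the role of 7686ece
# Lists only commits reachable from $to but not $from -- the "new" revisions.
git -C "$tmp" log --oneline "$from".."$to"
```

Against the real repository, the equivalent would be `git log --oneline 4f0aa95..7686ece` after fetching branch v2.1.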


Summary of changes:
 LICENSE                           | 207 ++++++++++
 scripts/carrot                    | 291 -------------
 scripts/cluster_check.in          |   7 -
 scripts/clusters_show.sh          | 130 ------
 scripts/clusters_stat.sh          | 137 ------
 scripts/create_table.py           | 202 ---------
 scripts/falcon_screen.json        | 849 --------------------------------------
 scripts/falcon_screen.py          | 600 ---------------------------
 scripts/pegasus_check_clusters.py |  62 ---
 scripts/pegasus_check_ports.py    |  74 ----
 scripts/pegasus_falcon_screen.sh  |  69 ----
 scripts/py_utils/__init__.py      |  23 --
 scripts/py_utils/lib.py           | 167 --------
 scripts/scp-no-interactive        |  24 --
 scripts/ssh-no-interactive        |  22 -
 scripts/update_qt_config.sh       |  91 ----
 16 files changed, 207 insertions(+), 2748 deletions(-)
 delete mode 100755 scripts/carrot
 delete mode 100644 scripts/cluster_check.in
 delete mode 100755 scripts/clusters_show.sh
 delete mode 100755 scripts/clusters_stat.sh
 delete mode 100755 scripts/create_table.py
 delete mode 100644 scripts/falcon_screen.json
 delete mode 100755 scripts/falcon_screen.py
 delete mode 100755 scripts/pegasus_check_clusters.py
 delete mode 100755 scripts/pegasus_check_ports.py
 delete mode 100755 scripts/pegasus_falcon_screen.sh
 delete mode 100644 scripts/py_utils/__init__.py
 delete mode 100644 scripts/py_utils/lib.py
 delete mode 100755 scripts/scp-no-interactive
 delete mode 100755 scripts/ssh-no-interactive
 delete mode 100755 scripts/update_qt_config.sh


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pegasus.apache.org
For additional commands, e-mail: commits-help@pegasus.apache.org


[incubator-pegasus] 01/03: chore: add 3rdparty licenses (#629)


wutao pushed a commit to branch v2.1
in repository https://gitbox.apache.org/repos/asf/incubator-pegasus.git

commit 03e02d315a3e6da581d4d0139ba87f7df852fd2b
Author: Wu Tao <wu...@163.com>
AuthorDate: Tue Oct 27 07:54:46 2020 -0500

    chore: add 3rdparty licenses (#629)
---
 LICENSE | 127 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 127 insertions(+)

diff --git a/LICENSE b/LICENSE
index 6efe7f3..d357bd4 100644
--- a/LICENSE
+++ b/LICENSE
@@ -200,3 +200,130 @@ distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
+
+--------------------------------------------------------------------------------
+
+rdsn/** - MIT License & Apache 2.0 License
+
+ The MIT License (MIT)
+
+ Copyright (c) 2015 Microsoft Corporation
+
+ -=- Robust Distributed System Nucleus (rDSN) -=-
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
+
+ Copyright (c) 2017-present, Xiaomi, Inc.  All rights reserved.
+ This source code is licensed under the Apache License Version 2.0.
+
+--------------------------------------------------------------------------------
+
+src/shell/linenoise/* - BSD-2-Clause License
+
+  Copyright (c) 2010-2014, Salvatore Sanfilippo <antirez at gmail dot com>
+  Copyright (c) 2010-2013, Pieter Noordhuis <pcnoordhuis at gmail dot com>
+
+  All rights reserved.
+
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions are
+  met:
+
+   *  Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+
+   *  Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in the
+      documentation and/or other materials provided with the distribution.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+  HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+--------------------------------------------------------------------------------
+
+src/shell/sds/* - BSD-2-Clause License
+
+  Copyright (c) 2006-2015, Salvatore Sanfilippo <antirez at gmail dot com>
+  Copyright (c) 2015, Oran Agra
+  Copyright (c) 2015, Redis Labs, Inc
+  All rights reserved.
+
+  Redistribution and use in source and binary forms, with or without
+  modification, are permitted provided that the following conditions are met:
+
+    * Redistributions of source code must retain the above copyright notice,
+      this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in the
+      documentation and/or other materials provided with the distribution.
+    * Neither the name of Redis nor the names of its contributors may be used
+      to endorse or promote products derived from this software without
+      specific prior written permission.
+
+  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+  AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+  ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+  LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+  CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+  SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+  INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+  CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+  ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+  POSSIBILITY OF SUCH DAMAGE.
+
+--------------------------------------------------------------------------------
+
+src/shell/argh.h - BSD 3-Clause
+
+ Copyright (c) 2016, Adi Shavit
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice,
+ this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ * Neither the name of  nor the names of its contributors may be used to
+ endorse or promote products derived from this software without specific
+ prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.




[incubator-pegasus] 02/03: chore: remove unmaintained scripts (#634)


wutao pushed a commit to branch v2.1
in repository https://gitbox.apache.org/repos/asf/incubator-pegasus.git

commit ad2fffcf19a8ae3e9bc7c22fd76c804ba95e1fa2
Author: Wu Tao <wu...@163.com>
AuthorDate: Mon Nov 2 16:39:49 2020 +0800

    chore: remove unmaintained scripts (#634)
---
 scripts/carrot                    | 291 -------------
 scripts/cluster_check.in          |   7 -
 scripts/clusters_show.sh          | 130 ------
 scripts/clusters_stat.sh          | 137 ------
 scripts/create_table.py           | 202 ---------
 scripts/falcon_screen.json        | 849 --------------------------------------
 scripts/falcon_screen.py          | 600 ---------------------------
 scripts/pegasus_check_clusters.py |  62 ---
 scripts/pegasus_check_ports.py    |  74 ----
 scripts/pegasus_falcon_screen.sh  |  69 ----
 scripts/py_utils/__init__.py      |  23 --
 scripts/py_utils/lib.py           | 167 --------
 scripts/scp-no-interactive        |  24 --
 scripts/ssh-no-interactive        |  22 -
 scripts/update_qt_config.sh       |  91 ----
 15 files changed, 2748 deletions(-)

diff --git a/scripts/carrot b/scripts/carrot
deleted file mode 100755
index 39fc82f..0000000
--- a/scripts/carrot
+++ /dev/null
@@ -1,291 +0,0 @@
-#!/bin/bash
-
-staging_branch="master"
-project_name="pegasus"
-command_decorator="verify"
-
-function git_current_branch()
-{
-    echo `git branch | fgrep "*" | cut -d " " -f 2`
-}
-
-function get_next_version()
-{
-    versions=(`grep "PEGASUS_VERSION" src/include/pegasus/version.h | cut -d"\"" -f 2 | sed 's/\./ /g'`)
-    case $1 in
-        major)
-            versions[0]=$[ ${versions[0]} + 1 ]
-            ;;
-        minor)
-            versions[1]=$[ ${versions[1]} + 1 ]
-            ;;
-        patch)
-            if [ ${versions[2]} == "SNAPSHOT" ]; then
-                versions[2]="0"
-            else
-                versions[2]=$[ ${versions[2]} + 1 ]
-            fi
-            ;;
-        *)
-            echo "Invalid next version type"
-            exit -1
-            ;;
-    esac
-
-    echo ${versions[*]} | sed 's/ /\./g'
-}
-
-function get_current_version()
-{
-    versions=(`grep "PEGASUS_VERSION" src/include/pegasus/version.h | cut -d"\"" -f 2 | sed 's/\./ /g'`)
-    case $1 in
-        major)
-            echo ${versions[0]}
-            ;;
-        minor)
-            echo ${versions[0]}.${versions[1]}
-            ;;
-        patch)
-            echo ${versions[*]} | sed 's/ /\./g'
-            ;;
-        *)
-            echo "Invalid current version type"
-            exit -1
-            ;;
-    esac
-}
-
-function get_branch_type()
-{
-    if [ $1 = $staging_branch ]; then
-        echo "staging"
-    else
-        echo "release"
-    fi
-}
-
-function verify_command()
-{
-    answer=""
-    echo -n -e "\033[31mExecuting command: $@, y/N?\033[0m"
-    read answer
-    if [ -z $answer ] || [ $answer = "y" ]; then
-        eval "$@"
-    else
-        return -1
-    fi
-    return $?
-}
-
-function verbose_command()
-{
-    echo -e "\033[31mExec Command: $@ \033[0m"
-    eval "$@"
-    return $?
-}
-
-function carrot_execute()
-{
-    case $command_decorator in
-        silence)
-            eval $1
-            ;;
-        verbose)
-            verbose_command $1
-            ;;
-        verify)
-            verify_command $1
-            ;;
-        simulate)
-            echo -e "\033[32m$1\033[0m"
-            ;;
-        *)
-            echo "invalid command decorator"
-            exit -1
-            ;;
-    esac
-    if [ $? -ne 0 ]; then
-        echo "error in execute command $1, simulate the remaining commands"
-        command_decorator="simulate"
-    fi
-}
-
-#
-# patch -b|--branch branch_name -p|--commit_point commit_point -s|--start_from_this -d|--decorate decorate_type
-#
-function usage_patch
-{
-    echo "carrot patch -- apply patch to specific branch, and release a new patch version"
-    echo "  -h|--help, print this help"
-    echo "  -b|--branch BRANCH_NAME, the target branch. For current branch if not set"
-    echo "  -p|--commit_point GIT_COMMIT_ID, cherry-pick this to the target"
-    echo "  -s|--start_from_this. If set, cherry-pick from [GIT_COMMIT_ID, HEAD] to the target"
-    echo "  -d|--decorate TYPE. [silence|verbose|verify|simulate], default is verify"
-}
-
-function make_patch
-{
-    branch_name=""
-    commit_point=""
-    recent_commit=""
-    starting_flag="false"
-    
-    while [[ $# > 0 ]]; do
-        key="$1"
-        case $key in
-            -h|--help)
-                usage_patch
-                exit 0
-                ;;
-            -b|--branch)
-                branch_name=$2
-                shift
-                ;;
-            -p|--commit_point)
-                commit_point=$2
-                shift;;
-            -s|--start_from_this)
-                starting_flag="true"
-                ;;
-            -d|--decorate)
-                command_decorator=$2
-                shift
-                ;;
-            *)
-                usage_patch
-                exit -1
-                ;;
-        esac
-        shift
-    done
-
-    old_branch=`git_current_branch`
-    old_branch_type=`get_branch_type $old_branch`
-
-    # only in the staging branch do we interpret the -s flag, AND
-    # only in the staging branch do we read the recent commit point from the log
-    if [ $old_branch_type == "staging" ]; then
-        if [ ! -z $commit_point ]; then
-            if [ $starting_flag == "true" ]; then
-                recent_commit=`git log | sed -n "1p" | cut -d" " -f 2`
-            fi
-        else
-            commit_point=`git log | sed -n "1p" | cut -d" " -f 2`
-        fi
-    fi
-
-    current_branch=$old_branch
-    # we don't apply the patch unless we are in a release tag
-    if [ ! -z $branch_name ]; then
-        carrot_execute "git checkout $branch_name"
-        current_branch=$branch_name
-        if [ ! -z $recent_commit ]; then
-            carrot_execute "git cherry-pick $commit_point^..$recent_commit"
-        elif [ -n $commit_point ]; then
-            carrot_execute "git cherry-pick $commit_point"
-        fi
-    elif [ $old_branch_type == "staging" ]; then
-        echo "Please checkout to a release branch, or give a release branch name by -b"
-        exit -1
-    fi
-
-    new_version=`get_next_version patch`
-    carrot_execute "./run.sh bump_version $new_version"
-    carrot_execute "git commit -am \"Release $project_name $new_version\""
-    carrot_execute "git tag -a v$new_version -m \"Release $project_name $new_version\""
-    carrot_execute "git push -u origin $current_branch"
-    carrot_execute "git push origin v$new_version"
-
-    if [ $current_branch != $old_branch ]; then
-        carrot_execute "git checkout $old_branch"
-    fi
-}
-
-#
-# minor-release -d|--decorate decorate_type
-#
-function usage_release_minor
-{
-    echo "carrot minor-release"
-    echo "  -h|--help, print this help "
-    echo "  -d|--decorate TYPE. [silence|verbose|verify|simulate], default is verify"
-}
-
-function release_minor
-{
-    while [[ $# > 0 ]]; do
-        key="$1"
-        case $key in
-            -h|--help)
-                usage_release_minor
-                exit 0
-                ;;
-            -d|--decorate)
-                command_decorator=$2
-                shift
-                ;;
-        esac
-        shift
-    done
-
-    this_branch=`git_current_branch`
-    branch_type=`get_branch_type $this_branch`
-
-    if [ $branch_type != "staging" ]; then
-        echo "when release minor, we need to be in staging branch, currently in a $branch_type branch $this_branch"
-        exit -1
-    fi
-
-    this_version=`get_current_version minor`
-
-    # create new branch and push
-    carrot_execute "git checkout -b v$this_version"
-    # from a.b.SNAPSHOT -> a.b.0
-    new_version=`get_next_version patch`
-    # commit the release version
-    carrot_execute "./run.sh bump_version $new_version"
-    carrot_execute "git commit -am \"Release $project_name $new_version\""
-    carrot_execute "git push -u origin v$this_version"
-    # then make tag
-    carrot_execute "git tag -a v$new_version -m \"Release $project_name $new_version\""
-    carrot_execute "git push origin v$new_version"
-
-    # update the staging branch's version
-    carrot_execute "git checkout $this_branch"
-    # from a.b.SNAPSHOT -> a.b+1.SNAPSHOT
-    new_version=`get_next_version minor`
-    carrot_execute "./run.sh bump_version $new_version"
-    carrot_execute "git commit -am \"Bump version to $new_version\""
-    carrot_execute "git push -u origin $this_branch"
-}
-
-function usage_carrot
-{
-    echo "carrot -- Carrot is A Release veRsiOn Tool"
-    echo "  help             print the help"
-    echo "  patch            Make patch"
-    echo "  minor-release    Release a minor version"
-}
-
-pwd="$( cd "$( dirname "$0"  )" && pwd )"
-shell_dir="$( cd $pwd/.. && pwd )"
-cd $shell_dir
-
-action=$1
-case $action in
-    help)
-        usage_carrot ;;
-    patch)
-        shift
-        make_patch $*
-        ;;
-    minor-release)
-        shift
-        release_minor $*
-        ;;
-    *)
-        echo "ERROR: unknown command $action"
-        echo
-        usage_carrot
-        exit -1
-esac
diff --git a/scripts/cluster_check.in b/scripts/cluster_check.in
deleted file mode 100644
index e8bb69f..0000000
--- a/scripts/cluster_check.in
+++ /dev/null
@@ -1,7 +0,0 @@
-cluster_info
-server_info
-ls -d
-nodes -d
-app_stat
-query_backup_policy -p every_day
-
diff --git a/scripts/clusters_show.sh b/scripts/clusters_show.sh
deleted file mode 100755
index b5661f5..0000000
--- a/scripts/clusters_show.sh
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-if [ $# -lt 2 ]; then
-  echo "USAGE: $0 <cluster-list-file> <result-format>"
-  echo
-  echo "The result format must be 'table' or 'csv'."
-  echo
-  echo "For example:"
-  echo "  $0 \"clusters.txt\" \"table\""
-  echo
-  exit 1
-fi
-
-PID=$$
-clusters_file=$1
-format=$2
-if [ "$format" != "table" -a "$format" != "csv" ]; then
-  echo "ERROR: invalid result format, should be 'table' or 'csv'."
-  exit 1
-fi
-
-pwd="$( cd "$( dirname "$0"  )" && pwd )"
-shell_dir="$( cd $pwd/.. && pwd )"
-cd $shell_dir
-
-echo "show_time = `date`"
-echo
-echo "Columns:"
-echo "  - cluster: name of the cluster"
-echo "  - rs_count: current count of replica servers"
-echo "  - version: current version of replica servers"
-echo "  - lb_op_count: current count of load balance operations to make cluster balanced"
-echo "  - app_count: current count of tables in the cluster"
-echo "  - storage_gb: current total data size in GB of tables in the cluster"
-echo
-if [ "$format" == "table" ]; then
-  printf '%-30s%-12s%-12s%-12s%-12s%-12s\n' cluster rs_count version lb_op_count app_count storage_gb
-elif [ "$format" == "csv" ]; then
-  echo "cluster,rs_count,version,lb_op_count,app_count,storage_gb"
-else
-  echo "ERROR: invalid format: $format"
-  exit -1
-fi
-cluster_count=0
-rs_count_sum=0
-app_count_sum=0
-data_size_sum=0
-lb_op_count_sum=0
-while read cluster
-do
-  tmp_file="/tmp/$UID.$PID.pegasus.clusters_status.cluster_info"
-  echo "cluster_info" | ./run.sh shell -n $cluster &>$tmp_file
-  cluster_info_fail=`grep "\<failed\>" $tmp_file | wc -l`
-  if [ $cluster_info_fail -eq 1 ]; then
-    echo "ERROR: get cluster info failed, refer error to $tmp_file"
-    exit 1
-  fi
-  lb_op_count=`cat $tmp_file | grep 'balance_operation_count' | grep -o 'total=[0-9]*' | cut -d= -f2`
-  if [ -z $lb_op_count ]; then
-    lb_op_count="-"
-  else
-    lb_op_count_sum=$((lb_op_count_sum + lb_op_count))
-  fi
-
-  tmp_file="/tmp/$UID.$PID.pegasus.clusters_status.server_info"
-  echo "server_info" | ./run.sh shell -n $cluster &>$tmp_file
-  rs_count=`cat $tmp_file | grep 'replica-server' | wc -l`
-  rs_version=`cat $tmp_file | grep 'replica-server' | grep -o 'Pegasus Server [^ ]*' | head -n 1 | sed 's/SNAPSHOT/SN/' | awk '{print $3}'`
-
-  app_stat_result="/tmp/$UID.$PID.pegasus.clusters_status.app_stat_result"
-  tmp_file="/tmp/$UID.$PID.pegasus.clusters_status.app_stat"
-  echo "app_stat -o $app_stat_result" | ./run.sh shell -n $cluster &>$tmp_file
-  app_stat_fail=`grep "\<failed\>" $tmp_file | wc -l`
-  if [ $app_stat_fail -eq 1 ]; then
-    sleep 1
-    echo "app_stat -o $app_stat_result" | ./run.sh shell -n $cluster &>$tmp_file
-    app_stat_fail=`grep "\<failed\>" $tmp_file | wc -l`
-    if [ $app_stat_fail -eq 1 ]; then
-      echo "ERROR: app stat failed, refer error to $tmp_file"
-      exit 1
-    fi
-  fi
-  app_count=`cat $app_stat_result | wc -l`
-  app_count=$((app_count-2))
-  data_size_column=`cat $app_stat_result | awk '/file_mb/{ for(i = 1; i <= NF; i++) { if ($i == "file_mb") print i; } }'`
-  data_size=`cat $app_stat_result | tail -n 1 | awk '{print $'$data_size_column'}' | sed 's/\.00$//'`
-  data_size=$(((data_size+1023)/1024))
-
-  if [ "$format" == "table" ]; then
-    printf '%-30s%-12s%-12s%-12s%-12s%-12s\n' $cluster $rs_count $rs_version $lb_op_count $app_count $data_size
-  elif [ "$format" == "csv" ]; then
-    echo -e "$cluster,$rs_count,$rs_version,$lb_op_count,$app_count,$data_size"
-  else
-    echo "ERROR: invalid format: $format"
-    exit -1
-  fi
-
-  cluster_count=$((cluster_count + 1))
-  rs_count_sum=$((rs_count_sum + rs_count))
-  app_count_sum=$((app_count_sum + app_count))
-  data_size_sum=$((data_size_sum + data_size))
-done <$clusters_file
-
-if [ "$format" == "table" ]; then
-  printf '%-30s%-12s%-12s%-12s%-12s%-12s\n' "(total:$cluster_count)" $rs_count_sum "-" $lb_op_count_sum $app_count_sum $data_size_sum
-elif [ "$format" == "csv" ]; then
-  echo -e "(total:$cluster_count),$rs_count_sum,,$lb_op_count_sum,$app_count_sum,$data_size_sum"
-else
-  echo "ERROR: invalid format: $format"
-  exit -1
-fi
-
-rm -rf /tmp/$UID.$PID.pegasus.* &>/dev/null
-
diff --git a/scripts/clusters_stat.sh b/scripts/clusters_stat.sh
deleted file mode 100755
index ef030e6..0000000
--- a/scripts/clusters_stat.sh
+++ /dev/null
@@ -1,137 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-if [ $# -lt 3 ]; then
-  echo "USAGE: $0 <cluster-list-file> <month-list> <result-format>"
-  echo
-  echo "The result format must be 'table' or 'csv'."
-  echo
-  echo "For example:"
-  echo "  $0 \"clusters.txt\" \"2019-01\" \"table\""
-  echo "  $0 \"clusters.txt\" \"2019-01 2019-02\" \"csv\""
-  echo
-  exit 1
-fi
-
-clusters_file=$1
-months=$2
-format=$3
-if [ "$format" != "table" -a "$format" != "csv" ]; then
-  echo "ERROR: invalid result format, should be 'table' or 'csv'."
-  exit 1
-fi
-
-pwd="$( cd "$( dirname "$0"  )" && pwd )"
-shell_dir="$( cd $pwd/.. && pwd )"
-cd $shell_dir
-
-all_result="/tmp/pegasus.stat_available.all_result"
-rm $all_result &>/dev/null
-echo "stat_time = `date`"
-echo "month_list = $months"
-echo
-echo "Stat method:"
-echo "  - for each cluster, there is a collector which sends get/set requests to detect table every 3 seconds."
-echo "  - every minute, the collector will write a record of Send and Succeed count into detect table."
-echo "  - to stat cluster availability, we scan all the records for the months from detect table, calculate the"
-echo "    total Send count and total Succeed count, and calculate the availability by:"
-echo "        Available = TotalSucceedCount / TotalSendCount"
-echo
-echo "Columns:"
-echo "  - cluster: name of the cluster"
-echo "  - rs_count: current count of replica servers"
-echo "  - version: current version of replica servers"
-echo "  - minutes: record count in detect table for the months"
-echo "  - available: cluster availability"
-echo "  - app_count: current count of tables in the cluster"
-echo "  - storage_gb: current total data size in GB of tables in the cluster"
-echo
-if [ "$format" == "table" ]; then
-  printf '%-30s%-12s%-12s%-12s%-12s%-12s%-12s\n' cluster rs_count version minutes available app_count storage_gb
-elif [ "$format" == "csv" ]; then
-  echo "cluster,rs_count,version,minutes,available,table_count,storage_gb"
-else
-  echo "ERROR: invalid format: $format"
-  exit 1
-fi
-cluster_count=0
-rs_count_sum=0
-app_count_sum=0
-data_size_sum=0
-while read cluster
-do
-  rs_count=`echo server_info | ./run.sh shell -n $cluster 2>&1 | grep 'replica-server' | wc -l`
-  rs_version=`echo server_info | ./run.sh shell -n $cluster 2>&1 | grep 'replica-server' | \
-      grep -o 'Pegasus Server [^ ]*' | head -n 1 | sed 's/SNAPSHOT/SN/' | awk '{print $3}'`
-  result=`./scripts/pegasus_stat_available.sh $cluster $months`
-  if echo $result | grep '^ERROR'; then
-    echo "ERROR: process cluster $cluster failed"
-    continue
-  fi
-  minutes=`echo $result | awk '{print $2}'`
-  available=`echo $result | awk '{print $3}' | sed 's/data/-/'`
-  app_count=`echo $result | awk '{print $4}'`
-  data_size=`echo $result | awk '{print $5}'`
-  if [ "$available" == "1.000000" ]; then
-    available_str="99.9999%"
-  elif [ "$available" == "0" ]; then
-    available_str="00.0000%"
-  else
-    available_str="${available:2:2}.${available:4:4}%"
-  fi
-  if [ "$format" == "table" ]; then
-    printf '%-30s%-12s%-12s%-12s%-12s%-12s%-12s\n' $cluster $rs_count $rs_version $minutes $available $app_count $data_size
-  elif [ "$format" == "csv" ]; then
-    echo -e "$cluster,$rs_count,$rs_version,$minutes,=\"$available_str\",$app_count,$data_size"
-  else
-    echo "ERROR: invalid format: $format"
-    exit 1
-  fi
-  cluster_count=$((cluster_count + 1))
-  rs_count_sum=$((rs_count_sum + rs_count))
-  app_count_sum=$((app_count_sum + app_count))
-  data_size_sum=$((data_size_sum + data_size))
-done <$clusters_file
-
-minutes=`cat $all_result | wc -l`
-if [ $minutes -eq 0 ]; then
-  available="0.000000"
-else
-  available=`cat $all_result | grep -o '[0-9]*,[0-9]*,[0-9]*' | awk -F, '{a+=$1;b+=$2}END{printf("%f\n",b/a);}'`
-fi
-
-if [ "$available" == "1.000000" ]; then
-  available_str="99.9999%"
-elif [ "$available" == "0" ]; then
-  available_str="00.0000%"
-else
-  available_str="${available:2:2}.${available:4:4}%"
-fi
-
-if [ "$format" == "table" ]; then
-  printf '%-30s%-12s%-12s%-12s%-12s%-12s%-12s\n' "(total:$cluster_count)" $rs_count_sum "-" $minutes $available $app_count_sum $data_size_sum
-  echo
-elif [ "$format" == "csv" ]; then
-  echo -e "(total:$cluster_count),$rs_count_sum,,$minutes,=\"$available_str\",$app_count_sum,$data_size_sum"
-else
-  echo "ERROR: invalid format: $format"
-  exit 1
-fi
-
-rm $all_result &>/dev/null
-
diff --git a/scripts/create_table.py b/scripts/create_table.py
deleted file mode 100755
index 557001d..0000000
--- a/scripts/create_table.py
+++ /dev/null
@@ -1,202 +0,0 @@
-#!/usr/bin/python
-# -*- coding: utf-8 -*-
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-"""
-HOWTO
-=====
-
-./scripts/create_table.py --table ai_user_info \
-                          --depart 云平台部-存储平台-KV系统组 \
-                          --user "wutao1&qinzuoyan" \
-                          --cluster bj1-ai \
-                          --write_throttling "2000*delay*100" \
-                          --partition_count 16
-
-OR
-
-./scripts/create_table.py -t ai_user_info \
-                          -d 云平台部-存储平台-KV系统组 \
-                          -u "wutao1&qinzuoyan" \
-                          -c bj1-ai \
-                          -w "2000*delay*100" \
-                          -p 16
-
-DEVELOPER GUIDE
-===============
-
-The source code is formatted using autopep8.
-Ensure you have run the formatter before committing changes.
-```
-autopep8 -i --aggressive --aggressive scripts/create_table.py
-```
-
-TODO(wutao1): automatically set write throttling according to the given
-              estimated QPS on the table.
-"""
-
-import os
-import click
-import py_utils
-import re
-import json
-import math
-
-
-def validate_param_table(ctx, param, value):
-    # TODO(wutao1): check illegal characters
-    return value.encode('utf-8')
-
-
-def validate_param_depart(ctx, param, value):
-    return value.encode('utf-8')
-
-
-def validate_param_user(ctx, param, value):
-    return value.encode('utf-8')
-
-
-def validate_param_cluster(ctx, param, value):
-    return value.encode('utf-8')
-
-
-def validate_param_partition_count(ctx, param, value):
-    if value == 0:
-        raise click.BadParameter("Cannot create table with 0 partition")
-    if math.log(value, 2) != math.floor(math.log(value, 2)):
-        raise click.BadParameter(
-            "Partition count {} should be a power of 2".format(value))
-    return value
-
-
-def validate_param_write_throttling(ctx, param, value):
-    if value == '':
-        return None
-    pattern = re.compile(r'^\d+\*delay\*\d+(,\d+\*reject\*\d+)?$')
-    match = pattern.match(value)
-    if match is not None:
-        return value.encode('utf-8')
-    else:
-        raise click.BadParameter(
-            'invalid value of throttle \'%s\'' % value)
-
-
-def create_table_if_needed(cluster, table, partition_count):
-    if not cluster.has_table(table):
-        try:
-            # TODO(wutao1): Outputs progress while polling.
-            py_utils.echo("Creating table {}...".format(table))
-            cluster.create_table(table, partition_count)
-        except Exception as err:
-            py_utils.echo(err, "red")
-            exit(1)
-    else:
-        py_utils.echo("Success: table \"{}\" exists".format(table))
-
-
-def set_business_info_if_needed(cluster, table, depart, user):
-    new_business_info = "depart={},user={}".format(depart, user)
-    set_app_envs_if_needed(cluster, table, 'business.info', new_business_info)
-
-
-def set_write_throttling_if_needed(cluster, table, new_throttle):
-    if new_throttle is None:
-        return
-    set_app_envs_if_needed(
-        cluster, table, 'replica.write_throttling', new_throttle)
-
-
-def set_app_envs_if_needed(cluster, table, env_name, new_env_value):
-    py_utils.echo("New value of {}={}".format(env_name, new_env_value))
-    envs = cluster.get_app_envs(table)
-    if envs is not None and envs.get(env_name) is not None:
-        old_env_value = envs.get(env_name).encode('utf-8')
-        if old_env_value is not None:
-            py_utils.echo("Old value of {}={}".format(env_name, old_env_value))
-            if old_env_value == new_env_value:
-                py_utils.echo("Success: {} keeps unchanged".format(env_name))
-                return
-    cluster.set_app_envs(table, env_name,
-                         new_env_value)
-
-
-def all_arguments_to_string(
-        table,
-        depart,
-        user,
-        cluster,
-        partition_count,
-        write_throttling):
-    return json.dumps({
-        'table': table,
-        'depart': depart,
-        'user': user,
-        'cluster': cluster,
-        'partition_count': partition_count,
-        'write_throttling': write_throttling,
-    }, sort_keys=True, indent=4, ensure_ascii=False, encoding='utf-8')
-
-
-@click.command()
-@click.option("--table", "-t",
-              required=True,
-              callback=validate_param_table,
-              help="Name of the table you want to create.")
-@click.option(
-    "--depart", "-d",
-    required=True,
-    callback=validate_param_depart,
-    help="Department of the table owner. If there is more than one level of department, use '-' to concatenate them.")
-@click.option(
-    "--user", "-u",
-    required=True,
-    callback=validate_param_user,
-    help="The table owner. If there is more than one owner, use '&' to concatenate them.")
-@click.option("--cluster", "-c",
-              required=True,
-              callback=validate_param_cluster,
-              help="The name of the cluster where you want to place the table.")
-@click.option("--partition_count", "-p",
-              callback=validate_param_partition_count,
-              help="The partition count of the table. If omitted, the table is not created.",
-              type=int)
-@click.option(
-    "--write_throttling", "-w",
-    default="",
-    callback=validate_param_write_throttling,
-    help="{delay_qps_threshold}*delay*{delay_ms},{reject_qps_threshold}*reject*{delay_ms_before_reject}")
-def main(table, depart, user, cluster, partition_count, write_throttling):
-    if not click.confirm(
-        "Confirm to create table:\n{}\n".format(
-            all_arguments_to_string(
-            table,
-            depart,
-            user,
-            cluster,
-            partition_count,
-            write_throttling))):
-        return
-    c = py_utils.PegasusCluster(cluster_name=cluster)
-    create_table_if_needed(c, table, partition_count)
-    set_business_info_if_needed(c, table, depart, user)
-    set_write_throttling_if_needed(c, table, write_throttling)
-
-
-if __name__ == "__main__":
-    main()
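The removed `create_table.py` validated its `--partition_count` and `--write_throttling` flags before touching the cluster. A minimal standalone restatement of those two checks (using a bitwise power-of-two test in place of the script's `math.log` comparison, which avoids float rounding; the function names here are illustrative, not the script's own):

```python
import re

def validate_partition_count(value):
    # A partition count must be a positive power of 2; for such values
    # exactly one bit is set, so value & (value - 1) is zero.
    if value <= 0 or value & (value - 1) != 0:
        raise ValueError("Partition count {} should be a power of 2".format(value))
    return value

# Throttle grammar: "{delay_qps_threshold}*delay*{delay_ms}" optionally
# followed by ",{reject_qps_threshold}*reject*{delay_ms_before_reject}"
THROTTLE_RE = re.compile(r'^\d+\*delay\*\d+(,\d+\*reject\*\d+)?$')

def validate_write_throttling(value):
    if THROTTLE_RE.match(value) is None:
        raise ValueError("invalid value of throttle '{}'".format(value))
    return value

print(validate_partition_count(16))                 # 16
print(validate_write_throttling("2000*delay*100"))  # 2000*delay*100
```

Both checks fail fast with a descriptive error, so an invalid table request never reaches the meta server.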
diff --git a/scripts/falcon_screen.json b/scripts/falcon_screen.json
deleted file mode 100644
index 6f05187..0000000
--- a/scripts/falcon_screen.json
+++ /dev/null
@@ -1,849 +0,0 @@
-{
-  "comments": [
-    {
-      "title": "graph name",
-      "endpoints": ["hostname or tag identifiers; separate tags with spaces"],
-      "counters": ["counter name"],
-      "graph_type": "display type: h for the endpoint view, k for the counters view, a for the combined view",
-      "method": "whether to sum the series when plotting: 'sum' to sum, empty string otherwise",
-      "timespan": "time span to display, in seconds"
-    }
-  ],
-  "version": "20180625",
-  "graphs": [
-    {
-      "title": "Cluster availability (unit: percent * 10000; 1M means 100%)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*cluster.available.minute/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Total QPS of each operation (total QPS of get, multi_get, put, multi_put, remove, multi_remove, and scan)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.get_qps#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.multi_get_qps#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.put_qps#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.multi_put_qps#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.remove_qps#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.multi_remove_qps#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.incr_qps#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.check_and_set_qps#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.scan_qps#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Cluster read/write throughput (over the last 10 seconds; unit: Capacity Unit)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.recent_read_cu#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.recent_write_cu#_all_/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Cluster load-balance status (pending balance operations, executed balance operations, etc.)",
-      "endpoints": ["cluster=${cluster.name} job=meta service=pegasus"],
-      "counters": [
-          "meta*eon.greedy_balancer*balance_operation_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.greedy_balancer*recent_balance_move_primary_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.greedy_balancer*recent_balance_copy_primary_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.greedy_balancer*recent_balance_copy_secondary_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Memory usage of each ReplicaServer (unit: MB)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*server*memused.res(MB)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Storage usage ratio of each node (percent)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "df.bytes.used.percent/fstype=ext4,mount=/home",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd1",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd2",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd3",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd4",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd5",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd6",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd7",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd8",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd9",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd10",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd11",
-          "df.bytes.used.percent/fstype=ext4,mount=/home/work/ssd12"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Memory usage ratio of each node (percent)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "mem.memused.percent"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Storage usage of each table (single-replica data size per table; unit: MB)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.storage_mb#${for.each.table}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "RocksDB block cache hit rate of each table (unit: percent * 10000; 1M means 100%)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.rdb_block_cache_hit_rate#${for.each.table}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 Get server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_GET.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 Get server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_GET.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 MultiGet server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_GET.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 MultiGet server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_GET.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 Set server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_PUT.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 Set server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_PUT.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 MultiSet server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_PUT.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 MultiSet server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_PUT.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 Del server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_REMOVE.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 Del server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_REMOVE.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 MultiDel server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_REMOVE.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 MultiDel server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_REMOVE.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 Incr server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_INCR.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 Incr server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_INCR.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 CheckAndSet server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_CHECK_AND_SET.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 CheckAndSet server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_CHECK_AND_SET.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 CheckAndMutate server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_CHECK_AND_MUTATE.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 CheckAndMutate server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_CHECK_AND_MUTATE.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 Scan server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_SCAN.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 Scan server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_SCAN.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 Prepare client-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_PREPARE_ACK.latency.client(ns)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 Prepare client-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_PREPARE_ACK.latency.client(ns).p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 Prepare server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_PREPARE.latency.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P999 Prepare server-side latency (unit: nanoseconds)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_PREPARE.latency.server.p999/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Replica count of each node",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*eon.replica_stub*replica(Count)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Commit QPS of each node",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*eon.replica_stub*replicas.commit.qps/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "SharedLog size of each node (unit: MB)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*eon.replica_stub*shared.log.size(MB)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Bytes recently written to the SharedLog of each node",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*eon.replica_stub*shared.log.recent.write.size/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Cluster partition health (number of partitions in healthy, writable_ill, unwritable, unreadable, and dead states)",
-      "endpoints": ["cluster=${cluster.name} job=meta service=pegasus"],
-      "counters": [
-          "meta*eon.server_state*dead_partition_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.server_state*unreadable_partition_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.server_state*unwritable_partition_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.server_state*writable_ill_partition_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.server_state*healthy_partition_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Cluster config updates (number of disconnected nodes, config change count, etc.)",
-      "endpoints": ["cluster=${cluster.name} job=meta service=pegasus"],
-      "counters": [
-          "meta*eon.meta_service*recent_disconnect_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.meta_service*unalive_nodes/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.server_state*recent_update_config_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.server_state*recent_partition_change_unwritable_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.server_state*recent_partition_change_writable_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus",
-          "meta*eon.server_load_balancer*recent_choose_primary_fail_count/cluster=${cluster.name},job=meta,port=${meta.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "ReplicaServer anomaly statistics of each node (beacon failures, prepare failures, error dir count, garbage dir count, etc.)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*eon.failure_detector*recent_beacon_fail_count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.error.replica.dir.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.garbage.replica.dir.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.recent.prepare.fail.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.recent.replica.move.error.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.recent.replica.move.garbage.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.recent.replica.remove.dir.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Learning statistics of each node (run count, duration, data transferred, etc.)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*eon.nfs_client*recent_copy_data_size/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.nfs_client*recent_copy_fail_count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.nfs_client*recent_write_data_size/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.nfs_client*recent_write_fail_count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.nfs_server*recent_copy_data_size/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.nfs_server*recent_copy_fail_count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.max.copy.file.size/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.max.duration.time(ms)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.copy.buffer.size/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.copy.file.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.copy.file.size/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.learn.app.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.learn.cache.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.learn.fail.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.learn.log.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.learn.reset.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.learn.succ.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.round.start.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*replicas.learning.recent.start.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Cold-backup statistics of each node (run count, duration, data uploaded, etc.)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*eon.replica_stub*cold.backup.max.duration.time.ms/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.max.upload.file.size/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.recent.cancel.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.recent.fail.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.recent.pause.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.recent.start.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.recent.succ.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.recent.upload.file.fail.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.recent.upload.file.size/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.recent.upload.file.succ.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*cold.backup.running.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Manual-compact statistics of each node (currently running count, etc.)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*app.pegasus*manual.compact.running.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "CPU Busy",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "cpu.busy"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Network Dropped",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "net.if.total.dropped/iface=eth0"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Network In Bytes",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "net.if.in.bytes/iface=eth0"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Network Out Bytes",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "net.if.out.bytes/iface=eth0"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "SSD Util",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "disk.io.util/device=sdb",
-          "disk.io.util/device=sdc",
-          "disk.io.util/device=sdd",
-          "disk.io.util/device=sde",
-          "disk.io.util/device=sdf",
-          "disk.io.util/device=sdg",
-          "disk.io.util/device=sdh",
-          "disk.io.util/device=sdi",
-          "disk.io.util/device=sdj",
-          "disk.io.util/device=sdk",
-          "disk.io.util/device=sdl",
-          "disk.io.util/device=sdm",
-          "disk.io.util/device=vda",
-          "disk.io.util/device=vdb",
-          "disk.io.util/device=vdc",
-          "disk.io.util/device=vdd",
-          "disk.io.util/device=vde",
-          "disk.io.util/device=xvda",
-          "disk.io.util/device=xvdb",
-          "disk.io.util/device=xvdc",
-          "disk.io.util/device=xvdd",
-          "disk.io.util/device=xvde"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "SSD Await",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "disk.io.await/device=sdb",
-          "disk.io.await/device=sdc",
-          "disk.io.await/device=sdd",
-          "disk.io.await/device=sde",
-          "disk.io.await/device=sdf",
-          "disk.io.await/device=sdg",
-          "disk.io.await/device=sdh",
-          "disk.io.await/device=sdi",
-          "disk.io.await/device=sdj",
-          "disk.io.await/device=sdk",
-          "disk.io.await/device=sdl",
-          "disk.io.await/device=sdm",
-          "disk.io.await/device=vda",
-          "disk.io.await/device=vdb",
-          "disk.io.await/device=vdc",
-          "disk.io.await/device=vdd",
-          "disk.io.await/device=vde",
-          "disk.io.await/device=xvda",
-          "disk.io.await/device=xvdb",
-          "disk.io.await/device=xvdc",
-          "disk.io.await/device=xvdd",
-          "disk.io.await/device=xvde"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "各节点最近Flush次数",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*app.pegasus*recent.flush.completed.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "各节点最近Compaction次数",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*app.pegasus*recent.compaction.completed.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "各节点最近Flush写出字节数",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*app.pegasus*recent.flush.output.bytes/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "各节点最近Compaction写入写出字节数",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*app.pegasus*recent.compaction.input.bytes/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*app.pegasus*recent.compaction.output.bytes/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "各节点最近Emergency Checkpoint触发次数",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*eon.replica_stub*recent.trigger.emergency.checkpoint.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "各节点最近Write Stall触发次数",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*app.pegasus*recent.write.change.delayed.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*app.pegasus*recent.write.change.stopped.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 单条读 排队时间(单位:纳秒)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_GET.queue(ns)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 单条读 执行时间(单位:纳秒)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_GET.exec(ns)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 多条读 排队时间(单位:纳秒)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_GET.queue(ns)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 多条读 执行时间(单位:纳秒)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_GET.exec(ns)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 单条写 排队时间(单位:纳秒)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_PUT.queue(ns)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 单条写 执行时间(单位:纳秒)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_PUT.exec(ns)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 多条写 排队时间(单位:纳秒)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_PUT.queue(ns)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "P99 多条写 执行时间(单位:纳秒)",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "zion*profiler*RPC_RRDB_RRDB_MULTI_PUT.exec(ns)/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "各节点最近读写失败次数",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-          "replica*eon.replica_stub*recent.read.fail.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-          "replica*eon.replica_stub*recent.write.fail.count/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "异常查询条数(统计各表最近10秒执行时间超过100毫秒的查询条数)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.recent_abnormal_count#${for.each.table}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Expire数据条数(统计各表最近10秒查询的过期数据条数)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.recent_expire_count#${for.each.table}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Filter数据条数(统计各表最近10秒过滤的数据条数)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.recent_filter_count#${for.each.table}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Delay数据条数(统计各表最近10秒write throttling delay的数据条数)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.recent_write_throttling_delay_count#${for.each.table}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "Reject数据条数(统计各表最近10秒write throttling reject的数据条数)",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.recent_write_throttling_reject_count#${for.each.table}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "【${for.each.table}】单表QPS",
-      "endpoints": ["cluster=${cluster.name} job=collector service=pegasus"],
-      "counters": [
-          "collector*app.pegasus*app.stat.get_qps#${table.name}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.multi_get_qps#${table.name}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.put_qps#${table.name}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.multi_put_qps#${table.name}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.remove_qps#${table.name}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.multi_remove_qps#${table.name}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.incr_qps#${table.name}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.check_and_set_qps#${table.name}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus",
-          "collector*app.pegasus*app.stat.scan_qps#${table.name}/cluster=${cluster.name},job=collector,port=${collector.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    },
-    {
-      "title": "各节点 P99 RPC 报文长度",
-      "endpoints": ["cluster=${cluster.name} job=replica service=pegasus"],
-      "counters": [
-        "zion*profiler*RPC_RRDB_RRDB_PUT.size.request.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-        "zion*profiler*RPC_RRDB_RRDB_MULTI_PUT.size.request.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-        "zion*profiler*RPC_RRDB_RRDB_GET.size.response.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus",
-        "zion*profiler*RPC_RRDB_RRDB_MULTI_GET.size.response.server/cluster=${cluster.name},job=replica,port=${replica.port},service=pegasus"
-      ],
-      "graph_type": "a",
-      "method": "",
-      "timespan": 86400
-    }
-  ]
-}
diff --git a/scripts/falcon_screen.py b/scripts/falcon_screen.py
deleted file mode 100755
index 9617e53..0000000
--- a/scripts/falcon_screen.py
+++ /dev/null
@@ -1,600 +0,0 @@
-#!/usr/bin/env python                                                                                                                                                                       
-# -*- coding: utf-8 -*-
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
- 
-import requests
-import json
-import re
-import sys
-import os
-import copy
-
-#
-# RESTful API doc: http://wiki.n.miui.com/pages/viewpage.action?pageId=66037692
-# falcon ctrl api: http://dev.falcon.srv/doc/
-#
-
-# account info
-serviceAccount = ""
-serviceSeedMd5 = ""
-
-###############################################################################
-
-# global variables
-falconServiceUrl = "http://falcon.srv"
-pegasusScreenId = 18655
-sessionId = ""
-metaPort = ""
-replicaPort = ""
-collectorPort = ""
-
-# return: bool
-def get_service_port_by_minos(clusterName):
-    minosEnv = os.environ.get("MINOS_CONFIG_FILE")
-    if not isinstance(minosEnv, str) or len(minosEnv) == 0:
-        print "WARN: environment variables 'MINOS_CONFIG_FILE' is not set"
-        return False
-    if not os.path.isfile(minosEnv):
-        print "WARN: environment variables 'MINOS_CONFIG_FILE' is not valid"
-        return False
-    minosConfigDir = os.path.dirname(minosEnv)
-    if not os.path.isdir(minosConfigDir):
-        print "WARN: environment variables 'MINOS_CONFIG_FILE' is not valid"
-        return False
-    clusterConfigFile = minosConfigDir + "/xiaomi-config/conf/pegasus/pegasus-" + clusterName + ".cfg"
-    if not os.path.isfile(clusterConfigFile):
-        print "WARN: cluster config file '%s' not exist" % clusterConfigFile
-        return False
-
-    lines = [line.strip() for line in open(clusterConfigFile)]
-    mode = ''
-    global metaPort
-    global replicaPort
-    global collectorPort
-    for line in lines:
-        if line == '[meta]':
-            mode = 'meta'
-        elif line == '[replica]':
-            mode = 'replica'
-        elif line == '[collector]':
-            mode = 'collector'
-        m = re.search('^base_port *= *([0-9]+)', line)
-        if m:
-            basePort = int(m.group(1))
-            if mode == 'meta':
-                metaPort = str(basePort + 1)
-            elif mode == 'replica':
-                replicaPort = str(basePort + 1)
-            elif mode == 'collector':
-                collectorPort = str(basePort + 1)
-            mode = ''
-
-    print "INFO: metaPort = %s, replicaPort = %s, collectorPort = %s" % (metaPort, replicaPort, collectorPort)
-    if metaPort == '' or replicaPort == '' or collectorPort == '':
-        print "WARN: get port from cluster config file '%s' failed" % clusterConfigFile
-        return False
-    return True
-
-
-# return: bool
-def get_service_port_by_minos2(clusterName):
-    minosEnv = os.environ.get("MINOS2_CONFIG_FILE")
-    if not isinstance(minosEnv, str) or len(minosEnv) == 0:
-        print "WARN: environment variables 'MINOS2_CONFIG_FILE' is not set"
-        return False
-    if not os.path.isfile(minosEnv):
-        print "WARN: environment variables 'MINOS2_CONFIG_FILE' is not valid"
-        return False
-    minosConfigDir = os.path.dirname(minosEnv)
-    if not os.path.isdir(minosConfigDir):
-        print "WARN: environment variables 'MINOS2_CONFIG_FILE' is not valid"
-        return False
-    clusterConfigFile = minosConfigDir + "/xiaomi-config/conf/pegasus/pegasus-" + clusterName + ".yaml"
-    if not os.path.isfile(clusterConfigFile):
-        print "WARN: cluster config file '%s' not exist" % clusterConfigFile
-        return False
-
-    lines = [line.strip() for line in open(clusterConfigFile)]
-    mode = ''
-    global metaPort
-    global replicaPort
-    global collectorPort
-    for line in lines:
-        if line == 'meta:':
-            mode = 'meta'
-        elif line == 'replica:':
-            mode = 'replica'
-        elif line == 'collector:':
-            mode = 'collector'
-        m = re.search('^base *: *([0-9]+)', line)
-        if m:
-            basePort = int(m.group(1))
-            if mode == 'meta':
-                metaPort = str(basePort + 1)
-            elif mode == 'replica':
-                replicaPort = str(basePort + 1)
-            elif mode == 'collector':
-                collectorPort = str(basePort + 1)
-            mode = ''
-
-    print "INFO: metaPort = %s, replicaPort = %s, collectorPort = %s" % (metaPort, replicaPort, collectorPort)
-    if metaPort == '' or replicaPort == '' or collectorPort == '':
-        print "WARN: get port from cluster config file '%s' failed" % clusterConfigFile
-        return False
-
-    return True
-
-
-# return:
-def get_session_id():
-    url = falconServiceUrl + "/v1.0/auth/info"
-    headers = {
-        "Accept": "text/plain"
-    }
-
-    r = requests.get(url, headers=headers)
-    if r.status_code != 200:
-        print "ERROR: get_session_id failed, status_code = %s, result:\n%s" % (r.status_code, r.text)
-        sys.exit(1)
-
-    c = r.headers['Set-Cookie']
-    m = re.search('falconSessionId=([^;]+);', c)
-    if m:
-        global sessionId
-        sessionId = m.group(1)
-        print "INFO: sessionId =", sessionId
-    else:
-        print "ERROR: get_session_id failed, cookie not set"
-        sys.exit(1)
-
-
-# return:
-def auth_by_misso():
-    url = falconServiceUrl + "/v1.0/auth/callback/misso"
-    headers = {
-        "Cookie": "falconSessionId=" + sessionId,
-        "Authorization": serviceAccount + ";" + serviceSeedMd5 + ";" + serviceSeedMd5
-    }
-
-    r = requests.get(url, headers=headers)
-    if r.status_code != 200:
-        print "ERROR: auth_by_misso failed, status_code = %s, result:\n%s" % (r.status_code, r.text)
-        sys.exit(1)
-
-
-# return:
-def check_auth_info():
-    url = falconServiceUrl + "/v1.0/auth/info"
-    headers = {
-        "Cookie": "falconSessionId=" + sessionId
-    }
-
-    r = requests.get(url, headers=headers)
-    if r.status_code != 200:
-        print "ERROR: check_auth_info failed, status_code = %s, result:\n%s" % (r.status_code, r.text)
-        sys.exit(1)
-    
-    j = json.loads(r.text)
-    if "user" not in j or j["user"] is None or "name" not in j["user"] or j["user"]["name"] != serviceAccount:
-        print "ERROR: check_auth_info failed, bad json result:\n%s" % r.text
-        sys.exit(1)
-
-
-def login():
-    get_session_id()
-    auth_by_misso()
-    check_auth_info()
-    print "INFO: login succeed"
-    
-
-# return:
-def logout():
-    url = falconServiceUrl + "/v1.0/auth/logout"
-    headers = {
-        "Cookie": "falconSessionId=" + sessionId
-    }
-
-    r = requests.get(url, headers=headers)
-    if r.status_code != 200:
-        print "ERROR: logout failed, status_code = %s, result:\n%s" % (r.status_code, r.text)
-        sys.exit(1)
-    
-    print "INFO: logout succeed"
-
-
-# return: screenId
-def create_screen(screenName):
-    url = falconServiceUrl + "/v1.0/dashboard/screen"
-    headers = {
-        "Cookie": "falconSessionId=" + sessionId
-    }
-    req = {
-        "pid" : pegasusScreenId,
-        "name" : screenName
-    }
-
-    r = requests.post(url, headers=headers, data=json.dumps(req))
-    if r.status_code != 200:
-        print "ERROR: create_screen failed, screenName = %s, status_code = %s, result:\n%s" \
-              % (screenName, r.status_code, r.text)
-        sys.exit(1)
-    
-    j = json.loads(r.text)
-    if "id" not in j:
-        print "ERROR: create_screen failed, screenName = %s, bad json result\n%s" \
-              % (screenName, r.text)
-        sys.exit(1)
-        
-    screenId = j["id"]
-    print "INFO: create_screen succeed, screenName = %s, screenId = %s" % (screenName, screenId)
-    return screenId
-
-
-# return: screenConfig
-def prepare_screen_config(clusterName, screenTemplateFile, tableListFile):
-    tableList = []
-    lines = [line.strip() for line in open(tableListFile)]
-    for line in lines:
-        if len(line) > 0:
-            if line in tableList:
-                print "ERROR: bad table list file: duplicate table '%s'" % line
-                sys.exit(1)
-            tableList.append(line)
-    if len(tableList) == 0:
-        print "ERROR: bad table list file: should be non-empty list"
-        sys.exit(1)
-
-    jsonData = open(screenTemplateFile).read()
-    screenJson = json.loads(jsonData)
-    graphsJson = screenJson["graphs"]
-    if not isinstance(graphsJson, list) or len(graphsJson) == 0:
-        print "ERROR: bad screen template json: [graphs] should be provided as non-empty list"
-        sys.exit(1)
-
-    # resolve ${for.each.table} in title and ${table.name} in counters
-    newGraphsJson = []
-    titleSet = []
-    for graphJson in graphsJson:
-        title = graphJson["title"]
-        if not isinstance(title, (str, unicode)) or len(title) == 0:
-            print type(title)
-            print "ERROR: bad screen template json: [graphs]: [title] should be provided as non-empty str"
-            sys.exit(1)
-        if title.find("${for.each.table}") != -1:
-            for table in tableList:
-                newTitle = title.replace("${for.each.table}", table)
-                if newTitle in titleSet:
-                    print "ERROR: bad screen template json: [graphs][%s]: duplicate resolved title '%s' " % (title, newTitle)
-                    sys.exit(1)
-                newGraphJson = copy.deepcopy(graphJson)
-                counters = newGraphJson["counters"]
-                if not isinstance(counters, list) or len(counters) == 0:
-                    print "ERROR: bad screen template json: [graphs][%s]: [counters] should be provided as non-empty list" % title
-                    sys.exit(1)
-                newCounters = []
-                for counter in counters:
-                    if len(counter) != 0:
-                        newCounter = counter.replace("${table.name}", table)
-                        newCounters.append(newCounter)
-                if len(newCounters) == 0:
-                    print "ERROR: bad screen template json: [graphs][%s]: [counters] should be provided as non-empty list" % title
-                    sys.exit(1)
-                newGraphJson["counters"] = newCounters
-                newGraphJson["title"] = newTitle
-                newGraphsJson.append(newGraphJson)
-                titleSet.append(newTitle)
-        else:
-            if title in titleSet:
-                print "ERROR: bad screen template json: [graphs][%s]: duplicate title" % title
-                sys.exit(1)
-            newGraphsJson.append(graphJson)
-            titleSet.append(title)
-
-    screenConfig = []
-    position = 1
-    for graphJson in newGraphsJson:
-        title = graphJson["title"]
-
-        endpoints = graphJson["endpoints"]
-        if not isinstance(endpoints, list) or len(endpoints) == 0:
-            print "ERROR: bad screen template json: [graphs][%s]: [endpoints] should be provided as non-empty list" % title
-            sys.exit(1)
-        newEndpoints = []
-        for endpoint in endpoints:
-            if len(endpoint) != 0:
-                newEndpoint = endpoint.replace("${cluster.name}", clusterName).replace("${meta.port}", metaPort)
-                newEndpoint = newEndpoint.replace("${replica.port}", replicaPort).replace("${collector.port}", collectorPort)
-                newEndpoints.append(newEndpoint)
-        if len(newEndpoints) == 0:
-            print "ERROR: bad screen template json: [graphs][%s]: [endpoints] should be provided as non-empty list" % title
-            sys.exit(1)
-
-        counters = graphJson["counters"]
-        if not isinstance(counters, list) or len(counters) == 0:
-            print "ERROR: bad screen template json: [graphs][%s]: [counters] should be provided as non-empty list" % title
-            sys.exit(1)
-        newCounters = []
-        for counter in counters:
-            if len(counter) != 0:
-                newCounter = counter.replace("${cluster.name}", clusterName).replace("${meta.port}", metaPort)
-                newCounter = newCounter.replace("${replica.port}", replicaPort).replace("${collector.port}", collectorPort)
-                if newCounter.find("${for.each.table}") != -1:
-                    for table in tableList:
-                        newCounters.append(newCounter.replace("${for.each.table}", table))
-                else:
-                    newCounters.append(newCounter)
-        if len(newCounters) == 0:
-            print "ERROR: bad screen template json: [graphs][%s]: [counters] should be provided as non-empty list" % title
-            sys.exit(1)
-
-        graphType = graphJson["graph_type"]
-        if not isinstance(graphType, (str, unicode)):
-            print "ERROR: bad screen template json: [graphs][%s]: [graph_type] should be provided as str" % title
-            sys.exit(1)
-        if graphType != "h" and graphType != "k" and graphType != "a":
-            print "ERROR: bad screen template json: [graphs][%s]: [graph_type] should be 'h' or 'k' or 'a'" % title
-            sys.exit(1)
-
-        method = graphJson["method"]
-        if not isinstance(method, (str, unicode)):
-            print "ERROR: bad screen template json: [graphs][%s]: [method] should be provided as str" % title
-            sys.exit(1)
-        if method != "" and method != "sum":
-            print "ERROR: bad screen template json: [graphs][%s]: [method] should be '' or 'sum'" % title
-            sys.exit(1)
-
-        timespan = graphJson["timespan"]
-        if not isinstance(timespan, int) or timespan <= 0:
-            print "ERROR: bad screen template json: [graphs][%s]: [timespan] should be provided as positive int" % title
-            sys.exit(1)
-        
-        graphConfig = {}
-        graphConfig["counters"] = newCounters
-        graphConfig["endpoints"] = newEndpoints
-        graphConfig["falcon_tags"] = ""
-        graphConfig["graph_type"] = graphType
-        graphConfig["method"] = method
-        graphConfig["position"] = position
-        graphConfig["timespan"] = timespan
-        graphConfig["title"] = title
-        screenConfig.append(graphConfig)
-
-        position += 1
-
-    return screenConfig
-
-
-# return: graphId
-def create_graph(graphConfig):
-    url = falconServiceUrl + "/v1.0/dashboard/graph"
-    headers = {
-        "Cookie": "falconSessionId=" + sessionId
-    }
-
-    r = requests.post(url, headers=headers, data=json.dumps(graphConfig))
-    if r.status_code != 200:
-        print "ERROR: create_graph failed, graphTitle = \"%s\", status_code = %s, result:\n%s" \
-              % (graphConfig["title"], r.status_code, r.text)
-        sys.exit(1)
-    
-    j = json.loads(r.text)
-    if "id" not in j:
-        print "ERROR: create_graph failed, graphTitle = \"%s\", bad json result\n%s" \
-              % (graphConfig["title"], r.text)
-        sys.exit(1)
-        
-    graphId = j["id"]
-    print "INFO: create_graph succeed, graphTitle = \"%s\", graphId = %s" \
-          % (graphConfig["title"], graphId)
-
-    # udpate graph position immediately
-    graphConfig["id"] = graphId
-    update_graph(graphConfig, "position")
-
-    return graphId
-
-
-# return: screen[]
-def get_screens():
-    url = falconServiceUrl + "/v1.0/dashboard/screen/pid/" + str(pegasusScreenId)
-    headers = {
-        "Cookie": "falconSessionId=" + sessionId
-    }
-
-    r = requests.get(url, headers=headers)
-    if r.status_code != 200:
-        print "ERROR: get_screens failed, status_code = %s, result:\n%s" % (r.status_code, r.text)
-        sys.exit(1)
-    
-    j = json.loads(r.text)
-    
-    print "INFO: get_screens succeed, screenCount = %s" % len(j)
-    return j
-
-
-# return: graph[]
-def get_screen_graphs(screenName, screenId):
-    url = falconServiceUrl + "/v1.0/dashboard/graph/screen/" + str(screenId)
-    headers = {
-        "Cookie": "falconSessionId=" + sessionId
-    }
-
-    r = requests.get(url, headers=headers)
-    if r.status_code != 200:
-        print "ERROR: get_screen_graphs failed, screenName = %s, screenId = %s, status_code = %s, result:\n%s" \
-              % (screenName, screenId, r.status_code, r.text)
-        sys.exit(1)
-    
-    j = json.loads(r.text)
-    
-    print "INFO: get_screen_graphs succeed, screenName = %s, screenId = %s, graphCount = %s" \
-          % (screenName, screenId, len(j))
-    return j
-
-
-# return:
-def delete_graph(graphTitle, graphId):
-    url = falconServiceUrl + "/v1.0/dashboard/graph/" + str(graphId)
-    headers = {
-        "Cookie": "falconSessionId=" + sessionId
-    }
-
-    r = requests.delete(url, headers=headers)
-    if r.status_code != 200 or r.text.find("delete success!") == -1:
-        print "ERROR: delete_graph failed, graphTitle = \"%s\", graphId = %s, status_code = %s, result:\n%s" \
-              % (graphTitle, graphId, r.status_code, r.text)
-        sys.exit(1)
-    
-    print "INFO: delete_graph succeed, graphTitle = \"%s\", graphId = %s" % (graphTitle, graphId)
-
-
-# return:
-def update_graph(graphConfig, updateReason):
-    url = falconServiceUrl + "/v1.0/dashboard/graph/" + str(graphConfig["id"])
-    headers = {
-        "Cookie": "falconSessionId=" + sessionId
-    }
-
-    r = requests.put(url, headers=headers, data=json.dumps(graphConfig))
-    if r.status_code != 200:
-        print "ERROR: update_graph failed, graphTitle = \"%s\", graphId = %s, status_code = %s, result:\n%s" \
-              % (graphConfig["title"], graphConfig["id"], r.status_code, r.text)
-        sys.exit(1)
-    
-    j = json.loads(r.text)
-    if "id" not in j:
-        print "ERROR: update_graph failed, graphTitle = \"%s\", graphId = %s, bad json result\n%s" \
-              % (graphConfig["title"], graphConfig["id"], r.text)
-        sys.exit(1)
-        
-    print "INFO: update_graph succeed, graphTitle = \"%s\", graphId = %s, updateReason = \"%s changed\"" \
-          % (graphConfig["title"], graphConfig["id"], updateReason)
-
-
-# return: bool, reason
-def is_equal(graph1, graph2):
-    if graph1["title"] != graph2["title"]:
-        return False, "title"
-    if graph1["graph_type"] != graph2["graph_type"]:
-        return False, "graph_type"
-    if graph1["method"] != graph2["method"]:
-        return False, "method"
-    if graph1["position"] != graph2["position"]:
-        return False, "position"
-    if graph1["timespan"] != graph2["timespan"]:
-        return False, "timespan"
-    endpoints1 = graph1["endpoints"]
-    endpoints2 = graph2["endpoints"]
-    if len(endpoints1) != len(endpoints2):
-        return False, "endpoints"
-    for endpoint in endpoints1:
-        if not endpoint in endpoints2:
-            return False, "endpoints"
-    counters1 = graph1["counters"]
-    counters2 = graph2["counters"]
-    if len(counters1) != len(counters2):
-        return False, "counters"
-    for counter in counters1:
-        if not counter in counters2:
-            return False, "counters"
-    return True, ""
-
-
-if __name__ == '__main__':
-    if serviceAccount == "" or serviceSeedMd5 == "":
-        print "ERROR: please set 'serviceAccount' and 'serviceSeedMd5' in %s" % sys.argv[0]
-        sys.exit(1)
-
-    if len(sys.argv) != 5:
-        print "USAGE: python %s <cluster_name> <screen_template_file> <table_list_file> <create|update>" % sys.argv[0]
-        sys.exit(1)
-
-    clusterName = sys.argv[1]
-    screenTemplateFile = sys.argv[2]
-    tableListFile = sys.argv[3]
-    operateType = sys.argv[4]
-
-    if operateType != "create" and operateType != "update":
-        print "ERROR: argv[4] should be 'create' or 'update', but '%s'" % operateType
-        sys.exit(1)
-
-    if not get_service_port_by_minos2(clusterName) and not get_service_port_by_minos(clusterName):
-        print "ERROR: get service ports from minos config failed"
-        sys.exit(1)
-
-    login()
-
-    if operateType == "create":
-        screenConfig = prepare_screen_config(clusterName, screenTemplateFile, tableListFile)
-        screenId = create_screen(screenName=clusterName)
-        for graphConfig in screenConfig:
-            graphConfig["screen_id"] = screenId
-            create_graph(graphConfig)
-        print "INFO: %s graphs created" % len(screenConfig)
-    else: # update
-        screens = get_screens()
-        screenId = 0
-        oldScreenConfig = None
-        for screen in screens:
-            if screen["name"] == clusterName:
-                screenId = screen["id"]
-                oldScreenConfig = get_screen_graphs(clusterName, screenId)
-        if oldScreenConfig is None:
-            print "ERROR: screen '%s' not exit, please create it first" % clusterName
-            sys.exit(1)
-        #print "INFO: old screen config:\n%s" % json.dumps(oldScreenConfig, indent=2)
-
-        newScreenConfig = prepare_screen_config(clusterName, screenTemplateFile, tableListFile)
-        #print "INFO: new screen config:\n%s" % json.dumps(newScreenConfig, indent=2)
-
-        oldScreenMap = {}
-        newScreenMap = {}
-        for graph in oldScreenConfig:
-            oldScreenMap[graph["title"]] = graph
-        for graph in newScreenConfig:
-            newScreenMap[graph["title"]] = graph
-        deleteConfigList = []
-        createConfigList = []
-        updateConfigList = []
-        for graph in oldScreenConfig:
-            if not graph["title"] in newScreenMap:
-                deleteConfigList.append(graph)
-        for graph in newScreenConfig:
-            if not graph["title"] in oldScreenMap:
-                graph["screen_id"] = screenId
-                createConfigList.append(graph)
-            else:
-                oldGraph = oldScreenMap[graph["title"]]
-                equal, reason = is_equal(graph, oldGraph)
-                if not equal:
-                    graph["id"] = oldGraph["graph_id"]
-                    graph["screen_id"] = screenId
-                    updateConfigList.append((graph, reason))
-
-        for graph in deleteConfigList:
-            delete_graph(graphTitle=graph["title"], graphId=graph["graph_id"])
-        for graph in createConfigList:
-            create_graph(graph)
-        for graph,reason in updateConfigList:
-            update_graph(graph, reason)
-
-        print "INFO: %d graphs deleted, %d graphs created, %d graphs updated" \
-              % (len(deleteConfigList), len(createConfigList), len(updateConfigList))
-
-    logout()
-
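[Editor's note] The update branch of the deleted falcon_screen.py reconciles the old and new screen configs keyed by graph title: graphs present only in the old config are deleted, graphs present only in the new config are created, and graphs whose fields differ (per is_equal) are updated. The core of that three-way diff, which the script inlined in __main__, can be sketched in a few lines of Python 3 (diff_graphs is a hypothetical name):

```python
def diff_graphs(old_graphs, new_graphs, is_equal):
    """Partition graphs into (to_delete, to_create, to_update) by title.

    is_equal is any callable returning (bool, reason), matching the
    comparator in the deleted script.
    """
    old_by_title = {g["title"]: g for g in old_graphs}
    new_by_title = {g["title"]: g for g in new_graphs}
    # Old graphs whose title vanished from the new config get deleted.
    to_delete = [g for g in old_graphs if g["title"] not in new_by_title]
    # New titles with no old counterpart get created.
    to_create = [g for g in new_graphs if g["title"] not in old_by_title]
    # Shared titles are updated only when some field actually changed.
    to_update = []
    for g in new_graphs:
        old = old_by_title.get(g["title"])
        if old is not None:
            equal, reason = is_equal(g, old)
            if not equal:
                to_update.append((g, reason))
    return to_delete, to_create, to_update
```

The deleted script additionally copied the old graph's id and the screen id onto each updated graph before calling the falcon API; that bookkeeping is omitted here.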
diff --git a/scripts/pegasus_check_clusters.py b/scripts/pegasus_check_clusters.py
deleted file mode 100755
index 1c8ee60..0000000
--- a/scripts/pegasus_check_clusters.py
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/usr/bin/python
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-"""
-Basic usage:
-
-> vim ~/.bashrc
-export PYTHONPATH=$PYTHONPATH:$HOME/.local/lib/python2.7/site-packages/ 
-export PEGASUS_CONFIG_PATH=$HOME/work/conf_pegasus
-export PEGASUS_SHELL_PATH=$HOME/work/pegasus
-> pip install --user click
-> ./pegasus_check_clusters.py --env c3srv
-"""
-
-import os
-import click
-
-from py_utils import *
-
-
-@click.command()
-@click.option(
-    "--env", default="", help="Env of pegasus cluster, eg. c3srv or c4tst")
-@click.option('-v', '--verbose', count=True)
-def main(env, verbose):
-    pegasus_config_path = os.getenv("PEGASUS_CONFIG_PATH")
-    if pegasus_config_path is None:
-        echo(
-            "Please configure environment variable PEGASUS_CONFIG_PATH in your bashrc or zshrc",
-            "red")
-        exit(1)
-    if env != "":
-        echo("env = " + env)
-    set_global_verbose(verbose)
-    clusters = list_pegasus_clusters(pegasus_config_path, env)
-    for cluster in clusters:
-        echo("=== " + cluster.name())
-        try:
-            cluster.print_imbalance_nodes()
-            cluster.print_unhealthy_partitions()
-        except RuntimeError as e:
-            echo(str(e), "red")
-            return
-        echo("===")
-
-
-if __name__ == "__main__":
-    main()
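[Editor's note] A note on the imbalance check this script relies on: print_imbalance_nodes (removed from scripts/py_utils/lib.py later in this patch) seeds min_ with 0, so float(min_)/float(max_) is always 0, the 0.8 threshold always trips, and a cluster with no primaries at all divides by zero. A corrected Python 3 sketch of the same ratio test (primaries_imbalanced is a hypothetical name):

```python
def primaries_imbalanced(primaries_per_node, threshold=0.8):
    """Return True when the least-loaded node holds fewer than
    `threshold` of the busiest node's primaries. Seeding min/max from
    the data (rather than 0, as the deleted helper did) keeps the
    ratio meaningful and avoids dividing by zero."""
    counts = list(primaries_per_node.values())
    if not counts or max(counts) == 0:
        return False
    return min(counts) / max(counts) < threshold
```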
diff --git a/scripts/pegasus_check_ports.py b/scripts/pegasus_check_ports.py
deleted file mode 100755
index e6a7ad5..0000000
--- a/scripts/pegasus_check_ports.py
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/usr/bin/python
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-"""
-Basic usage:
-
-> vim ~/.bashrc
-export PYTHONPATH=$PYTHONPATH:$HOME/.local/lib/python2.7/site-packages/ 
-export PEGASUS_CONFIG_PATH=$HOME/work/conf_pegasus
-export PEGASUS_SHELL_PATH=$HOME/work/pegasus
-> pip install --user click
-> ./pegasus_check_posts.py --env c3srv
-"""
-
-import os
-import click
-
-from py_utils import *
-
-
-@click.command()
-@click.option("--env", help="Env of pegasus cluster, eg. c3srv or c4tst")
-def main(env):
-    pegasus_config_path = os.getenv("PEGASUS_CONFIG_PATH")
-    if pegasus_config_path is None:
-        echo(
-            "Please configure environment variable PEGASUS_CONFIG_PATH in your bashrc or zshrc",
-            "red")
-        exit(1)
-    clusters = list_pegasus_clusters(pegasus_config_path, env)
-    host_to_ports = {}
-    for cluster in clusters:
-        try:
-            p = cluster.get_meta_port()
-            h = cluster.get_meta_host()
-            if not h in host_to_ports:
-                host_to_ports[h] = set()
-            if p in host_to_ports[h]:
-                echo(
-                    "port number conflicted: {0} {1} [{2}]".format(
-                        p, cluster.name(), h), "red")
-                continue
-            host_to_ports[h].add(p)
-            echo("cluster {0}: {1} [{2}]".format(cluster.name(), p, h))
-        except RuntimeError as e:
-            echo(str(e), "red")
-            return
-
-    echo("")
-    for h in host_to_ports:
-        echo("recommended port number for [{0}] is: {1}".format(
-            h, str(max(host_to_ports[h]) + 1000)))
-        echo("host [{0}] has in total {1} clusters on it".format(
-            h, len(host_to_ports[h])))
-        echo("")
-
-
-if __name__ == "__main__":
-    main()
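[Editor's note] The port check above boils down to grouping meta ports per host, flagging duplicates, and recommending max(ports) + 1000 as the next free port on each host. Stripped of the cluster-config plumbing, the logic is roughly this (check_ports is a hypothetical name):

```python
def check_ports(cluster_ports):
    """cluster_ports: iterable of (cluster_name, host, port) tuples.

    Returns (conflicts, recommendations): conflicts is a list of
    (port, cluster_name, host) for ports already taken on that host;
    recommendations maps host -> suggested next port, following the
    deleted script's max-seen-plus-1000 rule."""
    host_to_ports = {}
    conflicts = []
    for name, host, port in cluster_ports:
        ports = host_to_ports.setdefault(host, set())
        if port in ports:
            conflicts.append((port, name, host))
            continue
        ports.add(port)
    recommendations = {h: max(ps) + 1000 for h, ps in host_to_ports.items()}
    return conflicts, recommendations
```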
diff --git a/scripts/pegasus_falcon_screen.sh b/scripts/pegasus_falcon_screen.sh
deleted file mode 100755
index 6313ac0..0000000
--- a/scripts/pegasus_falcon_screen.sh
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-PID=$$
-
-if [ $# -ne 2 ]
-then
-  echo "This tool is for create or update falcon screen for specified cluster."
-  echo "USAGE: $0 <create|update> <cluster-name>"
-  exit 1
-fi
-
-pwd="$( cd "$( dirname "$0"  )" && pwd )"
-shell_dir="$( cd $pwd/.. && pwd )"
-cd $shell_dir
-
-operate=$1
-cluster=$2
-
-if [ "$operate" != "create" -a "$operate" != "update" ]; then
-    echo "ERROR: invalid operation type: $operate"
-    exit 1
-fi
-
-echo "UID: $UID"
-echo "PID: $PID"
-echo "cluster: $cluster"
-echo "operate: $operate"
-echo "Start time: `date`"
-all_start_time=$((`date +%s`))
-echo
-
-cd $shell_dir
-echo ls | ./run.sh shell -n $cluster &>/tmp/$UID.$PID.pegasus.ls
-grep AVAILABLE /tmp/$UID.$PID.pegasus.ls | awk '{print $3}' >/tmp/$UID.$PID.pegasus.table.list
-table_count=`cat /tmp/$UID.$PID.pegasus.table.list | wc -l`
-if [ $table_count -eq 0 ]; then
-    echo "ERROR: table list is empty, please check the cluster $cluster"
-    exit 1
-fi
-cd $pwd
-
-python falcon_screen.py $cluster falcon_screen.json /tmp/$UID.$PID.pegasus.table.list $operate
-if [ $? -ne 0 ]; then
-    echo "ERROR: falcon screen $operate failed"
-    exit 1
-fi
-
-echo
-echo "Finish time: `date`"
-all_finish_time=$((`date +%s`))
-echo "Falcon screen $operate done, elasped time is $((all_finish_time - all_start_time)) seconds."
-
-rm -f /tmp/$UID.$PID.pegasus.* &>/dev/null
diff --git a/scripts/py_utils/__init__.py b/scripts/py_utils/__init__.py
deleted file mode 100644
index 5643f3d..0000000
--- a/scripts/py_utils/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/python
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-from .lib import set_global_verbose, echo, list_pegasus_clusters, PegasusCluster
-
-__all__ = [
-    'set_global_verbose', 'echo', 'list_pegasus_clusters', 'PegasusCluster'
-]
diff --git a/scripts/py_utils/lib.py b/scripts/py_utils/lib.py
deleted file mode 100644
index 7c26132..0000000
--- a/scripts/py_utils/lib.py
+++ /dev/null
@@ -1,167 +0,0 @@
-#!/usr/bin/python
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-import click
-import commands
-import os
-import json
-
-_global_verbose = False
-
-
-def set_global_verbose(val):
-    _global_verbose = val
-
-
-def echo(message, color=None):
-    click.echo(click.style(message, fg=color))
-
-
-class PegasusCluster(object):
-    def __init__(self, cfg_file_name=None, cluster_name=None):
-        if cluster_name is None:
-            self._cluster_name = os.path.basename(cfg_file_name).replace(
-                "pegasus-", "").replace(".cfg", "")
-        else:
-            self._cluster_name = cluster_name
-        self._shell_path = os.getenv("PEGASUS_SHELL_PATH")
-        self._cfg_file_name = cfg_file_name
-        if self._shell_path is None:
-            echo(
-                "Please configure environment variable PEGASUS_SHELL_PATH in your bashrc or zshrc",
-                "red")
-            exit(1)
-
-    def print_unhealthy_partitions(self):
-        list_detail = self._run_shell("ls -d -j").strip()
-
-        list_detail_json = json.loads(list_detail)
-        read_unhealthy_app_count = int(
-            list_detail_json["summary"]["read_unhealthy_app_count"])
-        write_unhealthy_app_count = int(
-            list_detail_json["summary"]["write_unhealthy_app_count"])
-        if write_unhealthy_app_count > 0:
-            echo("cluster is write unhealthy, write_unhealthy_app_count = " +
-                 str(write_unhealthy_app_count))
-            return
-        if read_unhealthy_app_count > 0:
-            echo("cluster is read unhealthy, read_unhealthy_app_count = " +
-                 str(read_unhealthy_app_count))
-            return
-
-    def print_imbalance_nodes(self):
-        nodes_detail = self._run_shell("nodes -d -j").strip()
-
-        primaries_per_node = {}
-        min_ = 0
-        max_ = 0
-        for ip_port, node_info in json.loads(nodes_detail)["details"].items():
-            primary_count = int(node_info["primary_count"])
-            min_ = min(min_, primary_count)
-            max_ = max(max_, primary_count)
-            primaries_per_node[ip_port] = primary_count
-        if float(min_) / float(max_) < 0.8:
-            print json.dumps(primaries_per_node, indent=4)
-
-    def get_meta_port(self):
-        with open(self._cfg_file_name) as cfg:
-            for line in cfg.readlines():
-                if line.strip().startswith("base_port"):
-                    return int(line.split("=")[1])
-
-    def get_meta_host(self):
-        with open(self._cfg_file_name) as cfg:
-            for line in cfg.readlines():
-                if line.strip().startswith("host.0"):
-                    return line.split("=")[1].strip()
-
-    def create_table(self, table, parts):
-        create_result = self._run_shell(
-            "create {} -p {}".format(table, parts)).strip()
-        if "ERR_INVALID_PARAMETERS" in create_result:
-            raise ValueError("failed to create table \"{}\"".format(table))
-
-    def get_app_envs(self, table):
-        envs_result = self._run_shell(
-            "use {} \n get_app_envs".format(table)).strip()[len("OK\n"):]
-        if "ERR_OBJECT_NOT_FOUND" in envs_result:
-            raise ValueError("table {} does not exist".format(table))
-        if envs_result == "":
-            return None
-        envs_result = self._run_shell(
-            "use {} \n get_app_envs -j".format(table)).strip()[len("OK\n"):]
-        return json.loads(envs_result)['app_envs']
-
-    def set_app_envs(self, table, env_name, env_value):
-        envs_result = self._run_shell(
-            "use {} \n set_app_envs {} {}".format(
-                table, env_name, env_value)).strip()[
-            len("OK\n"):]
-        if "ERR_OBJECT_NOT_FOUND" in envs_result:
-            raise ValueError("table {} does not exist".format(table))
-
-    def has_table(self, table):
-        app_result = self._run_shell("app {} ".format(table)).strip()
-        return "ERR_OBJECT_NOT_FOUND" not in app_result
-
-    def _run_shell(self, args):
-        """
-        :param args: arguments passed to ./run.sh shell (type `string`)
-        :return: shell output
-        """
-        global _global_verbose
-
-        cmd = "cd {1}; echo -e \"{0}\" | ./run.sh shell -n {2}".format(
-            args, self._shell_path, self._cluster_name)
-        if _global_verbose:
-            echo("executing command: \"{0}\"".format(cmd))
-
-        status, output = commands.getstatusoutput(cmd)
-        if status != 0:
-            raise RuntimeError("failed to execute \"{0}\": {1}".format(
-                cmd, output))
-
-        result = ""
-        result_begin = False
-        for line in output.splitlines():
-            if line.startswith("The cluster meta list is:"):
-                result_begin = True
-                continue
-            if line.startswith("dsn exit with code"):
-                break
-            if result_begin:
-                result += line + "\n"
-        return result
-
-    def name(self):
-        return self._cluster_name
-
-
-def list_pegasus_clusters(config_path, env):
-    clusters = []
-    for fname in os.listdir(config_path):
-        if not os.path.isfile(config_path + "/" + fname):
-            continue
-        if not fname.startswith("pegasus-" + env):
-            continue
-        if not fname.endswith(".cfg"):
-            continue
-        if fname.endswith("proxy.cfg"):
-            continue
-        clusters.append(PegasusCluster(config_path + "/" + fname))
-    return clusters
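[Editor's note] lib.py above is Python 2 only: it depends on the `commands` module (removed in Python 3 in favor of `subprocess`) and print statements, and set_global_verbose silently rebinds a local instead of the module-level flag because it lacks a `global` declaration. A Python 3 sketch of the shell plumbing, with the output trimming factored out so it can be exercised without a cluster (run_shell and extract_shell_result are hypothetical names):

```python
import subprocess

def extract_shell_result(output):
    """Keep only the lines between the 'The cluster meta list is:'
    marker and the 'dsn exit with code' trailer, mirroring the loop
    in the deleted _run_shell."""
    result = []
    begun = False
    for line in output.splitlines():
        if line.startswith("The cluster meta list is:"):
            begun = True
            continue
        if line.startswith("dsn exit with code"):
            break
        if begun:
            result.append(line)
    return "\n".join(result) + ("\n" if result else "")

def run_shell(shell_path, cluster_name, args):
    """Pipe args into ./run.sh shell, as the deleted helper did,
    using subprocess.getstatusoutput in place of Python 2's
    commands.getstatusoutput (same (status, output) contract)."""
    cmd = 'cd {1}; echo -e "{0}" | ./run.sh shell -n {2}'.format(
        args, shell_path, cluster_name)
    status, output = subprocess.getstatusoutput(cmd)
    if status != 0:
        raise RuntimeError('failed to execute "{0}": {1}'.format(cmd, output))
    return extract_shell_result(output)
```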
diff --git a/scripts/scp-no-interactive b/scripts/scp-no-interactive
deleted file mode 100755
index 09e9713..0000000
--- a/scripts/scp-no-interactive
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/usr/bin/expect
-
-# USAGE: scp-no-interactive <host> <username> <password> <src_file> <dest_file>
-
-set timeout 10
-set host [lindex $argv 0]
-set username [lindex $argv 1]
-set password [lindex $argv 2]
-set src_file [lindex $argv 3]
-set dest_file [lindex $argv 4]
-spawn scp $src_file $username@$host:$dest_file
- expect {
- "(yes/no)?"
-  {
-  send "yes\n"
-  expect "*assword:" { send "$password\n"}
- }
- "*assword:"
-{
- send "$password\n"
-}
-}
-expect "100%"
-expect eof
diff --git a/scripts/ssh-no-interactive b/scripts/ssh-no-interactive
deleted file mode 100755
index 8ab5bf8..0000000
--- a/scripts/ssh-no-interactive
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/usr/bin/expect
-
-# USAGE: ssh-no-interactive <host> <username> <password> <command>
-
-set timeout 10
-set host [lindex $argv 0]
-set username [lindex $argv 1]
-set password [lindex $argv 2]
-set command [lindex $argv 3]
-spawn ssh $username@$host "$command"
- expect {
- "(yes/no)?"
-  {
-  send "yes\n"
-  expect "*assword:" { send "$password\n"}
- }
- "*assword:"
-{
- send "$password\n"
-}
-}
-expect eof
diff --git a/scripts/update_qt_config.sh b/scripts/update_qt_config.sh
deleted file mode 100755
index bd61c47..0000000
--- a/scripts/update_qt_config.sh
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-# 
-#   http://www.apache.org/licenses/LICENSE-2.0
-# 
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-# This is used for updating the meta-data of Qt Creator IDE.
-
-PREFIX=pegasus
-if [ $# -eq 1 ]
-then
-    PREFIX=$1
-fi
-
-pwd="$( cd "$( dirname "$0"  )" && pwd )"
-shell_dir="$( cd $pwd/.. && pwd )"
-cd $shell_dir
-
-# config
-CONFIG_OUT="${PREFIX}.config"
-echo "Generate $CONFIG_OUT"
-rm $CONFIG_OUT &>/dev/null
-echo "#define __cplusplus 201103L" >>$CONFIG_OUT
-echo "#define _DEBUG" >>$CONFIG_OUT
-echo "#define DSN_USE_THRIFT_SERIALIZATION" >>$CONFIG_OUT
-echo "#define DSN_ENABLE_THRIFT_RPC" >>$CONFIG_OUT
-echo "#define DSN_BUILD_TYPE" >>$CONFIG_OUT
-echo "#define DSN_BUILD_HOSTNAME" >>$CONFIG_OUT
-echo "#define ROCKSDB_PLATFORM_POSIX" >>$CONFIG_OUT
-echo "#define OS_LINUX" >>$CONFIG_OUT
-echo "#define ROCKSDB_FALLOCATE_PRESENT" >>$CONFIG_OUT
-echo "#define GFLAGS google" >>$CONFIG_OUT
-echo "#define ZLIB" >>$CONFIG_OUT
-echo "#define BZIP2" >>$CONFIG_OUT
-echo "#define ROCKSDB_MALLOC_USABLE_SIZE" >>$CONFIG_OUT
-#echo "#define __FreeBSD__" >>$CONFIG_OUT
-#echo "#define _WIN32" >>$CONFIG_OUT
-
-# includes
-INCLUDES_OUT="${PREFIX}.includes"
-echo "Generate $INCLUDES_OUT"
-rm $INCLUDES_OUT &>/dev/null
-echo "/usr/include" >>$INCLUDES_OUT
-echo "/usr/include/c++/4.8" >>$INCLUDES_OUT
-echo "/usr/include/x86_64-linux-gnu" >>$INCLUDES_OUT
-echo "/usr/include/x86_64-linux-gnu/c++/4.8" >>$INCLUDES_OUT
-echo "rdsn/include" >>$INCLUDES_OUT
-echo "rdsn/thirdparty/output/include" >>$INCLUDES_OUT
-echo "rdsn/include/dsn/dist/failure_detector" >>$INCLUDES_OUT
-echo "rdsn/src/dist/replication/client_lib" >>$INCLUDES_OUT
-echo "rdsn/src/dist/replication/lib" >>$INCLUDES_OUT
-echo "rdsn/src/dist/replication/meta_server" >>$INCLUDES_OUT
-echo "rdsn/src/dist/replication/zookeeper" >>$INCLUDES_OUT
-echo "rdsn/thirdparty/output/include" >>$INCLUDES_OUT
-echo "rdsn/src/dist/block_service/fds" >>$INCLUDES_OUT
-echo "rdsn/src/dist/block_service/local" >>$INCLUDES_OUT
-echo "rdsn/src" >> $INCLUDES_OUT
-echo "rocksdb" >>$INCLUDES_OUT
-echo "rocksdb/include" >>$INCLUDES_OUT
-echo "src" >>$INCLUDES_OUT
-echo "src/include" >>$INCLUDES_OUT
-echo "src/redis_protocol/proxy_lib" >>$INCLUDES_OUT
-
-# files
-FILES_OUT="${PREFIX}.files"
-echo "Generate $FILES_OUT"
-rm $FILES_OUT >&/dev/null
-echo "build.sh" >>$FILES_OUT
-echo "rdsn/CMakeLists.txt" >>$FILES_OUT
-echo "rdsn/bin/dsn.cmake" >>$FILES_OUT
-FILES_DIR="
-src rocksdb rdsn scripts
-"
-for i in $FILES_DIR
-do
-    find $i -name '*.h' -o -name '*.cpp' -o -name '*.c' -o -name '*.cc' \
-        -o -name '*.thrift' -o -name '*.ini' -o -name '*.act' \
-        -o -name 'CMakeLists.txt' -o -name '*.sh' \
-        | grep -v '\<builder\>\|rdsn\/thirdparty\|\.zk_install' >>$FILES_OUT
-done


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pegasus.apache.org
For additional commands, e-mail: commits-help@pegasus.apache.org


[incubator-pegasus] 03/03: chore: sort out in-source 3rdparty licenses under rdsn (#637)

Posted by wu...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

wutao pushed a commit to branch v2.1
in repository https://gitbox.apache.org/repos/asf/incubator-pegasus.git

commit 7686ecef697795a1333028b14072472d920d8d19
Author: Wu Tao <wu...@163.com>
AuthorDate: Fri Nov 6 22:44:27 2020 +0800

    chore: sort out in-source 3rdparty licenses under rdsn (#637)
---
 LICENSE | 80 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/LICENSE b/LICENSE
index d357bd4..9362a43 100644
--- a/LICENSE
+++ b/LICENSE
@@ -327,3 +327,83 @@ src/shell/argh.h - BSD 3-Clause
  CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
  POSSIBILITY OF SUCH DAMAGE.
+
+--------------------------------------------------------------------------------
+
+rdsn/include/dsn/utility/smart_pointers.h - Apache 2.0 License
+rdsn/include/dsn/utility/string_view.h
+rdsn/include/dsn/utility/absl/base/internal/invoke.h
+rdsn/include/dsn/utility/absl/utility/utility.h
+rdsn/src/utils/memutil.h
+rdsn/src/utils/string_view.cpp
+rdsn/src/utils/test/memutil_test.cpp
+rdsn/src/utils/test/smart_pointers_test.cpp
+rdsn/src/utils/test/string_view_test.cpp
+
+Copyright 2017 The Abseil Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+--------------------------------------------------------------------------------
+
+rdsn/src/http/pprof_http_service.cpp - Apache 2.0 License
+rdsn/include/dsn/utility/timer.h
+rdsn/include/dsn/utility/string_splitter.h
+
+Copyright (c) 2011 Baidu, Inc.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+--------------------------------------------------------------------------------
+
+rdsn/include/dsn/utility/safe_strerror_posix.h - 3-clause BSD
+rdsn/src/runtime/build_config.h
+rdsn/src/utils/test/autoref_ptr_test.cpp
+
+Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+    * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+    * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

