Posted to commits@doris.apache.org by yi...@apache.org on 2023/06/09 06:16:51 UTC

[doris] branch master updated: [doc](catalog) remove external table doc (#20632)

This is an automated email from the ASF dual-hosted git repository.

yiguolei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
     new c8bda9508e [doc](catalog) remove external table doc (#20632)
c8bda9508e is described below

commit c8bda9508ec29e13a2a3bef1e0a970596c6a2918
Author: Mingyu Chen <mo...@163.com>
AuthorDate: Fri Jun 9 14:16:44 2023 +0800

    [doc](catalog) remove external table doc (#20632)
---
 docs/en/docs/lakehouse/external-table/es.md      | 595 -----------------------
 docs/en/docs/lakehouse/external-table/hive.md    |  34 --
 docs/en/docs/lakehouse/external-table/jdbc.md    | 530 --------------------
 docs/en/docs/lakehouse/external-table/odbc.md    |  34 --
 docs/sidebars.json                               |  10 -
 docs/zh-CN/docs/lakehouse/external-table/es.md   | 595 -----------------------
 docs/zh-CN/docs/lakehouse/external-table/hive.md | 209 --------
 docs/zh-CN/docs/lakehouse/external-table/jdbc.md | 520 --------------------
 docs/zh-CN/docs/lakehouse/external-table/odbc.md | 406 ----------------
 9 files changed, 2933 deletions(-)

diff --git a/docs/en/docs/lakehouse/external-table/es.md b/docs/en/docs/lakehouse/external-table/es.md
deleted file mode 100644
index f6f6594b76..0000000000
--- a/docs/en/docs/lakehouse/external-table/es.md
+++ /dev/null
@@ -1,595 +0,0 @@
----
-{
-    "title": "Elasticsearch External Table",
-    "language": "en"
-}
----
-
-<!-- 
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Elasticsearch External Table
-
-<version deprecated="1.2.2">
-
-Please use [ES Catalog](../multi-catalog/es) to access Elasticsearch (ES) data sources; this function will no longer be maintained after version 1.2.2.
-
-</version>
-
-Doris-on-ES provides an advanced OLAP solution, where you can benefit from both the distributed query planning capability of Doris and the full-text search capability of ES: 
-
-1. Multi-index distributed Join queries in ES;
-2. Join queries across Doris and ES as well as full-text search and filter.
-
-This topic is about how ES External Tables are implemented and used in Doris.
-
-## Basic Concepts
-
-### Doris-Related Concepts
-* FE: Frontend of Doris, responsible for metadata management and request processing
-* BE: Backend of Doris, responsible for query execution and data storage
-
-### ES-Related Concepts
-* DataNode: nodes for data storage and computing in ES
-* MasterNode: nodes for managing metadata, nodes, and data distribution in ES
-* scroll: built-in dataset cursor in ES, used to scan and filter data in a streaming manner
-* _source: the original JSON document passed in at ingestion
-* doc_values: the columnar storage definition of fields in ES/Lucene
-* keyword: string field type; ES/Lucene does not tokenize its contents
-* text: string field type; ES/Lucene tokenizes its contents using the specified tokenizer (the standard tokenizer, if none is specified)
-
-
-## Usage
-
-### Create ES Index
-
-```
-PUT test
-{
-   "settings": {
-      "index": {
-         "number_of_shards": "1",
-         "number_of_replicas": "0"
-      }
-   },
-   "mappings": {
-      "doc": { // In ES 7.x or newer, you don't have to specify the type when creating an index. It will come with a unique `_doc` type by default.
-         "properties": {
-            "k1": {
-               "type": "long"
-            },
-            "k2": {
-               "type": "date"
-            },
-            "k3": {
-               "type": "keyword"
-            },
-            "k4": {
-               "type": "text",
-               "analyzer": "standard"
-            },
-            "k5": {
-               "type": "float"
-            }
-         }
-      }
-   }
-}
-```
-
-### Data Ingestion
-
-```
-POST /_bulk
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "Trying out Elasticsearch", "k4": "Trying out Elasticsearch", "k5": 10.0}
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "Trying out Doris", "k4": "Trying out Doris", "k5": 10.0}
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "Doris On ES", "k4": "Doris On ES", "k5": 10.0}
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "Doris", "k4": "Doris", "k5": 10.0}
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "ES", "k4": "ES", "k5": 10.0}
-```
-
-### Create ES External Table in Doris
-
-See [CREATE TABLE](https://doris.apache.org/docs/dev/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE/) for syntax details.
-
-```
-CREATE EXTERNAL TABLE `test` // You don't have to specify the schema. The system will auto-pull the ES mapping for table creation.
-ENGINE=ELASTICSEARCH 
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"type" = "doc",
-"user" = "root",
-"password" = "root"
-);
-
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH // ENGINE should be Elasticsearch.
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"type" = "doc",
-"user" = "root",
-"password" = "root"
-);
-```
-
-Parameter Description:
-
-| **Parameter** | **Description**                                              |
-| ------------- | ------------------------------------------------------------ |
-| **hosts**     | One or multiple ES cluster addresses or the load balancer address of ES frontend |
-| **index**     | The corresponding ES index; aliases are supported, but the real index name is required when doc_value is used. |
-| **type**      | Type of index (no longer needed in ES 7.x or newer)          |
-| **user**      | Username for the ES cluster                                  |
-| **password**  | The corresponding password                                   |
-
-* In ES versions before 7.x, please choose the correct **index type** when creating tables.
-* Only HTTP Basic authentication is supported. Please make sure the user has access to the relevant paths (/\_cluster/state/, \_nodes/http) and read privilege on the index. If you have not enabled security authentication for the cluster, you don't have to set the username and password.
-* Please ensure that the column names and types in Doris are consistent with the field names and types in ES.
-* The **ENGINE** should be **Elasticsearch**.
-
-##### Predicate Pushdown
-A key feature of `Doris On ES` is predicate pushdown: The filter conditions will be pushed down to ES so only the filtered data will be returned. This can largely improve query performance and reduce usage of CPU, memory, and IO in Doris and ES.
-
-Operators will be converted into ES queries as follows:
-
-| SQL syntax     |            ES 5.x+ syntax             |
-| -------------- | :-----------------------------------: |
-| =              |              term query               |
-| in             |              terms query              |
-| > , < , >= , <= |             range query              |
-| and            |              bool.filter              |
-| or             |              bool.should              |
-| not            |             bool.must_not             |
-| not in         |      bool.must_not + terms query      |
-| is\_not\_null  |             exists query              |
-| is\_null       |     bool.must_not + exists query      |
-| esquery        | QueryDSL in the ES-native JSON format |
-
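-For example, here is a hedged sketch of how a combined filter might be rewritten (the exact DSL generated by Doris may differ in structure):
-
-```
-select * from es_table where k1 > 100 and k3 = 'term';
-```
-
-is pushed down as a `bool.filter` that combines a range query and a term query:
-
-```
-"bool": {
-   "filter": [
-      { "range": { "k1": { "gt": 100 } } },
-      { "term": { "k3": "term" } }
-   ]
-}
-```
-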
-##### Data Type Mapping
-
-| Doris\ES | byte    | short   | integer | long    | float   | double  | keyword | text    | date    |
-| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
-| tinyint  | &radic; |         |         |         |         |         |         |         |         |
-| smallint | &radic; | &radic; |         |         |         |         |         |         |         |
-| int      | &radic; | &radic; | &radic; |         |         |         |         |         |         |
-| bigint   | &radic; | &radic; | &radic; | &radic; |         |         |         |         |         |
-| float    |         |         |         |         | &radic; |         |         |         |         |
-| double   |         |         |         |         |         | &radic; |         |         |         |
-| char     |         |         |         |         |         |         | &radic; | &radic; |         |
-| varchar  |         |         |         |         |         |         | &radic; | &radic; |         |
-| date     |         |         |         |         |         |         |         |         | &radic; |
-| datetime |         |         |         |         |         |         |         |         | &radic; |
-
-
-### Improve Query Speed by Enabling Columnar Scan (enable\_docvalue\_scan=true)
-
-```
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"user" = "root",
-"password" = "root",
-"enable_docvalue_scan" = "true"
-);
-```
-
-Parameter Description:
-
-| Parameter                  | Description                                                  |
-| -------------------------- | ------------------------------------------------------------ |
-| **enable\_docvalue\_scan** | This specifies whether to acquire value from the query field via ES/Lucene columnar storage. It is set to false by default. |
-
-If this parameter is set to true, Doris will follow these rules when obtaining data from ES:
-
-* **Try and see**: Doris will automatically check if columnar storage is enabled for the target fields (doc_value: true), if it is, Doris will obtain all values in the fields from the columnar storage.
-* **Auto-downgrading**: If any one of the target fields is not available in columnar storage, Doris will parse and obtain all target data from row storage (`_source`).
-
-##### Benefits:
-
-By default, Doris-on-ES obtains all target columns from `_source`, which is in row storage and JSON format. Compared to columnar storage, `_source` is slow in batch reads. In particular, when the system only needs to read a small number of columns, the performance of `docvalue` can be about a dozen times faster than that of `_source`.
-
-##### Note
-1. Columnar storage is not available for `text` fields in ES. Thus, if you need to obtain fields containing `text` values, you will need to obtain them from `_source`.
-2. When obtaining large numbers of fields (`>= 25`), the performances of `docvalue` and `_source` are basically equivalent.
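-
-To see whether a field qualifies for columnar reads, you can inspect its mapping in ES. Below is a minimal sketch (the `GET` request is standard ES API; the field settings shown are assumed for illustration): a field explicitly created with `doc_values: false` will trigger the auto-downgrading rule above.
-
-```
-GET test/_mapping
-
-"k3": {
-   "type": "keyword",
-   "doc_values": false // forces Doris to fall back to _source for the whole query
-}
-```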
-
-### Sniff Keyword Fields (enable\_keyword\_sniff=true)
-
-```
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"user" = "root",
-"password" = "root",
-"enable_keyword_sniff" = "true"
-);
-```
-
-Parameter Description:
-
-| Parameter                  | Description                                                  |
-| -------------------------- | ------------------------------------------------------------ |
-| **enable\_keyword\_sniff** | This specifies whether to sniff tokenized (**text**) fields for their untokenized (**keyword**) sub-fields (the multi-fields mechanism) |
-
-You can start data ingestion without creating an index since ES will generate a new index automatically. For string fields, ES will create both a `text`-type field and a `keyword`-type sub-field. This is the multi-fields mechanism of ES. The mapping goes as follows:
-
-```
-"k4": {
-   "type": "text",
-   "fields": {
-      "keyword": {   
-         "type": "keyword",
-         "ignore_above": 256
-      }
-   }
-}
-```
-When filtering on k4 with a condition such as "=", Doris-on-ES will convert the query into an ES TermQuery.
-
-SQL filter:
-
-```
-k4 = "Doris On ES"
-```
-
-Converted query DSL in ES:
-
-```
-"term" : {
-    "k4": "Doris On ES"
-
-}
-```
-
-The primary field type of k4 is `text`, so on data ingestion, the designated tokenizer (or the standard tokenizer, if none is specified) for k4 will split it into three terms: "doris", "on", and "es".
-
-For example:
-
-```
-POST /_analyze
-{
-  "analyzer": "standard",
-  "text": "Doris On ES"
-}
-```
-It will be tokenized as follows:
-
-```
-{
-   "tokens": [
-      {
-         "token": "doris",
-         "start_offset": 0,
-         "end_offset": 5,
-         "type": "<ALPHANUM>",
-         "position": 0
-      },
-      {
-         "token": "on",
-         "start_offset": 6,
-         "end_offset": 8,
-         "type": "<ALPHANUM>",
-         "position": 1
-      },
-      {
-         "token": "es",
-         "start_offset": 9,
-         "end_offset": 11,
-         "type": "<ALPHANUM>",
-         "position": 2
-      }
-   ]
-}
-```
-The term used in the query is:
-
-```
-"term" : {
-    "k4": "Doris On ES"
-}
-```
-Since `Doris On ES` does not match any term in the dictionary, no result will be returned. However, if you set `enable_keyword_sniff` to true, then `k4 = "Doris On ES"` will be converted into `k4.keyword = "Doris On ES"`. The converted ES query DSL will be:
-
-```
-"term" : {
-    "k4.keyword": "Doris On ES"
-}
-```
-
-In this case, `k4.keyword` is of `keyword` type and the data written into ES is a complete term, so the matching can be done.
-
-### Enable Node Discovery (nodes\_discovery=true)
-
-```
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"user" = "root",
-"password" = "root",
-"nodes_discovery" = "true"
-);
-```
-
-Parameter Description:
-
-| Parameter            | Description                                                  |
-| -------------------- | ------------------------------------------------------------ |
-| **nodes\_discovery** | This specifies whether to enable ES node discovery. It is set to true by default. |
-
-If this is set to true, Doris will locate all available data nodes that are relevant (i.e., where the shards are allocated). Set this to false if the data node addresses are not reachable by Doris BE, for example, when the ES cluster is deployed in an intranet isolated from the public Internet and can only be accessed through a proxy.
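-
-For example, for a cluster that is reachable only through a proxy, you would create the same table with discovery turned off (a sketch showing only the changed property):
-
-```
-"nodes_discovery" = "false"
-```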
-
-### Enable HTTPS Access Mode for ES Clusters (http_ssl_enabled=true)
-
-```
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"user" = "root",
-"password" = "root",
-"http_ssl_enabled" = "true"
-);
-```
-
-Parameter Description:
-
-| Parameter              | Description                                                  |
-| ---------------------- | ------------------------------------------------------------ |
-| **http\_ssl\_enabled** | This specifies whether to enable HTTPS access mode for ES cluster. It is set to false by default. |
-
-Currently, the FE and BE implement a trust-all method, which is a temporary solution. Support for user-configured certificates will be added in the future.
-
-### Query
-
-After creating an ES External Table in Doris, you can query data from ES as simply as querying data in Doris itself, except that you won't be able to use the Doris data models (rollup, pre-aggregation, and materialized view).
-
-#### Basic Query
-
-```
-select * from es_table where k1 > 1000 and k3 ='term' or k4 like 'fu*z_'
-```
-
-#### Extended esquery (field, QueryDSL)
-For queries that cannot be expressed in SQL, such as match_phrase and geoshape, you can use the `esquery(field, QueryDSL)` function to push them down to ES for filtering. The first parameter `field` (a column name) is used to associate with the `index`; the second is the JSON expression of the ES query DSL, which should be surrounded by `{}`. There should be one and only one `root key`, such as match_phrase, geo_shape, or bool.
-
-For example, a match_phrase query:
-
-```
-select * from es_table where esquery(k4, '{
-        "match_phrase": {
-           "k4": "doris on es"
-        }
-    }');
-```
-A geo query:
-
-```
-select * from es_table where esquery(k4, '{
-      "geo_shape": {
-         "location": {
-            "shape": {
-               "type": "envelope",
-               "coordinates": [
-                  [
-                     13,
-                     53
-                  ],
-                  [
-                     14,
-                     52
-                  ]
-               ]
-            },
-            "relation": "within"
-         }
-      }
-   }');
-```
-
-A bool query:
-
-```
-select * from es_table where esquery(k4, ' {
-         "bool": {
-            "must": [
-               {
-                  "terms": {
-                     "k1": [
-                        11,
-                        12
-                     ]
-                  }
-               },
-               {
-                  "terms": {
-                     "k2": [
-                        100
-                     ]
-                  }
-               }
-            ]
-         }
-      }');
-```
-
-
-
-## Illustration
-
-```              
-+----------------------------------------------+
-|                                              |
-| Doris      +------------------+              |
-|            |       FE         +--------------+-------+
-|            |                  |  Request Shard Location
-|            +--+-------------+-+              |       |
-|               ^             ^                |       |
-|               |             |                |       |
-|  +-------------------+ +------------------+  |       |
-|  |            |      | |    |             |  |       |
-|  | +----------+----+ | | +--+-----------+ |  |       |
-|  | |      BE       | | | |      BE      | |  |       |
-|  | +---------------+ | | +--------------+ |  |       |
-+----------------------------------------------+       |
-   |        |          | |        |         |          |
-   |        |          | |        |         |          |
-   |    HTTP SCROLL    | |    HTTP SCROLL   |          |
-+-----------+---------------------+------------+       |
-|  |        v          | |        v         |  |       |
-|  | +------+--------+ | | +------+-------+ |  |       |
-|  | |               | | | |              | |  |       |
-|  | |   DataNode    | | | |   DataNode   +<-----------+
-|  | |               | | | |              | |  |       |
-|  | |               +<--------------------------------+
-|  | +---------------+ | | |--------------| |  |       |
-|  +-------------------+ +------------------+  |       |
-|   Same Physical Node                         |       |
-|                                              |       |
-|           +-----------------------+          |       |
-|           |                       |          |       |
-|           |      MasterNode       +<-----------------+
-| ES        |                       |          |
-|           +-----------------------+          |
-+----------------------------------------------+
-
-
-```
-
-1. After an ES External Table is created, Doris FE will send a request to the designated host for information regarding HTTP port and index shard allocation. If the request fails, Doris FE will traverse all hosts until the request succeeds or completely fails.
-2. Based on the nodes and metadata in indexes, Doris FE will generate a query plan and send it to the relevant BE nodes.
-3. The BE nodes will send requests to locally deployed ES nodes. Via `HTTP Scroll`, BE nodes obtain data from `_source` or `docvalue` concurrently from each shard of the ES index.
-4. Doris returns the query results to the user.
-
-## Best Practice
-
-### Usage of Time Field 
-
-ES allows flexible use of time fields, but improper configuration of time field types can lead to predicate pushdown failure.
-
-When creating an index, allow the greatest format compatibility for time data types:
-
-```
- "dt": {
-     "type": "date",
-     "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
- }
-```
-
-It is recommended to set the corresponding fields in Doris to `date` or `datetime` type (or `varchar`). Then you can use the following SQL statement to push the filters down to ES:
-
-```
-select * from doe where k2 > '2020-06-21';
-
-select * from doe where k2 < '2020-06-21 12:00:00'; 
-
-select * from doe where k2 < 1593497011; 
-
-select * from doe where k2 < now();
-
-select * from doe where k2 < date_format(now(), '%Y-%m-%d');
-```
-
-Note:
-
-* If you don't specify the `format` for time fields in ES, the default format will be: 
-
-```
-strict_date_optional_time||epoch_millis
-```
-
-* Timestamps should be converted to `ms`  before they are imported into ES; otherwise errors might occur in Doris-on-ES.
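-
-For example, the timestamp `1593497011` used in the queries above is in seconds; when ingesting it into the `dt` field, it should be written in milliseconds (a minimal sketch using the bulk API):
-
-```
-POST /_bulk
-{"index":{"_index":"test","_type":"doc"}}
-{ "dt": 1593497011000 }
-```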
-
-### Obtain ES Metadata Field `_id`
-
-You can specify an informative `_id` for a document on ingestion. If not, ES will assign a globally unique `_id` (the primary key) to the document. If you need to acquire the `_id` through Doris-on-ES, you can add an `_id` field of `varchar` type upon table creation.
-
-```
-CREATE EXTERNAL TABLE `doe` (
-  `_id` varchar COMMENT "",
-  `city`  varchar COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://127.0.0.1:8200",
-"user" = "root",
-"password" = "root",
-"index" = "doe"
-);
-```
-
-Note:
-
-1. `_id` fields only support `=` and `in` filters.
-2. `_id` field should be of `varchar` type.
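-
-For example (a sketch; the `_id` values below are placeholders):
-
-```
-select city from doe where `_id` = 'g0u5lYcBb3dCA8JbMGWj';
-
-select city from doe where `_id` in ('id-1', 'id-2');
-```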
-
-## FAQ
-
-1. What versions of ES does Doris-on-ES support?
-
-   Doris-on-ES supports ES 5.x or newer since the data scanning works differently in older versions of ES.
-
-2. Are X-Pack authenticated  ES clusters supported?
-
-   All ES clusters with HTTP Basic authentication are supported.
-
-3. Why are some queries a lot slower than direct queries on ES?
-
-   For certain queries such as `_count`, ES can directly read the metadata for the number of documents that meet the conditions, which is much faster than reading and filtering all the data.
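-
-   For example, `select count(*) from es_table;` has to scan and stream the matching data into Doris, while the equivalent native request below only reads index metadata (a sketch):
-
-   ```
-   GET test/_count
-   ```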
-
-4. Can aggregation operations be pushed down?
-
-   Currently, Doris-on-ES does not support pushing down aggregation operations such as `sum`, `avg`, and `min`/`max`. Instead, all relevant documents will be streamed from ES into Doris in batches, where the computation will be performed.
-
diff --git a/docs/en/docs/lakehouse/external-table/hive.md b/docs/en/docs/lakehouse/external-table/hive.md
deleted file mode 100644
index 68ff21abec..0000000000
--- a/docs/en/docs/lakehouse/external-table/hive.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-{
-    "title": "Hive External Table",
-    "language": "en"
-}
----
-
-<!-- 
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Hive External Table
-
-<version deprecated="1.2.0">
-
-Please use [Hive Catalog](../multi-catalog/hive.md) to access Hive; this function will no longer be maintained after version 1.2.0.
- 
-</version>
-
diff --git a/docs/en/docs/lakehouse/external-table/jdbc.md b/docs/en/docs/lakehouse/external-table/jdbc.md
deleted file mode 100644
index c5b782fb6f..0000000000
--- a/docs/en/docs/lakehouse/external-table/jdbc.md
+++ /dev/null
@@ -1,530 +0,0 @@
----
-{
-    "title": "JDBC External Table",
-    "language": "en"
-}
----
-
-<!-- 
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# JDBC External Table
-
-<version deprecated="1.2.2">
-
-Please use [JDBC Catalog](https://doris.apache.org/docs/dev/lakehouse/multi-catalog/jdbc/) to access JDBC data sources; this function will no longer be maintained after version 1.2.2.
-
-</version>
-
-<version since="1.2.0">
-
-By creating JDBC External Tables, Doris can access external tables via JDBC, the standard database access interface. This allows Doris to access various databases without tedious data ingestion, and to give full play to its own OLAP capabilities to perform data analysis on external tables:
-
-1. Multiple data sources can be connected to Doris;
-2. It enables Join queries across Doris and other data sources and thus allows more complex analysis.
-
-This topic introduces how to use JDBC External Tables in Doris.
-
-</version>
-
-### Create JDBC External Table in Doris
-
-See [CREATE TABLE](https://doris.apache.org/docs/dev/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE/) for syntax details.
-
-#### 1. Create JDBC External Table by Creating JDBC_Resource
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_resource
-properties (
-    "type"="jdbc",
-    "user"="root",
-    "password"="123456",
-    "jdbc_url"="jdbc:mysql://192.168.0.1:3306/test?useCursorFetch=true",
-    "driver_url"="http://IP:port/mysql-connector-java-5.1.47.jar",
-    "driver_class"="com.mysql.jdbc.Driver"
-);
-     
-CREATE EXTERNAL TABLE `baseall_mysql` (
-  `k1` tinyint(4) NULL,
-  `k2` smallint(6) NULL,
-  `k3` int(11) NULL,
-  `k4` bigint(20) NULL,
-  `k5` decimal(9, 3) NULL
-) ENGINE=JDBC
-PROPERTIES (
-"resource" = "jdbc_resource",
-"table" = "baseall",
-"table_type"="mysql"
-);
-```
-
-Parameter Description:
-
-| Parameter        | Description                                                  |
-| ---------------- | ------------------------------------------------------------ |
-| **type**         | "jdbc"; required; specifies the type of the Resource         |
-| **user**         | Username for accessing the external database                 |
-| **password**     | Password of the user                                         |
-| **jdbc_url**     | JDBC URL protocol, including the database type, IP address, port number, and database name; Please be aware of the different formats of different database protocols. For example, MySQL: "jdbc:mysql://127.0.0.1:3306/test?useCursorFetch=true". |
-| **driver_class** | Class of the driver used to access the external database. For example, to access MySQL data: com.mysql.jdbc.Driver. |
-| **driver_url**   | Driver URL for downloading the Jar file package that is used to access the external database, for example, http://IP:port/mysql-connector-java-5.1.47.jar. For local stand-alone testing, you can put the Jar file package in a local path: "driver_url"="file:///home/disk1/pathTo/mysql-connector-java-5.1.47.jar"; for local multi-machine testing, please ensure the consistency of the paths. |
-| **resource**     | Name of the Resource that the Doris External Table depends on; should be the same as the name set in Resource creation. |
-| **table**        | Name of the external table to be mapped in Doris             |
-| **table_type**   | The database from which the external table comes, such as mysql, postgresql, sqlserver, and oracle. |
-
-> **Note:**
->
-> For local testing, please make sure you put the Jar file package in the FE and BE nodes, too.
-
-<version since="1.2.1">
-
-> In Doris 1.2.1 and newer versions, if you have put the driver in the  `jdbc_drivers`  directory of FE/BE, you can simply specify the file name in the driver URL: `"driver_url" = "mysql-connector-java-5.1.47.jar"`, and the system will automatically find the file in the `jdbc_drivers` directory.
-
-</version>
-
-### Query
-
-```
-select * from mysql_table where k1 > 1000 and k3 ='term';
-```
-
-In some cases, the keywords in the database might be used as field names. For queries to function normally in these cases, Doris will add escape characters to the field names and table names in SQL statements based on the rules of different databases, such as (``) for MySQL, ([]) for SQLServer, and ("") for PostgreSQL and Oracle. This might require extra attention to case sensitivity. You can view the query statements sent to these various databases via `explain sql`.
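-
-For example, you can inspect the statement that will be sent to the external database before running a query (a sketch; the plan output format depends on the Doris version):
-
-```
-explain select * from mysql_table where k1 > 1000 and k3 = 'term';
-```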
-
-### Write Data
-
-After creating a JDBC External Table in Doris, you can write data or query results to it using the `insert into` statement. You can also ingest data from one JDBC External Table to another JDBC External Table.
-
-
-```
-insert into mysql_table values(1, "doris");
-insert into mysql_table select * from table;
-```
-
-#### Transaction
-
-In Doris, data is written to External Tables in batches. If the ingestion process is interrupted, rollbacks might be required. That's why JDBC External Tables support data writing transactions. You can utilize this feature by setting the session variable `enable_odbc_transcation` (ODBC transactions are also controlled by this variable).
-
-```
-set enable_odbc_transcation = true; 
-```
-
-The transaction mechanism ensures the atomicity of data writing to JDBC External Tables, but it reduces performance to a certain extent. You may decide whether to enable transactions based on your own tradeoff.
-
-#### 1.MySQL Test
-
-| MySQL Version | MySQL JDBC Driver Version       |
-| ------------- | ------------------------------- |
-| 8.0.30        | mysql-connector-java-5.1.47.jar |
-
-#### 2.PostgreSQL Test
-
-| PostgreSQL Version | PostgreSQL JDBC Driver Version |
-| ------------------ | ------------------------------ |
-| 14.5               | postgresql-42.5.0.jar          |
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_pg
-properties (
-    "type"="jdbc",
-    "user"="postgres",
-    "password"="123456",
-    "jdbc_url"="jdbc:postgresql://127.0.0.1:5442/postgres?currentSchema=doris_test",
-    "driver_url"="http://127.0.0.1:8881/postgresql-42.5.0.jar",
-    "driver_class"="org.postgresql.Driver"
-);
-
-CREATE EXTERNAL TABLE `ext_pg` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_pg",
-    "table" = "pg_tbl",
-    "table_type"="postgresql"
-);
-```
-
-#### 3.SQLServer Test
-
-| SQLServer Version | SQLServer JDBC Driver Version |
-| ----------------- | ----------------------------- |
-| 2022              | mssql-jdbc-11.2.0.jre8.jar    |
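-
-A sketch of the corresponding Resource and external table, following the pattern of the examples above (host, database, and table names are placeholders; the URL format and driver class are the standard ones documented for the Microsoft JDBC driver):
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_sqlserver
-properties (
-    "type"="jdbc",
-    "user"="SA",
-    "password"="123456",
-    "jdbc_url"="jdbc:sqlserver://127.0.0.1:1433;databaseName=doris_test",
-    "driver_url"="file:///path/to/mssql-jdbc-11.2.0.jre8.jar",
-    "driver_class"="com.microsoft.sqlserver.jdbc.SQLServerDriver"
-);
-
-CREATE EXTERNAL TABLE `ext_sqlserver` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_sqlserver",
-    "table" = "sqlserver_tbl",
-    "table_type"="sqlserver"
-);
-```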
-
-#### 4.Oracle Test
-
-| Oracle Version | Oracle JDBC Driver Version |
-| -------------- | -------------------------- |
-| 11             | ojdbc6.jar                 |
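-
-A sketch for Oracle under the same pattern (the SID `XE`, user, and table names are placeholders; `oracle.jdbc.driver.OracleDriver` is the driver class shipped in ojdbc6.jar):
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_oracle
-properties (
-    "type"="jdbc",
-    "user"="doris_test",
-    "password"="123456",
-    "jdbc_url"="jdbc:oracle:thin:@127.0.0.1:1521:XE",
-    "driver_url"="file:///path/to/ojdbc6.jar",
-    "driver_class"="oracle.jdbc.driver.OracleDriver"
-);
-
-CREATE EXTERNAL TABLE `ext_oracle` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_oracle",
-    "table" = "DORIS_TEST.TBL",
-    "table_type"="oracle"
-);
-```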
-
-Test information on more versions will be provided in the future.
-
-#### 5.ClickHouse Test
-
-| ClickHouse Version | ClickHouse JDBC Driver Version        |
-| ------------------ | ------------------------------------- |
-| 22           | clickhouse-jdbc-0.3.2-patch11-all.jar |
-| 22           | clickhouse-jdbc-0.4.1-all.jar         |
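-
-A sketch for ClickHouse (host, port, and names are placeholders; `com.clickhouse.jdbc.ClickHouseDriver` is the driver class of the 0.3.2+ `clickhouse-jdbc` artifacts, and the `table_type` value `clickhouse` is assumed to match the cluster type):
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_clickhouse
-properties (
-    "type"="jdbc",
-    "user"="default",
-    "password"="",
-    "jdbc_url"="jdbc:clickhouse://127.0.0.1:8123/doris_test",
-    "driver_url"="file:///path/to/clickhouse-jdbc-0.4.1-all.jar",
-    "driver_class"="com.clickhouse.jdbc.ClickHouseDriver"
-);
-
-CREATE EXTERNAL TABLE `ext_clickhouse` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_clickhouse",
-    "table" = "ck_tbl",
-    "table_type"="clickhouse"
-);
-```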
-
-#### 6.Sap Hana Test
-
-| Sap Hana Version | Sap Hana JDBC Driver Version |
-|------------------|------------------------------|
-| 2.0              | ngdbc.jar                    |
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_hana
-properties (
-    "type"="jdbc",
-    "user"="SYSTEM",
-    "password"="SAPHANA",
-    "jdbc_url" = "jdbc:sap://localhost:31515/TEST",
-    "driver_url" = "file:///path/to/ngdbc.jar",
-    "driver_class" = "com.sap.db.jdbc.Driver"
-);
-
-CREATE EXTERNAL TABLE `ext_hana` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_hana",
-    "table" = "TEST.HANA",
-    "table_type"="sap_hana"
-);
-```
-
-#### 7.Trino Test
-
-| Trino Version | Trino JDBC Driver Version |
-|---------------|---------------------------|
-| 389           | trino-jdbc-389.jar        |
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_trino
-properties (
-    "type"="jdbc",
-    "user"="hadoop",
-    "password"="",
-    "jdbc_url" = "jdbc:trino://localhost:8080/hive",
-    "driver_url" = "file:///path/to/trino-jdbc-389.jar",
-    "driver_class" = "io.trino.jdbc.TrinoDriver"
-);
-
-CREATE EXTERNAL TABLE `ext_trino` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_trino",
-    "table" = "hive.test",
-    "table_type"="trino"
-);
-```
-
-**Note:**
-<version since="dev" type="inline"> Connections using the Presto JDBC Driver are also supported </version>
-
-#### 8.OceanBase Test
-
-| OceanBase Version | OceanBase JDBC Driver Version |
-|-------------------|-------------------------------|
-| 3.2.3             | oceanbase-client-2.4.2.jar    |
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_oceanbase
-properties (
-    "type"="jdbc",
-    "user"="root",
-    "password"="",
-    "jdbc_url" = "jdbc:oceanbase://localhost:2881/test",
-    "driver_url" = "file:///path/to/oceanbase-client-2.4.2.jar",
-    "driver_class" = "com.oceanbase.jdbc.Driver"
-);
-
--- MySQL mode
-CREATE EXTERNAL TABLE `ext_oceanbase` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_oceanbase",
-    "table" = "test.test",
-    "table_type"="oceanbase"
-);
-
--- Oracle mode
-CREATE EXTERNAL TABLE `ext_oceanbase` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_oceanbase",
-    "table" = "test.test",
-    "table_type"="oceanbase_oracle"
-);
-```
-
-#### 9.NebulaGraph Test (only supports queries)
-
-| NebulaGraph Version | NebulaGraph JDBC Driver Version             |
-|---------------------|---------------------------------------------|
-| 3.0.0               | nebula-jdbc-3.0.0-jar-with-dependencies.jar |
-
-
-```
-#step1. create test data in nebula
-#1.1 create tag
-(root@nebula) [basketballplayer]> CREATE TAG test_type(t_str string, 
-    t_int int, 
-    t_date date,
-    t_datetime datetime,
-    t_bool bool,
-    t_timestamp timestamp,
-    t_float float,
-    t_double double
-);
-#1.2 insert test data
-(root@nebula) [basketballplayer]> INSERT VERTEX test_type(t_str,t_int,t_date,t_datetime,t_bool,t_timestamp,t_float,t_double) values "zhangshan":("zhangshan",1000,date("2023-01-01"),datetime("2023-01-23 15:23:32"),true,1234242423,1.2,1.35);
-#1.3 check the data
-(root@nebula) [basketballplayer]> match (v:test_type) where id(v)=="zhangshan" return v.test_type.t_str,v.test_type.t_int,v.test_type.t_date,v.test_type.t_datetime,v.test_type.t_bool,v.test_type.t_timestamp,v.test_type.t_float,v.test_type.t_double,v limit 30;
-+-------------------+-------------------+--------------------+----------------------------+--------------------+-------------------------+---------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| v.test_type.t_str | v.test_type.t_int | v.test_type.t_date | v.test_type.t_datetime     | v.test_type.t_bool | v.test_type.t_timestamp | v.test_type.t_float | v.test_type.t_double | v                                                                                                                                                                                                         |
-+-------------------+-------------------+--------------------+----------------------------+--------------------+-------------------------+---------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| "zhangshan"       | 1000              | 2023-01-01         | 2023-01-23T15:23:32.000000 | true               | 1234242423              | 1.2000000476837158  | 1.35                 | ("zhangshan" :test_type{t_bool: true, t_date: 2023-01-01, t_datetime: 2023-01-23T15:23:32.000000, t_double: 1.35, t_float: 1.2000000476837158, t_int: 1000, t_str: "zhangshan", t_timestamp: 1234242423}) |
-+-------------------+-------------------+--------------------+----------------------------+--------------------+-------------------------+---------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-Got 1 rows (time spent 1616/2048 us)
-Mon, 17 Apr 2023 17:23:14 CST
-#step2. create table in doris
-#2.1 create a resource
-MySQL [test_db]> CREATE EXTERNAL RESOURCE gg_jdbc_resource 
-properties (
-   "type"="jdbc",
-   "user"="root",
-   "password"="123",
-   "jdbc_url"="jdbc:nebula://127.0.0.1:9669/basketballplayer",
-   "driver_url"="file:///home/clz/baidu/bdg/doris/be/lib/nebula-jdbc-3.0.0-jar-with-dependencies.jar",  --Need to be placed in the be/lib directory--
-   "driver_class"="com.vesoft.nebula.jdbc.NebulaDriver"
-);
-#2.2 Create a mapping table that tells Doris how to parse the data returned by NebulaGraph
-MySQL [test_db]> CREATE TABLE `test_type` ( 
- `t_str` varchar(64),
- `t_int` bigint,
- `t_date` date,
- `t_datetime` datetime,
- `t_bool` boolean,
- `t_timestamp` bigint,
- `t_float` double,
- `t_double` double,
- `t_vertx`  varchar(128) -- the vertex type maps to varchar in Doris
-) ENGINE=JDBC
-PROPERTIES (
-"resource" = "gg_jdbc_resource",
-"table" = "xx",  --please fill in any value here, we do not use it --
-"table_type"="nebula"
-);
-#2.3 Query the mapping table, using the g() function to pass the nGQL through to NebulaGraph transparently
-MySQL [test_db]> select * from test_type where g('match (v:test_type) where id(v)=="zhangshan" return v.test_type.t_str,v.test_type.t_int,v.test_type.t_date,v.test_type.t_datetime,v.test_type.t_bool,v.test_type.t_timestamp,v.test_type.t_float,v.test_type.t_double,v')\G;
-*************************** 1. row ***************************
-      t_str: zhangshan
-      t_int: 1000
-     t_date: 2023-01-01
- t_datetime: 2023-01-23 15:23:32
-     t_bool: 1
-t_timestamp: 1234242423
-    t_float: 1.2000000476837158
-   t_double: 1.35
-    t_vertx: ("zhangshan" :test_type {t_datetime: utc datetime: 2023-01-23T15:23:32.000000, timezoneOffset: 0, t_timestamp: 1234242423, t_date: 2023-01-01, t_double: 1.35, t_str: "zhangshan", t_int: 1000, t_bool: true, t_float: 1.2000000476837158})
-1 row in set (0.024 sec)
-#2.4 Join queries with other tables in Doris
-#Assuming there is a user table
-MySQL [test_db]> select * from t_user;
-+-----------+------+---------------------------------+
-| username  | age  | addr                            |
-+-----------+------+---------------------------------+
-| zhangshan |   26 | 北京市西二旗街道1008号          |
-| lisi      |   29 | 北京市西二旗街道1007号          |
-+-----------+------+---------------------------------+
-2 rows in set (0.013 sec)
-#Associate with this table to query user related information
-MySQL [test_db]> select u.* from (select t_str username  from test_type where g('match (v:test_type) where id(v)=="zhangshan" return v.test_type.t_str limit 1')) g left join t_user u on g.username=u.username;
-+-----------+------+---------------------------------+
-| username  | age  | addr                            |
-+-----------+------+---------------------------------+
-| zhangshan |   26 | 北京市西二旗街道1008号          |
-+-----------+------+---------------------------------+
-1 row in set (0.029 sec)
-```
-
-
-> **Note:**
->
-> When creating an OceanBase external table, set the `table_type` according to the mode of the OceanBase cluster: `oceanbase` for MySQL mode and `oceanbase_oracle` for Oracle mode, as shown in the example above.
-
-## Type Mapping
-
-The following tables list how data types in different databases are mapped in Doris.
-
-### MySQL
-
-|      MySQL      |  Doris   |
-| :-------------: | :------: |
-|     BOOLEAN     | BOOLEAN  |
-|     BIT(1)      | BOOLEAN  |
-|     TINYINT     | TINYINT  |
-|    SMALLINT     | SMALLINT |
-|       INT       |   INT    |
-|     BIGINT      |  BIGINT  |
-| BIGINT UNSIGNED | LARGEINT |
-|     VARCHAR     | VARCHAR  |
-|      DATE       |   DATE   |
-|      FLOAT      |  FLOAT   |
-|    DATETIME     | DATETIME |
-|     DOUBLE      |  DOUBLE  |
-|     DECIMAL     | DECIMAL  |
-
-
-### PostgreSQL
-
-| PostgreSQL |  Doris   |
-| :--------: | :------: |
-|  BOOLEAN   | BOOLEAN  |
-|  SMALLINT  | SMALLINT |
-|    INT     |   INT    |
-|   BIGINT   |  BIGINT  |
-|  VARCHAR   | VARCHAR  |
-|    DATE    |   DATE   |
-| TIMESTAMP  | DATETIME |
-|    REAL    |  FLOAT   |
-|   FLOAT    |  DOUBLE  |
-|  DECIMAL   | DECIMAL  |
-
-### Oracle
-
-|  Oracle  |  Doris   |
-| :------: | :------: |
-| VARCHAR  | VARCHAR  |
-|   DATE   | DATETIME |
-| SMALLINT | SMALLINT |
-|   INT    |   INT    |
-|   REAL   |  DOUBLE  |
-|  FLOAT   |  DOUBLE  |
-|  NUMBER  | DECIMAL  |
-
-
-### SQL server
-
-| SQLServer |  Doris   |
-| :-------: | :------: |
-|    BIT    | BOOLEAN  |
-|  TINYINT  | TINYINT  |
-| SMALLINT  | SMALLINT |
-|    INT    |   INT    |
-|  BIGINT   |  BIGINT  |
-|  VARCHAR  | VARCHAR  |
-|   DATE    |   DATE   |
-| DATETIME  | DATETIME |
-|   REAL    |  FLOAT   |
-|   FLOAT   |  DOUBLE  |
-|  DECIMAL  | DECIMAL  |
-
-### ClickHouse
-
-|                       ClickHouse                        |          Doris           |
-|:-------------------------------------------------------:|:------------------------:|
-|                         Boolean                         |         BOOLEAN          |
-|                         String                          |          STRING          |
-|                       Date/Date32                       |       DATE/DATEV2        |
-|                   DateTime/DateTime64                   |   DATETIME/DATETIMEV2    |
-|                         Float32                         |          FLOAT           |
-|                         Float64                         |          DOUBLE          |
-|                          Int8                           |         TINYINT          |
-|                       Int16/UInt8                       |         SMALLINT         |
-|                      Int32/UInt16                       |           INT            |
-|                      Int64/UInt32                       |          BIGINT          |
-|                      Int128/UInt64                      |         LARGEINT         |
-|                 Int256/UInt128/UInt256                  |          STRING          |
-|                         Decimal                         | DECIMAL/DECIMALV3/STRING |
-|                   Enum/IPv4/IPv6/UUID                   |          STRING          |
-| <version since="dev" type="inline"> Array(T) </version> |        ARRAY\<T\>        |
-
-
-**Note:**
-
-- <version since="dev" type="inline"> For Array types in ClickHouse, use Doris's Array type to match them. For basic types in an Array, see Basic type matching rules. Nested arrays are not supported. </version>
-- Some data types in ClickHouse, such as UUID, IPv4, IPv6, and Enum8, will be mapped to Varchar/String in Doris. IPv4 and IPv6 will be displayed with a `/` prefix. You can use the `split_part` function to remove the `/`.
-- The Point Geo type in ClickHouse cannot yet be mapped in Doris.
-
-### SAP HANA
-
-|   SAP_HANA   |        Doris        |
-|:------------:|:-------------------:|
-|   BOOLEAN    |       BOOLEAN       |
-|   TINYINT    |       TINYINT       |
-|   SMALLINT   |      SMALLINT       |
-|   INTEGER    |         INT         |
-|    BIGINT    |       BIGINT        |
-| SMALLDECIMAL |  DECIMAL/DECIMALV3  |
-|   DECIMAL    |  DECIMAL/DECIMALV3  |
-|     REAL     |        FLOAT        |
-|    DOUBLE    |       DOUBLE        |
-|     DATE     |     DATE/DATEV2     |
-|     TIME     |        TEXT         |
-|  TIMESTAMP   | DATETIME/DATETIMEV2 |
-|  SECONDDATE  | DATETIME/DATETIMEV2 |
-|   VARCHAR    |        TEXT         |
-|   NVARCHAR   |        TEXT         |
-|   ALPHANUM   |        TEXT         |
-|  SHORTTEXT   |        TEXT         |
-|     CHAR     |        CHAR         |
-|    NCHAR     |        CHAR         |
-
-### Trino
-
-|   Trino   |        Doris        |
-|:---------:|:-------------------:|
-|  boolean  |       BOOLEAN       |
-|  tinyint  |       TINYINT       |
-| smallint  |      SMALLINT       |
-|  integer  |         INT         |
-|  bigint   |       BIGINT        |
-|  decimal  |  DECIMAL/DECIMALV3  |
-|   real    |        FLOAT        |
-|  double   |       DOUBLE        |
-|   date    |     DATE/DATEV2     |
-| timestamp | DATETIME/DATETIMEV2 |
-|  varchar  |        TEXT         |
-|   char    |        CHAR         |
-|   array   |        ARRAY        |
-|  others   |     UNSUPPORTED     |
-
-### OceanBase
-
-For MySQL mode, please refer to the [MySQL type mapping](#MySQL).
-For Oracle mode, please refer to the [Oracle type mapping](#Oracle).
-
-### NebulaGraph
-
-|             NebulaGraph             |  Doris   |
-|:-----------------------------------:|:--------:|
-|     tinyint/smallint/int/int64      |  BIGINT  |
-|            double/float             |  DOUBLE  |
-|                date                 |   DATE   |
-|              timestamp              |  BIGINT  |
-|              datetime               | DATETIME |
-|                bool                 | BOOLEAN  |
-| vertex/edge/path/list/set/time etc. | VARCHAR  |
-
-## Q&A
-
-See the FAQ section in [JDBC](https://doris.apache.org/docs/dev/lakehouse/multi-catalog/jdbc/).
-
diff --git a/docs/en/docs/lakehouse/external-table/odbc.md b/docs/en/docs/lakehouse/external-table/odbc.md
deleted file mode 100644
index b2616d42ef..0000000000
--- a/docs/en/docs/lakehouse/external-table/odbc.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-{
-    "title": "ODBC External Table",
-    "language": "en"
-}
----
-
-<!-- 
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# ODBC External Table
-
-<version deprecated="1.2.0">
-
-Please use [JDBC Catalog](../multi-catalog/jdbc.md) to visit external table, this function will no longer be maintained after version 1.2.0.
-
-</version>
-
diff --git a/docs/sidebars.json b/docs/sidebars.json
index 7489d61dc7..b7a471ee60 100644
--- a/docs/sidebars.json
+++ b/docs/sidebars.json
@@ -214,16 +214,6 @@
                         "lakehouse/multi-catalog/faq"
                     ]
                 },
-                {
-                    "type": "category",
-                    "label": "External Table",
-                    "items": [
-                        "lakehouse/external-table/es",
-                        "lakehouse/external-table/jdbc",
-                        "lakehouse/external-table/odbc",
-                        "lakehouse/external-table/hive"
-                    ]
-                },
                 "lakehouse/file",
                 "lakehouse/filecache"
             ]
diff --git a/docs/zh-CN/docs/lakehouse/external-table/es.md b/docs/zh-CN/docs/lakehouse/external-table/es.md
deleted file mode 100644
index 950c59136d..0000000000
--- a/docs/zh-CN/docs/lakehouse/external-table/es.md
+++ /dev/null
@@ -1,595 +0,0 @@
----
-{
-    "title": "Elasticsearch 外表",
-    "language": "zh-CN"
-}
----
-
-<!-- 
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Elasticsearch External Table
-
-<version deprecated="1.2.2">
-
-Please use the [ES Catalog](../multi-catalog/es.md) to access ES; this function will no longer be maintained after version 1.2.2.
-
-</version>
-
-Doris-On-ES combines Doris's distributed query planning capability with Elasticsearch's full-text search capability to provide a more complete OLAP analysis solution:
-
-1. Multi-index distributed Join queries in ES
-2. Join queries across Doris and ES tables, plus more complex full-text search and filtering
-
-This document mainly introduces how this function is implemented and used.
-
-## Terminology
-
-### Doris-Related
-* FE: Frontend, the frontend node of Doris, responsible for metadata management and request access
-* BE: Backend, the backend node of Doris, responsible for query execution and data storage
-
-### ES-Related
-* DataNode: the data storage and computing node of ES
-* MasterNode: the Master node of ES, managing metadata, nodes, data distribution, etc.
-* scroll: the built-in dataset cursor feature of ES, used to scan and filter data in a streaming manner
-* _source: the original JSON-format document content passed in at import
-* doc_values: the columnar storage definition of fields in ES/Lucene
-* keyword: string-type field; ES/Lucene does not tokenize the text content
-* text: string-type field; ES/Lucene tokenizes the text content with a user-specified analyzer (the standard English analyzer by default)
-
-
-## Usage
-
-### Create an ES Index
-
-```
-PUT test
-{
-   "settings": {
-      "index": {
-         "number_of_shards": "1",
-         "number_of_replicas": "0"
-      }
-   },
-   "mappings": {
-      "doc": { // ES 7.x版本之后创建索引时不需要指定type,会有一个默认且唯一的`_doc` type
-         "properties": {
-            "k1": {
-               "type": "long"
-            },
-            "k2": {
-               "type": "date"
-            },
-            "k3": {
-               "type": "keyword"
-            },
-            "k4": {
-               "type": "text",
-               "analyzer": "standard"
-            },
-            "k5": {
-               "type": "float"
-            }
-         }
-      }
-   }
-}
-```
-
-### Import Data into the ES Index
-
-```
-POST /_bulk
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "Trying out Elasticsearch", "k4": "Trying out Elasticsearch", "k5": 10.0}
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "Trying out Doris", "k4": "Trying out Doris", "k5": 10.0}
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "Doris On ES", "k4": "Doris On ES", "k5": 10.0}
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "Doris", "k4": "Doris", "k5": 10.0}
-{"index":{"_index":"test","_type":"doc"}}
-{ "k1" : 100, "k2": "2020-01-01", "k3": "ES", "k4": "ES", "k5": 10.0}
-```
-
-### Create an ES External Table in Doris
-
-See [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md) for detailed table creation syntax.
-
-```
-CREATE EXTERNAL TABLE `test` // Without specifying a schema, the ES mapping will be pulled automatically for table creation
-ENGINE=ELASTICSEARCH 
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"type" = "doc",
-"user" = "root",
-"password" = "root"
-);
-
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH // ENGINE must be Elasticsearch
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"type" = "doc",
-"user" = "root",
-"password" = "root"
-);
-```
-
-Parameter Description:
-
-Parameter | Description
----|---
-**hosts** | One or more ES cluster addresses, or the load balancer address of an ES frontend
-**index** | The name of the corresponding ES index; aliases are supported, but the real index name is required when doc_value is used
-**type** | The type of the index; do not pass this parameter in ES 7.x and later
-**user** | Username for the ES cluster
-**password** | Password of the corresponding user
-
-* For clusters before ES 7.x, please make sure to choose the correct **index type** when creating the table
-* Only HTTP Basic authentication is currently supported; make sure the user has access to paths such as /\_cluster/state/ and \_nodes/http as well as read privileges on the index; if the cluster has not enabled security authentication, the username and password do not need to be set
-* Column names in the Doris table must exactly match the field names in ES, and the field types should be consistent
-* The **ENGINE** must be **Elasticsearch**
-
-##### Predicate Pushdown
-An important feature of `Doris On ES` is predicate pushdown: filter conditions are pushed down to ES, so that only data that truly matches the conditions is returned. This significantly improves query performance and reduces CPU, memory, and IO usage in both Doris and Elasticsearch
-
-The following operators are optimized into the following ES queries:
-
-| SQL syntax  | ES 5.x+ syntax | 
-|-------|:---:|
-| =   | term query|
-| in  | terms query   |
-| > , < , >= , <=  | range query |
-| and  | bool.filter   |
-| or  | bool.should   |
-| not  | bool.must_not   |
-| not in  | bool.must_not + terms query |
-| is\_not\_null  | exists query |
-| is\_null  | bool.must_not + exists query |
-| esquery  | QueryDSL in the ES-native JSON format   |
-
-##### Data Type Mapping
-
-| Doris\ES | byte    | short   | integer | long    | float   | double  | keyword | text    | date    |
-| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
-| tinyint  | &radic; |         |         |         |         |         |         |         |         |
-| smallint | &radic; | &radic; |         |         |         |         |         |         |         |
-| int      | &radic; | &radic; | &radic; |         |         |         |         |         |         |
-| bigint   | &radic; | &radic; | &radic; | &radic; |         |         |         |         |         |
-| float    |         |         |         |         | &radic; |         |         |         |         |
-| double   |         |         |         |         |         | &radic; |         |         |         |
-| char     |         |         |         |         |         |         | &radic; | &radic; |         |
-| varchar  |         |         |         |         |         |         | &radic; | &radic; |         |
-| date     |         |         |         |         |         |         |         |         | &radic; |
-| datetime |         |         |         |         |         |         |         |         | &radic; |
-
-
-### Enable Columnar Scan to Speed Up Queries (enable\_docvalue\_scan=true)
-
-```
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"user" = "root",
-"password" = "root",
-"enable_docvalue_scan" = "true"
-);
-```
-
-Parameter Description:
-
-Parameter | Description
----|---
-**enable\_docvalue\_scan** | Whether to fetch the values of query fields from ES/Lucene columnar storage; false by default
-
-Once enabled, Doris follows these two principles when fetching data from ES:
-
-* **Best effort**: Doris automatically detects whether the fields to be read have columnar storage enabled (doc_value: true); if all of them do, Doris fetches the values of all fields from columnar storage
-* **Auto-downgrading**: if even one of the fields to be read has no columnar storage, the values of all fields are parsed out of the row store `_source`
-
-##### Benefits:
-
-By default, Doris On ES fetches all required columns from the row store, i.e. `_source`, which is stored row-wise in JSON form; its batch read performance is worse than columnar storage, especially when only a few columns are needed, in which case the performance of docvalue is roughly a dozen times that of _source
-
-##### Note
-1. `text`-type fields have no columnar storage in ES, so if the fields to fetch include a `text` field, the query automatically downgrades to fetching from `_source`
-2. When too many fields are fetched (`>= 25`), fetching values from `docvalue` performs roughly the same as fetching them from `_source`
-
-
-### Sniff keyword-Type Fields (enable\_keyword\_sniff=true)
-
-```
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"user" = "root",
-"password" = "root",
-"enable_keyword_sniff" = "true"
-);
-```
-
-Parameter Description:
-
-Parameter | Description
----|---
-**enable\_keyword\_sniff** | Whether to sniff tokenized (**text**) `fields` in ES for additional untokenized (**keyword**) field names (the multi-fields mechanism)
-
-In ES, data can be imported directly without creating an index first; ES will then automatically create a new index, and for string fields it creates a field with both a `text` type and a `keyword` type. This is the multi-fields feature of ES, and the mapping is as follows:
-
-```
-"k4": {
-   "type": "text",
-   "fields": {
-      "keyword": {   
-         "type": "keyword",
-         "ignore_above": 256
-      }
-   }
-}
-```
-When filtering on k4 with a condition such as =, Doris On ES converts the query into an ES TermQuery
-
-SQL filter condition:
-
-```
-k4 = "Doris On ES"
-```
-
-Converted into the ES query DSL:
-
-```
-"term" : {
-    "k4": "Doris On ES"
-
-}
-```
-
-Because the primary type of k4 is `text`, at data import time the value is tokenized by the analyzer configured for k4 (the standard analyzer if none is set) into the three terms doris, on, and es, as shown by the ES analyze API:
-
-```
-POST /_analyze
-{
-  "analyzer": "standard",
-  "text": "Doris On ES"
-}
-```
-The tokenization result is:
-
-```
-{
-   "tokens": [
-      {
-         "token": "doris",
-         "start_offset": 0,
-         "end_offset": 5,
-         "type": "<ALPHANUM>",
-         "position": 0
-      },
-      {
-         "token": "on",
-         "start_offset": 6,
-         "end_offset": 8,
-         "type": "<ALPHANUM>",
-         "position": 1
-      },
-      {
-         "token": "es",
-         "start_offset": 9,
-         "end_offset": 11,
-         "type": "<ALPHANUM>",
-         "position": 2
-      }
-   ]
-}
-```
-while the query uses:
-
-```
-"term" : {
-    "k4": "Doris On ES"
-}
-```
-The term `Doris On ES` matches no term in the dictionary, so no results are returned. With `enable_keyword_sniff: true` enabled, `k4 = "Doris On ES"` is automatically rewritten as `k4.keyword = "Doris On ES"` to match the SQL semantics exactly; the rewritten ES query DSL is:
-
-```
-"term" : {
-    "k4.keyword": "Doris On ES"
-}
-```
-
-`k4.keyword` is of type `keyword`, and its value is written into ES as one complete term, so it can match
-
-### Enable Automatic Node Discovery, Default true (nodes\_discovery=true)
-
-```
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"user" = "root",
-"password" = "root",
-"nodes_discovery" = "true"
-);
-```
-
-Parameter description:
-
-Parameter | Description
----|---
-**nodes\_discovery** | Whether to enable ES node discovery, default true
-
-When set to true, Doris discovers from ES all available data nodes hosting the relevant shards. Set it to false if the ES data node addresses are not reachable from the Doris BEs, e.g. when the ES cluster is deployed on an internal network isolated from the public Internet and is accessed through a proxy
-
-### Whether the ES Cluster Has HTTPS Enabled; Set to `true` if So, Default false (http\_ssl\_enabled=true)
-
-```
-CREATE EXTERNAL TABLE `test` (
-  `k1` bigint(20) COMMENT "",
-  `k2` datetime COMMENT "",
-  `k3` varchar(20) COMMENT "",
-  `k4` varchar(100) COMMENT "",
-  `k5` float COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://192.168.0.1:8200,http://192.168.0.2:8200",
-"index" = "test",
-"user" = "root",
-"password" = "root",
-"http_ssl_enabled" = "true"
-);
-```
-
-Parameter description:
-
-Parameter | Description
----|---
-**http\_ssl\_enabled** | Whether the ES cluster has HTTPS access enabled
-
-Currently the FE/BE implementation trusts all certificates; this is a temporary solution, and real user-configured certificates will be used later
-
-### Query Usage
-
-Once an ES external table is created in Doris, it behaves no differently from a regular Doris table, except that the Doris data models (rollup, pre-aggregation, materialized views, etc.) cannot be used
-
-#### Basic Query
-
-```
-select * from es_table where k1 > 1000 and k3 ='term' or k4 like 'fu*z_'
-```
-
-#### Extended esquery(field, QueryDSL)
-The `esquery(field, QueryDSL)` function pushes queries that cannot be expressed in SQL, such as match_phrase and geo_shape, down to ES for filtering. The first argument of `esquery` is a column name used to associate the `index`, and the second argument is the JSON form of a basic ES `Query DSL`, enclosed in braces `{}`; the JSON must have exactly one `root key`, such as match_phrase, geo_shape, or bool.
-
-match_phrase query:
-
-```
-select * from es_table where esquery(k4, '{
-        "match_phrase": {
-           "k4": "doris on es"
-        }
-    }');
-```
-geo query:
-
-```
-select * from es_table where esquery(k4, '{
-      "geo_shape": {
-         "location": {
-            "shape": {
-               "type": "envelope",
-               "coordinates": [
-                  [
-                     13,
-                     53
-                  ],
-                  [
-                     14,
-                     52
-                  ]
-               ]
-            },
-            "relation": "within"
-         }
-      }
-   }');
-```
-
-bool query:
-
-```
-select * from es_table where esquery(k4, ' {
-         "bool": {
-            "must": [
-               {
-                  "terms": {
-                     "k1": [
-                        11,
-                        12
-                     ]
-                  }
-               },
-               {
-                  "terms": {
-                     "k2": [
-                        100
-                     ]
-                  }
-               }
-            ]
-         }
-      }');
-```
-
-
-
-## How It Works
-
-```              
-+----------------------------------------------+
-|                                              |
-| Doris      +------------------+              |
-|            |       FE         +--------------+-------+
-|            |                  |  Request Shard Location
-|            +--+-------------+-+              |       |
-|               ^             ^                |       |
-|               |             |                |       |
-|  +-------------------+ +------------------+  |       |
-|  |            |      | |    |             |  |       |
-|  | +----------+----+ | | +--+-----------+ |  |       |
-|  | |      BE       | | | |      BE      | |  |       |
-|  | +---------------+ | | +--------------+ |  |       |
-+----------------------------------------------+       |
-   |        |          | |        |         |          |
-   |        |          | |        |         |          |
-   |    HTTP SCROLL    | |    HTTP SCROLL   |          |
-+-----------+---------------------+------------+       |
-|  |        v          | |        v         |  |       |
-|  | +------+--------+ | | +------+-------+ |  |       |
-|  | |               | | | |              | |  |       |
-|  | |   DataNode    | | | |   DataNode   +<-----------+
-|  | |               | | | |              | |  |       |
-|  | |               +<--------------------------------+
-|  | +---------------+ | | |--------------| |  |       |
-|  +-------------------+ +------------------+  |       |
-|   Same Physical Node                         |       |
-|                                              |       |
-|           +-----------------------+          |       |
-|           |                       |          |       |
-|           |      MasterNode       +<-----------------+
-| ES        |                       |          |
-|           +-----------------------+          |
-+----------------------------------------------+
-
-
-```
-
-1. After the ES external table is created, the FE requests the hosts specified at table creation to obtain the HTTP port information of all nodes and the shard distribution of the index; if a request fails, the FE iterates through the host list in order until it succeeds or all hosts fail
-
-2. At query time, a query plan is generated from the node information and index metadata obtained by the FE and sent to the corresponding BE nodes
-
-3. Following a `locality-first` principle, each BE preferentially requests locally deployed ES nodes. The BEs stream data concurrently from every shard of the ES index via `HTTP Scroll`, reading from `_source` or `docvalue`
-
-4. After Doris finishes computing the result, it is returned to the user
-
-## Best Practices
-
-### Recommendations for Time Fields
-
-Time fields in ES are very flexible to use, but in Doris On ES an improperly set time field type prevents predicate pushdown
-
-When creating the index, make the time format as compatible as possible:
-
-```
- "dt": {
-     "type": "date",
-     "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
- }
-```
-
-When defining this field in Doris, it is recommended to use `date` or `datetime` (`varchar` also works); with any of the following SQL statements, the filter condition is pushed straight down to ES:
-
-```
-select * from doe where k2 > '2020-06-21';
-
-select * from doe where k2 < '2020-06-21 12:00:00'; 
-
-select * from doe where k2 < 1593497011; 
-
-select * from doe where k2 < now();
-
-select * from doe where k2 < date_format(now(), '%Y-%m-%d');
-```
-
-Note:
-
-* If no `format` is set on a time field in ES, the default format is
-
-```
-strict_date_optional_time||epoch_millis
-```
-
-* If a date field imported into ES is a timestamp, it must be converted to `ms`; ES processes timestamps internally in `ms`, otherwise Doris On ES will display it incorrectly
-
-### Fetching the ES Metadata Field `_id`
-
-When documents are imported without specifying an `_id`, ES assigns each document a globally unique `_id`, i.e. the primary key. Users may also assign a document an `_id` with special business meaning at import time. To fetch this field's value in Doris On ES, add an `_id` column of type `varchar` when creating the table:
-
-```
-CREATE EXTERNAL TABLE `doe` (
-  `_id` varchar COMMENT "",
-  `city`  varchar COMMENT ""
-) ENGINE=ELASTICSEARCH
-PROPERTIES (
-"hosts" = "http://127.0.0.1:8200",
-"user" = "root",
-"password" = "root",
-"index" = "doe"
-}
-```
-
-Note:
-
-1. Filter conditions on `_id` only support `=` and `in`
-2. The `_id` field can only be of type `varchar`
-
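-A point lookup on `_id` then looks like this (a sketch; the id values are illustrative):
-
-```
-select * from doe where _id in ('id1', 'id2');
-```
-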
-## Q&A
-
-1. ES version requirements of Doris On ES
-
-   The ES major version must be greater than 5. ES scans data differently before 2.x and after 5.x; currently only versions after 5.x are supported
-
-2. Are X-Pack authenticated ES clusters supported?
-
-   All ES clusters using HTTP Basic authentication are supported
-
-3. Some queries are much slower than issuing them to ES directly
-
-   Yes. For example, for _count-style queries, ES directly reads the metadata about the number of matching documents instead of filtering the real data
-
-4. Can aggregations be pushed down?
-
-   Doris On ES currently does not push down aggregations such as sum, avg, and min/max. Instead, all matching documents are streamed from ES in batches and the computation is performed in Doris
diff --git a/docs/zh-CN/docs/lakehouse/external-table/hive.md b/docs/zh-CN/docs/lakehouse/external-table/hive.md
deleted file mode 100644
index 7fd690f879..0000000000
--- a/docs/zh-CN/docs/lakehouse/external-table/hive.md
+++ /dev/null
@@ -1,209 +0,0 @@
----
-{
-    "title": "Hive 外表",
-    "language": "zh-CN"
-}
----
-
-<!-- 
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Hive External Table
-
-<version deprecated="1.2.0">
-
-Please use [Hive Catalog](../multi-catalog/hive.md) to access Hive; this feature is no longer maintained after version 1.2.0.
-
-</version>
-
-Hive External Table of Doris lets Doris access Hive external tables directly. It eliminates tedious data import work and leverages Doris's own OLAP capabilities to analyze the data in Hive tables:
-
-1. Supports connecting Hive data sources to Doris
-2. Supports joint queries between Doris and tables in Hive data sources for more complex analysis
-3. Supports accessing Hive data sources with kerberos enabled
-4. Supports accessing Hive data sources whose data is stored on Tencent CHDFS
-
-This document describes how to use this feature and the points to note.
-
-## Terminology
-
-### Doris-Related
-
-* FE: Frontend, the frontend node of Doris, responsible for metadata management and request handling
-* BE: Backend, the backend node of Doris, responsible for query execution and data storage
-
-## Usage
-
-### Creating a Hive External Table in Doris
-
-```sql
--- Syntax
-CREATE [EXTERNAL] TABLE table_name (
-  col_name col_type [NULL | NOT NULL] [COMMENT "comment"]
-) ENGINE=HIVE
-[COMMENT "comment"]
-PROPERTIES (
-  'property_name'='property_value',
-  ...
-);
-
--- Example 1: create table hive_table in database hive_db of a Hive cluster
-CREATE TABLE `t_hive` (
-  `k1` int NOT NULL COMMENT "",
-  `k2` char(10) NOT NULL COMMENT "",
-  `k3` datetime NOT NULL COMMENT "",
-  `k5` varchar(20) NOT NULL COMMENT "",
-  `k6` double NOT NULL COMMENT ""
-) ENGINE=HIVE
-COMMENT "HIVE"
-PROPERTIES (
-'hive.metastore.uris' = 'thrift://192.168.0.1:9083',
-'database' = 'hive_db',
-'table' = 'hive_table'
-);
-
--- Example 2: create table hive_table in database hive_db of a Hive cluster, with HDFS HA configured
-CREATE TABLE `t_hive` (
-  `k1` int NOT NULL COMMENT "",
-  `k2` char(10) NOT NULL COMMENT "",
-  `k3` datetime NOT NULL COMMENT "",
-  `k5` varchar(20) NOT NULL COMMENT "",
-  `k6` double NOT NULL COMMENT ""
-) ENGINE=HIVE
-COMMENT "HIVE"
-PROPERTIES (
-'hive.metastore.uris' = 'thrift://192.168.0.1:9083',
-'database' = 'hive_db',
-'table' = 'hive_table',
-'dfs.nameservices'='hacluster',
-'dfs.ha.namenodes.hacluster'='n1,n2',
-'dfs.namenode.rpc-address.hacluster.n1'='192.168.0.1:8020',
-'dfs.namenode.rpc-address.hacluster.n2'='192.168.0.2:8020',
-'dfs.client.failover.proxy.provider.hacluster'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
-);
-
--- Example 3: create table hive_table in database hive_db of a Hive cluster, with HDFS HA configured and kerberos authentication enabled
-CREATE TABLE `t_hive` (
-  `k1` int NOT NULL COMMENT "",
-  `k2` char(10) NOT NULL COMMENT "",
-  `k3` datetime NOT NULL COMMENT "",
-  `k5` varchar(20) NOT NULL COMMENT "",
-  `k6` double NOT NULL COMMENT ""
-) ENGINE=HIVE
-COMMENT "HIVE"
-PROPERTIES (
-'hive.metastore.uris' = 'thrift://192.168.0.1:9083',
-'database' = 'hive_db',
-'table' = 'hive_table',
-'dfs.nameservices'='hacluster',
-'dfs.ha.namenodes.hacluster'='n1,n2',
-'dfs.namenode.rpc-address.hacluster.n1'='192.168.0.1:8020',
-'dfs.namenode.rpc-address.hacluster.n2'='192.168.0.2:8020',
-'dfs.client.failover.proxy.provider.hacluster'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
-'dfs.namenode.kerberos.principal'='hadoop/_HOST@REALM.COM',
-'hadoop.security.authentication'='kerberos',
-'hadoop.kerberos.principal'='doris_test@REALM.COM',
-'hadoop.kerberos.keytab'='/path/to/doris_test.keytab'
-);
-
--- Example 4: create table hive_table in database hive_db of a Hive cluster, with Hive data stored on S3
-CREATE TABLE `t_hive` (
-  `k1` int NOT NULL COMMENT "",
-  `k2` char(10) NOT NULL COMMENT "",
-  `k3` datetime NOT NULL COMMENT "",
-  `k5` varchar(20) NOT NULL COMMENT "",
-  `k6` double NOT NULL COMMENT ""
-) ENGINE=HIVE
-COMMENT "HIVE"
-PROPERTIES (
-'hive.metastore.uris' = 'thrift://192.168.0.1:9083',
-'database' = 'hive_db',
-'table' = 'hive_table',
-'AWS_ACCESS_KEY' = 'your_access_key',
-'AWS_SECRET_KEY' = 'your_secret_key',
-'AWS_ENDPOINT' = 's3.us-east-1.amazonaws.com',
-'AWS_REGION' = 'us-east-1'
-);
-
-```
-
-#### Parameter Description:
-
-- External table columns
-    - Column names must correspond one-to-one to those of the Hive table
-    - The column order must match the Hive table
-    - All columns of the Hive table must be included
-    - Hive partition columns need no special handling; define them like ordinary columns.
-- ENGINE must be specified as HIVE
-- PROPERTIES:
-    - `hive.metastore.uris`: Hive Metastore service address
-    - `database`: name of the mounted Hive database
-    - `table`: name of the mounted Hive table
-    - `hadoop.username`: username for accessing hdfs, required when authentication is simple
-    - `dfs.nameservices`: name service name, consistent with hdfs-site.xml
-    - `dfs.ha.namenodes.[nameservice ID]`: list of namenode ids, consistent with hdfs-site.xml
-    - `dfs.namenode.rpc-address.[nameservice ID].[name node ID]`: rpc address of each namenode, as many as there are namenodes, consistent with hdfs-site.xml
-    - `dfs.client.failover.proxy.provider.[nameservice ID]`: Java class the HDFS client uses to connect to the active namenode, usually "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
-
-- To access a Hive data source with kerberos enabled, additionally configure the following PROPERTIES for the Hive external table:
-    - `hadoop.security.authentication`: set the authentication method to kerberos; the default is simple
-    - `dfs.namenode.kerberos.principal`: Kerberos principal of the HDFS namenode service
-    - `hadoop.kerberos.principal`: Kerberos principal Doris uses when connecting to HDFS
-    - `hadoop.kerberos.keytab`: local file path of the keytab
-
-- To access a Hive data source whose data is stored on S3 (Example 4), configure the following PROPERTIES:
-    - `AWS_ACCESS_KEY`: access key id of the AWS account.
-    - `AWS_SECRET_KEY`: secret access key of the AWS account.
-    - `AWS_ENDPOINT`: S3 endpoint, e.g. s3.us-east-1.amazonaws.com
-    - `AWS_REGION`: AWS region, e.g. us-east-1
-
-**Note:**
-- For Doris to access a hadoop cluster with kerberos authentication enabled, the Kerberos client kinit must be deployed on all running nodes of the Doris cluster, and krb5.conf must be configured with the KDC service information, etc.
-- The value of the PROPERTIES entry `hadoop.kerberos.keytab` must be the absolute path of the local keytab file, and the Doris process must be allowed to access that file.
-- The HDFS cluster configuration can also be written into the hdfs-site.xml file under the conf directories of the FE and BE; users then do not need to fill in the HDFS cluster configuration when creating Hive tables.
-
-## Type Mapping
-
-The supported Hive column types map to Doris types as follows:
-
-|  Hive  | Doris  |             Description              |
-| :------: | :----: | :-------------------------------: |
-|   BOOLEAN  | BOOLEAN  |                         |
-|   CHAR   |  CHAR  |            only UTF8 encoding is currently supported            |
-|   VARCHAR | VARCHAR |       only UTF8 encoding is currently supported       |
-|   TINYINT   | TINYINT |  |
-|   SMALLINT  | SMALLINT |  |
-|   INT  | INT |  |
-|   BIGINT  | BIGINT |  |
-|   FLOAT   |  FLOAT  |                                   |
-|   DOUBLE  | DOUBLE |  |
-|   DECIMAL  | DECIMAL |  |
-|   DATE   |  DATE  |                                   |
-|   TIMESTAMP  | DATETIME | Converting Timestamp to Datetime loses precision |
-
-**Note:**
-- Hive table schema changes are **not synchronized automatically**; the Hive external table must be recreated in Doris.
-- Currently only the Text, Parquet, and ORC Hive storage formats are supported
-- The Hive versions supported by default are `2.3.7, 3.1.2`; other versions have not been tested. More versions will be supported later.
-
-### Query Usage
-
-Once a Hive external table is created in Doris, it behaves no differently from a regular Doris OLAP table, except that the Doris data models (rollup, pre-aggregation, materialized views, etc.) cannot be used
-
-```sql
-select * from t_hive where k1 > 1000 and k3 ='term' or k4 like '%doris';
-```
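-
-Joint queries with local Doris tables are also supported. A federated join might look like this (a sketch; `doris_tbl` is a hypothetical local Doris table with a matching `k1` column):
-
-```sql
-select h.k1, h.k2, d.v1
-from t_hive h join doris_tbl d on h.k1 = d.k1
-where h.k6 > 0;
-```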
diff --git a/docs/zh-CN/docs/lakehouse/external-table/jdbc.md b/docs/zh-CN/docs/lakehouse/external-table/jdbc.md
deleted file mode 100644
index 669d9d22e0..0000000000
--- a/docs/zh-CN/docs/lakehouse/external-table/jdbc.md
+++ /dev/null
@@ -1,520 +0,0 @@
----
-{
-    "title": "JDBC 外表",
-    "language": "zh-CN"
-}
----
-
-<!-- 
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# JDBC External Table
-
-<version deprecated="1.2.2">
-
-Please use [JDBC Catalog](../multi-catalog/jdbc.md) to access JDBC external tables; this feature is no longer maintained after version 1.2.2.
-
-</version>
-
-<version since="1.2.0">
-
-JDBC External Table Of Doris lets Doris access external tables through the standard database access interface (JDBC). It eliminates tedious data import work, gives Doris the ability to access all kinds of databases, and leverages Doris's own OLAP capabilities to analyze the data in external tables:
-
-1. Supports connecting various data sources to Doris
-2. Supports joint queries between Doris and tables in various data sources for more complex analysis
-
-This document describes how to use this feature.
-
-</version>
-
-### Creating a JDBC External Table in Doris
-
-For the exact table creation syntax, see: [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
-
-#### 1. Creating a JDBC External Table via JDBC_Resource
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_resource
-properties (
-    "type"="jdbc",
-    "user"="root",
-    "password"="123456",
-    "jdbc_url"="jdbc:mysql://192.168.0.1:3306/test?useCursorFetch=true",
-    "driver_url"="http://IP:port/mysql-connector-java-5.1.47.jar",
-    "driver_class"="com.mysql.jdbc.Driver"
-);
-     
-CREATE EXTERNAL TABLE `baseall_mysql` (
-  `k1` tinyint(4) NULL,
-  `k2` smallint(6) NULL,
-  `k3` int(11) NULL,
-  `k4` bigint(20) NULL,
-  `k5` decimal(9, 3) NULL
-) ENGINE=JDBC
-PROPERTIES (
-"resource" = "jdbc_resource",
-"table" = "baseall",
-"table_type"="mysql"
-);
-```
-
-Parameter description:
-
-| Parameter        | Description  |
-| ---------------- | ------------ |
-| **type**         | "jdbc", required, marks the resource type  |
-| **user**         | Username for accessing the external database |
-| **password**     | Password of that user |
-| **jdbc_url**     | JDBC URL, including the database type, IP address, port, and database name; the format differs per database. For example, mysql: "jdbc:mysql://127.0.0.1:3306/test?useCursorFetch=true". |
-| **driver_class** | Class name of the driver for the external database, e.g. for mysql: com.mysql.jdbc.Driver. |
-| **driver_url**   | URL for downloading the driver jar for the external database, e.g. http://IP:port/mysql-connector-java-5.1.47.jar. For local single-machine testing, the jar can be placed on a local path, "driver_url"="file:///home/disk1/pathTo/mysql-connector-java-5.1.47.jar"; with multiple machines, the path must be exactly the same on every machine. |
-| **resource**     | Name of the resource the external table depends on, matching the name used when the resource was created. |
-| **table**        | Name of the mapped table in the external database. |
-| **table_type**   | Which kind of database the table comes from, e.g. mysql, postgresql, sqlserver, oracle |
-
-> **Note:**
->
-> With the local path approach, the driver jar must be placed on both the FE and BE nodes
-
-<version since="1.2.1">
-
-> In versions 1.2.1 and later, the driver can be placed in the `jdbc_drivers` directory of the FE/BE and referenced by file name alone, e.g. `"driver_url" = "mysql-connector-java-5.1.47.jar"`. The system automatically looks for the file in the `jdbc_drivers` directory.
-
-</version>
-
-### Query Usage
-
-```
-select * from mysql_table where k1 > 1000 and k3 ='term';
-```
-Because column names may collide with the underlying database's reserved keywords, Doris automatically quotes column and table names in the SQL statement according to each database's standard so that such queries still work correctly, e.g. MYSQL(``), PostgreSQL(""), SQLServer([]), ORACLE(""). As a result, column names may become case-sensitive; run explain on the SQL to inspect the escaped query that is actually sent to each database.
-
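-To see what is sent after escaping (a sketch using the table above):
-
-```
-explain select * from mysql_table where k1 > 1000 and k3 = 'term';
-```
-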
-### Writing Data
-
-After a JDBC external table is created in Doris, you can write data directly with an insert into statement, write the result of a Doris query into the JDBC external table, or import data from one JDBC external table into another.
-
-
-```
-insert into mysql_table values(1, "doris");
-insert into mysql_table select * from table;
-```
-#### Transactions
-
-Doris writes data to external tables in batches; if an import is interrupted midway, the data already written may need to be rolled back. JDBC external tables therefore support transactions for data writes, enabled via the session variable `enable_odbc_transcation` (ODBC transactions are controlled by the same variable).
-
-```
-set enable_odbc_transcation = true; 
-```
-
-Transactions guarantee the atomicity of writes to JDBC external tables, but they reduce write performance to some extent, so weigh whether to enable this feature.
-
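-A complete transactional write could then look like this (a sketch; `doris_tbl` is a hypothetical local Doris table):
-
-```
-set enable_odbc_transcation = true;
-insert into mysql_table select * from doris_tbl;
-```
-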
-#### 1. MySQL Test
-
-| MySQL Version | MySQL JDBC Driver Version       |
-| --------- | ------------------------------- |
-| 8.0.30    | mysql-connector-java-5.1.47.jar |
-
-#### 2. PostgreSQL Test
-| PostgreSQL Version | PostgreSQL JDBC Driver Version |
-| -------------- | ----------------------- |
-| 14.5           | postgresql-42.5.0.jar   |
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_pg
-properties (
-    "type"="jdbc",
-    "user"="postgres",
-    "password"="123456",
-    "jdbc_url"="jdbc:postgresql://127.0.0.1:5442/postgres?currentSchema=doris_test",
-    "driver_url"="http://127.0.0.1:8881/postgresql-42.5.0.jar",
-    "driver_class"="org.postgresql.Driver"
-);
-
-CREATE EXTERNAL TABLE `ext_pg` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_pg",
-    "table" = "pg_tbl",
-    "table_type"="postgresql"
-);
-```
-
-#### 3. SQLServer Test
-| SQLServer Version | SQLServer JDBC Driver Version |
-| ------------- | -------------------------- |
-| 2022          | mssql-jdbc-11.2.0.jre8.jar |
-
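-A minimal Resource and table sketch for SQL Server, assuming the driver above; host, credentials, database, and table names are illustrative:
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_sqlserver
-properties (
-    "type"="jdbc",
-    "user"="SA",
-    "password"="Doris123456",
-    "jdbc_url"="jdbc:sqlserver://localhost:1433;DataBaseName=doris_test",
-    "driver_url"="file:///path/to/mssql-jdbc-11.2.0.jre8.jar",
-    "driver_class"="com.microsoft.sqlserver.jdbc.SQLServerDriver"
-);
-
-CREATE EXTERNAL TABLE `ext_sqlserver` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_sqlserver",
-    "table" = "dbo.sqlserver_tbl",
-    "table_type"="sqlserver"
-);
-```
-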
-#### 4. Oracle Test
-| Oracle Version | Oracle JDBC Driver Version |
-| ---------- | ------------------- |
-| 11         | ojdbc6.jar          |
-
-Only this version has been tested so far; results for other versions will be added after testing
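-
-A minimal Resource and table sketch for Oracle, assuming the driver above; the SID, credentials, and schema/table names are illustrative:
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_oracle
-properties (
-    "type"="jdbc",
-    "user"="doris_test",
-    "password"="123456",
-    "jdbc_url" = "jdbc:oracle:thin:@127.0.0.1:1521:XE",
-    "driver_url" = "file:///path/to/ojdbc6.jar",
-    "driver_class" = "oracle.jdbc.driver.OracleDriver"
-);
-
-CREATE EXTERNAL TABLE `ext_oracle` (
-  `k1` decimal(9, 3)
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_oracle",
-    "table" = "DORIS_TEST.BASEALL",
-    "table_type"="oracle"
-);
-```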
-
-#### 5. ClickHouse Test
-| ClickHouse Version | ClickHouse JDBC Driver Version        |
-|--------------|---------------------------------------|
-| 22           | clickhouse-jdbc-0.3.2-patch11-all.jar |
-| 22           | clickhouse-jdbc-0.4.1-all.jar         |
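-
-A minimal Resource and table sketch for ClickHouse, assuming the 0.4.1 driver above; host, database, and table names are illustrative:
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_clickhouse
-properties (
-    "type"="jdbc",
-    "user"="default",
-    "password"="",
-    "jdbc_url" = "jdbc:clickhouse://127.0.0.1:8123/default",
-    "driver_url" = "file:///path/to/clickhouse-jdbc-0.4.1-all.jar",
-    "driver_class" = "com.clickhouse.jdbc.ClickHouseDriver"
-);
-
-CREATE EXTERNAL TABLE `ext_clickhouse` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_clickhouse",
-    "table" = "ck_tbl",
-    "table_type"="clickhouse"
-);
-```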
-
-#### 6. SAP HANA Test
-
-| SAP HANA Version | SAP HANA JDBC Driver Version |
-|------------|-------------------|
-| 2.0        | ngdbc.jar         |
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_hana
-properties (
-    "type"="jdbc",
-    "user"="SYSTEM",
-    "password"="SAPHANA",
-    "jdbc_url" = "jdbc:sap://localhost:31515/TEST",
-    "driver_url" = "file:///path/to/ngdbc.jar",
-    "driver_class" = "com.sap.db.jdbc.Driver"
-);
-
-CREATE EXTERNAL TABLE `ext_hana` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_hana",
-    "table" = "TEST.HANA",
-    "table_type"="sap_hana"
-);
-```
-
-#### 7. Trino Test
-
-| Trino Version | Trino JDBC Driver Version |
-|----------|--------------------|
-| 389      | trino-jdbc-389.jar |
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_trino
-properties (
-    "type"="jdbc",
-    "user"="hadoop",
-    "password"="",
-    "jdbc_url" = "jdbc:trino://localhost:8080/hive",
-    "driver_url" = "file:///path/to/trino-jdbc-389.jar",
-    "driver_class" = "io.trino.jdbc.TrinoDriver"
-);
-
-CREATE EXTERNAL TABLE `ext_trino` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_trino",
-    "table" = "hive.test",
-    "table_type"="trino"
-);
-```
-
-**Note:**
-<version since="dev" type="inline"> Connecting with the Presto JDBC Driver is also supported </version>
-
-#### 8. OceanBase Test
-
-| OceanBase Version | OceanBase JDBC Driver Version |
-|--------------|--------------------|
-| 3.2.3        | oceanbase-client-2.4.2.jar |
-
-```sql
-CREATE EXTERNAL RESOURCE jdbc_oceanbase
-properties (
-    "type"="jdbc",
-    "user"="root",
-    "password"="",
-    "jdbc_url" = "jdbc:oceanbase://localhost:2881/test",
-    "driver_url" = "file:///path/to/oceanbase-client-2.4.2.jar",
-    "driver_class" = "com.oceanbase.jdbc.Driver"
-);
-
--- MySQL mode
-CREATE EXTERNAL TABLE `ext_oceanbase` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_oceanbase",
-    "table" = "test.test",
-    "table_type"="oceanbase"
-);
-
--- Oracle mode
-CREATE EXTERNAL TABLE `ext_oceanbase` (
-  `k1` int
-) ENGINE=JDBC
-PROPERTIES (
-    "resource" = "jdbc_oceanbase",
-    "table" = "test.test",
-    "table_type"="oceanbase_oracle"
-);
-```
-
-#### 9. Nebula Graph Test (query only)
-| Nebula Version | JDBC Driver Version |
-|------------|-------------------|
-| 3.0.0       | nebula-jdbc-3.0.0-jar-with-dependencies.jar         |
-```
-# Step 1. Create test data in nebula
-# 1.1 Create a tag
-(root@nebula) [basketballplayer]> CREATE TAG test_type(t_str string, 
-    t_int int, 
-    t_date date,
-    t_datetime datetime,
-    t_bool bool,
-    t_timestamp timestamp,
-    t_float float,
-    t_double double
-);
-# 1.2 Insert data
-(root@nebula) [basketballplayer]> INSERT VERTEX test_type(t_str,t_int,t_date,t_datetime,t_bool,t_timestamp,t_float,t_double) values "zhangshan":("zhangshan",1000,date("2023-01-01"),datetime("2023-01-23 15:23:32"),true,1234242423,1.2,1.35);
-# 1.3 Query data
-(root@nebula) [basketballplayer]> match (v:test_type) where id(v)=="zhangshan" return v.test_type.t_str,v.test_type.t_int,v.test_type.t_date,v.test_type.t_datetime,v.test_type.t_bool,v.test_type.t_timestamp,v.test_type.t_float,v.test_type.t_double,v limit 30;
-+-------------------+-------------------+--------------------+----------------------------+--------------------+-------------------------+---------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| v.test_type.t_str | v.test_type.t_int | v.test_type.t_date | v.test_type.t_datetime     | v.test_type.t_bool | v.test_type.t_timestamp | v.test_type.t_float | v.test_type.t_double | v                                                                                                                                                                                                         |
-+-------------------+-------------------+--------------------+----------------------------+--------------------+-------------------------+---------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| "zhangshan"       | 1000              | 2023-01-01         | 2023-01-23T15:23:32.000000 | true               | 1234242423              | 1.2000000476837158  | 1.35                 | ("zhangshan" :test_type{t_bool: true, t_date: 2023-01-01, t_datetime: 2023-01-23T15:23:32.000000, t_double: 1.35, t_float: 1.2000000476837158, t_int: 1000, t_str: "zhangshan", t_timestamp: 1234242423}) |
-+-------------------+-------------------+--------------------+----------------------------+--------------------+-------------------------+---------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-Got 1 rows (time spent 1616/2048 us)
-Mon, 17 Apr 2023 17:23:14 CST
-# Step 2. Create the external table in doris
-# 2.1 Create a resource
-MySQL [test_db]> CREATE EXTERNAL RESOURCE gg_jdbc_resource 
-properties (
-   "type"="jdbc",
-   "user"="root",
-   "password"="123",
-   "jdbc_url"="jdbc:nebula://127.0.0.1:9669/basketballplayer",
-   "driver_url"="file:///home/clz/baidu/bdg/doris/be/lib/nebula-jdbc-3.0.0-jar-with-dependencies.jar",  --仅支持本地路径,需放到be/lib目录下--
-   "driver_class"="com.vesoft.nebula.jdbc.NebulaDriver"
-);
-# 2.2 Create the external table; it mainly tells doris how to parse the data returned by nebulagraph
-MySQL [test_db]> CREATE TABLE `test_type` ( 
- `t_str` varchar(64),
- `t_int` bigint,
- `t_date` date,
- `t_datetime` datetime,
- `t_bool` boolean,
- `t_timestamp` bigint,
- `t_float` double,
- `t_double` double,
- `t_vertx`  varchar(128) -- the doris type for vertex is varchar --
-) ENGINE=JDBC
-PROPERTIES (
-"resource" = "gg_jdbc_resource",
-"table" = "xx",  --因为graph没有表的概念,这里随便填一个值--
-"table_type"="nebula"
-);
-# 2.3 Query the graph external table; the g() function passes the graph nGQL through to nebula
-MySQL [test_db]> select * from test_type where g('match (v:test_type) where id(v)=="zhangshan" return v.test_type.t_str,v.test_type.t_int,v.test_type.t_date,v.test_type.t_datetime,v.test_type.t_bool,v.test_type.t_timestamp,v.test_type.t_float,v.test_type.t_double,v')\G;
-*************************** 1. row ***************************
-      t_str: zhangshan
-      t_int: 1000
-     t_date: 2023-01-01
- t_datetime: 2023-01-23 15:23:32
-     t_bool: 1
-t_timestamp: 1234242423
-    t_float: 1.2000000476837158
-   t_double: 1.35
-    t_vertx: ("zhangshan" :test_type {t_datetime: utc datetime: 2023-01-23T15:23:32.000000, timezoneOffset: 0, t_timestamp: 1234242423, t_date: 2023-01-01, t_double: 1.35, t_str: "zhangshan", t_int: 1000, t_bool: true, t_float: 1.2000000476837158})
-1 row in set (0.024 sec)
-# 2.4 Join with other doris tables
-# Suppose there is a user table
-MySQL [test_db]> select * from t_user;
-+-----------+------+---------------------------------+
-| username  | age  | addr                            |
-+-----------+------+---------------------------------+
-| zhangshan |   26 | 北京市西二旗街道1008号          |
-| lisi      |   29 | 北京市西二旗街道1007号          |
-+-----------+------+---------------------------------+
-2 rows in set (0.013 sec)
-# Join with this user table to query user-related information
-MySQL [test_db]> select u.* from (select t_str username  from test_type where g('match (v:test_type) where id(v)=="zhangshan" return v.test_type.t_str limit 1')) g left join t_user u on g.username=u.username;
-+-----------+------+---------------------------------+
-| username  | age  | addr                            |
-+-----------+------+---------------------------------+
-| zhangshan |   26 | 北京市西二旗街道1008号          |
-+-----------+------+---------------------------------+
-1 row in set (0.029 sec)
-```
-
-
-> **Note:**
->
-> When creating an OceanBase external table, you only need to specify the `oceanbase_mode` parameter when creating the Resource; the external table's table_type is oceanbase.
-
-## Type Mapping
-
-Data types differ across databases; the tables below list how the types in each database map to Doris data types.
-
-### MySQL
-
-|  MySQL   |  Doris   |
-| :------: | :------: |
-| BOOLEAN  | BOOLEAN  |
-| BIT(1)   | BOOLEAN  |
-| TINYINT  | TINYINT  |
-| SMALLINT | SMALLINT |
-|   INT    |   INT    |
-|  BIGINT  |  BIGINT  |
-|BIGINT UNSIGNED|LARGEINT|
-| VARCHAR  | VARCHAR  |
-|   DATE   |   DATE   |
-|  FLOAT   |  FLOAT   |
-| DATETIME | DATETIME |
-|  DOUBLE  |  DOUBLE  |
-| DECIMAL  | DECIMAL  |
-
-
-### PostgreSQL
-
-|    PostgreSQL    |  Doris   |
-| :--------------: | :------: |
-|     BOOLEAN      | BOOLEAN  |
-|     SMALLINT     | SMALLINT |
-|       INT        |   INT    |
-|      BIGINT      |  BIGINT  |
-|     VARCHAR      | VARCHAR  |
-|       DATE       |   DATE   |
-|    TIMESTAMP     | DATETIME |
-|       REAL       |  FLOAT   |
-|      FLOAT       |  DOUBLE  |
-|     DECIMAL      | DECIMAL  |
-
-### Oracle
-
-|  Oracle  |  Doris   |
-| :------: | :------: |
-| VARCHAR  | VARCHAR  |
-|   DATE   | DATETIME |
-| SMALLINT | SMALLINT |
-|   INT    |   INT    |
-|   REAL   |   DOUBLE |
-|   FLOAT  |   DOUBLE |
-|  NUMBER  | DECIMAL  |
-
-
-### SQL server
-
-| SQLServer |  Doris   |
-| :-------: | :------: |
-|    BIT    | BOOLEAN  |
-|  TINYINT  | TINYINT  |
-| SMALLINT  | SMALLINT |
-|    INT    |   INT    |
-|  BIGINT   |  BIGINT  |
-|  VARCHAR  | VARCHAR  |
-|   DATE    |   DATE   |
-| DATETIME  | DATETIME |
-|   REAL    |  FLOAT   |
-|   FLOAT   |  DOUBLE  |
-|  DECIMAL  | DECIMAL  |
-
-### ClickHouse
-
-|                        ClickHouse                        |          Doris           |
-|:--------------------------------------------------------:|:------------------------:|
-|                         Boolean                          |         BOOLEAN          |
-|                          String                          |          STRING          |
-|                       Date/Date32                        |       DATE/DATEV2        |
-|                   DateTime/DateTime64                    |   DATETIME/DATETIMEV2    |
-|                         Float32                          |          FLOAT           |
-|                         Float64                          |          DOUBLE          |
-|                           Int8                           |         TINYINT          |
-|                       Int16/UInt8                        |         SMALLINT         |
-|                       Int32/UInt16                       |           INT            |
-|                       Int64/Uint32                       |          BIGINT          |
-|                      Int128/UInt64                       |         LARGEINT         |
-|                  Int256/UInt128/UInt256                  |          STRING          |
-|                         Decimal                          | DECIMAL/DECIMALV3/STRING |
-|                   Enum/IPv4/IPv6/UUID                    |          STRING          |
-| <version since="dev" type="inline"> Array(T)  </version> |        ARRAY\<T\>        |
-
-**Note:**
-
-- <version since="dev" type="inline"> The ClickHouse Array type can be mapped to the Doris Array type; the base types inside the Array follow the base type mapping rules, and nested Arrays are not supported </version>
-- Some special ClickHouse types such as UUID, IPv4, IPv6, and Enum8 can be mapped to the Doris Varchar/String types, but IPv4 and IPv6 are displayed with an extra leading `/` in the data, which you need to strip yourself with the `split_part` function
-- The ClickHouse Geo type Point cannot be mapped
-
-### SAP HANA
-
-|   SAP_HANA   |        Doris        |
-|:------------:|:-------------------:|
-|   BOOLEAN    |       BOOLEAN       |
-|   TINYINT    |       TINYINT       |
-|   SMALLINT   |      SMALLINT       |
-|   INTEGER    |         INT         |
-|    BIGINT    |       BIGINT        |
-| SMALLDECIMAL |  DECIMAL/DECIMALV3  |
-|   DECIMAL    |  DECIMAL/DECIMALV3  |
-|     REAL     |        FLOAT        |
-|    DOUBLE    |       DOUBLE        |
-|     DATE     |     DATE/DATEV2     |
-|     TIME     |        TEXT         |
-|  TIMESTAMP   | DATETIME/DATETIMEV2 |
-|  SECONDDATE  | DATETIME/DATETIMEV2 |
-|   VARCHAR    |        TEXT         |
-|   NVARCHAR   |        TEXT         |
-|   ALPHANUM   |        TEXT         |
-|  SHORTTEXT   |        TEXT         |
-|     CHAR     |        CHAR         |
-|    NCHAR     |        CHAR         |
-
-### Trino
-
-|   Trino   |        Doris        |
-|:---------:|:-------------------:|
-|  boolean  |       BOOLEAN       |
-|  tinyint  |       TINYINT       |
-| smallint  |      SMALLINT       |
-|  integer  |         INT         |
-|  bigint   |       BIGINT        |
-|  decimal  |  DECIMAL/DECIMALV3  |
-|   real    |        FLOAT        |
-|  double   |       DOUBLE        |
-|   date    |     DATE/DATEV2     |
-| timestamp | DATETIME/DATETIMEV2 |
-|  varchar  |        TEXT         |
-|   char    |        CHAR         |
-|   array   |        ARRAY        |
-|  others   |     UNSUPPORTED     |
-
-### OceanBase
-
-For MySQL mode, see the [MySQL type mapping](#MySQL)
-For Oracle mode, see the [Oracle type mapping](#Oracle)
-
-### Nebula Graph
-|   Nebula   |        Doris        |
-|:------------:|:-------------------:|
-|   tinyint/smallint/int/int64    |       bigint       |
-|   double/float    |       double       |
-|   date   |      date       |
-|   timestamp   |         bigint         |
-|    datetime    |       datetime        |
-| bool |  boolean  |
-|   vertex/edge/path/list/set/time, etc.    |  varchar  |
-
-## Q&A
-
-See the FAQ section in [JDBC Catalog](../multi-catalog/jdbc.md).
diff --git a/docs/zh-CN/docs/lakehouse/external-table/odbc.md b/docs/zh-CN/docs/lakehouse/external-table/odbc.md
deleted file mode 100644
index 63618337e7..0000000000
--- a/docs/zh-CN/docs/lakehouse/external-table/odbc.md
+++ /dev/null
@@ -1,406 +0,0 @@
----
-{
-    "title": "ODBC 外表",
-    "language": "zh-CN"
-}
----
-
-<!-- 
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# ODBC External Table
-
-<version deprecated="1.2.0">
-
-Please use the [JDBC Catalog](../multi-catalog/jdbc.md) feature to access external tables; this feature is no longer maintained after version 1.2.0.
-
-</version>
-
-ODBC External Table Of Doris lets Doris access external tables through the standard database access interface (ODBC). It eliminates tedious data import work, gives Doris the ability to access all kinds of databases, and leverages Doris's own OLAP capabilities to analyze the data in external tables:
-
-1. Supports connecting various data sources to Doris
-2. Supports joint queries between Doris and tables in various data sources for more complex analysis
-3. Supports writing the results of Doris queries into external data sources via insert into
-
-This document describes how this feature works and how to use it.
-
-## Terminology
-
-### Doris-Related
-* FE: Frontend, the frontend node of Doris, responsible for metadata management and request handling
-* BE: Backend, the backend node of Doris, responsible for query execution and data storage
-
-## Usage
-
-### Creating an ODBC External Table in Doris
-
-For the exact table creation syntax, see: [CREATE TABLE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
-
-#### 1. Creating an ODBC External Table Without a Resource
-
-```
-CREATE EXTERNAL TABLE `baseall_oracle` (
-  `k1` decimal(9, 3) NOT NULL COMMENT "",
-  `k2` char(10) NOT NULL COMMENT "",
-  `k3` datetime NOT NULL COMMENT "",
-  `k5` varchar(20) NOT NULL COMMENT "",
-  `k6` double NOT NULL COMMENT ""
-) ENGINE=ODBC
-COMMENT "ODBC"
-PROPERTIES (
-"host" = "192.168.0.1",
-"port" = "8086",
-"user" = "test",
-"password" = "test",
-"database" = "test",
-"table" = "baseall",
-"driver" = "Oracle 19 ODBC driver",
-"odbc_type" = "oracle"
-);
-```
-
-#### 2. Creating an ODBC External Table via ODBC_Resource (recommended)
-```sql
-CREATE EXTERNAL RESOURCE `oracle_odbc`
-PROPERTIES (
-"type" = "odbc_catalog",
-"host" = "192.168.0.1",
-"port" = "8086",
-"user" = "test",
-"password" = "test",
-"database" = "test",
-"odbc_type" = "oracle",
-"driver" = "Oracle 19 ODBC driver"
-);
-     
-CREATE EXTERNAL TABLE `baseall_oracle` (
-  `k1` decimal(9, 3) NOT NULL COMMENT "",
-  `k2` char(10) NOT NULL COMMENT "",
-  `k3` datetime NOT NULL COMMENT "",
-  `k5` varchar(20) NOT NULL COMMENT "",
-  `k6` double NOT NULL COMMENT ""
-) ENGINE=ODBC
-COMMENT "ODBC"
-PROPERTIES (
-"odbc_catalog_resource" = "oracle_odbc",
-"database" = "test",
-"table" = "baseall"
-);
-```
-Parameter description:
-
-Parameter | Description
----|---
-**host** | IP address of the external database
-**port** | Service port of the external database
-**driver** | Driver name of the ODBC external table, which must match the Driver name in be/conf/odbcinst.ini.
-**odbc_type** | Type of the external database; oracle, mysql, and postgresql are currently supported
-**user** | Username for the external database
-**password** | Password of that user
-**charset** | Character set used for the database connection (has no effect for sqlserver)
-
-Remarks:
-Besides the parameters above, `PROPERTIES` also accepts parameters specific to each database's ODBC driver implementation, such as `sslverify` for mysql or `ClientCharset` for sqlserver
-
->Note:
->
->For SQL Server 2017 and later, security authentication is enabled by default, so you need to add `"TrustServerCertificate"="Yes"` when defining the ODBC Resource
-
-##### Installing and Configuring the ODBC Driver
-
-All major databases provide ODBC drivers; install the corresponding ODBC driver library following each database's officially recommended procedure.
-
-
-After installation, locate the path of that database's driver library and edit the configuration in be/conf/odbcinst.ini:
-```
-[MySQL Driver]
-Description     = ODBC for MySQL
-Driver          = /usr/lib64/libmyodbc8w.so
-FileUsage       = 1 
-```
-* The name inside `[]` in the configuration above is the Driver name; when creating the external table, the table's Driver name must match the one in this configuration file.
-* `Driver=` must be filled in with the path where the Driver is actually installed on the BE; it is essentially the path of a shared library, and all of that library's prerequisite dependencies must be satisfied.
-
-**Remember: all BE nodes must have the same Driver installed, at the same path, with the same be/conf/odbcinst.ini configuration.**
-
-
-### Query Usage
-
-Once an ODBC external table is created in Doris, it behaves no differently from a regular Doris table, except that the Doris data models (rollup, pre-aggregation, materialized views, etc.) cannot be used
-
-
-```
-select * from oracle_table where k1 > 1000 and k3 ='term' or k4 like '%doris';
-```
-
-### Writing Data
-
-After an ODBC external table is created in Doris, you can write data directly with an insert into statement, write the result of a Doris query into the ODBC external table, or import data from one ODBC external table into another.
-
-
-```
-insert into oracle_table values(1, "doris");
-insert into oracle_table select * from postgre_table;
-```
-#### Transactions
-
-Doris writes data to external tables in batches; if an import is interrupted midway, the data already written may need to be rolled back. ODBC external tables therefore support transactions for data writes, enabled via the session variable `enable_odbc_transcation`.
-
-```
-set enable_odbc_transcation = true; 
-```
-
-Transactions guarantee the atomicity of writes to ODBC external tables, but they reduce write performance to some extent, so weigh whether to enable this feature.
-
-## Database and ODBC Version Compatibility
-
-### CentOS
-
-unixODBC version used: 2.3.1, with Doris 0.15 on centos 7.9; everything installed via yum.
-
-#### 1. MySQL
-
-| MySQL Version | MySQL ODBC Version |
-| --------- | -------------- |
-| 8.0.27    | 8.0.27, 8.0.26   |
-| 5.7.36    | 5.3.11, 5.3.13  |
-| 5.6.51    | 5.3.11, 5.3.13  |
-| 5.5.62    | 5.3.11, 5.3.13  |
-
-#### 2. PostgreSQL
-
-The yum repository rpm package address for PostgreSQL:
-
-```
-https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
-```
-
-It contains all PostgreSQL versions from 9.x to 14.x, along with the corresponding ODBC versions; install what you need.
-
-| PostgreSQL Version | PostgreSQL ODBC Version      |
-| -------------- | ---------------------------- |
-| 12.9           | postgresql12-odbc-13.02.0000 |
-| 13.5           | postgresql13-odbc-13.02.0000 |
-| 14.1           | postgresql14-odbc-13.02.0000 |
-| 9.6.24         | postgresql96-odbc-13.02.0000 |
-| 10.6           | postgresql10-odbc-13.02.0000 |
-| 11.6           | postgresql11-odbc-13.02.0000 |
-
-#### 3. Oracle
-
-| Oracle Version                                               | Oracle ODBC Version                        |
-| ------------------------------------------------------------ | ------------------------------------------ |
-| Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production | oracle-instantclient19.13-odbc-19.13.0.0.0 |
-| Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production | oracle-instantclient19.13-odbc-19.13.0.0.0 |
-| Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production | oracle-instantclient19.13-odbc-19.13.0.0.0 |
-| Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production | oracle-instantclient19.13-odbc-19.13.0.0.0 |
-| Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production | oracle-instantclient19.13-odbc-19.13.0.0.0 |
-
-Download addresses for the Oracle ODBC driver versions:
-
-```
-https://download.oracle.com/otn_software/linux/instantclient/1913000/oracle-instantclient19.13-sqlplus-19.13.0.0.0-2.x86_64.rpm
-https://download.oracle.com/otn_software/linux/instantclient/1913000/oracle-instantclient19.13-devel-19.13.0.0.0-2.x86_64.rpm
-https://download.oracle.com/otn_software/linux/instantclient/1913000/oracle-instantclient19.13-odbc-19.13.0.0.0-2.x86_64.rpm
-https://download.oracle.com/otn_software/linux/instantclient/1913000/oracle-instantclient19.13-basic-19.13.0.0.0-2.x86_64.rpm
-```
-
-#### 4. SQLServer
-
-| SQLServer Version | SQLServer ODBC Version |
-| --------- | -------------- |
-| SQL Server 2016 Enterprise | freetds-1.2.21 |
-
-Only this version has been tested so far; results for other versions will be added after testing
-
-### Ubuntu
-
-unixODBC version used: 2.3.4, with Doris 0.15 on Ubuntu 20.04
-
-#### 1. MySQL
-
-| MySQL Version | MySQL ODBC Version |
-| --------- | -------------- |
-| 8.0.27    | 8.0.11, 5.3.13  |
-
-Only this version has been tested so far; results for other versions will be added after testing
-
-#### 2. PostgreSQL
-
-| PostgreSQL Version | PostgreSQL ODBC Version |
-| -------------- | ------------------- |
-| 12.9           | psqlodbc-12.02.0000 |
-
-For other versions, downloading an ODBC driver version matching the database's major version should work fine; test results for other versions on Ubuntu will be added over time.
-
-#### 3. Oracle
-
-The Oracle database and ODBC version mapping is the same as for CentOS above; on ubuntu, install the rpm packages as follows.
-
-To install rpm packages on ubuntu, first install alien, a tool that converts rpm packages into deb packages
-
-```
-sudo apt-get install alien
-```
-
-Then install the four packages above
-
-```
-sudo alien -i  oracle-instantclient19.13-basic-19.13.0.0.0-2.x86_64.rpm
-sudo alien -i  oracle-instantclient19.13-devel-19.13.0.0.0-2.x86_64.rpm
-sudo alien -i  oracle-instantclient19.13-odbc-19.13.0.0.0-2.x86_64.rpm
-sudo alien -i  oracle-instantclient19.13-sqlplus-19.13.0.0.0-2.x86_64.rpm
-```
-
-#### 4. SQLServer
-
-| SQLServer Version | SQLServer ODBC Version |
-| --------- | -------------- |
-| SQL Server 2016 Enterprise | freetds-1.2.21 |
-
-Only this version has been tested so far; results for other versions will be added after testing
-
-
-## Type Mapping
-
-Data types differ across databases; the tables below list how the types in each database map to Doris data types.
-
-### MySQL
-
-|  MySQL  | Doris  |             Workaround              |
-| :------: | :----: | :-------------------------------: |
-|  BOOLEAN  | BOOLEAN  |                         |
-|   CHAR   |  CHAR  |            only UTF8 encoding is currently supported            |
-| VARCHAR | VARCHAR |       only UTF8 encoding is currently supported       |
-|   DATE   |  DATE  |                                   |
-|  FLOAT   |  FLOAT  |                                   |
-|   TINYINT   | TINYINT |  |
-|   SMALLINT  | SMALLINT |  |
-|   INT  | INT |  |
-|   BIGINT  | BIGINT |  |
-|   DOUBLE  | DOUBLE |  |
-|   DATETIME  | DATETIME |  |
-|   DECIMAL  | DECIMAL |  |
-
-### PostgreSQL
-
-|  PostgreSQL  | Doris  |             Workaround              |
-| :------: | :----: | :-------------------------------: |
-|  BOOLEAN  | BOOLEAN  |                         |
-|   CHAR   |  CHAR  |            only UTF8 encoding is currently supported            |
-| VARCHAR | VARCHAR |       only UTF8 encoding is currently supported       |
-|   DATE   |  DATE  |                                   |
-|  REAL   |  FLOAT  |                                   |
-|   SMALLINT  | SMALLINT |  |
-|   INT  | INT |  |
-|   BIGINT  | BIGINT |  |
-|   DOUBLE  | DOUBLE |  |
-|   TIMESTAMP  | DATETIME |  |
-|   DECIMAL  | DECIMAL |  |
-
-### Oracle
-
-|  Oracle  | Doris  |             Workaround              |
-| :------: | :----: | :-------------------------------: |
-|  Not supported | BOOLEAN  |          in Oracle, number(1) can replace boolean               |
-|   CHAR   |  CHAR  |                       |
-| VARCHAR | VARCHAR |              |
-|   DATE   |  DATE  |                                   |
-|  FLOAT   |  FLOAT  |                                   |
-|  None   | TINYINT | can be replaced by NUMBER in Oracle |
-|   SMALLINT  | SMALLINT |  |
-|   INT  | INT |  |
-|   None  | BIGINT |  can be replaced by NUMBER in Oracle |
-|   None  | DOUBLE | can be replaced by NUMBER in Oracle |
-|   DATETIME  | DATETIME |  |
-|   NUMBER  | DECIMAL |  |
-
-### SQLServer
-
-| SQLServer  | Doris  |             Workaround              |
-| :------: | :----: | :-------------------------------: |
-|  BOOLEAN  | BOOLEAN  |                         |
-|   CHAR   |  CHAR  |            only UTF8 encoding is currently supported            |
-| VARCHAR | VARCHAR |       only UTF8 encoding is currently supported       |
-|   DATE   |  DATE  |                                   |
-|  REAL   |  FLOAT  |                                   |
-|   TINYINT   | TINYINT |  |
-|   SMALLINT  | SMALLINT |  |
-|   INT  | INT |  |
-|   BIGINT  | BIGINT |  |
-|   FLOAT  | DOUBLE |  |
-|   DATETIME/DATETIME2  | DATETIME |  |
-|   DECIMAL/NUMERIC | DECIMAL |  |
-
-## Best Practices
-
-Suitable for synchronizing small data volumes
-
-For example, if a table in MySQL has 1 million rows and you want to synchronize it to doris, you can map the data over via ODBC and then use [insert into](../../data-operate/import/import-way/insert-into-manual.md) to synchronize it into Doris, as sketched below. To synchronize a large batch of data, you can run [insert into](../../data-operate/import/import-way/insert-into-manual.md) in batches (not recommended)
-
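-A minimal sketch of this pattern (`doris_olap_tbl` is a hypothetical local Doris table and `mysql_odbc_tbl` a hypothetical mapped ODBC external table):
-
-```
-insert into doris_olap_tbl select * from mysql_odbc_tbl;
-```
-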
-## Q&A
-
-1. Relationship to the old MySQL external table
-
-   Now that ODBC external tables are available, the old way of accessing MySQL external tables will be gradually deprecated. If you have not used MySQL external tables before, it is recommended that newly connected MySQL tables use the ODBC MySQL external table directly.
-
-2. Besides MySQL, Oracle, PostgreSQL, and SQLServer, can more databases be supported?
-
-   Doris currently only adapts MySQL, Oracle, PostgreSQL, and SQLServer; adaptation of other databases is being planned. In principle, any database that supports ODBC access can be reached through an ODBC external table. If you need to access another external table, you are welcome to modify the code and contribute it to Doris.
-
-3. When is access through external tables appropriate?
-
-   Generally, when the external data volume is small, under about 1 million rows, it can be accessed through an external table. Since external tables cannot exploit Doris's storage engine capabilities and incur extra network overhead, decide based on your actual query latency requirements whether to access the data through external tables or import it into Doris.
-
-4. Garbled text when accessing through Oracle
-
-   Try adding the following parameter to the BE startup script: `export NLS_LANG=AMERICAN_AMERICA.AL32UTF8`, then restart all BEs
-
-5. ANSI Driver or Unicode Driver?
-
-   ODBC currently comes in both ANSI and Unicode driver forms; Doris only supports the Unicode Driver. Forcing the ANSI Driver may produce incorrect query results.
-
-6. Error `driver connect Err: 01000 [unixODBC][Driver Manager]Can't open lib 'Xxx' : file not found (0)`
-
-   The corresponding driver is not installed on every BE, the correct path is not configured in be/conf/odbcinst.ini, or the Driver name used at table creation differs from the one in be/conf/odbcinst.ini
-
-7. Error `Fail to convert odbc value 'PALO ' TO INT on column:'A'`
-
-   Type conversion failed for column A of the ODBC external table, which means the data type of the actual column differs from that of the mapped ODBC column; the column type mapping needs to be corrected
-
-8. Program crash when the old MySQL table and the ODBC external table Driver are used at the same time
-
-   This is a compatibility issue between the MySQL database Driver and the MySQL external table that Doris currently depends on. Recommended solutions:
-    * Option 1: replace the old MySQL external tables with ODBC external tables and recompile the BE with the WITH_MYSQL option disabled
-    * Option 2: use the 5.x MySQL ODBC Driver instead of the latest 8.x one
-
-9. Predicate pushdown
-   ODBC external tables currently support predicate pushdown, and MySQL external tables can push down all conditions. For other databases, functions that differ from Doris's can cause pushed-down queries to fail; currently, apart from the MySQL external table, the other databases do not support pushdown of conditions containing function calls. Whether Doris pushes the required filter conditions down can be checked with `explain` on the query statement.
-
-10. Error `driver connect Err: xxx`
-
-    Usually a failure to connect to the database; the Err part carries the connection error reported by the specific database. This usually indicates a configuration problem: check for a wrong ip address, port, or username/password.
-    
-11. Garbled emoji when reading and writing a mysql external table
-
-    When Doris connects to an odbc external table, the default encoding is utf8. Since the default utf8 encoding in mysql is utf8mb3, it cannot represent emoji, which need a 4-byte encoding. Set `charset`=`utf8mb4` when creating the mysql external table, and emoji 😀 can then be read and written normally.
-
-12. Encoding configuration for reading and writing a sqlserver external table
-
-    Since the encoding cannot be configured directly via `charset` for sqlserver odbc connections, use the `ClientCharset` option (for freetds) instead, e.g. "ClientCharset" = "UTF-8".


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@doris.apache.org
For additional commands, e-mail: commits-help@doris.apache.org