Posted to notifications@shardingsphere.apache.org by zh...@apache.org on 2021/10/23 15:43:11 UTC

[shardingsphere] branch master updated: New blogs 1 (#13164)

This is an automated email from the ASF dual-hosted git repository.

zhangliang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/shardingsphere.git


The following commit(s) were added to refs/heads/master by this push:
     new d692e30  New blogs 1 (#13164)
d692e30 is described below

commit d692e3006725b2b8c2014d402bb98aecb7f646dc
Author: Yacine Si Tayeb <86...@users.noreply.github.com>
AuthorDate: Sat Oct 23 23:42:26 2021 +0800

    New blogs 1 (#13164)
    
    * Adding pictures for blog
    
    Adding pics
    
    * Adding markdown files with pics
    
    * Uploading blogs
    
    Uploading new blog
---
 ...re\342\200\231s_Metadata_Loading_Process.en.md" | 276 +++++++++++++++++
 ...iddleware_Ecosystem_Driven_by_Open_Source.en.md |  72 +++++
 ..._Participate_in_Open_Source_Communities_?.en.md | 119 +++++++
 ...inute_Quick_Start_Guide_to_ShardingSphere.en.md | 344 +++++++++++++++++++++
 ...here_Database_Metadata_Structure_Diagram_en.png | Bin 0 -> 73800 bytes
 .../img/Blog_17_img_2_Tang_Guocheng_Photo.png      | Bin 0 -> 391821 bytes
 docs/blog/static/img/Blog_19_img_1_community.png   | Bin 0 -> 532320 bytes
 .../static/img/Blog_19_img_2_Pan_Juan_Photo.jpg    | Bin 0 -> 14688 bytes
 ...mg_1_Popularity_of_Java_versions_in_2020_en.png | Bin 0 -> 182544 bytes
 .../img/Blog_20_img_2_Liang_Longtao_Photo.png      | Bin 0 -> 1131789 bytes
 .../static/img/Blog_20_img_3_Hou_Yang_Photo.png    | Bin 0 -> 1417615 bytes
 11 files changed, 811 insertions(+)

diff --git "a/docs/blog/content/material/Oct_12_1_ShardingSphere\342\200\231s_Metadata_Loading_Process.en.md" "b/docs/blog/content/material/Oct_12_1_ShardingSphere\342\200\231s_Metadata_Loading_Process.en.md"
new file mode 100644
index 0000000..b8be8cd
--- /dev/null
+++ "b/docs/blog/content/material/Oct_12_1_ShardingSphere\342\200\231s_Metadata_Loading_Process.en.md"
@@ -0,0 +1,276 @@
++++
+title = "ShardingSphere’s Metadata Loading Process"
+weight = 17
+chapter = true
++++
+
+# ShardingSphere’s Metadata Loading Process
+
+**1. Overview**
+
+  Metadata is data that describes data. In database terms, any data that describes the database itself is metadata: column names, database names, usernames, table names, and so on, as well as the library tables that store information about database objects. ShardingSphere core functions such as data sharding and data encryption/decryption are all built on this database metadata.
+
+  
+  This shows that metadata is the core of the ShardingSphere system, and it is also the core data of every data-storage-related middleware or component. Once metadata is injected, the whole system gains something like a nerve center: combined with the metadata, it can perform targeted operations on databases, tables, and columns, such as data sharding, data encryption, and SQL rewriting.
+
+  To understand ShardingSphere's metadata loading process, it is first necessary to clarify the types and hierarchy of metadata in ShardingSphere. The metadata in ShardingSphere is organized around `ShardingSphereMetaData`, whose core is `ShardingSphereSchema`: the metadata of the database and the top-level object of the data source metadata. The structure of the database metadata in ShardingSphere is shown below; for each layer, the upper layer data comes fr [...]
+
+![](../../static/img/Blog_17_img_1_ShardingSphere_Database_Metadata_Structure_Diagram_en.png)
+
+
+**2. ColumnMetaData and IndexMetaData**
+
+`ColumnMetaData` and `IndexMetaData` are the basic elements that make up `TableMetaData`. Below, we analyze the structure and loading process of each of these two metadata types. `ColumnMetaData` has the following main structure:  
+
+ ~~~
+ public final class ColumnMetaData {
+    // column name
+    private final String name;
+    // column data type (java.sql.Types value)
+    private final int dataType;
+    // whether the column is part of the primary key
+    private final boolean primaryKey;
+    // whether the column value is auto-generated (e.g. auto-increment)
+    private final boolean generated;
+    // whether the column is case sensitive
+    private final boolean caseSensitive;
+}
+ ~~~
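+
+As a quick illustration, the snippet below constructs a `ColumnMetaData` by hand using the same five-argument constructor that the loader calls further down (the column values here are invented for the example):
+
+~~~
+// A hypothetical column: an auto-increment BIGINT primary key named "order_id".
+ColumnMetaData orderId = new ColumnMetaData("order_id", java.sql.Types.BIGINT, true, true, false);
+~~~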
+ 
+The loading process is mainly encapsulated in the `org.apache.shardingsphere.infra.metadata.schema.builder.loader.ColumnMetaDataLoader#load` method. Its main flow is to fetch, through the database connection, the metadata matching the given table name and then load the metadata of all the columns under that table. The core code is as follows:
+
+ ~~~
+/**
+ * Load column meta data list.
+ *
+ * @param connection connection
+ * @param tableNamePattern table name pattern
+ * @param databaseType database type
+ * @return column meta data list
+ * @throws SQLException SQL exception
+ */
+public static Collection<ColumnMetaData> load(final Connection connection, final String tableNamePattern, final DatabaseType databaseType) throws SQLException {
+    Collection<ColumnMetaData> result = new LinkedList<>();
+    Collection<String> primaryKeys = loadPrimaryKeys(connection, tableNamePattern);
+    List<String> columnNames = new ArrayList<>();
+    List<Integer> columnTypes = new ArrayList<>();
+    List<String> columnTypeNames = new ArrayList<>();
+    List<Boolean> isPrimaryKeys = new ArrayList<>();
+    List<Boolean> isCaseSensitives = new ArrayList<>();
+    try (ResultSet resultSet = connection.getMetaData().getColumns(connection.getCatalog(), connection.getSchema(), tableNamePattern, "%")) {
+        while (resultSet.next()) {
+            String tableName = resultSet.getString(TABLE_NAME);
+            if (Objects.equals(tableNamePattern, tableName)) {
+                String columnName = resultSet.getString(COLUMN_NAME);
+                columnTypes.add(resultSet.getInt(DATA_TYPE));
+                columnTypeNames.add(resultSet.getString(TYPE_NAME));
+                isPrimaryKeys.add(primaryKeys.contains(columnName));
+                columnNames.add(columnName);
+            }
+        }
+    }
+    try (Statement statement = connection.createStatement(); ResultSet resultSet = statement.executeQuery(generateEmptyResultSQL(tableNamePattern, databaseType))) {
+        for (int i = 0; i < columnNames.size(); i++) {
+            isCaseSensitives.add(resultSet.getMetaData().isCaseSensitive(resultSet.findColumn(columnNames.get(i))));
+            result.add(new ColumnMetaData(columnNames.get(i), columnTypes.get(i), isPrimaryKeys.get(i),
+                    resultSet.getMetaData().isAutoIncrement(i + 1), isCaseSensitives.get(i)));
+        }
+    }
+    return result;
+}
+ ~~~
+ 
+`IndexMetaData` is simply the name of an index in the table, so it has no complex structural properties, just a name. Rather than going into detail on the structure, we focus on the loading process. It is similar to the column loading process, and the main logic is in the `org.apache.shardingsphere.infra.metadata.schema.builder.loader.IndexMetaDataLoader#load` method. The basic flow is likewise to obtain, through the database connection, the core `IndexMetaData` from the `IndexInfo` organization of the relev [...]
+
+~~~
+public static Collection<IndexMetaData> load(final Connection connection, final String table) throws SQLException {
+    Collection<IndexMetaData> result = new HashSet<>();
+    try (ResultSet resultSet = connection.getMetaData().getIndexInfo(connection.getCatalog(), connection.getSchema(), table, false, false)) {
+        while (resultSet.next()) {
+            String indexName = resultSet.getString(INDEX_NAME);
+            if (null != indexName) {
+                result.add(new IndexMetaData(indexName));
+            }
+        }
+    } catch (final SQLException ex) {
+        if (ORACLE_VIEW_NOT_APPROPRIATE_VENDOR_CODE != ex.getErrorCode()) {
+            throw ex;
+        }
+    }
+    return result;
+}
+~~~
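+
+A minimal usage sketch of this loader, assuming you already hold an open JDBC `Connection` and a table named `t_order`:
+
+~~~
+// Load the index metadata of table "t_order" over an existing connection.
+Collection<IndexMetaData> indexes = IndexMetaDataLoader.load(connection, "t_order");
+System.out.println("Loaded " + indexes.size() + " index(es) for t_order");
+~~~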
+
+**3. TableMetaData**  
+
+This class is the basic element of `ShardingSphereMetaData` and has the following structure:
+
+~~~
+public final class TableMetaData {
+    // Table Name
+    private final String name;
+    // Column Metadata
+    private final Map<String, ColumnMetaData> columns;
+    // Index Metadata
+    private final Map<String, IndexMetaData> indexes;
+    // Methods omitted
+}
+~~~
+
+From the structure above we can see that `TableMetaData` is assembled from `ColumnMetaData` and `IndexMetaData`, so the loading process of `TableMetaData` can be understood as an intermediate layer: the concrete implementation still relies on `ColumnMetaDataLoader` and `IndexMetaDataLoader`, given the table name and the related connection. The relatively simple `TableMetaData` loading process is mainly in the `org.apache.shardingsphere.infra.metadata.schema.builder.loader [...]
+
+~~~
+public static Optional<TableMetaData> load(final DataSource dataSource, final String tableNamePattern, final DatabaseType databaseType) throws SQLException {
+    // Get the connection
+    try (MetaDataLoaderConnectionAdapter connectionAdapter = new MetaDataLoaderConnectionAdapter(databaseType, dataSource.getConnection())) {
+        // Format fuzzy matching field of the table name, according to database type
+        String formattedTableNamePattern = databaseType.formatTableNamePattern(tableNamePattern);
+    // Load ColumnMetaData and IndexMetaData to assemble TableMetaData
+        return isTableExist(connectionAdapter, formattedTableNamePattern)
+                ? Optional.of(new TableMetaData(tableNamePattern, ColumnMetaDataLoader.load(
+                        connectionAdapter, formattedTableNamePattern, databaseType), IndexMetaDataLoader.load(connectionAdapter, formattedTableNamePattern)))
+                : Optional.empty();
+    }
+}
+~~~
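+
+For reference, a minimal sketch of calling this loader directly, assuming you already have a configured `DataSource` and the matching `DatabaseType` at hand:
+
+~~~
+// Load the metadata of table "t_order" from a given data source.
+Optional<TableMetaData> tableMetaData = TableMetaDataLoader.load(dataSource, "t_order", databaseType);
+tableMetaData.ifPresent(metaData -> System.out.println("Loaded metadata for table: " + metaData.getName()));
+~~~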
+
+**4. SchemaMetaData**  
+
+Following the analysis of the two lower layers, it is clear that this layer is the outermost layer of exposed metadata. The outermost layer is structured as a `ShardingSphereSchema` with the following main structure:  
+
+~~~
+/**
+ * ShardingSphere schema.
+ */
+@Getter
+public final class ShardingSphereSchema {
+
+    private final Map<String, TableMetaData> tables;
+
+    @SuppressWarnings("CollectionWithoutInitialCapacity")
+    public ShardingSphereSchema() {
+        tables = new ConcurrentHashMap<>();
+    }
+
+    public ShardingSphereSchema(final Map<String, TableMetaData> tables) {
+        this.tables = new ConcurrentHashMap<>(tables.size(), 1);
+        tables.forEach((key, value) -> this.tables.put(key.toLowerCase(), value));
+    }
+~~~
+
+In line with the schema concept, a schema contains several tables. The `tables` attribute of `ShardingSphereSchema` is a map structure whose key is the `tableName` (stored in lower case) and whose value is the metadata of the table corresponding to that `tableName`.  
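+
+A small sketch of how this map behaves, assuming a `TableMetaData` instance named `orderTableMetaData` (a hypothetical object built as in the previous section):
+
+~~~
+// Table names are lower-cased on insertion, so look-ups should use lower-case keys.
+ShardingSphereSchema schema = new ShardingSphereSchema(Collections.singletonMap("T_ORDER", orderTableMetaData));
+TableMetaData metaData = schema.getTables().get("t_order");
+~~~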
+
+Initialization is done primarily through the constructor, so once again the focus is on how the table metadata is loaded. Let's follow it from the entry point.  
+
+The core entry point for the entire metadata load is in `org.apache.shardingsphere.infra.context.metadata.MetaDataContextsBuilder#build`. In the build, we assemble and load the corresponding metadata through configuration rules. The core code is as follows:  
+
+~~~
+/**
+ * Build meta data contexts.
+ * 
+ * @exception SQLException SQL exception
+ * @return meta data contexts
+ */
+public StandardMetaDataContexts build() throws SQLException {
+    Map<String, ShardingSphereMetaData> metaDataMap = new HashMap<>(schemaRuleConfigs.size(), 1);
+    Map<String, ShardingSphereMetaData> actualMetaDataMap = new HashMap<>(schemaRuleConfigs.size(), 1);
+    for (String each : schemaRuleConfigs.keySet()) {
+        Map<String, DataSource> dataSourceMap = dataSources.get(each);
+        Collection<RuleConfiguration> ruleConfigs = schemaRuleConfigs.get(each);
+        DatabaseType databaseType = DatabaseTypeRecognizer.getDatabaseType(dataSourceMap.values());
+        // Obtain configuration rules
+        Collection<ShardingSphereRule> rules = ShardingSphereRulesBuilder.buildSchemaRules(each, ruleConfigs, databaseType, dataSourceMap);
+        // Load actualTableMetaData and logicTableMetaData
+        Map<TableMetaData, TableMetaData> tableMetaDatas = SchemaBuilder.build(new SchemaBuilderMaterials(databaseType, dataSourceMap, rules, props));
+        // Assemble rule metadata
+        ShardingSphereRuleMetaData ruleMetaData = new ShardingSphereRuleMetaData(ruleConfigs, rules);
+        // Assemble data source metadata
+        ShardingSphereResource resource = buildResource(databaseType, dataSourceMap);
+        // Assemble database metadata
+        ShardingSphereSchema actualSchema = new ShardingSphereSchema(tableMetaDatas.keySet().stream().filter(Objects::nonNull).collect(Collectors.toMap(TableMetaData::getName, v -> v)));
+        actualMetaDataMap.put(each, new ShardingSphereMetaData(each, resource, ruleMetaData, actualSchema));
+        metaDataMap.put(each, new ShardingSphereMetaData(each, resource, ruleMetaData, buildSchema(tableMetaDatas)));
+    }
+    // Build the optimizer context factory from the actual metadata
+    OptimizeContextFactory optimizeContextFactory = new OptimizeContextFactory(actualMetaDataMap);
+    return new StandardMetaDataContexts(metaDataMap, buildGlobalSchemaMetaData(metaDataMap), executorEngine, props, optimizeContextFactory);
+}
+~~~  
+
+The code above shows that in the build method, basic database information such as the database type and database connection pool is loaded based on the configured schema rules, from which the `ShardingSphereResource` is assembled; the `ShardingSphereRuleMetaData`, covering configuration rules, encryption rules, authentication rules, and so on, is assembled; and the necessary database metadata in the `ShardingSphereSchema` is loaded. Tracing further to find the method for loadin [...]
+
+So what is an `actualTable`, and what is a `logicTable`? Simply put, `t_order_1` and `t_order_2` are considered nodes of `t_order`; conceptually, `t_order` is the `logicTable`, while `t_order_1` and `t_order_2` are `actualTable`s. With these two concepts clearly defined, let's look at the build method together; it is mainly divided into the following two steps.  
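+
+To make the distinction concrete, here is a plain-Java illustration (not the ShardingSphere configuration API) of one logic table mapped to the actual tables that physically store its rows:
+
+~~~
+// The logic table that SQL is written against, mapped to its actual tables.
+Map<String, List<String>> logicToActualTables = new HashMap<>();
+logicToActualTables.put("t_order", Arrays.asList("t_order_1", "t_order_2"));
+~~~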
+
+**i) actualTableMetaData loading**  
+
+The `actualTableMetaData` is the metadata of the physical tables underlying the sharding system. In the 5.0 beta version, we adopt database-dialect SQL queries to load metadata, so the basic flow is to first query and load the database metadata via SQL; if no database dialect loader is found, the JDBC driver connection is used instead, and then the metadata of the configured tables is loaded according to the table names configured in `ShardingSphereRule`. The core code is shown below.  
+
+~~~ 
+private static Map<String, TableMetaData> buildActualTableMetaDataMap(final SchemaBuilderMaterials materials) throws SQLException {
+    Map<String, TableMetaData> result = new HashMap<>(materials.getRules().size(), 1);
+    // Load metadata via database dialect SQL
+    appendRemainTables(materials, result);
+    for (ShardingSphereRule rule : materials.getRules()) {
+        if (rule instanceof TableContainedRule) {
+            for (String table : ((TableContainedRule) rule).getTables()) {
+                if (!result.containsKey(table)) {
+                    TableMetaDataBuilder.load(table, materials).map(optional -> result.put(table, optional));
+                }
+            }
+        }
+    }
+    return result;
+}
+~~~ 
+
+**ii) logicTableMetaData loading**
+
+From the concepts above we can see that a `logicTable` is a logical node assembled from `actualTable`s according to different rules; it may be a sharding node, an encryption node, or something else. Therefore, the `logicTableMetaData` takes the `actualTableMetaData` as its basis and combines it with specific configuration rules, such as database and table sharding rules, and other associated nodes.
+In terms of the specific flow, it first obtains the table names from the configuration rules, then checks whether the corresponding `actualTableMetaData` has already been loaded, and finally generates the metadata of the relevant logical nodes by applying the configuration rules through the `TableMetaDataBuilder#decorate` method. The core code flow is shown below:  
+
+~~~ 
+private static Map<String, TableMetaData> buildLogicTableMetaDataMap(final SchemaBuilderMaterials materials, final Map<String, TableMetaData> tables) throws SQLException {
+    Map<String, TableMetaData> result = new HashMap<>(materials.getRules().size(), 1);
+    for (ShardingSphereRule rule : materials.getRules()) {
+        if (rule instanceof TableContainedRule) {
+            for (String table : ((TableContainedRule) rule).getTables()) {
+                if (tables.containsKey(table)) {
+                    TableMetaData metaData = TableMetaDataBuilder.decorate(table, tables.get(table), materials.getRules());
+                    result.put(table, metaData);
+                }
+            }
+        }
+    }
+    return result;
+}
+~~~   
+
+At this point, the core metadata has been loaded and encapsulated into a Map to be returned for use in whichever scenario requires it.
+
+    
+**5. Metadata Loading Optimization Analysis**  
+
+Although metadata is the essential core of our system, data loading during system startup will inevitably increase system load and lower system startup efficiency. Therefore, we need to optimize the loading process. At present, we are exploring the following two ways:
+
+**A. Replace Native JDBC Driver Connections with SQL Queries**
+
+Prior to the 5.0 beta version, the approach was to load metadata via the native JDBC driver. Starting with 5.0 beta, we have gradually adopted a multi-threaded, database-dialect approach to metadata loading via SQL queries, which further improves loading speed. The dialect loaders can be found in the implementations of `org.apache.shardingsphere.infra.metadata.schema.builder.spi.DialectTableMetaDataLoader`.
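+
+As a rough sketch of the idea (this is only an illustration of querying a dialect's system catalog instead of walking the JDBC `DatabaseMetaData`, not the `DialectTableMetaDataLoader` SPI itself, and it assumes a configured `DataSource`), a MySQL-style loader could read column information like this:
+
+~~~
+// One round trip against information_schema returns the columns of every table in the schema.
+private static void loadColumnsBySql(final DataSource dataSource) throws SQLException {
+    String sql = "SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM information_schema.columns WHERE TABLE_SCHEMA = ?";
+    try (Connection connection = dataSource.getConnection();
+            PreparedStatement preparedStatement = connection.prepareStatement(sql)) {
+        preparedStatement.setString(1, connection.getCatalog());
+        try (ResultSet resultSet = preparedStatement.executeQuery()) {
+            while (resultSet.next()) {
+                // Group rows by TABLE_NAME and build ColumnMetaData / TableMetaData from them.
+            }
+        }
+    }
+}
+~~~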
+
+**B. Reduce Metadata Load Times**  
+
+For the loading of resources common to the system, we follow the concept of “one-time loading for multiple uses”. Of course, we must consider space and time in this process. As a result, we are constantly optimizing to reduce duplicate loading of metadata to enhance overall system efficiency.
+
+    
+**About The Author**
+
+![](../../static/img/Blog_17_img_2_Tang_Guocheng_Photo.png)
+
+Tang Guocheng, a software engineer at Xiaomi, is mainly responsible for the development of the MIUI browser server side. He is a technology and Open-Source enthusiast who loves to explore and is keen on researching and learning about Open-Source middleware solutions. He is a proud member of the ShardingSphere community, working hard to improve his skills with the community's support and to contribute to its development.
+
+**ShardingSphere Community:**
+
+ ShardingSphere Github: [https://github.com/apache/shardingsphere]() 
+ 
+ ShardingSphere Twitter: [https://twitter.com/ShardingSphere]()
+ 
+ ShardingSphere Slack Channel: [https://join.slack.com/t/apacheshardingsphere/shared_invite/zt-sbdde7ie-SjDqo9~I4rYcR18bq0SYTg]()
diff --git a/docs/blog/content/material/Oct_12_2_A_Distributed_Database_Middleware_Ecosystem_Driven_by_Open_Source.en.md b/docs/blog/content/material/Oct_12_2_A_Distributed_Database_Middleware_Ecosystem_Driven_by_Open_Source.en.md
new file mode 100644
index 0000000..fa59bf3
--- /dev/null
+++ b/docs/blog/content/material/Oct_12_2_A_Distributed_Database_Middleware_Ecosystem_Driven_by_Open_Source.en.md
@@ -0,0 +1,72 @@
++++
+title = "A Distributed Database Middleware Ecosystem Driven by Open Source"
+weight = 18
+chapter = true
++++
+
+# A Distributed Database Middleware Ecosystem Driven by Open Source  
+
+On July 21, 2021, Pan Juan, the SphereEx Co-Founder and Apache ShardingSphere PMC, was invited to give a keynote session at the 2021 AWS Cloud Summit Shanghai, on “Apache ShardingSphere: Open-Source Distributed Database Middleware Ecosystem Building”.
+
+She introduced the expansion of the Open-Source project and its community building, and explained how ShardingSphere practices the "Apache Way". This article is a summary of Pan Juan's ideas.
+
+## A New Ecosystem Layer Positioned Above the Database & Under Business Applications
+
+
+Different industries, different users, different positionings, different requirements. Today’s databases are faced with more complex data application scenarios, and increasingly personalized and customized data processing requirements than in the past. Demanding environments are driving different databases to continuously maximize data read and write speed, latency, throughput, and other performance indicators.
+
+Gradually, data application scenarios with a clear division of labor lead to the fragmentation of the database market, and it is difficult to produce a database that can perfectly adapt to all scenarios. Therefore, it’s very common for enterprises to choose different databases in different business scenarios.
+
+Different databases bring different challenges. From a macro perspective, there are commonalities among these challenges, and it is possible to build on those commonalities to form a set of de facto standards. If you can build a platform layer that uniformly applies and manages data on top of these databases, then even though differences remain between the underlying databases, you can develop a system in accordance with certain fixed standards. This standardized solution will greatly reduce the [...]
+
+Apache ShardingSphere is that platform layer. Because it reuses the original databases, it can help a technical team add incremental capabilities such as sharding, encryption, and decryption without having to consider the configuration of the underlying databases, which it shields from users. It can therefore connect business-oriented databases quickly and directly, and easily manage large-scale data clusters.  
+
+## How to Practice the Apache Way
+
+When a business grows, one database can no longer support the large volume of business data, and it becomes necessary to scale the database horizontally. That is the problem of distributed management. ShardingSphere builds a hot-pluggable function layer above the databases which, while still providing traditional database operations, shields users from changes in the underlying databases and enables developers to manage large-scale database clusters as if they were a single database. ShardingSphere  [...]
+
+* **Sharding Strategy**
+
+When the volume of a business increases, the pressure on data sharding increases and the sharding strategy becomes increasingly complex. ShardingSphere enables users to unlock more sharding strategies beyond plain horizontal scaling, in a flexible, scalable way and at minimum cost. It also supports custom scaling.
+
+* **Read and Write Splitting**
+
+Usually, master-slave deployment can effectively relieve database pressure, but if there is a problem with a machine or a table in a cluster, read and write operations become impossible, which has a great impact on the business. To avoid this, developers usually need to write an additional set of high-availability strategies to switch the master/slave roles between read and write tables. ShardingSphere can automatically detect the state of every cluster, so it can immedi [...]
+
+* **Sharding Scaling**
+
+As a business grows, it becomes necessary to re-split data clusters. ShardingSphere's scaling component enables a user to start a task with a single SQL command and shows the running status in real time in the background. Thanks to its "pipeline-like" scaling, old database ecosystems are connected to the new database ecosystem.
+
+* **Data Encryption and Decryption**
+
+In terms of database applications, encryption and decryption of key data is very important. If a system fails to monitor data in a standardized way, some sensitive data may be stored as plaintext, and users would need to encrypt them later. It’s a common problem for many teams.
+
+ShardingSphere standardizes the capability and integrates it into its middleware ecosystem, and therefore it can automate new/old data desensitization and encryption/decryption for users. The whole process can be achieved automatically. At the same time, it has a variety of built-in data encryption and decryption/desensitization algorithms, and users can customize and expand their own data algorithms if necessary.
+
+## A Pluggable Database Plus Platform
+
+Faced with various requirements and usage scenarios, ShardingSphere provides developers in different fields with three types of access: JDBC for Java, Proxy for heterogeneous databases, and Sidecar for the cloud. Users can choose whichever fits their needs and use it for sharding, read and write splitting, and data migration of their original clusters.
+
+* **JDBC Access:** an enhanced JDBC driver that allows users to keep working in full JDBC mode, because it is compatible with JDBC and various ORM frameworks. Thus, with no additional deployment or dependencies required, users can realize distributed management, horizontal scaling, desensitization, and so forth.
+
+* **Proxy Access:** a simulated database service that uses Proxy to manage underlying database clusters, which means that users do not need to change their existing mode of access.
+
+* **Cloud-based Mesh Access:** a deployment form that ShardingSphere designed for the public cloud. Recently, SphereEx joined the Amazon Web Services (AWS) startup program and will cooperate with AWS in its China marketplace and beyond, providing AWS users with more powerful image proxy deployments. AWS and SphereEx will jointly create a more mature cloud environment for enterprise applications.
+
+## Open-Source Makes Personal Work Connected to the World
+
+ShardingSphere is quite influential in its industry. Now, when users need to find a horizontal scaling tool in China, ShardingSphere is usually on their candidate list. Of course, ShardingSphere’s development is not only due to the project maintenance team making valuable contributions over the years, but also to the increasingly active Open-Source community in China.
+
+In the past, most users of Chinese Open-Source communities just downloaded programs and looked for code references, and rarely got involved in community building. In recent years, the Open-Source concept has become increasingly popular in China, and more and more people with strong technical skills have joined the community. It is with their participation that the ShardingSphere community has become increasingly active. But how do you evaluate a good Open-Source project? The criteria [...]
+
+For this reason, ShardingSphere, one of Apache’s Top-Level projects, still actively calls on more people to join Open-Source communities. These communities are an excellent way to broaden one’s horizons, be more open-minded and cooperative, and rediscover self-value.
+
+**Project Links:**
+
+ShardingSphere Github: [https://github.com/apache/shardingsphere]()
+
+ShardingSphere Twitter: [https://twitter.com/ShardingSphere]()
+
+ShardingSphere Slack Channel: [https://bit.ly/3qB2GGc]()
+
diff --git a/docs/blog/content/material/Oct_12_3_How_Can_Students_Participate_in_Open_Source_Communities_?.en.md b/docs/blog/content/material/Oct_12_3_How_Can_Students_Participate_in_Open_Source_Communities_?.en.md
new file mode 100644
index 0000000..c37f0d3
--- /dev/null
+++ b/docs/blog/content/material/Oct_12_3_How_Can_Students_Participate_in_Open_Source_Communities_?.en.md
@@ -0,0 +1,119 @@
++++
+title = "How Can Students Participate in Open-Source Communities?"
+weight = 19
+chapter = true
++++
+
+# How Can Students Participate in Open-Source Communities?
+
+![](../../static/img/Blog_19_img_1_community.png)
+
+Having some experience in Open-Source projects or communities is quite common for developers nowadays. In fact, not only working developers but also students should, and increasingly do, get involved in Open-Source projects.
+
+If you want to know more about why you should consider being part of Open-Source projects, please refer to [Why you should get involved in open-source community](https://medium.com/nerd-for-tech/why-should-you-get-involved-in-an-open-source-community-f5516657324) [1].
+
+The last 2 years have been challenging to say the least. The Covid-19 pandemic has forever changed us in more ways than one. It has changed the way we approach and cherish life, the way we work, and the way we network, ultimately making webinars and online activities the new normal.
+
+Students have been affected too. Online learning meant they had to adapt the way they assimilate their curricula, and also adapt to finding their first professional opportunities or internships in these unprecedented times. Online internships have been a way for students to get through this tough time by making the most of it and gaining valuable experience. In order to connect people around the world, Google[2], Anitab[3], and ISCAS[4] are hosting many online Open-Source programs o [...]
+
+In short, students are set to gain the following benefits by joining an Open-Source community:
+
+- Practice their skills
+
+- Unlock internship opportunities or gain more internship experience
+
+- Network, create bonds and cooperate with your fellows and mentors worldwide
+
+- Earn scholarships
+
+## The next question is how to leverage these attractive programs to join Open-Source?
+
+Different programs have different and often specific rules, although the basic process is generally the same, i.e.:
+
+- Choose an Open-Source project or community partnering with the relevant program
+
+- Submit a student application
+
+- Get in touch with your mentor
+
+- Contribute & code (you can contribute in more ways than one, not only by coding)
+
+- Await evaluation
+
+- Collect the final results
+
+Based on these steps, here are some critical tips from my mentoring experience at GSoC[2], OSD[3], and Summer of Code[4] that I'm happy to share with you.
+
+* **Choose the Appropriate Program**
+
+As I mentioned above, many organizations are initiating Open-Source programs with different schedules, task durations, and qualification and scholarship conditions. You'd better research the various programs and pick the one that best fits you.
+
+* **Choose a Particular Project Well in Advance:**
+
+Never wait until the official deadline date to choose your program if you want a priority ticket. Commonly, the committee of these programs will choose outstanding Open-Source communities to be their partners to provide mentoring. For instance, before becoming official GSoC mentors, we had already created many candidate tasks while waiting for the public’s applications [5].
+
+We welcomed comments from students in advance. This gives students and mentors sufficient time to get to know each other. A student may not look like the strongest candidate at first, but this kind of initiative impresses mentors a great deal and can get him or her selected.
+
+* **Write Your Proposal Concisely:**
+
+The proposal for the program should be concise rather than redundant. To achieve this, refer to your mentor's task details (some mentors describe the task in detail, while others don't) or directly ask your mentor about their expectations and concerns. Guessing, or operating under the guise of wishful thinking, about the task or its aim is the most inefficient way to prepare your proposal. Both mentors and students want things to advance smooth [...]
+
+* **Actively Contact Your Mentor:**
+
+Imagine your proposal is accepted, then the next phase is to contribute or start coding. This is a significant chance for you to make an impact.
+
+You could be a coding wizard, yet you will still run into issues you are uncertain about, and sometimes you won't know what your mentor thinks of your work. Consider also that some mentors take charge of two or three students at a time and are busy with their own lives and work as well. That being said, it is beneficial to take the initiative and contact your mentor to ask questions or report your progress regularly. If you wait until your name comes to your mentors' mind, it  [...]
+
+* **Tricky Questions:**
+
+While it is necessary to keep in close touch with your mentor, that does not mean you are a baby. Consider what questions you’ll be asking beforehand, and do your own legwork and research to see if this is something that can be resolved with your critical thinking, or everyone’s trusted friend — Google.
+
+If you fail to do so, mentors might regard you as someone lacking problem-solving and analytical skills, or worse, as someone who is not willing to learn by doing. If you are still confused after doing your research, then ask your mentor your questions in specific terms and with the support of your research notes. This not only ensures that you get your mentor's attention, but will make the mentor understand that you're really having some difficulties, and you'll receive the mentor's  [...]
+
+After mentoring countless local and international students, this is the perspective I can share with you. I sincerely encourage you to have a look at the above-mentioned interesting and meaningful programs to enrich your academic life, enhance your skills, and expand your circle of friends. Last but not least, if you are interested in joining an Open-Source distributed database ecosystem, I am waiting for you here [6].
+
+**Author:**
+
+**Juan Pan | Trista**
+
+![](../../static/img/Blog_19_img_2_Pan_Juan_Photo.jpg)
+
+SphereEx Co-Founder, Apache Member, Apache ShardingSphere PMC, Apache brpc(Incubating) & Apache AGE(Incubating) mentor.
+
+Senior DBA at JD Technology, she was responsible for the design and development of JD Digital Science and Technology’s intelligent database platform. She now focuses on distributed databases & middleware ecosystems, and the open-source community.
+
+Recipient of the “2020 China Open-Source Pioneer” award, she is frequently invited to speak and share her insights at relevant conferences in the fields of database & database architecture.
+
+Bio: [https://tristazero.github.io]()
+
+LinkedIn: [https://tristazero.github.io]()
+
+GitHub: [https://github.com/tristaZero]()
+
+Twitter: [https://twitter.com/trista86934690]()
+
+**Open-Source Project Links:**
+
+ShardingSphere Github: [https://github.com/apache/shardingsphere]()
+
+ShardingSphere Twitter: [https://twitter.com/ShardingSphere]()
+
+ShardingSphere Slack Channel: [https://join.slack.com/t/apacheshardingsphere/shared_invite/zt-sbdde7ie-SjDqo9~I4rYcR18bq0SYTg]()
+
+**Other Links:**
+
+  [1] https://medium.com/nerd-for-tech/why-should-you-get-involved-in-an-open-source-community-f5516657324
+
+  [2] https://summerofcode.withgoogle.com/
+
+  [3] https://anitab-org.github.io/open-source-day
+
+  [4] https://summer.iscas.ac.cn/#/homepage?lang=en
+
+  [5] https://issues.apache.org/jira/browse/COMDEV-385
+
+  [6] https://github.com/apache/shardingsphere
+  
diff --git a/docs/blog/content/material/Oct_12_4_Updates_and_FAQ_Your_1_Minute_Quick_Start_Guide_to_ShardingSphere.en.md b/docs/blog/content/material/Oct_12_4_Updates_and_FAQ_Your_1_Minute_Quick_Start_Guide_to_ShardingSphere.en.md
new file mode 100644
index 0000000..8e1fead
--- /dev/null
+++ b/docs/blog/content/material/Oct_12_4_Updates_and_FAQ_Your_1_Minute_Quick_Start_Guide_to_ShardingSphere.en.md
@@ -0,0 +1,344 @@
++++
+title = "Updates and FAQ — Your 1 Minute Quick Start Guide to ShardingSphere"
+weight = 20
+chapter = true
++++
+
+# Updates and FAQ — Your 1 Minute Quick Start Guide to ShardingSphere
+
+## Background
+
+Apache ShardingSphere is an Apache Top-Level project and one of the most popular open-source big data projects. It was started about 5 years ago, and today ShardingSphere has 14K+ stars and 270+ contributors in its community.
+
+The project has already launched and updated many versions. Apache ShardingSphere now supports many powerful features and keeps optimizing its configuration rules. To help users understand all the features and configuration rules, quickly test and run the components, and ultimately achieve the best performance, we decided to start the shardingsphere-example project.
+
+shardingsphere-example is an independent Maven project. It is kept in the "examples" directory of the Apache ShardingSphere repository. Link:
+
+[https://github.com/apache/shardingsphere/tree/master/examples]()
+
+## Modules & Explanation
+
+The shardingsphere-example project contains many modules. It provides users with guides and configuration examples of features like horizontal scaling, read and write separation, distributed governance, distributed transaction, data encryption, hint manager, shadow database, etc.
+
+It also covers common usage forms such as the Java API, YAML, Spring Boot, and Spring Namespace. In addition to ShardingSphere-JDBC, we have now added usage examples of ShardingSphere-Proxy and ShardingSphere-Parser to shardingsphere-example. You can easily find all the features of Apache ShardingSphere, their scenarios, and their flexible configurations in our official repo. The tree below shows how the modules are organized in shardingsphere-example.
+
+~~~
+shardingsphere-example
+├── example-core
+│ ├── config-utility
+│ ├── example-api
+│ ├── example-raw-jdbc
+│ ├── example-spring-jpa
+│ └── example-spring-mybatis
+├── shardingsphere-jdbc-example
+│ ├── sharding-example
+│ │ ├── sharding-raw-jdbc-example
+│ │ ├── sharding-spring-boot-jpa-example
+│ │ ├── sharding-spring-boot-mybatis-example
+│ │ ├── sharding-spring-namespace-jpa-example
+│ │ └── sharding-spring-namespace-mybatis-example
+│ ├── governance-example
+│ │ ├── governance-raw-jdbc-example
+│ │ ├── governance-spring-boot-mybatis-example
+│ │ └── governance-spring-namespace-mybatis-example
+│ ├── transaction-example
+│ │ ├── transaction-2pc-xa-atomikos-raw-jdbc-example
+│ │ ├── transaction-2pc-xa-bitronix-raw-jdbc-example
+│ │ ├── transaction-2pc-xa-narayana-raw-jdbc-example
+│ │ ├── transaction-2pc-xa-spring-boot-example
+│ │ ├── transaction-2pc-xa-spring-namespace-example
+│ │ ├── transaction-base-seata-raw-jdbc-example
+│ │ └── transaction-base-seata-spring-boot-example
+│ ├── other-feature-example
+│ │ ├── encrypt-example
+│ │ │ ├── encrypt-raw-jdbc-example
+│ │ │ ├── encrypt-spring-boot-mybatis-example
+│ │ │ └── encrypt-spring-namespace-mybatis-example
+│ │ ├── hint-example
+│ │ │ └── hint-raw-jdbc-example
+│ │ └── shadow-example
+│ │ │ ├── shadow-raw-jdbc-example
+│ │ │ ├── shadow-spring-boot-mybatis-example
+│ │ │ └── shadow-spring-namespace-mybatis-example
+│ ├── extension-example
+│ │ └── custom-sharding-algortihm-example
+├── shardingsphere-parser-example
+├── shardingsphere-proxy-example
+│ ├── shardingsphere-proxy-boot-mybatis-example
+│ └── shardingsphere-proxy-hint-example
+└── src/resources
+└── manual_schema.sql
+~~~
+
+**example-core**
+
+The example-core module contains entities, interface definitions, and other shared code.
+
+**shardingsphere-jdbc-example**
+
+The shardingsphere-jdbc-example module demonstrates the features of ShardingSphere-JDBC and how to use them.
+
+**sharding-example**
+
+This module shows how to use ShardingSphere-JDBC to scale out in scenarios like sharding, horizontal scaling, vertical scaling, read and write separation, and read and write separation combined with sharding.
+
+In terms of integration with ORM, this module also provides users with examples of MyBatis and JPA integrations.
+
+**governance-example**
+
+This module is about the distributed governance of ShardingSphere-JDBC and includes related scenarios combined with features like sharding, read and write separation, data encryption, and shadow database.
+
+>Note: The distributed governance example depends on Apache ZooKeeper. Please deploy it yourself.
+
+**transaction-example**
+
+This module shows the multiple ways of distributed transaction management that ShardingSphere-JDBC supports. Users can choose an appropriate distributed transaction coordinator based on their application. Given the complexity of distributed transactions, all examples in this module are based on vertical scaling, horizontal scaling, and sharding.
+
+>Note: When you use Seata, please deploy it yourself.
+
+**other-feature-example**
+
+This module gives examples of some ShardingSphere-JDBC features, i.e., encrypt (data encryption), hint (hint manager), shadow (shadow database).
+
+**encrypt-example**
+
+This module displays examples of data encryption. It also tells users how to use and access Java API, YAML, Spring Boot, Spring Namespace.
+
+**hint-example**
+
+This module shows examples of the hint manager. At present there is only a YAML configuration example; we welcome contributions covering more scenarios.
+
+**shadow-example**
+
+This gives examples of shadow database, including its application combined with data encryption, sharding, and read/write separation.
+
+**extension-example**
+
+The module tells users how to use custom extension of ShardingSphere-JDBC. Users can leverage SPI or other ways provided by ShardingSphere to extend features.
+
+**custom-sharding-algortihm-example**
+
+This module shows how users can use the 'CLASS_BASED' type to plug in and customize their own sharding algorithm.
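+
+To give a feel for the decision such an algorithm makes, here is a plain-Java sketch of a modulo routing rule (this is only an illustration of the idea, not an implementation of the `StandardShardingAlgorithm` interface; the `route` method and the `orderId` column are invented for the example):
+
+~~~
+// Pick the actual table whose numeric suffix matches orderId % (number of shards).
+static String route(final Collection<String> availableTargetNames, final long orderId) {
+    String suffix = String.valueOf(orderId % availableTargetNames.size());
+    return availableTargetNames.stream()
+            .filter(each -> each.endsWith(suffix))
+            .findFirst()
+            .orElseThrow(() -> new IllegalStateException("No target table for suffix " + suffix));
+}
+~~~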
+
+**shardingsphere-parser-example**
+
+SQLParserEngine is the SQL parsing engine of Apache ShardingSphere and the foundation of ShardingSphere-JDBC and ShardingSphere-Proxy. When a user inputs SQL text, SQLParserEngine parses it into statements the system can recognize; enhancements such as routing or rewriting can then be applied.
+
+Following the release of its 5.0.0-alpha version, Apache ShardingSphere's core SQL Parser feature is fully open to users. They can call SQLParserEngine through its API and bring this effective SQL parsing into their own systems to meet more of their business demands.
+
+In this module, users can learn how to use the SQLParserEngine API. It supports the syntax rules of different dialects, such as MySQL, PostgreSQL, Oracle, SQL Server, and SQL-92.
+
+**shardingsphere-proxy-example**
+
+The ShardingSphere-Proxy example module includes configuration examples of common scenarios like sharding, read and write separation, and the hint manager. Since the features of ShardingSphere-Proxy are almost the same as those of ShardingSphere-JDBC, users can refer to shardingsphere-jdbc-example if they cannot find the example they want in shardingsphere-proxy-example.
+
+**shardingsphere-proxy-boot-mybatis-example**
+
+In this module, users can learn how to configure sharding with Proxy and how to access data with Spring Boot + MyBatis.
+
+**shardingsphere-proxy-hint-example**
+
+In this module, users can learn how to configure the hint manager with Proxy and how to access data with a Java client.
+
+## New Optimization
+
+The Apache ShardingSphere 5.0.0-beta version is coming soon, so community contributors have also updated shardingsphere-example. They optimized the following:
+
+* JDK version
+
+* Component version
+
+* ClassName
+
+* Configuration profiles
+
+* SQL script
+
+Related details are as follows:
+
+**JDK version upgrade**
+
+According to JetBrains' "A Picture of Java in 2020", Java 8 LTS is the most popular version among Java developers.
+
+![](../../static/img/Blog_20_img_1_Popularity_of_Java_versions_in_2020_en.png)
+
+Following this update, shardingsphere-example uses Java 8 and newer versions. If you use Java 7 or earlier versions, please update your JDK version first.
+
+**Spring dependency upgrade**
+
+In shardingsphere-example, we updated the Spring dependency components.
+
+* spring-boot version from 1.5.17 to 2.0.9.RELEASE
+
+* springframework version from 4.3.20.RELEASE to 5.0.13.RELEASE
+
+* mybatis-spring-boot-starter version from 1.3.0 to 2.0.1
+
+*  mybatis-spring version from 1.3.0 to 2.0.1
+
+**Persistence framework upgrade**
+
+In shardingsphere-example, we updated the persistence frameworks MyBatis and Hibernate.
+
+* mybatis version from 3.4.2 to 3.5.1
+
+* hibernate version from 4.3.11.Final to 5.2.18.Final
+
+**Connection pooling upgrade**
+
+In shardingsphere-example, we updated the database connection pool HikariCP.
+
+* HikariCP artifactId from HikariCP-java7 to HikariCP
+
+* HikariCP version from 2.4.11 to 3.4.2
+
+**Database driver upgrade**
+
+In shardingsphere-example, we updated the database connection drivers for MySQL and PostgreSQL.
+
+* mysql-connector-java version from 5.1.42 to 5.1.47
+
+* postgresql version from 42.2.5.jre7 to 42.2.5
+
+## Example
+
+In this section, we give several typical examples and show you how to configure and run shardingsphere-example.
+
+There are many modules in the project shardingsphere-example. But for now, we only choose several popular application scenarios of ShardingSphere-JDBC.
+
+**Preparation**
+
+1. shardingsphere-example uses Maven as its build tool. Please install Maven first;
+
+2. Prepare Apache ShardingSphere. If you have not downloaded Apache ShardingSphere yet, please download and compile it first. You can use the commands below:
+
+~~~
+git clone https://github.com/apache/shardingsphere.git
+cd shardingsphere
+mvn clean install -Prelease
+~~~
+
+3. Import the shardingsphere-example project into your IDE;
+
+4. Prepare a manageable database environment, such as a local MySQL instance;
+
+5. If you need to test read and write separation, please make sure that your master-slave synchronization works properly;
+
+6. Execute the DB init script: examples/src/resources/manual_schema.sql
+
+## Scenarios & Examples
+
+**sharding-spring-boot-mybatis-example: Sharding**
+
+**1. Path**
+
+examples/shardingsphere-jdbc-example/sharding-example/sharding-spring-boot-mybatis-example
+
+**2. Goal**
+
+This example shows how to use ShardingSphere-JDBC in combination with Spring Boot and MyBatis to implement sharding. The sharding goal is to shard one table into four tables evenly distributed across two different databases.
+
+**3. Preparation**
+
+* Configure application.properties
+
+* set spring.profiles.active as sharding-databases-tables
+
+* Configure application-sharding-databases-tables.
+
+* Change the jdbc-url to your database location and set up your username, password, etc.
+
+* Set the attribute of spring.shardingsphere.props.sql-show as true
+
+See more details in _Configuration Manual_
+
+**4. Run**
+
+Run the startup class: ShardingSpringBootMybatisExample.java
+
+Now you can observe how each SQL statement is routed through the Logic SQL and Actual SQL entries in the logs, and understand how sharding works.
+
+**Sharding-raw-jdbc-example: Read and write splitting**
+
+**1. Path**
+
+examples/shardingsphere-jdbc-example/sharding-example/sharding-raw-jdbc-example
+
+**2. Goal**
+
+This example shows how to use YAML to configure the read and write splitting feature of ShardingSphere-JDBC. The goal is to split traffic between one write database and two read databases.
+
+**3. Preparation**
+
+* Configure META-INF/readwrite-splitting.yaml
+
+* Change the jdbc-url to your database location and set up your username, password, etc.
+
+* Set props.sql-show as true
+
+See more details in _Configuration Manual_.
+
+**4. Run**
+
+Open the startup class ShardingRawYamlConfigurationExample.java and set *shardingType* to *ShardingType.READWRITE_SPLITTING*. Run the startup class.
+
+Now you can observe how each SQL statement is routed through the Logic SQL and Actual SQL entries in the logs, and understand how read and write splitting works.
+
+*>Note: when master-slave database synchronization fails, there will be query errors.*
+
+**Custom-sharding-algortihm-example: Custom algorithm**
+
+**1. Path**
+
+examples/shardingsphere-jdbc-example/extension-example/custom-sharding-algortihm-example/class-based-sharding-algorithm-example
+
+**2. Goal**
+
+This example shows how users can use CLASS_BASED to plug in their own custom algorithm. With it, ShardingSphere-JDBC calculates sharding results during routing based on the custom algorithm. The scenario shows how to use a custom sharding algorithm to scale out.
+
+**3. Preparation**
+
+Prepare your own sharding algorithm based on your business needs by implementing one of the `StandardShardingAlgorithm`, `ComplexKeysShardingAlgorithm`, or `HintShardingAlgorithm` interfaces. In this example, we use `ClassBasedStandardShardingAlgorithmFixture`.
+
+* Configure META-INF/sharding-databases.yaml
+
+* Change the jdbc-url to your database location and set up your username, password, etc.
+
+* Set props.sql-show as true.
+
+*> Note: For shardingAlgorithms, when the type is CLASS_BASED, you can use props to assign the class of your custom algorithm by its full path (fully qualified class name). That completes the configuration.*
+
+See more details in _Configuration Manual_.
+
+**4. Run**
+
+Run the startup: YamlClassBasedShardingAlgorithmExample.java
+
+Now you can check the logs and observe your database. You can also debug and check the input and output of your custom algorithm.
+
+## Summary
+
+Our brief introduction ends here. In the future, we will share more examples of ShardingSphere-JDBC, ShardingSphere-Proxy, and ShardingSphere-Parser with you.
+
+If you have any questions or have found any issues, we look forward to your comments on our _GitHub issues_ page, or you can submit a pull request and join us, or join our _Slack community_. We welcome anyone who would like to be part of this Top-Level project and make a contribution. For more information, please visit our _Contributor Guide_.
+
+**Authors**
+
+![](../../static/img/Blog_20_img_2_Liang_Longtao_Photo.png)
+
+I’m Jiang Longtao, SphereEx middleware engineer & Apache ShardingSphere contributor. At present, I focus on ShardingSphere database middleware and its open source community.
+
+![](../../static/img/Blog_20_img_3_Hou_Yang_Photo.png)
+
+I'm Hou Yang, a middleware engineer at SphereEx. I love open source and want to contribute to building a better community with everyone.
+
+**ShardingSphere Community:**
+
+ShardingSphere Github: [https://github.com/apache/shardingsphere]()
+
+ShardingSphere Twitter: [https://twitter.com/ShardingSphere]()
+
+ShardingSphere Slack Channel: [apacheshardingsphere.slack.com]()
+
+
+
+
+
diff --git a/docs/blog/static/img/Blog_17_img_1_ShardingSphere_Database_Metadata_Structure_Diagram_en.png b/docs/blog/static/img/Blog_17_img_1_ShardingSphere_Database_Metadata_Structure_Diagram_en.png
new file mode 100644
index 0000000..a23a7f7
Binary files /dev/null and b/docs/blog/static/img/Blog_17_img_1_ShardingSphere_Database_Metadata_Structure_Diagram_en.png differ
diff --git a/docs/blog/static/img/Blog_17_img_2_Tang_Guocheng_Photo.png b/docs/blog/static/img/Blog_17_img_2_Tang_Guocheng_Photo.png
new file mode 100644
index 0000000..59b3872
Binary files /dev/null and b/docs/blog/static/img/Blog_17_img_2_Tang_Guocheng_Photo.png differ
diff --git a/docs/blog/static/img/Blog_19_img_1_community.png b/docs/blog/static/img/Blog_19_img_1_community.png
new file mode 100644
index 0000000..88fc138
Binary files /dev/null and b/docs/blog/static/img/Blog_19_img_1_community.png differ
diff --git a/docs/blog/static/img/Blog_19_img_2_Pan_Juan_Photo.jpg b/docs/blog/static/img/Blog_19_img_2_Pan_Juan_Photo.jpg
new file mode 100644
index 0000000..71696a1
Binary files /dev/null and b/docs/blog/static/img/Blog_19_img_2_Pan_Juan_Photo.jpg differ
diff --git a/docs/blog/static/img/Blog_20_img_1_Popularity_of_Java_versions_in_2020_en.png b/docs/blog/static/img/Blog_20_img_1_Popularity_of_Java_versions_in_2020_en.png
new file mode 100644
index 0000000..5477fcd
Binary files /dev/null and b/docs/blog/static/img/Blog_20_img_1_Popularity_of_Java_versions_in_2020_en.png differ
diff --git a/docs/blog/static/img/Blog_20_img_2_Liang_Longtao_Photo.png b/docs/blog/static/img/Blog_20_img_2_Liang_Longtao_Photo.png
new file mode 100644
index 0000000..b9b9315
Binary files /dev/null and b/docs/blog/static/img/Blog_20_img_2_Liang_Longtao_Photo.png differ
diff --git a/docs/blog/static/img/Blog_20_img_3_Hou_Yang_Photo.png b/docs/blog/static/img/Blog_20_img_3_Hou_Yang_Photo.png
new file mode 100644
index 0000000..309266a
Binary files /dev/null and b/docs/blog/static/img/Blog_20_img_3_Hou_Yang_Photo.png differ