Posted to notifications@shardingsphere.apache.org by GitBox <gi...@apache.org> on 2020/05/25 03:49:20 UTC

[GitHub] [shardingsphere] 15168326318 opened a new issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

15168326318 opened a new issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776


   I want to shard the BLOOD_PACK_DETAIL table;
   but this table has been in use for more than a year and already holds a lot of data, and now, in June 2020, I want to start sharding it.
   
   I plan to go live with sharding-jdbc in June. How can I make this completely transparent to the developers? I have already created the main tables such as BLOOD_PACK_DETAIL_202006 and BLOOD_PACK_DETAIL202007.





[GitHub] [shardingsphere] cws1981 commented on issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

Posted by GitBox <gi...@apache.org>.
cws1981 commented on issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776#issuecomment-633363169


   The monthly sharding rule is simple: just implement the PreciseShardingAlgorithm interface to split the table by month. sharding-jdbc table sharding is transparent to the upper-layer callers in the first place.





[GitHub] [shardingsphere] cws1981 edited a comment on issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

Posted by GitBox <gi...@apache.org>.
cws1981 edited a comment on issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776#issuecomment-633364282


   **A document I wrote**:
   
   ### Practical how-to guide
   
   ##### 1. Decide which sharding strategy interface to implement
   
   Choose a sharding algorithm:
   
   - Precise sharding algorithm
   
   Corresponds to the PreciseShardingAlgorithm interface; it handles sharding with `=` and `IN` when a single column is used as the sharding key, and must be used together with StandardShardingStrategy.
   
   Reference:
   
    ```java
    // Imports assume the ShardingSphere 4.x package layout.
    import java.time.Instant;
    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.util.Collection;
    
    import com.google.common.collect.ImmutableSet;
    import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
    import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;
    import org.apache.shardingsphere.core.strategy.keygen.SnowflakeShardingKeyGenerator;
    
    /**
     * Sharding algorithm implementation that routes by the date encoded in the snowflake primary key.
     */
    public class IDPrecise implements PreciseShardingAlgorithm<Long> {
    
        @Override
        public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Long> preciseShardingValue) {
            // Shard by primary key: derive the creation date from the snowflake ID.
            long id = preciseShardingValue.getValue();
            LocalDateTime datetime = getDatetime(id);
            StringBuilder sb = new StringBuilder();
            sb.append(preciseShardingValue.getLogicTableName())
                    .append("_")
                    .append(datetime.getYear())
                    .append("_")
                    .append(datetime.getMonthValue());
    
            String dataNode = sb.toString();
    
            // Fall back to the logic (history) table when the monthly table is not among the configured data nodes.
            ImmutableSet<String> immutableSet = ImmutableSet.copyOf(availableTargetNames);
            if (!immutableSet.contains(dataNode)) {
                return preciseShardingValue.getLogicTableName();
            }
            return dataNode;
        }
    
        private LocalDateTime getDatetime(long id) {
            // The snowflake ID keeps the millisecond timestamp in its high bits; the low 22 bits hold the worker ID and sequence.
            long timestamp = id >> 22;
            timestamp += SnowflakeShardingKeyGenerator.EPOCH;
            Instant instant = Instant.ofEpochMilli(timestamp);
            return LocalDateTime.ofInstant(instant, ZoneId.systemDefault());
        }
    }
    ```
   
   
   
   - Range sharding algorithm
   
   Corresponds to RangeShardingAlgorithm; it handles sharding with BETWEEN AND, >, <, >= and <= when a single column is used as the sharding key, and must be used together with StandardShardingStrategy (a sketch follows this list).
   
   - Complex sharding algorithm
   
   Corresponds to ComplexKeysShardingAlgorithm; it handles sharding with multiple columns as the sharding key. The logic combining several sharding keys can be complicated, and the application developer has to handle that complexity. It must be used together with ComplexShardingStrategy.
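   
   A minimal sketch of a matching range algorithm, not part of the original write-up: it assumes the same snowflake-ID sharding column as IDPrecise above and ShardingSphere 4.x package names, and the class name IDRange is illustrative; the logic table itself must be listed among the actual data nodes (as in the configuration below) for the history fallback to be a valid target.
   
    ```java
    import java.time.Instant;
    import java.time.LocalDateTime;
    import java.time.YearMonth;
    import java.time.ZoneId;
    import java.util.Collection;
    import java.util.LinkedHashSet;
    
    import com.google.common.collect.Range;
    import org.apache.shardingsphere.api.sharding.standard.RangeShardingAlgorithm;
    import org.apache.shardingsphere.api.sharding.standard.RangeShardingValue;
    import org.apache.shardingsphere.core.strategy.keygen.SnowflakeShardingKeyGenerator;
    
    /**
     * Routes BETWEEN / >= / <= queries on the snowflake ID column to every monthly table the range can touch.
     */
    public class IDRange implements RangeShardingAlgorithm<Long> {
    
        @Override
        public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<Long> shardingValue) {
            Range<Long> range = shardingValue.getValueRange();
            // An open-ended range gives no month to narrow down to, so scan every configured table.
            if (!range.hasLowerBound() || !range.hasUpperBound()) {
                return availableTargetNames;
            }
            YearMonth from = toYearMonth(range.lowerEndpoint());
            YearMonth to = toYearMonth(range.upperEndpoint());
            Collection<String> result = new LinkedHashSet<>();
            for (YearMonth month = from; !month.isAfter(to); month = month.plusMonths(1)) {
                String candidate = shardingValue.getLogicTableName() + "_" + month.getYear() + "_" + month.getMonthValue();
                if (availableTargetNames.contains(candidate)) {
                    result.add(candidate);
                }
            }
            // Always include the logic (history) table so rows inserted before sharding are still found.
            result.add(shardingValue.getLogicTableName());
            return result;
        }
    
        private YearMonth toYearMonth(long id) {
            // Same decoding as IDPrecise: the millisecond timestamp sits above the low 22 bits of the snowflake ID.
            long timestamp = (id >> 22) + SnowflakeShardingKeyGenerator.EPOCH;
            return YearMonth.from(LocalDateTime.ofInstant(Instant.ofEpochMilli(timestamp), ZoneId.systemDefault()));
        }
    }
    ```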
   
   ##### 2. Configuration
   
    ```xml
    <!-- Monthly sharding algorithm implementation -->
        <bean id="idPrecise" class="com.xxx.base.shardingjdbc.shardingAlgorithm.IDPrecise"/>
    
        <bean:properties id="key_generator_properties">
            <prop key="worker.id">123</prop>
        </bean:properties>
    <!-- Distributed ID generation -->
        <sharding:key-generator id="snowFlakeKeyGen" type="SNOWFLAKE" column="id" props-ref="key_generator_properties"/>
    
        <!-- Table sharding strategy: specifies the algorithm bean and the sharding column, here the primary key id -->
        <sharding:standard-strategy id="tableShardingStrategy_date" sharding-column="id" precise-algorithm-ref="idPrecise"/>
    
        <!-- The sharding-jdbc data source, used just like an ordinary data source -->
        <sharding:data-source id="shardingDataSource">
           <!-- Specify the actual underlying data source -->
            <sharding:sharding-rule data-source-names="dataSource">
                <sharding:table-rules>
                    <!-- Table rule: specifies the logic table to shard and its actual data nodes; inline expressions are supported -->
                    <sharding:table-rule logic-table="log_process" actual-data-nodes="dataSource.log_process,dataSource.log_process_2020_$->{3..12}" table-strategy-ref="tableShardingStrategy_date" key-generator-ref="snowFlakeKeyGen"/>
                </sharding:table-rules>
            </sharding:sharding-rule>
            <sharding:props>
                <prop key="sql.show">false</prop>
            </sharding:props>
        </sharding:data-source>
    
    ```
   
   
   
   ##### 3. Mapper changes
   
   Move the mappers of the sharded tables into a separate package path, for example: shardingDao
   
   Modify the MyBatis configuration (**a pitfall worth mentioning**: *once the sharding-jdbc data source is configured, it parses the SQL of every statement, including statements on tables that are not sharded; its support for complex SQL is limited and some statements throw exceptions. So only the mappers of the sharded tables are wired to shardingDataSource, instead of routing every table through the sharding data source, which is why the mapper package paths have to be split and scanned separately.*)
   
   ```xml
    <!-- sqlSessionFactory configuration for the sharding data source -->
       <bean id="sqlSessionFactory_sharding" class="org.mybatis.spring.SqlSessionFactoryBean">
           <property name="dataSource" ref="shardingDataSource" />
           <property name="configLocation" value="classpath:conf/mybatis-config.xml" />
           <property name="mapperLocations" value="classpath:com/xxx/xxx/shardingMappers/**/*Mapper.xml" />
           <property name="plugins">
               <array>
                   <bean class="com.github.pagehelper.PageInterceptor">
                       <property name="properties">
                            <!-- Configure parameters as shown below, one per line -->
                           <value>
                               reasonable=true
                           </value>
                       </property>
                   </bean>
               </array>
           </property>
       </bean>
   
       <!-- scan for mappers and let them be autowired -->
       <bean class="tk.mybatis.spring.mapper.MapperScannerConfigurer">
           <property name="basePackage" value="com.xxx.xxx.shardingDao.**" />
           <property name="sqlSessionFactoryBeanName" value="sqlSessionFactory_sharding" />
       </bean>
   ```
   
   ##### 4. ID generation algorithm
   
   After the data is sharded, the original auto-increment IDs can no longer be used because they would collide across the tables, so IDs are generated with the snowflake algorithm instead; sharding-jdbc ships a built-in snowflake key generator that can be configured directly.
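   
   To make the link between the snowflake ID and the monthly routing above concrete, here is a small standalone sketch (not from the original comment; it assumes the ShardingSphere 4.x SnowflakeShardingKeyGenerator API, and the class name SnowflakeIdDemo is illustrative):
   
    ```java
    import java.time.Instant;
    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.util.Properties;
    
    import org.apache.shardingsphere.core.strategy.keygen.SnowflakeShardingKeyGenerator;
    
    public class SnowflakeIdDemo {
    
        public static void main(String[] args) {
            SnowflakeShardingKeyGenerator keyGenerator = new SnowflakeShardingKeyGenerator();
            Properties props = new Properties();
            props.setProperty("worker.id", "123"); // same worker.id as in the XML configuration above
            keyGenerator.setProperties(props);
    
            long id = (Long) keyGenerator.generateKey();
            // The millisecond timestamp lives in the high bits, which is exactly what IDPrecise relies on for routing.
            long timestampMillis = (id >> 22) + SnowflakeShardingKeyGenerator.EPOCH;
            LocalDateTime datetime = LocalDateTime.ofInstant(Instant.ofEpochMilli(timestampMillis), ZoneId.systemDefault());
            System.out.println(id + " -> log_process_" + datetime.getYear() + "_" + datetime.getMonthValue());
        }
    }
    ```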





[GitHub] [shardingsphere] 15168326318 edited a comment on issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

Posted by GitBox <gi...@apache.org>.
15168326318 edited a comment on issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776#issuecomment-633383148


   For example, if I go live with table sharding next month, how should the historical data be handled?
   





[GitHub] [shardingsphere] cws1981 removed a comment on issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

Posted by GitBox <gi...@apache.org>.
cws1981 removed a comment on issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776#issuecomment-633364282









[GitHub] [shardingsphere] 15168326318 commented on issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

Posted by GitBox <gi...@apache.org>.
15168326318 commented on issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776#issuecomment-633383148


   For example, if I go live with table sharding next month, how should the historical data be handled?
   





[GitHub] [shardingsphere] terrymanu closed issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

Posted by GitBox <gi...@apache.org>.
terrymanu closed issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776


   





[GitHub] [shardingsphere] big-mountain-z commented on issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

Posted by GitBox <gi...@apache.org>.
big-mountain-z commented on issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776#issuecomment-633518733


   Try to customize the sharding algorithm for the special history table. For example, if the time is before 2020/06, execute the SQL against the "BLOOD_PACK_DETAIL" table; otherwise, select the sharding table according to the year and month.
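   
   A minimal sketch of that suggestion, purely illustrative: the class name HistoryAwareMonthAlgorithm is hypothetical, ShardingSphere 4.x package names are assumed, and the sharding column is assumed to carry a yyyyMM string such as "202006"; the history table must also appear among the actual data nodes for the fallback route to be valid.
   
    ```java
    import java.time.YearMonth;
    import java.time.format.DateTimeFormatter;
    import java.util.Collection;
    
    import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
    import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;
    
    public class HistoryAwareMonthAlgorithm implements PreciseShardingAlgorithm<String> {
    
        // Sharding went live in 2020-06; everything earlier stays in the original history table.
        private static final YearMonth SHARDING_START = YearMonth.of(2020, 6);
        private static final DateTimeFormatter FORMAT = DateTimeFormatter.ofPattern("yyyyMM");
    
        @Override
        public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<String> shardingValue) {
            YearMonth month = YearMonth.parse(shardingValue.getValue(), FORMAT);
            if (month.isBefore(SHARDING_START)) {
                // Before 2020/06: route to the logic table, e.g. BLOOD_PACK_DETAIL.
                return shardingValue.getLogicTableName();
            }
            // From 2020/06 on: route to the monthly table, e.g. BLOOD_PACK_DETAIL_202006.
            String target = shardingValue.getLogicTableName() + "_" + shardingValue.getValue();
            return availableTargetNames.contains(target) ? target : shardingValue.getLogicTableName();
        }
    }
    ```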





[GitHub] [shardingsphere] terrymanu commented on issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

Posted by GitBox <gi...@apache.org>.
terrymanu commented on issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776#issuecomment-634125283


   **For English only**; issues in other languages will not be accepted.





[GitHub] [shardingsphere] 15168326318 commented on issue #5776: Partition by time: if the time is 202006, I want to insert into tableA_202006; for query, update, and delete, I need to operate on both tableA_202006 and the tableA table. Since tableA is a history table holding the data from before Sharding-JDBC was introduced, and we have only used the sub-tables since 202006, how can we realize this?

Posted by GitBox <gi...@apache.org>.
15168326318 commented on issue #5776:
URL: https://github.com/apache/shardingsphere/issues/5776#issuecomment-633382942


   @cws1981 What you posted is the sharding algorithm implementation; what I am asking about is how to handle the historical data. I have already written the algorithm.
   

