You are viewing a plain text version of this content. The canonical link for it is here.
Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2020/05/20 13:48:54 UTC

[GitHub] [hbase] Apache9 opened a new pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Apache9 opened a new pull request #1746:
URL: https://github.com/apache/hbase/pull/1746


   …he location of meta table


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #1746: HBASE-24388 Store the locations of meta regions in master local store

Posted by GitBox <gi...@apache.org>.
Apache9 commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-633224524


   OK, good, all green. Any other concerns? @saintstack 
   
   Thanks.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 merged pull request #1746: HBASE-24388 Store the locations of meta regions in master local store

Posted by GitBox <gi...@apache.org>.
Apache9 merged pull request #1746:
URL: https://github.com/apache/hbase/pull/1746


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-632497694


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |  33m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 13s |  master passed  |
   | +0 :ok: |  refguide  |   4m 45s |  branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.  |
   | +1 :green_heart: |  spotbugs  |   4m  0s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 19s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m  8s |  hbase-server: The patch generated 1 new + 166 unchanged - 4 fixed = 167 total (was 170)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | +0 :ok: |  refguide  |   4m 36s |  patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.  |
   | +1 :green_heart: |  hadoopcheck  |  11m  4s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   4m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 48s |  The patch does not generate ASF License warnings.  |
   |  |   |  85m 23s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle refguide xml |
   | uname | Linux cd7fb531bd8f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 80b64ef4dc |
   | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-general-check/output/branch-site/book.html |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
   | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-general-check/output/patch-site/book.html |
   | Max. process+thread count | 94 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-procedure hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-632663931


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   5m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   6m 13s |  master passed  |
   | +1 :green_heart: |  compile  |   3m  3s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 24s |  branch has no errors when building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 31s |  hbase-client in master failed.  |
   | -0 :warning: |  javadoc  |   0m 19s |  hbase-common in master failed.  |
   | -0 :warning: |  javadoc  |   0m 20s |  hbase-procedure in master failed.  |
   | -0 :warning: |  javadoc  |   0m 46s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 58s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 58s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m  0s |  patch has no errors when building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 18s |  hbase-common in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 30s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 20s |  hbase-procedure in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 49s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 49s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 14s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  8s |  hbase-procedure in the patch passed.  |
   | -1 :x: |  unit  | 150m 51s |  hbase-server in the patch failed.  |
   |  |   | 205m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux f5cbe8b01d9c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 80b64ef4dc |
   | Default Java | 2020-01-14 |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-procedure.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-procedure.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/testReport/ |
   | Max. process+thread count | 3811 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-procedure hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 commented on a change in pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache9 commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r428401590



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -866,6 +877,50 @@ protected AssignmentManager createAssignmentManager(MasterServices master) {
     return new AssignmentManager(master);
   }
 
+  private void createRootTable() throws IOException, KeeperException {
+    RootTable rootTable = new RootTable(this, cleanerPool);
+    rootTable.initialize();
+    // try migrate data from zookeeper
+    try (RegionScanner scanner =
+      rootTable.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
+      List<Cell> cells = new ArrayList<>();
+      boolean moreRows = scanner.next(cells);
+      if (!cells.isEmpty() || moreRows) {
+        // notice that all replicas for a region are in the same row, so the migration can be
+        // done with in a one row put, which means if we have data in root table then we can make
+        // sure that the migration is done.
+        LOG.info("Root table already has data in it, skip migrating...");
+        this.rootTable = rootTable;
+        return;
+      }
+    }
+    // start migrating
+    byte[] row = MetaTableAccessor.getMetaKeyForRegion(RegionInfoBuilder.FIRST_META_REGIONINFO);

Review comment:
       Yes, just like what we have done for normal regions which are stored in meta table.
   
   I think we should change the class and method name, as francis suggested. Can be done in another issue.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 commented on a change in pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache9 commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r428401823



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
##########
@@ -552,4 +553,6 @@ default SplitWALManager getSplitWALManager(){
    * @return The state of the load balancer, or false if the load balancer isn't defined.
    */
   boolean isBalancerOn();
+
+  RootTable getRootTable();

Review comment:
       Maybe we could add it as a parameter to the constructor of AssignmentManager?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-633125465


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 48s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 39s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  6s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 12s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 15s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 14s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 229m 42s |  hbase-server in the patch passed.  |
   |  |   | 261m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux b6bf623fbb47 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c303f9d329 |
   | Default Java | 1.8.0_232 |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/testReport/ |
   | Max. process+thread count | 3665 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 commented on a change in pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache9 commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r428402534



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/root/RootTable.java
##########
@@ -0,0 +1,148 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.root;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+import org.apache.hadoop.hbase.master.cleaner.DirScanPool;
+import org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil;
+import org.apache.hadoop.hbase.region.LocalRegion;
+import org.apache.hadoop.hbase.region.LocalRegionParams;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Used to store the location of meta region.
+ */
+@InterfaceAudience.Private
+public class RootTable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(RootTable.class);
+
+  static final String MAX_WALS_KEY = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_MAX_WALS = 10;
+
+  static final String USE_HSYNC_KEY = "hbase.root.table.region.wal.hsync";
+
+  static final String RING_BUFFER_SLOT_COUNT = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_RING_BUFFER_SLOT_COUNT = 64;
+
+  static final String ROOT_TABLE_DIR = "RootTable";
+
+  static final String HFILECLEANER_PLUGINS = "hbase.root.table.region.hfilecleaner.plugins";
+
+  static final String FLUSH_SIZE_KEY = "hbase.root.table.region.flush.size";
+
+  static final long DEFAULT_FLUSH_SIZE = TableDescriptorBuilder.DEFAULT_MEMSTORE_FLUSH_SIZE;
+
+  static final String FLUSH_PER_CHANGES_KEY = "hbase.root.table.region.flush.per.changes";
+
+  private static final long DEFAULT_FLUSH_PER_CHANGES = 1_000_000;
+
+  static final String FLUSH_INTERVAL_MS_KEY = "hbase.root.table.region.flush.interval.ms";
+
+  // default to flush every 15 minutes, for safety
+  private static final long DEFAULT_FLUSH_INTERVAL_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final String COMPACT_MIN_KEY = "hbase.root.table.region.compact.min";
+
+  private static final int DEFAULT_COMPACT_MIN = 4;
+
+  static final String ROLL_PERIOD_MS_KEY = "hbase.root.table.region.walroll.period.ms";
+
+  private static final long DEFAULT_ROLL_PERIOD_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final TableName TABLE_NAME = TableName.valueOf("master:root");
+
+  private static final TableDescriptor TABLE_DESC = TableDescriptorBuilder.newBuilder(TABLE_NAME)
+    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(HConstants.CATALOG_FAMILY)
+      .setMaxVersions(HConstants.DEFAULT_HBASE_META_VERSIONS).setInMemory(true)
+      .setBlocksize(HConstants.DEFAULT_HBASE_META_BLOCK_SIZE).setBloomFilterType(BloomType.ROWCOL)
+      .setDataBlockEncoding(DataBlockEncoding.ROW_INDEX_V1).build())
+    .build();
+
+  private final Server server;
+
+  private final DirScanPool cleanerPool;
+
+  private LocalRegion region;
+
+  public RootTable(Server server, DirScanPool cleanerPool) {
+    this.server = server;
+    this.cleanerPool = cleanerPool;
+  }
+
+  public void initialize() throws IOException {
+    LOG.info("Initializing root table...");
+    LocalRegionParams params = new LocalRegionParams().server(server).regionDirName(ROOT_TABLE_DIR)
+      .tableDescriptor(TABLE_DESC);
+    Configuration conf = server.getConfiguration();
+    long flushSize = conf.getLong(FLUSH_SIZE_KEY, DEFAULT_FLUSH_SIZE);
+    long flushPerChanges = conf.getLong(FLUSH_PER_CHANGES_KEY, DEFAULT_FLUSH_PER_CHANGES);
+    long flushIntervalMs = conf.getLong(FLUSH_INTERVAL_MS_KEY, DEFAULT_FLUSH_INTERVAL_MS);
+    int compactMin = conf.getInt(COMPACT_MIN_KEY, DEFAULT_COMPACT_MIN);
+    params.flushSize(flushSize).flushPerChanges(flushPerChanges).flushIntervalMs(flushIntervalMs)
+      .compactMin(compactMin);
+    int maxWals = conf.getInt(MAX_WALS_KEY, DEFAULT_MAX_WALS);
+    params.maxWals(maxWals);
+    if (conf.get(USE_HSYNC_KEY) != null) {
+      params.useHsync(conf.getBoolean(USE_HSYNC_KEY, false));
+    }
+    params.ringBufferSlotCount(conf.getInt(RING_BUFFER_SLOT_COUNT, DEFAULT_RING_BUFFER_SLOT_COUNT));
+    long rollPeriodMs = conf.getLong(ROLL_PERIOD_MS_KEY, DEFAULT_ROLL_PERIOD_MS);
+    params.rollPeriodMs(rollPeriodMs)
+      .archivedWalSuffix(MasterProcedureUtil.ARCHIVED_PROC_WAL_SUFFIX)
+      .hfileCleanerPlugins(HFILECLEANER_PLUGINS).cleanerPool(cleanerPool);
+    region = LocalRegion.create(params);
+  }
+
+  public void update(Put put) throws IOException {
+    region.update(r -> r.put(put));
+  }
+
+  public void delete(Delete delete) throws IOException {
+    region.update(r -> r.delete(delete));
+  }
+
+  public RegionScanner getScanner(Scan scan) throws IOException {
+    return region.getScanner(scan);
+  }
+
+  public void close(boolean abort) {
+    LOG.info("Closing root table, isAbort={}", abort);
+    if (region != null) {
+      region.close(abort);
+    }
+  }
+}

Review comment:
       Yes. I've posted on jira, we can introduce a single region, maybe called 'master-local-store', to store all the master local data, such as the root table and procedures. If we think this is a better way, we should be hurry to finish the work before 2.3.0 is out, otherwise we need to deal with the migration from the procedure store region again...




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] saintstack commented on a change in pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
saintstack commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r428743081



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/root/RootTable.java
##########
@@ -0,0 +1,148 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.root;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+import org.apache.hadoop.hbase.master.cleaner.DirScanPool;
+import org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil;
+import org.apache.hadoop.hbase.region.LocalRegion;
+import org.apache.hadoop.hbase.region.LocalRegionParams;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Used to store the location of meta region.
+ */
+@InterfaceAudience.Private
+public class RootTable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(RootTable.class);
+
+  static final String MAX_WALS_KEY = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_MAX_WALS = 10;
+
+  static final String USE_HSYNC_KEY = "hbase.root.table.region.wal.hsync";
+
+  static final String RING_BUFFER_SLOT_COUNT = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_RING_BUFFER_SLOT_COUNT = 64;
+
+  static final String ROOT_TABLE_DIR = "RootTable";
+
+  static final String HFILECLEANER_PLUGINS = "hbase.root.table.region.hfilecleaner.plugins";
+
+  static final String FLUSH_SIZE_KEY = "hbase.root.table.region.flush.size";
+
+  static final long DEFAULT_FLUSH_SIZE = TableDescriptorBuilder.DEFAULT_MEMSTORE_FLUSH_SIZE;
+
+  static final String FLUSH_PER_CHANGES_KEY = "hbase.root.table.region.flush.per.changes";
+
+  private static final long DEFAULT_FLUSH_PER_CHANGES = 1_000_000;
+
+  static final String FLUSH_INTERVAL_MS_KEY = "hbase.root.table.region.flush.interval.ms";
+
+  // default to flush every 15 minutes, for safety
+  private static final long DEFAULT_FLUSH_INTERVAL_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final String COMPACT_MIN_KEY = "hbase.root.table.region.compact.min";
+
+  private static final int DEFAULT_COMPACT_MIN = 4;
+
+  static final String ROLL_PERIOD_MS_KEY = "hbase.root.table.region.walroll.period.ms";
+
+  private static final long DEFAULT_ROLL_PERIOD_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final TableName TABLE_NAME = TableName.valueOf("master:root");
+
+  private static final TableDescriptor TABLE_DESC = TableDescriptorBuilder.newBuilder(TABLE_NAME)
+    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(HConstants.CATALOG_FAMILY)
+      .setMaxVersions(HConstants.DEFAULT_HBASE_META_VERSIONS).setInMemory(true)
+      .setBlocksize(HConstants.DEFAULT_HBASE_META_BLOCK_SIZE).setBloomFilterType(BloomType.ROWCOL)
+      .setDataBlockEncoding(DataBlockEncoding.ROW_INDEX_V1).build())
+    .build();
+
+  private final Server server;
+
+  private final DirScanPool cleanerPool;
+
+  private LocalRegion region;
+
+  public RootTable(Server server, DirScanPool cleanerPool) {
+    this.server = server;
+    this.cleanerPool = cleanerPool;
+  }
+
+  public void initialize() throws IOException {
+    LOG.info("Initializing root table...");
+    LocalRegionParams params = new LocalRegionParams().server(server).regionDirName(ROOT_TABLE_DIR)

Review comment:
       Yeah, would be good if a general location for master local store.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] infraio commented on a change in pull request #1746: HBASE-24388 Store the locations of meta regions in master local store

Posted by GitBox <gi...@apache.org>.
infraio commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r430349196



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
##########
@@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException {
     // Start the Assignment Thread
     startAssignmentThread();
 
-    // load meta region state
-    ZKWatcher zkw = master.getZooKeeper();
-    // it could be null in some tests
-    if (zkw != null) {
-      RegionState regionState = MetaTableLocator.getMetaRegionState(zkw);
-      RegionStateNode regionNode =
-        regionStates.getOrCreateRegionStateNode(RegionInfoBuilder.FIRST_META_REGIONINFO);
-      regionNode.lock();
-      try {
-        regionNode.setRegionLocation(regionState.getServerName());
-        regionNode.setState(regionState.getState());
-        if (regionNode.getProcedure() != null) {
-          regionNode.getProcedure().stateLoaded(this, regionNode);
+    // load meta region states.
+    // notice that, here we will load all replicas, and in MasterMetaBootstrap we may assign new
+    // replicas, or remove excess replicas.
+    try (RegionScanner scanner =
+      localStore.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
+      List<Cell> cells = new ArrayList<>();
+      boolean moreRows;
+      do {
+        moreRows = scanner.next(cells);
+        if (cells.isEmpty()) {
+          continue;
         }
-        setMetaAssigned(regionState.getRegion(), regionState.getState() == State.OPEN);
-      } finally {
-        regionNode.unlock();
+        Result result = Result.create(cells);
+        cells.clear();
+        RegionStateStore
+          .visitMetaEntry((r, regionInfo, state, regionLocation, lastHost, openSeqNum) -> {
+            RegionStateNode regionNode = regionStates.getOrCreateRegionStateNode(regionInfo);
+            regionNode.lock();
+            try {
+              regionNode.setState(state);
+              regionNode.setLastHost(lastHost);

Review comment:
       Seems the old code didn't setLastHost and setOpenSeqNum? Why add this?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterMetaBootstrap.java
##########
@@ -43,73 +49,103 @@
 
   private final HMaster master;
 
-  public MasterMetaBootstrap(HMaster master) {
+  private final LocalStore localStore;
+
+  public MasterMetaBootstrap(HMaster master, LocalStore localStore) {
     this.master = master;
+    this.localStore = localStore;
   }
 
   /**
    * For assigning hbase:meta replicas only.
-   * TODO: The way this assign runs, nothing but chance to stop all replicas showing up on same
-   * server as the hbase:meta region.
    */
-  void assignMetaReplicas()
-      throws IOException, InterruptedException, KeeperException {
+  void assignMetaReplicas() throws IOException, InterruptedException, KeeperException {
     int numReplicas = master.getConfiguration().getInt(HConstants.META_REPLICAS_NUM,
-           HConstants.DEFAULT_META_REPLICA_NUM);
-    if (numReplicas <= 1) {
-      // No replicaas to assign. Return.
-      return;
-    }
-    final AssignmentManager assignmentManager = master.getAssignmentManager();
-    if (!assignmentManager.isMetaLoaded()) {
-      throw new IllegalStateException("hbase:meta must be initialized first before we can " +
-          "assign out its replicas");
-    }
-    ServerName metaServername = MetaTableLocator.getMetaRegionLocation(this.master.getZooKeeper());
-    for (int i = 1; i < numReplicas; i++) {
-      // Get current meta state for replica from zk.
-      RegionState metaState = MetaTableLocator.getMetaRegionState(master.getZooKeeper(), i);
-      RegionInfo hri = RegionReplicaUtil.getRegionInfoForReplica(
-          RegionInfoBuilder.FIRST_META_REGIONINFO, i);
-      LOG.debug(hri.getRegionNameAsString() + " replica region state from zookeeper=" + metaState);
-      if (metaServername.equals(metaState.getServerName())) {
-        metaState = null;
-        LOG.info(hri.getRegionNameAsString() +
-          " old location is same as current hbase:meta location; setting location as null...");
+      HConstants.DEFAULT_META_REPLICA_NUM);
+    // only try to assign meta replicas when there are more than 1 replicas
+    if (numReplicas > 1) {
+      final AssignmentManager am = master.getAssignmentManager();
+      if (!am.isMetaLoaded()) {
+        throw new IllegalStateException(
+          "hbase:meta must be initialized first before we can " + "assign out its replicas");
       }
-      // These assigns run inline. All is blocked till they complete. Only interrupt is shutting
-      // down hosting server which calls AM#stop.
-      if (metaState != null && metaState.getServerName() != null) {
-        // Try to retain old assignment.
-        assignmentManager.assign(hri, metaState.getServerName());
-      } else {
-        assignmentManager.assign(hri);
+      RegionStates regionStates = am.getRegionStates();
+      for (RegionInfo regionInfo : regionStates.getRegionsOfTable(TableName.META_TABLE_NAME)) {
+        if (!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+          continue;
+        }
+        RegionState regionState = regionStates.getRegionState(regionInfo);
+        Set<ServerName> metaServerNames = new HashSet<ServerName>();
+        if (regionState.getServerName() != null) {
+          metaServerNames.add(regionState.getServerName());
+        }
+        for (int i = 1; i < numReplicas; i++) {
+          RegionInfo secondaryRegionInfo = RegionReplicaUtil.getRegionInfoForReplica(regionInfo, i);
+          RegionState secondaryRegionState = regionStates.getRegionState(secondaryRegionInfo);
+          ServerName sn = null;
+          if (secondaryRegionState != null) {
+            sn = secondaryRegionState.getServerName();
+            if (sn != null && !metaServerNames.add(sn)) {
+              LOG.info("{} old location {} is same with other hbase:meta replica location;" +
+                " setting location as null...", secondaryRegionInfo.getRegionNameAsString(), sn);

Review comment:
       Donot need unassign first?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionStateStore.java
##########
@@ -216,12 +200,32 @@ private void updateUserRegionLocation(RegionInfo regionInfo, State state,
         .build());
     LOG.info(info.toString());
     updateRegionLocation(regionInfo, state, put);
+    if (regionInfo.isMetaRegion() && regionInfo.isFirst()) {
+      // mirror the meta location to

Review comment:
       mirror the meta location to zookeeper?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterMetaBootstrap.java
##########
@@ -43,73 +49,103 @@
 
   private final HMaster master;
 
-  public MasterMetaBootstrap(HMaster master) {
+  private final LocalStore localStore;
+
+  public MasterMetaBootstrap(HMaster master, LocalStore localStore) {
     this.master = master;
+    this.localStore = localStore;
   }
 
   /**
    * For assigning hbase:meta replicas only.
-   * TODO: The way this assign runs, nothing but chance to stop all replicas showing up on same
-   * server as the hbase:meta region.
    */
-  void assignMetaReplicas()
-      throws IOException, InterruptedException, KeeperException {
+  void assignMetaReplicas() throws IOException, InterruptedException, KeeperException {
     int numReplicas = master.getConfiguration().getInt(HConstants.META_REPLICAS_NUM,
-           HConstants.DEFAULT_META_REPLICA_NUM);
-    if (numReplicas <= 1) {
-      // No replicaas to assign. Return.
-      return;
-    }
-    final AssignmentManager assignmentManager = master.getAssignmentManager();
-    if (!assignmentManager.isMetaLoaded()) {
-      throw new IllegalStateException("hbase:meta must be initialized first before we can " +
-          "assign out its replicas");
-    }
-    ServerName metaServername = MetaTableLocator.getMetaRegionLocation(this.master.getZooKeeper());
-    for (int i = 1; i < numReplicas; i++) {
-      // Get current meta state for replica from zk.
-      RegionState metaState = MetaTableLocator.getMetaRegionState(master.getZooKeeper(), i);
-      RegionInfo hri = RegionReplicaUtil.getRegionInfoForReplica(
-          RegionInfoBuilder.FIRST_META_REGIONINFO, i);
-      LOG.debug(hri.getRegionNameAsString() + " replica region state from zookeeper=" + metaState);
-      if (metaServername.equals(metaState.getServerName())) {
-        metaState = null;
-        LOG.info(hri.getRegionNameAsString() +
-          " old location is same as current hbase:meta location; setting location as null...");
+      HConstants.DEFAULT_META_REPLICA_NUM);
+    // only try to assign meta replicas when there are more than 1 replicas
+    if (numReplicas > 1) {
+      final AssignmentManager am = master.getAssignmentManager();
+      if (!am.isMetaLoaded()) {
+        throw new IllegalStateException(
+          "hbase:meta must be initialized first before we can " + "assign out its replicas");
       }
-      // These assigns run inline. All is blocked till they complete. Only interrupt is shutting
-      // down hosting server which calls AM#stop.
-      if (metaState != null && metaState.getServerName() != null) {
-        // Try to retain old assignment.
-        assignmentManager.assign(hri, metaState.getServerName());
-      } else {
-        assignmentManager.assign(hri);
+      RegionStates regionStates = am.getRegionStates();
+      for (RegionInfo regionInfo : regionStates.getRegionsOfTable(TableName.META_TABLE_NAME)) {
+        if (!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+          continue;
+        }
+        RegionState regionState = regionStates.getRegionState(regionInfo);
+        Set<ServerName> metaServerNames = new HashSet<ServerName>();
+        if (regionState.getServerName() != null) {
+          metaServerNames.add(regionState.getServerName());
+        }
+        for (int i = 1; i < numReplicas; i++) {
+          RegionInfo secondaryRegionInfo = RegionReplicaUtil.getRegionInfoForReplica(regionInfo, i);
+          RegionState secondaryRegionState = regionStates.getRegionState(secondaryRegionInfo);
+          ServerName sn = null;
+          if (secondaryRegionState != null) {
+            sn = secondaryRegionState.getServerName();
+            if (sn != null && !metaServerNames.add(sn)) {
+              LOG.info("{} old location {} is same with other hbase:meta replica location;" +
+                " setting location as null...", secondaryRegionInfo.getRegionNameAsString(), sn);
+              sn = null;
+            }
+          }
+          // These assigns run inline. All is blocked till they complete. Only interrupt is shutting
+          // down hosting server which calls AM#stop.
+          if (sn != null) {
+            am.assign(secondaryRegionInfo, sn);
+          } else {
+            am.assign(secondaryRegionInfo);
+          }
+        }
       }
     }
+    // always try to remomve excess meta replicas
     unassignExcessMetaReplica(numReplicas);
   }
 
   private void unassignExcessMetaReplica(int numMetaReplicasConfigured) {
-    final ZKWatcher zooKeeper = master.getZooKeeper();
-    // unassign the unneeded replicas (for e.g., if the previous master was configured
-    // with a replication of 3 and now it is 2, we need to unassign the 1 unneeded replica)
-    try {
-      List<String> metaReplicaZnodes = zooKeeper.getMetaReplicaNodes();
-      for (String metaReplicaZnode : metaReplicaZnodes) {
-        int replicaId = zooKeeper.getZNodePaths().getMetaReplicaIdFromZnode(metaReplicaZnode);
-        if (replicaId >= numMetaReplicasConfigured) {
-          RegionState r = MetaTableLocator.getMetaRegionState(zooKeeper, replicaId);
-          LOG.info("Closing excess replica of meta region " + r.getRegion());
-          // send a close and wait for a max of 30 seconds
-          ServerManager.closeRegionSilentlyAndWait(master.getAsyncClusterConnection(),
-              r.getServerName(), r.getRegion(), 30000);
-          ZKUtil.deleteNode(zooKeeper, zooKeeper.getZNodePaths().getZNodeForReplica(replicaId));
+    ZKWatcher zooKeeper = master.getZooKeeper();
+    AssignmentManager am = master.getAssignmentManager();
+    RegionStates regionStates = am.getRegionStates();
+    Map<RegionInfo, Integer> region2MaxReplicaId = new HashMap<>();
+    for (RegionInfo regionInfo : regionStates.getRegionsOfTable(TableName.META_TABLE_NAME)) {
+      RegionInfo primaryRegionInfo = RegionReplicaUtil.getRegionInfoForDefaultReplica(regionInfo);
+      region2MaxReplicaId.compute(primaryRegionInfo,
+        (k, v) -> v == null ? regionInfo.getReplicaId() : Math.max(v, regionInfo.getReplicaId()));
+      if (regionInfo.getReplicaId() < numMetaReplicasConfigured) {
+        continue;
+      }
+      RegionState regionState = regionStates.getRegionState(regionInfo);
+      try {
+        ServerManager.closeRegionSilentlyAndWait(master.getAsyncClusterConnection(),
+          regionState.getServerName(), regionInfo, 30000);
+        if (regionInfo.isFirst()) {
+          // for compatibility, also try to remove the replicas on zk.
+          ZKUtil.deleteNode(zooKeeper,
+            zooKeeper.getZNodePaths().getZNodeForReplica(regionInfo.getReplicaId()));
         }
+      } catch (Exception e) {
+        // ignore the exception since we don't want the master to be wedged due to potential
+        // issues in the cleanup of the extra regions. We can do that cleanup via hbck or manually
+        LOG.warn("Ignoring exception " + e);
       }
-    } catch (Exception ex) {
-      // ignore the exception since we don't want the master to be wedged due to potential
-      // issues in the cleanup of the extra regions. We can do that cleanup via hbck or manually
-      LOG.warn("Ignoring exception " + ex);
+      regionStates.deleteRegion(regionInfo);
     }
+    region2MaxReplicaId.forEach((regionInfo, maxReplicaId) -> {
+      if (maxReplicaId >= numMetaReplicasConfigured) {
+        byte[] metaRow = MetaTableAccessor.getMetaKeyForRegion(regionInfo);
+        Delete delete = MetaTableAccessor.removeRegionReplica(metaRow, numMetaReplicasConfigured,

Review comment:
       Why don't delete in previous for-loop? Same with the delete for znode.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
##########
@@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException {
     // Start the Assignment Thread
     startAssignmentThread();
 
-    // load meta region state
-    ZKWatcher zkw = master.getZooKeeper();
-    // it could be null in some tests
-    if (zkw != null) {
-      RegionState regionState = MetaTableLocator.getMetaRegionState(zkw);
-      RegionStateNode regionNode =
-        regionStates.getOrCreateRegionStateNode(RegionInfoBuilder.FIRST_META_REGIONINFO);
-      regionNode.lock();
-      try {
-        regionNode.setRegionLocation(regionState.getServerName());
-        regionNode.setState(regionState.getState());
-        if (regionNode.getProcedure() != null) {
-          regionNode.getProcedure().stateLoaded(this, regionNode);
+    // load meta region states.
+    // notice that, here we will load all replicas, and in MasterMetaBootstrap we may assign new
+    // replicas, or remove excess replicas.
+    try (RegionScanner scanner =
+      localStore.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
+      List<Cell> cells = new ArrayList<>();
+      boolean moreRows;
+      do {
+        moreRows = scanner.next(cells);
+        if (cells.isEmpty()) {
+          continue;
         }
-        setMetaAssigned(regionState.getRegion(), regionState.getState() == State.OPEN);
-      } finally {
-        regionNode.unlock();
+        Result result = Result.create(cells);
+        cells.clear();
+        RegionStateStore
+          .visitMetaEntry((r, regionInfo, state, regionLocation, lastHost, openSeqNum) -> {
+            RegionStateNode regionNode = regionStates.getOrCreateRegionStateNode(regionInfo);
+            regionNode.lock();
+            try {
+              regionNode.setState(state);
+              regionNode.setLastHost(lastHost);
+              regionNode.setRegionLocation(regionLocation);
+              regionNode.setOpenSeqNum(openSeqNum);
+              if (regionNode.getProcedure() != null) {
+                regionNode.getProcedure().stateLoaded(this, regionNode);
+              }
+              if (RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+                setMetaAssigned(regionInfo, state == State.OPEN);
+              }
+            } finally {
+              regionNode.unlock();
+            }
+            if (regionInfo.isFirst()) {
+              // for compatibility, mirror the meta region state to zookeeper

Review comment:
       This compatibility is for which case?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterMetaBootstrap.java
##########
@@ -43,73 +49,103 @@
 
   private final HMaster master;
 
-  public MasterMetaBootstrap(HMaster master) {
+  private final LocalStore localStore;
+
+  public MasterMetaBootstrap(HMaster master, LocalStore localStore) {
     this.master = master;
+    this.localStore = localStore;
   }
 
   /**
    * For assigning hbase:meta replicas only.
-   * TODO: The way this assign runs, nothing but chance to stop all replicas showing up on same
-   * server as the hbase:meta region.
    */
-  void assignMetaReplicas()
-      throws IOException, InterruptedException, KeeperException {
+  void assignMetaReplicas() throws IOException, InterruptedException, KeeperException {
     int numReplicas = master.getConfiguration().getInt(HConstants.META_REPLICAS_NUM,
-           HConstants.DEFAULT_META_REPLICA_NUM);
-    if (numReplicas <= 1) {
-      // No replicaas to assign. Return.
-      return;
-    }
-    final AssignmentManager assignmentManager = master.getAssignmentManager();
-    if (!assignmentManager.isMetaLoaded()) {
-      throw new IllegalStateException("hbase:meta must be initialized first before we can " +
-          "assign out its replicas");
-    }
-    ServerName metaServername = MetaTableLocator.getMetaRegionLocation(this.master.getZooKeeper());
-    for (int i = 1; i < numReplicas; i++) {
-      // Get current meta state for replica from zk.
-      RegionState metaState = MetaTableLocator.getMetaRegionState(master.getZooKeeper(), i);
-      RegionInfo hri = RegionReplicaUtil.getRegionInfoForReplica(
-          RegionInfoBuilder.FIRST_META_REGIONINFO, i);
-      LOG.debug(hri.getRegionNameAsString() + " replica region state from zookeeper=" + metaState);
-      if (metaServername.equals(metaState.getServerName())) {
-        metaState = null;
-        LOG.info(hri.getRegionNameAsString() +
-          " old location is same as current hbase:meta location; setting location as null...");
+      HConstants.DEFAULT_META_REPLICA_NUM);
+    // only try to assign meta replicas when there are more than 1 replicas
+    if (numReplicas > 1) {
+      final AssignmentManager am = master.getAssignmentManager();
+      if (!am.isMetaLoaded()) {
+        throw new IllegalStateException(
+          "hbase:meta must be initialized first before we can " + "assign out its replicas");
       }
-      // These assigns run inline. All is blocked till they complete. Only interrupt is shutting
-      // down hosting server which calls AM#stop.
-      if (metaState != null && metaState.getServerName() != null) {
-        // Try to retain old assignment.
-        assignmentManager.assign(hri, metaState.getServerName());
-      } else {
-        assignmentManager.assign(hri);
+      RegionStates regionStates = am.getRegionStates();
+      for (RegionInfo regionInfo : regionStates.getRegionsOfTable(TableName.META_TABLE_NAME)) {
+        if (!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+          continue;
+        }
+        RegionState regionState = regionStates.getRegionState(regionInfo);
+        Set<ServerName> metaServerNames = new HashSet<ServerName>();
+        if (regionState.getServerName() != null) {
+          metaServerNames.add(regionState.getServerName());
+        }
+        for (int i = 1; i < numReplicas; i++) {
+          RegionInfo secondaryRegionInfo = RegionReplicaUtil.getRegionInfoForReplica(regionInfo, i);
+          RegionState secondaryRegionState = regionStates.getRegionState(secondaryRegionInfo);
+          ServerName sn = null;
+          if (secondaryRegionState != null) {
+            sn = secondaryRegionState.getServerName();
+            if (sn != null && !metaServerNames.add(sn)) {
+              LOG.info("{} old location {} is same with other hbase:meta replica location;" +
+                " setting location as null...", secondaryRegionInfo.getRegionNameAsString(), sn);

Review comment:
       Yes. Maybe another issuer.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
##########
@@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException {
     // Start the Assignment Thread
     startAssignmentThread();
 
-    // load meta region state
-    ZKWatcher zkw = master.getZooKeeper();
-    // it could be null in some tests
-    if (zkw != null) {
-      RegionState regionState = MetaTableLocator.getMetaRegionState(zkw);
-      RegionStateNode regionNode =
-        regionStates.getOrCreateRegionStateNode(RegionInfoBuilder.FIRST_META_REGIONINFO);
-      regionNode.lock();
-      try {
-        regionNode.setRegionLocation(regionState.getServerName());
-        regionNode.setState(regionState.getState());
-        if (regionNode.getProcedure() != null) {
-          regionNode.getProcedure().stateLoaded(this, regionNode);
+    // load meta region states.
+    // notice that, here we will load all replicas, and in MasterMetaBootstrap we may assign new
+    // replicas, or remove excess replicas.
+    try (RegionScanner scanner =
+      localStore.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
+      List<Cell> cells = new ArrayList<>();
+      boolean moreRows;
+      do {
+        moreRows = scanner.next(cells);
+        if (cells.isEmpty()) {
+          continue;
         }
-        setMetaAssigned(regionState.getRegion(), regionState.getState() == State.OPEN);
-      } finally {
-        regionNode.unlock();
+        Result result = Result.create(cells);
+        cells.clear();
+        RegionStateStore
+          .visitMetaEntry((r, regionInfo, state, regionLocation, lastHost, openSeqNum) -> {
+            RegionStateNode regionNode = regionStates.getOrCreateRegionStateNode(regionInfo);
+            regionNode.lock();
+            try {
+              regionNode.setState(state);
+              regionNode.setLastHost(lastHost);
+              regionNode.setRegionLocation(regionLocation);
+              regionNode.setOpenSeqNum(openSeqNum);
+              if (regionNode.getProcedure() != null) {
+                regionNode.getProcedure().stateLoaded(this, regionNode);
+              }
+              if (RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+                setMetaAssigned(regionInfo, state == State.OPEN);
+              }
+            } finally {
+              regionNode.unlock();
+            }
+            if (regionInfo.isFirst()) {
+              // for compatibility, mirror the meta region state to zookeeper

Review comment:
       Got it.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-632677434


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 25s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 59s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  master passed  |
   | +1 :green_heart: |  compile  |   2m  3s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  7s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  5s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  4s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 27s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 12s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  1s |  hbase-procedure in the patch passed.  |
   | -1 :x: |  unit  | 202m  2s |  hbase-server in the patch failed.  |
   |  |   | 242m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c2d4f27dfe2a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 80b64ef4dc |
   | Default Java | 1.8.0_232 |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/testReport/ |
   | Max. process+thread count | 3751 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-procedure hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] saintstack commented on a change in pull request #1746: HBASE-24388 Store the locations of meta regions in master local store

Posted by GitBox <gi...@apache.org>.
saintstack commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r430831105



##########
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
##########
@@ -1403,6 +1403,21 @@ private static void deleteFromMetaTable(final Connection connection, final List<
     }
   }
 
+  public static Delete removeRegionReplica(byte[] metaRow, int replicaIndexToDeleteFrom,
+    int numReplicasToRemove) {
+    int absoluteIndex = replicaIndexToDeleteFrom + numReplicasToRemove;
+    long now = EnvironmentEdgeManager.currentTime();
+    Delete deleteReplicaLocations = new Delete(metaRow);
+    for (int i = replicaIndexToDeleteFrom; i < absoluteIndex; i++) {
+      deleteReplicaLocations.addColumns(getCatalogFamily(), getServerColumn(i), now);
+      deleteReplicaLocations.addColumns(getCatalogFamily(), getSeqNumColumn(i), now);
+      deleteReplicaLocations.addColumns(getCatalogFamily(), getStartCodeColumn(i), now);
+      deleteReplicaLocations.addColumns(getCatalogFamily(), getServerNameColumn(i), now);
+      deleteReplicaLocations.addColumns(getCatalogFamily(), getRegionStateColumn(i), now);

Review comment:
       Man. Region Replicas are messy in hbase:meta. Not your fault. For later.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -866,8 +874,50 @@ protected void initializeZKBasedSystemTrackers()
 
   // Will be overriden in test to inject customized AssignmentManager
   @VisibleForTesting
-  protected AssignmentManager createAssignmentManager(MasterServices master) {
-    return new AssignmentManager(master);
+  protected AssignmentManager createAssignmentManager(MasterServices master,
+    LocalStore localStore) {
+    return new AssignmentManager(master, localStore);

Review comment:
       Good.
   
   Was going to suggest AM ask the Master for its localStore but that probably TMI for the AM to know of. This is better separation of concerns.
   
   Good.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/store/LocalStore.java
##########
@@ -87,6 +90,10 @@
   public static final byte[] PROC_FAMILY = Bytes.toBytes("proc");
 
   private static final TableDescriptor TABLE_DESC = TableDescriptorBuilder.newBuilder(TABLE_NAME)
+    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(HConstants.CATALOG_FAMILY)
+      .setMaxVersions(HConstants.DEFAULT_HBASE_META_VERSIONS).setInMemory(true)
+      .setBlocksize(HConstants.DEFAULT_HBASE_META_BLOCK_SIZE).setBloomFilterType(BloomType.ROWCOL)
+      .setDataBlockEncoding(DataBlockEncoding.ROW_INDEX_V1).build())

Review comment:
       Yeah, the ROW_INDEX_V1 is good change.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
##########
@@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException {
     // Start the Assignment Thread
     startAssignmentThread();
 
-    // load meta region state
-    ZKWatcher zkw = master.getZooKeeper();

Review comment:
       Radical. No ZKW in AM. Good.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] saintstack commented on a change in pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
saintstack commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r428743594



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
##########
@@ -552,4 +553,6 @@ default SplitWALManager getSplitWALManager(){
    * @return The state of the load balancer, or false if the load balancer isn't defined.
    */
   boolean isBalancerOn();
+
+  RootTable getRootTable();

Review comment:
       If it the only consumer... yeah, maybe. No hurry.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-633123758


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 47s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 52s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 37s |  branch has no errors when building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 30s |  hbase-client in master failed.  |
   | -0 :warning: |  javadoc  |   0m 47s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 47s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 47s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 45s |  patch has no errors when building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 34s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 48s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 39s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 215m 57s |  hbase-server in the patch passed.  |
   |  |   | 251m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c240576b4a3e 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c303f9d329 |
   | Default Java | 2020-01-14 |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/testReport/ |
   | Max. process+thread count | 3343 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-633021510


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  5s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 45s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 45s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 32s |  hbase-client: The patch generated 1 new + 46 unchanged - 1 fixed = 47 total (was 47)  |
   | -0 :warning: |  checkstyle  |   1m 17s |  hbase-server: The patch generated 1 new + 153 unchanged - 0 fixed = 154 total (was 153)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 39s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 23s |  The patch does not generate ASF License warnings.  |
   |  |   |  41m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
   | uname | Linux df8bf1a4231d 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c303f9d329 |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-632604320


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 49s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 21s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 13s |  master passed  |
   | +0 :ok: |  refguide  |   4m 47s |  branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.  |
   | +1 :green_heart: |  spotbugs  |   4m  5s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 30s |  hbase-client: The patch generated 1 new + 46 unchanged - 1 fixed = 47 total (was 47)  |
   | -0 :warning: |  checkstyle  |   1m  8s |  hbase-server: The patch generated 1 new + 165 unchanged - 4 fixed = 166 total (was 169)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML file.  |
   | +0 :ok: |  refguide  |   4m 53s |  patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 10s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   4m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 49s |  The patch does not generate ASF License warnings.  |
   |  |   |  55m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle refguide xml |
   | uname | Linux dac3991d5f30 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 80b64ef4dc |
   | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-general-check/output/branch-site/book.html |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
   | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/artifact/yetus-general-check/output/patch-site/book.html |
   | Max. process+thread count | 94 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-procedure hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-631590061


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 42s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 32s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  3s |  hbase-client in the patch passed.  |
   | -1 :x: |  unit  | 136m 20s |  hbase-server in the patch failed.  |
   |  |   | 164m  9s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 0ff072dbba63 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 86a2692dc4 |
   | Default Java | 1.8.0_232 |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/1/testReport/ |
   | Max. process+thread count | 5081 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] saintstack commented on a change in pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
saintstack commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r428740739



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/region/LocalRegion.java
##########
@@ -0,0 +1,328 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.region;
+
+import static org.apache.hadoop.hbase.HConstants.HREGION_LOGDIR_NAME;
+
+import java.io.IOException;
+import java.util.Collections;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.HRegion.FlushResult;
+import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hadoop.hbase.util.RecoverLeaseFSUtils;
+import org.apache.hadoop.hbase.wal.AbstractFSWALProvider;
+import org.apache.hadoop.hbase.wal.WAL;
+import org.apache.hadoop.hbase.wal.WALFactory;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hbase.thirdparty.com.google.common.math.IntMath;
+
+/**
+ * A region that stores data in a separated directory on WAL file system.

Review comment:
       Pardon me. 'snowflakes' are purportedly unique; no two are alike. 'snowflaking' is making something 'special', a one-off. I was asking if we need to do the special trick where this local region runs on the WAL FS exclusively making it different to how other Regions do their storage spread across FS's... one for data and another for WAL.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-632566475


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  0s |  master passed  |
   | +1 :green_heart: |  compile  |   2m  2s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  6s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  3s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  3s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 59s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 29s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 11s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  1s |  hbase-procedure in the patch passed.  |
   | -1 :x: |  unit  | 196m 37s |  hbase-server in the patch failed.  |
   |  |   | 238m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4cba8922db8a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 80b64ef4dc |
   | Default Java | 1.8.0_232 |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/testReport/ |
   | Max. process+thread count | 3791 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-procedure hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-632532187


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 20s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 11s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 18s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 43s |  branch has no errors when building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 28s |  hbase-client in master failed.  |
   | -0 :warning: |  javadoc  |   0m 17s |  hbase-common in master failed.  |
   | -0 :warning: |  javadoc  |   0m 18s |  hbase-procedure in master failed.  |
   | -0 :warning: |  javadoc  |   0m 39s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 16s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 16s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 43s |  patch has no errors when building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 17s |  hbase-common in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 26s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 17s |  hbase-procedure in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 40s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 35s |  hbase-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m  6s |  hbase-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 36s |  hbase-procedure in the patch passed.  |
   | -1 :x: |  unit  | 123m 28s |  hbase-server in the patch failed.  |
   |  |   | 166m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 06eea27ccb2e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 80b64ef4dc |
   | Default Java | 2020-01-14 |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-procedure.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-procedure.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/testReport/ |
   | Max. process+thread count | 4390 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-client hbase-procedure hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-633082611


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 41s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   3m  7s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 29s |  hbase-client: The patch generated 1 new + 46 unchanged - 1 fixed = 47 total (was 47)  |
   | -0 :warning: |  checkstyle  |   1m 12s |  hbase-server: The patch generated 1 new + 154 unchanged - 0 fixed = 155 total (was 154)  |
   | -0 :warning: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  hadoopcheck  |  12m 12s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 22s |  The patch does not generate ASF License warnings.  |
   |  |   |  39m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
   | uname | Linux 32f6b7a69965 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c303f9d329 |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
   | whitespace | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/artifact/yetus-general-check/output/whitespace-eol.txt |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/5/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] saintstack commented on a change in pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
saintstack commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r428196462



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -866,6 +877,50 @@ protected AssignmentManager createAssignmentManager(MasterServices master) {
     return new AssignmentManager(master);
   }
 
+  private void createRootTable() throws IOException, KeeperException {
+    RootTable rootTable = new RootTable(this, cleanerPool);
+    rootTable.initialize();
+    // try migrate data from zookeeper
+    try (RegionScanner scanner =
+      rootTable.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
+      List<Cell> cells = new ArrayList<>();
+      boolean moreRows = scanner.next(cells);
+      if (!cells.isEmpty() || moreRows) {
+        // notice that all replicas for a region are in the same row, so the migration can be
+        // done with in a one row put, which means if we have data in root table then we can make
+        // sure that the migration is done.

Review comment:
       Good

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -866,6 +877,50 @@ protected AssignmentManager createAssignmentManager(MasterServices master) {
     return new AssignmentManager(master);
   }
 
+  private void createRootTable() throws IOException, KeeperException {

Review comment:
       Would be cool if this was not inline in HMaster class. It too big as it is.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -866,6 +877,50 @@ protected AssignmentManager createAssignmentManager(MasterServices master) {
     return new AssignmentManager(master);
   }
 
+  private void createRootTable() throws IOException, KeeperException {
+    RootTable rootTable = new RootTable(this, cleanerPool);
+    rootTable.initialize();
+    // try migrate data from zookeeper
+    try (RegionScanner scanner =
+      rootTable.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
+      List<Cell> cells = new ArrayList<>();
+      boolean moreRows = scanner.next(cells);
+      if (!cells.isEmpty() || moreRows) {
+        // notice that all replicas for a region are in the same row, so the migration can be
+        // done with in a one row put, which means if we have data in root table then we can make
+        // sure that the migration is done.
+        LOG.info("Root table already has data in it, skip migrating...");
+        this.rootTable = rootTable;
+        return;
+      }
+    }
+    // start migrating
+    byte[] row = MetaTableAccessor.getMetaKeyForRegion(RegionInfoBuilder.FIRST_META_REGIONINFO);
+    Put put = new Put(row);
+    List<String> metaReplicaNodes = zooKeeper.getMetaReplicaNodes();
+    StringBuilder info = new StringBuilder("Migrating meta location:");
+    for (String metaReplicaNode : metaReplicaNodes) {
+      int replicaId = zooKeeper.getZNodePaths().getMetaReplicaIdFromZnode(metaReplicaNode);
+      RegionState state = MetaTableLocator.getMetaRegionState(zooKeeper, replicaId);
+      info.append(" ").append(state);
+      put.setTimestamp(state.getStamp());
+      MetaTableAccessor.addRegionInfo(put, state.getRegion());
+      if (state.getServerName() != null) {
+        MetaTableAccessor.addLocation(put, state.getServerName(), HConstants.NO_SEQNUM, replicaId);
+      }
+      put.add(CellBuilderFactory.create(CellBuilderType.SHALLOW_COPY).setRow(put.getRow())
+        .setFamily(HConstants.CATALOG_FAMILY)
+        .setQualifier(RegionStateStore.getStateColumn(replicaId)).setTimestamp(put.getTimestamp())
+        .setType(Cell.Type.Put).setValue(Bytes.toBytes(state.getState().name())).build());
+    }
+    if (!put.isEmpty()) {
+      LOG.info(info.toString());
+    } else {
+      LOG.info("No meta location avaiable on zookeeper, skip migrating...");
+    }
+    this.rootTable = rootTable;
+  }
+

Review comment:
       good

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/region/LocalRegion.java
##########
@@ -0,0 +1,328 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.region;
+
+import static org.apache.hadoop.hbase.HConstants.HREGION_LOGDIR_NAME;
+
+import java.io.IOException;
+import java.util.Collections;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.HRegion.FlushResult;
+import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hadoop.hbase.util.RecoverLeaseFSUtils;
+import org.apache.hadoop.hbase.wal.AbstractFSWALProvider;
+import org.apache.hadoop.hbase.wal.WAL;
+import org.apache.hadoop.hbase.wal.WALFactory;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hbase.thirdparty.com.google.common.math.IntMath;
+
+/**
+ * A region that stores data in a separated directory on WAL file system.

Review comment:
       We still need to do this snowflaking?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -866,6 +877,50 @@ protected AssignmentManager createAssignmentManager(MasterServices master) {
     return new AssignmentManager(master);
   }
 
+  private void createRootTable() throws IOException, KeeperException {
+    RootTable rootTable = new RootTable(this, cleanerPool);
+    rootTable.initialize();
+    // try migrate data from zookeeper
+    try (RegionScanner scanner =
+      rootTable.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
+      List<Cell> cells = new ArrayList<>();
+      boolean moreRows = scanner.next(cells);
+      if (!cells.isEmpty() || moreRows) {
+        // notice that all replicas for a region are in the same row, so the migration can be
+        // done with in a one row put, which means if we have data in root table then we can make
+        // sure that the migration is done.
+        LOG.info("Root table already has data in it, skip migrating...");
+        this.rootTable = rootTable;
+        return;
+      }
+    }
+    // start migrating
+    byte[] row = MetaTableAccessor.getMetaKeyForRegion(RegionInfoBuilder.FIRST_META_REGIONINFO);

Review comment:
       So, this row key is just serialized meta region name?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
##########
@@ -552,4 +553,6 @@ default SplitWALManager getSplitWALManager(){
    * @return The state of the load balancer, or false if the load balancer isn't defined.
    */
   boolean isBalancerOn();
+
+  RootTable getRootTable();

Review comment:
       It can't be kept totally internal?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
##########
@@ -552,4 +553,6 @@ default SplitWALManager getSplitWALManager(){
    * @return The state of the load balancer, or false if the load balancer isn't defined.
    */
   boolean isBalancerOn();
+
+  RootTable getRootTable();

Review comment:
       I suppose AM gets it this way or some other internal services?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/root/RootTable.java
##########
@@ -0,0 +1,148 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.root;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+import org.apache.hadoop.hbase.master.cleaner.DirScanPool;
+import org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil;
+import org.apache.hadoop.hbase.region.LocalRegion;
+import org.apache.hadoop.hbase.region.LocalRegionParams;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Used to store the location of meta region.
+ */
+@InterfaceAudience.Private
+public class RootTable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(RootTable.class);
+
+  static final String MAX_WALS_KEY = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_MAX_WALS = 10;
+
+  static final String USE_HSYNC_KEY = "hbase.root.table.region.wal.hsync";
+
+  static final String RING_BUFFER_SLOT_COUNT = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_RING_BUFFER_SLOT_COUNT = 64;
+
+  static final String ROOT_TABLE_DIR = "RootTable";
+
+  static final String HFILECLEANER_PLUGINS = "hbase.root.table.region.hfilecleaner.plugins";
+
+  static final String FLUSH_SIZE_KEY = "hbase.root.table.region.flush.size";
+
+  static final long DEFAULT_FLUSH_SIZE = TableDescriptorBuilder.DEFAULT_MEMSTORE_FLUSH_SIZE;
+
+  static final String FLUSH_PER_CHANGES_KEY = "hbase.root.table.region.flush.per.changes";
+
+  private static final long DEFAULT_FLUSH_PER_CHANGES = 1_000_000;
+
+  static final String FLUSH_INTERVAL_MS_KEY = "hbase.root.table.region.flush.interval.ms";
+
+  // default to flush every 15 minutes, for safety
+  private static final long DEFAULT_FLUSH_INTERVAL_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final String COMPACT_MIN_KEY = "hbase.root.table.region.compact.min";
+
+  private static final int DEFAULT_COMPACT_MIN = 4;
+
+  static final String ROLL_PERIOD_MS_KEY = "hbase.root.table.region.walroll.period.ms";
+
+  private static final long DEFAULT_ROLL_PERIOD_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final TableName TABLE_NAME = TableName.valueOf("master:root");
+
+  private static final TableDescriptor TABLE_DESC = TableDescriptorBuilder.newBuilder(TABLE_NAME)
+    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(HConstants.CATALOG_FAMILY)
+      .setMaxVersions(HConstants.DEFAULT_HBASE_META_VERSIONS).setInMemory(true)
+      .setBlocksize(HConstants.DEFAULT_HBASE_META_BLOCK_SIZE).setBloomFilterType(BloomType.ROWCOL)
+      .setDataBlockEncoding(DataBlockEncoding.ROW_INDEX_V1).build())
+    .build();
+
+  private final Server server;
+
+  private final DirScanPool cleanerPool;
+
+  private LocalRegion region;
+
+  public RootTable(Server server, DirScanPool cleanerPool) {
+    this.server = server;
+    this.cleanerPool = cleanerPool;
+  }
+
+  public void initialize() throws IOException {
+    LOG.info("Initializing root table...");
+    LocalRegionParams params = new LocalRegionParams().server(server).regionDirName(ROOT_TABLE_DIR)

Review comment:
       Is this at ${root.dir}/RootTable?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionStateStore.java
##########
@@ -216,12 +196,32 @@ private void updateUserRegionLocation(RegionInfo regionInfo, State state,
         .build());
     LOG.info(info.toString());
     updateRegionLocation(regionInfo, state, put);
+    if (regionInfo.isMetaRegion() && regionInfo.isFirst()) {
+      // mirror the meta location to
+      mirrorMetaLocation(regionInfo, regionLocation, state);
+    }
   }
 
-  private void updateRegionLocation(RegionInfo regionInfo, State state, Put put)
+  public void mirrorMetaLocation(RegionInfo regionInfo, ServerName serverName, State state)
       throws IOException {
-    try (Table table = master.getConnection().getTable(TableName.META_TABLE_NAME)) {
-      table.put(put);
+    try {
+      MetaTableLocator.setMetaLocation(master.getZooKeeper(), serverName, regionInfo.getReplicaId(),
+        state);
+    } catch (KeeperException e) {
+      throw new IOException(e);
+    }
+  }
+
+  private void updateRegionLocation(RegionInfo regionInfo, State state, Put put)
+    throws IOException {
+    try {
+      if (regionInfo.isMetaRegion()) {
+        master.getRootTable().update(put);

Review comment:
       Nice

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/root/RootTable.java
##########
@@ -0,0 +1,148 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.root;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+import org.apache.hadoop.hbase.master.cleaner.DirScanPool;
+import org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil;
+import org.apache.hadoop.hbase.region.LocalRegion;
+import org.apache.hadoop.hbase.region.LocalRegionParams;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Used to store the location of meta region.
+ */
+@InterfaceAudience.Private
+public class RootTable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(RootTable.class);
+
+  static final String MAX_WALS_KEY = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_MAX_WALS = 10;
+
+  static final String USE_HSYNC_KEY = "hbase.root.table.region.wal.hsync";
+
+  static final String RING_BUFFER_SLOT_COUNT = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_RING_BUFFER_SLOT_COUNT = 64;
+
+  static final String ROOT_TABLE_DIR = "RootTable";
+
+  static final String HFILECLEANER_PLUGINS = "hbase.root.table.region.hfilecleaner.plugins";
+
+  static final String FLUSH_SIZE_KEY = "hbase.root.table.region.flush.size";
+
+  static final long DEFAULT_FLUSH_SIZE = TableDescriptorBuilder.DEFAULT_MEMSTORE_FLUSH_SIZE;
+
+  static final String FLUSH_PER_CHANGES_KEY = "hbase.root.table.region.flush.per.changes";
+
+  private static final long DEFAULT_FLUSH_PER_CHANGES = 1_000_000;
+
+  static final String FLUSH_INTERVAL_MS_KEY = "hbase.root.table.region.flush.interval.ms";
+
+  // default to flush every 15 minutes, for safety
+  private static final long DEFAULT_FLUSH_INTERVAL_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final String COMPACT_MIN_KEY = "hbase.root.table.region.compact.min";
+
+  private static final int DEFAULT_COMPACT_MIN = 4;
+
+  static final String ROLL_PERIOD_MS_KEY = "hbase.root.table.region.walroll.period.ms";
+
+  private static final long DEFAULT_ROLL_PERIOD_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final TableName TABLE_NAME = TableName.valueOf("master:root");
+
+  private static final TableDescriptor TABLE_DESC = TableDescriptorBuilder.newBuilder(TABLE_NAME)
+    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(HConstants.CATALOG_FAMILY)
+      .setMaxVersions(HConstants.DEFAULT_HBASE_META_VERSIONS).setInMemory(true)
+      .setBlocksize(HConstants.DEFAULT_HBASE_META_BLOCK_SIZE).setBloomFilterType(BloomType.ROWCOL)
+      .setDataBlockEncoding(DataBlockEncoding.ROW_INDEX_V1).build())
+    .build();
+
+  private final Server server;
+
+  private final DirScanPool cleanerPool;
+
+  private LocalRegion region;
+
+  public RootTable(Server server, DirScanPool cleanerPool) {
+    this.server = server;
+    this.cleanerPool = cleanerPool;
+  }
+
+  public void initialize() throws IOException {
+    LOG.info("Initializing root table...");
+    LocalRegionParams params = new LocalRegionParams().server(server).regionDirName(ROOT_TABLE_DIR)
+      .tableDescriptor(TABLE_DESC);
+    Configuration conf = server.getConfiguration();
+    long flushSize = conf.getLong(FLUSH_SIZE_KEY, DEFAULT_FLUSH_SIZE);
+    long flushPerChanges = conf.getLong(FLUSH_PER_CHANGES_KEY, DEFAULT_FLUSH_PER_CHANGES);
+    long flushIntervalMs = conf.getLong(FLUSH_INTERVAL_MS_KEY, DEFAULT_FLUSH_INTERVAL_MS);
+    int compactMin = conf.getInt(COMPACT_MIN_KEY, DEFAULT_COMPACT_MIN);
+    params.flushSize(flushSize).flushPerChanges(flushPerChanges).flushIntervalMs(flushIntervalMs)
+      .compactMin(compactMin);
+    int maxWals = conf.getInt(MAX_WALS_KEY, DEFAULT_MAX_WALS);
+    params.maxWals(maxWals);
+    if (conf.get(USE_HSYNC_KEY) != null) {
+      params.useHsync(conf.getBoolean(USE_HSYNC_KEY, false));
+    }
+    params.ringBufferSlotCount(conf.getInt(RING_BUFFER_SLOT_COUNT, DEFAULT_RING_BUFFER_SLOT_COUNT));
+    long rollPeriodMs = conf.getLong(ROLL_PERIOD_MS_KEY, DEFAULT_ROLL_PERIOD_MS);
+    params.rollPeriodMs(rollPeriodMs)
+      .archivedWalSuffix(MasterProcedureUtil.ARCHIVED_PROC_WAL_SUFFIX)
+      .hfileCleanerPlugins(HFILECLEANER_PLUGINS).cleanerPool(cleanerPool);
+    region = LocalRegion.create(params);
+  }
+
+  public void update(Put put) throws IOException {
+    region.update(r -> r.put(put));
+  }
+
+  public void delete(Delete delete) throws IOException {
+    region.update(r -> r.delete(delete));
+  }
+
+  public RegionScanner getScanner(Scan scan) throws IOException {
+    return region.getScanner(scan);
+  }
+
+  public void close(boolean abort) {
+    LOG.info("Closing root table, isAbort={}", abort);
+    if (region != null) {
+      region.close(abort);
+    }
+  }
+}

Review comment:
       It'd have its own Region?
   
   Seems overkill having a full Region just for meta locations?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-631513438


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 38s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 58s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 28s |  hbase-client: The patch generated 1 new + 46 unchanged - 1 fixed = 47 total (was 47)  |
   | -0 :warning: |  checkstyle  |   1m  5s |  hbase-server: The patch generated 1 new + 153 unchanged - 1 fixed = 154 total (was 154)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 22s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   4m  9s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 25s |  The patch does not generate ASF License warnings.  |
   |  |   |  39m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/1/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
   | uname | Linux 5481927d4966 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 86a2692dc4 |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
   | Max. process+thread count | 95 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 commented on a change in pull request #1746: HBASE-24388 Store the locations of meta regions in master local store

Posted by GitBox <gi...@apache.org>.
Apache9 commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r430369088



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
##########
@@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException {
     // Start the Assignment Thread
     startAssignmentThread();
 
-    // load meta region state
-    ZKWatcher zkw = master.getZooKeeper();
-    // it could be null in some tests
-    if (zkw != null) {
-      RegionState regionState = MetaTableLocator.getMetaRegionState(zkw);
-      RegionStateNode regionNode =
-        regionStates.getOrCreateRegionStateNode(RegionInfoBuilder.FIRST_META_REGIONINFO);
-      regionNode.lock();
-      try {
-        regionNode.setRegionLocation(regionState.getServerName());
-        regionNode.setState(regionState.getState());
-        if (regionNode.getProcedure() != null) {
-          regionNode.getProcedure().stateLoaded(this, regionNode);
+    // load meta region states.
+    // notice that, here we will load all replicas, and in MasterMetaBootstrap we may assign new
+    // replicas, or remove excess replicas.
+    try (RegionScanner scanner =
+      localStore.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
+      List<Cell> cells = new ArrayList<>();
+      boolean moreRows;
+      do {
+        moreRows = scanner.next(cells);
+        if (cells.isEmpty()) {
+          continue;
         }
-        setMetaAssigned(regionState.getRegion(), regionState.getState() == State.OPEN);
-      } finally {
-        regionNode.unlock();
+        Result result = Result.create(cells);
+        cells.clear();
+        RegionStateStore
+          .visitMetaEntry((r, regionInfo, state, regionLocation, lastHost, openSeqNum) -> {
+            RegionStateNode regionNode = regionStates.getOrCreateRegionStateNode(regionInfo);
+            regionNode.lock();
+            try {
+              regionNode.setState(state);
+              regionNode.setLastHost(lastHost);
+              regionNode.setRegionLocation(regionLocation);
+              regionNode.setOpenSeqNum(openSeqNum);
+              if (regionNode.getProcedure() != null) {
+                regionNode.getProcedure().stateLoaded(this, regionNode);
+              }
+              if (RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+                setMetaAssigned(regionInfo, state == State.OPEN);
+              }
+            } finally {
+              regionNode.unlock();
+            }
+            if (regionInfo.isFirst()) {
+              // for compatibility, mirror the meta region state to zookeeper

Review comment:
       For communication with old clients, they will load the meta location from zookeeper.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
##########
@@ -225,23 +230,52 @@ public void start() throws IOException, KeeperException {
     // Start the Assignment Thread
     startAssignmentThread();
 
-    // load meta region state
-    ZKWatcher zkw = master.getZooKeeper();
-    // it could be null in some tests
-    if (zkw != null) {
-      RegionState regionState = MetaTableLocator.getMetaRegionState(zkw);
-      RegionStateNode regionNode =
-        regionStates.getOrCreateRegionStateNode(RegionInfoBuilder.FIRST_META_REGIONINFO);
-      regionNode.lock();
-      try {
-        regionNode.setRegionLocation(regionState.getServerName());
-        regionNode.setState(regionState.getState());
-        if (regionNode.getProcedure() != null) {
-          regionNode.getProcedure().stateLoaded(this, regionNode);
+    // load meta region states.
+    // notice that, here we will load all replicas, and in MasterMetaBootstrap we may assign new
+    // replicas, or remove excess replicas.
+    try (RegionScanner scanner =
+      localStore.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
+      List<Cell> cells = new ArrayList<>();
+      boolean moreRows;
+      do {
+        moreRows = scanner.next(cells);
+        if (cells.isEmpty()) {
+          continue;
         }
-        setMetaAssigned(regionState.getRegion(), regionState.getState() == State.OPEN);
-      } finally {
-        regionNode.unlock();
+        Result result = Result.create(cells);
+        cells.clear();
+        RegionStateStore
+          .visitMetaEntry((r, regionInfo, state, regionLocation, lastHost, openSeqNum) -> {
+            RegionStateNode regionNode = regionStates.getOrCreateRegionStateNode(regionInfo);
+            regionNode.lock();
+            try {
+              regionNode.setState(state);
+              regionNode.setLastHost(lastHost);

Review comment:
       It's just because we do not have these fields in the protobuf message which is stored on zookeeper...

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterMetaBootstrap.java
##########
@@ -43,73 +49,103 @@
 
   private final HMaster master;
 
-  public MasterMetaBootstrap(HMaster master) {
+  private final LocalStore localStore;
+
+  public MasterMetaBootstrap(HMaster master, LocalStore localStore) {
     this.master = master;
+    this.localStore = localStore;
   }
 
   /**
    * For assigning hbase:meta replicas only.
-   * TODO: The way this assign runs, nothing but chance to stop all replicas showing up on same
-   * server as the hbase:meta region.
    */
-  void assignMetaReplicas()
-      throws IOException, InterruptedException, KeeperException {
+  void assignMetaReplicas() throws IOException, InterruptedException, KeeperException {
     int numReplicas = master.getConfiguration().getInt(HConstants.META_REPLICAS_NUM,
-           HConstants.DEFAULT_META_REPLICA_NUM);
-    if (numReplicas <= 1) {
-      // No replicaas to assign. Return.
-      return;
-    }
-    final AssignmentManager assignmentManager = master.getAssignmentManager();
-    if (!assignmentManager.isMetaLoaded()) {
-      throw new IllegalStateException("hbase:meta must be initialized first before we can " +
-          "assign out its replicas");
-    }
-    ServerName metaServername = MetaTableLocator.getMetaRegionLocation(this.master.getZooKeeper());
-    for (int i = 1; i < numReplicas; i++) {
-      // Get current meta state for replica from zk.
-      RegionState metaState = MetaTableLocator.getMetaRegionState(master.getZooKeeper(), i);
-      RegionInfo hri = RegionReplicaUtil.getRegionInfoForReplica(
-          RegionInfoBuilder.FIRST_META_REGIONINFO, i);
-      LOG.debug(hri.getRegionNameAsString() + " replica region state from zookeeper=" + metaState);
-      if (metaServername.equals(metaState.getServerName())) {
-        metaState = null;
-        LOG.info(hri.getRegionNameAsString() +
-          " old location is same as current hbase:meta location; setting location as null...");
+      HConstants.DEFAULT_META_REPLICA_NUM);
+    // only try to assign meta replicas when there are more than 1 replicas
+    if (numReplicas > 1) {
+      final AssignmentManager am = master.getAssignmentManager();
+      if (!am.isMetaLoaded()) {
+        throw new IllegalStateException(
+          "hbase:meta must be initialized first before we can " + "assign out its replicas");
       }
-      // These assigns run inline. All is blocked till they complete. Only interrupt is shutting
-      // down hosting server which calls AM#stop.
-      if (metaState != null && metaState.getServerName() != null) {
-        // Try to retain old assignment.
-        assignmentManager.assign(hri, metaState.getServerName());
-      } else {
-        assignmentManager.assign(hri);
+      RegionStates regionStates = am.getRegionStates();
+      for (RegionInfo regionInfo : regionStates.getRegionsOfTable(TableName.META_TABLE_NAME)) {
+        if (!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+          continue;
+        }
+        RegionState regionState = regionStates.getRegionState(regionInfo);
+        Set<ServerName> metaServerNames = new HashSet<ServerName>();
+        if (regionState.getServerName() != null) {
+          metaServerNames.add(regionState.getServerName());
+        }
+        for (int i = 1; i < numReplicas; i++) {
+          RegionInfo secondaryRegionInfo = RegionReplicaUtil.getRegionInfoForReplica(regionInfo, i);
+          RegionState secondaryRegionState = regionStates.getRegionState(secondaryRegionInfo);
+          ServerName sn = null;
+          if (secondaryRegionState != null) {
+            sn = secondaryRegionState.getServerName();
+            if (sn != null && !metaServerNames.add(sn)) {
+              LOG.info("{} old location {} is same with other hbase:meta replica location;" +
+                " setting location as null...", secondaryRegionInfo.getRegionNameAsString(), sn);

Review comment:
       I think the logic here is that, if we have a location, then the assign will not have effect as we assign it to the same region server. If not, it means the region is not online, then we can just assign it to a new regionserver.
   
   Anyway, I agree with you that the above logic is a bit flaky, but this is old behavior. So maybe we should fix it in another issue, not only on the feature branch?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterMetaBootstrap.java
##########
@@ -43,73 +49,103 @@
 
   private final HMaster master;
 
-  public MasterMetaBootstrap(HMaster master) {
+  private final LocalStore localStore;
+
+  public MasterMetaBootstrap(HMaster master, LocalStore localStore) {
     this.master = master;
+    this.localStore = localStore;
   }
 
   /**
    * For assigning hbase:meta replicas only.
-   * TODO: The way this assign runs, nothing but chance to stop all replicas showing up on same
-   * server as the hbase:meta region.
    */
-  void assignMetaReplicas()
-      throws IOException, InterruptedException, KeeperException {
+  void assignMetaReplicas() throws IOException, InterruptedException, KeeperException {
     int numReplicas = master.getConfiguration().getInt(HConstants.META_REPLICAS_NUM,
-           HConstants.DEFAULT_META_REPLICA_NUM);
-    if (numReplicas <= 1) {
-      // No replicaas to assign. Return.
-      return;
-    }
-    final AssignmentManager assignmentManager = master.getAssignmentManager();
-    if (!assignmentManager.isMetaLoaded()) {
-      throw new IllegalStateException("hbase:meta must be initialized first before we can " +
-          "assign out its replicas");
-    }
-    ServerName metaServername = MetaTableLocator.getMetaRegionLocation(this.master.getZooKeeper());
-    for (int i = 1; i < numReplicas; i++) {
-      // Get current meta state for replica from zk.
-      RegionState metaState = MetaTableLocator.getMetaRegionState(master.getZooKeeper(), i);
-      RegionInfo hri = RegionReplicaUtil.getRegionInfoForReplica(
-          RegionInfoBuilder.FIRST_META_REGIONINFO, i);
-      LOG.debug(hri.getRegionNameAsString() + " replica region state from zookeeper=" + metaState);
-      if (metaServername.equals(metaState.getServerName())) {
-        metaState = null;
-        LOG.info(hri.getRegionNameAsString() +
-          " old location is same as current hbase:meta location; setting location as null...");
+      HConstants.DEFAULT_META_REPLICA_NUM);
+    // only try to assign meta replicas when there are more than 1 replicas
+    if (numReplicas > 1) {
+      final AssignmentManager am = master.getAssignmentManager();
+      if (!am.isMetaLoaded()) {
+        throw new IllegalStateException(
+          "hbase:meta must be initialized first before we can " + "assign out its replicas");
       }
-      // These assigns run inline. All is blocked till they complete. Only interrupt is shutting
-      // down hosting server which calls AM#stop.
-      if (metaState != null && metaState.getServerName() != null) {
-        // Try to retain old assignment.
-        assignmentManager.assign(hri, metaState.getServerName());
-      } else {
-        assignmentManager.assign(hri);
+      RegionStates regionStates = am.getRegionStates();
+      for (RegionInfo regionInfo : regionStates.getRegionsOfTable(TableName.META_TABLE_NAME)) {
+        if (!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+          continue;
+        }
+        RegionState regionState = regionStates.getRegionState(regionInfo);
+        Set<ServerName> metaServerNames = new HashSet<ServerName>();
+        if (regionState.getServerName() != null) {
+          metaServerNames.add(regionState.getServerName());
+        }
+        for (int i = 1; i < numReplicas; i++) {
+          RegionInfo secondaryRegionInfo = RegionReplicaUtil.getRegionInfoForReplica(regionInfo, i);
+          RegionState secondaryRegionState = regionStates.getRegionState(secondaryRegionInfo);
+          ServerName sn = null;
+          if (secondaryRegionState != null) {
+            sn = secondaryRegionState.getServerName();
+            if (sn != null && !metaServerNames.add(sn)) {
+              LOG.info("{} old location {} is same with other hbase:meta replica location;" +
+                " setting location as null...", secondaryRegionInfo.getRegionNameAsString(), sn);
+              sn = null;
+            }
+          }
+          // These assigns run inline. All is blocked till they complete. Only interrupt is shutting
+          // down hosting server which calls AM#stop.
+          if (sn != null) {
+            am.assign(secondaryRegionInfo, sn);
+          } else {
+            am.assign(secondaryRegionInfo);
+          }
+        }
       }
     }
+    // always try to remomve excess meta replicas
     unassignExcessMetaReplica(numReplicas);
   }
 
   private void unassignExcessMetaReplica(int numMetaReplicasConfigured) {
-    final ZKWatcher zooKeeper = master.getZooKeeper();
-    // unassign the unneeded replicas (for e.g., if the previous master was configured
-    // with a replication of 3 and now it is 2, we need to unassign the 1 unneeded replica)
-    try {
-      List<String> metaReplicaZnodes = zooKeeper.getMetaReplicaNodes();
-      for (String metaReplicaZnode : metaReplicaZnodes) {
-        int replicaId = zooKeeper.getZNodePaths().getMetaReplicaIdFromZnode(metaReplicaZnode);
-        if (replicaId >= numMetaReplicasConfigured) {
-          RegionState r = MetaTableLocator.getMetaRegionState(zooKeeper, replicaId);
-          LOG.info("Closing excess replica of meta region " + r.getRegion());
-          // send a close and wait for a max of 30 seconds
-          ServerManager.closeRegionSilentlyAndWait(master.getAsyncClusterConnection(),
-              r.getServerName(), r.getRegion(), 30000);
-          ZKUtil.deleteNode(zooKeeper, zooKeeper.getZNodePaths().getZNodeForReplica(replicaId));
+    ZKWatcher zooKeeper = master.getZooKeeper();
+    AssignmentManager am = master.getAssignmentManager();
+    RegionStates regionStates = am.getRegionStates();
+    Map<RegionInfo, Integer> region2MaxReplicaId = new HashMap<>();
+    for (RegionInfo regionInfo : regionStates.getRegionsOfTable(TableName.META_TABLE_NAME)) {
+      RegionInfo primaryRegionInfo = RegionReplicaUtil.getRegionInfoForDefaultReplica(regionInfo);
+      region2MaxReplicaId.compute(primaryRegionInfo,
+        (k, v) -> v == null ? regionInfo.getReplicaId() : Math.max(v, regionInfo.getReplicaId()));
+      if (regionInfo.getReplicaId() < numMetaReplicasConfigured) {
+        continue;
+      }
+      RegionState regionState = regionStates.getRegionState(regionInfo);
+      try {
+        ServerManager.closeRegionSilentlyAndWait(master.getAsyncClusterConnection(),
+          regionState.getServerName(), regionInfo, 30000);
+        if (regionInfo.isFirst()) {
+          // for compatibility, also try to remove the replicas on zk.
+          ZKUtil.deleteNode(zooKeeper,
+            zooKeeper.getZNodePaths().getZNodeForReplica(regionInfo.getReplicaId()));
         }
+      } catch (Exception e) {
+        // ignore the exception since we don't want the master to be wedged due to potential
+        // issues in the cleanup of the extra regions. We can do that cleanup via hbck or manually
+        LOG.warn("Ignoring exception " + e);
       }
-    } catch (Exception ex) {
-      // ignore the exception since we don't want the master to be wedged due to potential
-      // issues in the cleanup of the extra regions. We can do that cleanup via hbck or manually
-      LOG.warn("Ignoring exception " + ex);
+      regionStates.deleteRegion(regionInfo);
     }
+    region2MaxReplicaId.forEach((regionInfo, maxReplicaId) -> {
+      if (maxReplicaId >= numMetaReplicasConfigured) {
+        byte[] metaRow = MetaTableAccessor.getMetaKeyForRegion(regionInfo);
+        Delete delete = MetaTableAccessor.removeRegionReplica(metaRow, numMetaReplicasConfigured,

Review comment:
       If we could have multiple meta regions, maybe we will have different meta replicas for different regions, think of we fail in the middle of this process. So use another loop for safety.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterMetaBootstrap.java
##########
@@ -43,73 +49,103 @@
 
   private final HMaster master;
 
-  public MasterMetaBootstrap(HMaster master) {
+  private final LocalStore localStore;
+
+  public MasterMetaBootstrap(HMaster master, LocalStore localStore) {
     this.master = master;
+    this.localStore = localStore;
   }
 
   /**
    * For assigning hbase:meta replicas only.
-   * TODO: The way this assign runs, nothing but chance to stop all replicas showing up on same
-   * server as the hbase:meta region.
    */
-  void assignMetaReplicas()
-      throws IOException, InterruptedException, KeeperException {
+  void assignMetaReplicas() throws IOException, InterruptedException, KeeperException {
     int numReplicas = master.getConfiguration().getInt(HConstants.META_REPLICAS_NUM,
-           HConstants.DEFAULT_META_REPLICA_NUM);
-    if (numReplicas <= 1) {
-      // No replicaas to assign. Return.
-      return;
-    }
-    final AssignmentManager assignmentManager = master.getAssignmentManager();
-    if (!assignmentManager.isMetaLoaded()) {
-      throw new IllegalStateException("hbase:meta must be initialized first before we can " +
-          "assign out its replicas");
-    }
-    ServerName metaServername = MetaTableLocator.getMetaRegionLocation(this.master.getZooKeeper());
-    for (int i = 1; i < numReplicas; i++) {
-      // Get current meta state for replica from zk.
-      RegionState metaState = MetaTableLocator.getMetaRegionState(master.getZooKeeper(), i);
-      RegionInfo hri = RegionReplicaUtil.getRegionInfoForReplica(
-          RegionInfoBuilder.FIRST_META_REGIONINFO, i);
-      LOG.debug(hri.getRegionNameAsString() + " replica region state from zookeeper=" + metaState);
-      if (metaServername.equals(metaState.getServerName())) {
-        metaState = null;
-        LOG.info(hri.getRegionNameAsString() +
-          " old location is same as current hbase:meta location; setting location as null...");
+      HConstants.DEFAULT_META_REPLICA_NUM);
+    // only try to assign meta replicas when there are more than 1 replicas
+    if (numReplicas > 1) {
+      final AssignmentManager am = master.getAssignmentManager();
+      if (!am.isMetaLoaded()) {
+        throw new IllegalStateException(
+          "hbase:meta must be initialized first before we can " + "assign out its replicas");
       }
-      // These assigns run inline. All is blocked till they complete. Only interrupt is shutting
-      // down hosting server which calls AM#stop.
-      if (metaState != null && metaState.getServerName() != null) {
-        // Try to retain old assignment.
-        assignmentManager.assign(hri, metaState.getServerName());
-      } else {
-        assignmentManager.assign(hri);
+      RegionStates regionStates = am.getRegionStates();
+      for (RegionInfo regionInfo : regionStates.getRegionsOfTable(TableName.META_TABLE_NAME)) {
+        if (!RegionReplicaUtil.isDefaultReplica(regionInfo)) {
+          continue;
+        }
+        RegionState regionState = regionStates.getRegionState(regionInfo);
+        Set<ServerName> metaServerNames = new HashSet<ServerName>();
+        if (regionState.getServerName() != null) {
+          metaServerNames.add(regionState.getServerName());
+        }
+        for (int i = 1; i < numReplicas; i++) {
+          RegionInfo secondaryRegionInfo = RegionReplicaUtil.getRegionInfoForReplica(regionInfo, i);
+          RegionState secondaryRegionState = regionStates.getRegionState(secondaryRegionInfo);
+          ServerName sn = null;
+          if (secondaryRegionState != null) {
+            sn = secondaryRegionState.getServerName();
+            if (sn != null && !metaServerNames.add(sn)) {
+              LOG.info("{} old location {} is same with other hbase:meta replica location;" +
+                " setting location as null...", secondaryRegionInfo.getRegionNameAsString(), sn);
+              sn = null;
+            }
+          }
+          // These assigns run inline. All is blocked till they complete. Only interrupt is shutting
+          // down hosting server which calls AM#stop.
+          if (sn != null) {
+            am.assign(secondaryRegionInfo, sn);
+          } else {
+            am.assign(secondaryRegionInfo);
+          }
+        }
       }
     }
+    // always try to remomve excess meta replicas
     unassignExcessMetaReplica(numReplicas);
   }
 
   private void unassignExcessMetaReplica(int numMetaReplicasConfigured) {
-    final ZKWatcher zooKeeper = master.getZooKeeper();
-    // unassign the unneeded replicas (for e.g., if the previous master was configured
-    // with a replication of 3 and now it is 2, we need to unassign the 1 unneeded replica)
-    try {
-      List<String> metaReplicaZnodes = zooKeeper.getMetaReplicaNodes();
-      for (String metaReplicaZnode : metaReplicaZnodes) {
-        int replicaId = zooKeeper.getZNodePaths().getMetaReplicaIdFromZnode(metaReplicaZnode);
-        if (replicaId >= numMetaReplicasConfigured) {
-          RegionState r = MetaTableLocator.getMetaRegionState(zooKeeper, replicaId);
-          LOG.info("Closing excess replica of meta region " + r.getRegion());
-          // send a close and wait for a max of 30 seconds
-          ServerManager.closeRegionSilentlyAndWait(master.getAsyncClusterConnection(),
-              r.getServerName(), r.getRegion(), 30000);
-          ZKUtil.deleteNode(zooKeeper, zooKeeper.getZNodePaths().getZNodeForReplica(replicaId));
+    ZKWatcher zooKeeper = master.getZooKeeper();
+    AssignmentManager am = master.getAssignmentManager();
+    RegionStates regionStates = am.getRegionStates();
+    Map<RegionInfo, Integer> region2MaxReplicaId = new HashMap<>();
+    for (RegionInfo regionInfo : regionStates.getRegionsOfTable(TableName.META_TABLE_NAME)) {
+      RegionInfo primaryRegionInfo = RegionReplicaUtil.getRegionInfoForDefaultReplica(regionInfo);
+      region2MaxReplicaId.compute(primaryRegionInfo,
+        (k, v) -> v == null ? regionInfo.getReplicaId() : Math.max(v, regionInfo.getReplicaId()));
+      if (regionInfo.getReplicaId() < numMetaReplicasConfigured) {
+        continue;
+      }
+      RegionState regionState = regionStates.getRegionState(regionInfo);
+      try {
+        ServerManager.closeRegionSilentlyAndWait(master.getAsyncClusterConnection(),
+          regionState.getServerName(), regionInfo, 30000);
+        if (regionInfo.isFirst()) {
+          // for compatibility, also try to remove the replicas on zk.
+          ZKUtil.deleteNode(zooKeeper,
+            zooKeeper.getZNodePaths().getZNodeForReplica(regionInfo.getReplicaId()));
         }
+      } catch (Exception e) {
+        // ignore the exception since we don't want the master to be wedged due to potential
+        // issues in the cleanup of the extra regions. We can do that cleanup via hbck or manually
+        LOG.warn("Ignoring exception " + e);
       }
-    } catch (Exception ex) {
-      // ignore the exception since we don't want the master to be wedged due to potential
-      // issues in the cleanup of the extra regions. We can do that cleanup via hbck or manually
-      LOG.warn("Ignoring exception " + ex);
+      regionStates.deleteRegion(regionInfo);
     }
+    region2MaxReplicaId.forEach((regionInfo, maxReplicaId) -> {
+      if (maxReplicaId >= numMetaReplicasConfigured) {
+        byte[] metaRow = MetaTableAccessor.getMetaKeyForRegion(regionInfo);
+        Delete delete = MetaTableAccessor.removeRegionReplica(metaRow, numMetaReplicasConfigured,

Review comment:
       Oh, I think the problem is that, only when we finish scanning the replicas for a region, we can know how many replicas we can remove, then we create a single Delete which deletes all the replicas. And for zk, different replicas will have different znode, so just delete it everytime...




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-633053613


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 18s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  3s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  4s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  hbase-client in the patch passed.  |
   | -1 :x: |  unit  | 196m 35s |  hbase-server in the patch failed.  |
   |  |   | 226m 17s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 9d650d19435a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c303f9d329 |
   | Default Java | 1.8.0_232 |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/testReport/ |
   | Max. process+thread count | 3817 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-633040597


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  9s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 45s |  branch has no errors when building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 27s |  hbase-client in master failed.  |
   | -0 :warning: |  javadoc  |   0m 40s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 32s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 40s |  patch has no errors when building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 25s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 38s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  5s |  hbase-client in the patch passed.  |
   | -1 :x: |  unit  | 124m 13s |  hbase-server in the patch failed.  |
   |  |   | 153m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1746 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux db99f0e0afc6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c303f9d329 |
   | Default Java | 2020-01-14 |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt |
   | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/testReport/ |
   | Max. process+thread count | 4274 (vs. ulimit of 12500) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1746/4/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 commented on a change in pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache9 commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r428402685



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/region/LocalRegion.java
##########
@@ -0,0 +1,328 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.region;
+
+import static org.apache.hadoop.hbase.HConstants.HREGION_LOGDIR_NAME;
+
+import java.io.IOException;
+import java.util.Collections;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseIOException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.HRegion.FlushResult;
+import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hadoop.hbase.util.RecoverLeaseFSUtils;
+import org.apache.hadoop.hbase.wal.AbstractFSWALProvider;
+import org.apache.hadoop.hbase.wal.WAL;
+import org.apache.hadoop.hbase.wal.WALFactory;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hbase.thirdparty.com.google.common.math.IntMath;
+
+/**
+ * A region that stores data in a separated directory on WAL file system.

Review comment:
       Sorry, what does 'snowflaking' mean? ...




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 commented on a change in pull request #1746: HBASE-24388 Introduce a 'local root region' at master side to store t…

Posted by GitBox <gi...@apache.org>.
Apache9 commented on a change in pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#discussion_r428401887



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/root/RootTable.java
##########
@@ -0,0 +1,148 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.root;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+import org.apache.hadoop.hbase.master.cleaner.DirScanPool;
+import org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil;
+import org.apache.hadoop.hbase.region.LocalRegion;
+import org.apache.hadoop.hbase.region.LocalRegionParams;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Used to store the location of meta region.
+ */
+@InterfaceAudience.Private
+public class RootTable {
+
+  private static final Logger LOG = LoggerFactory.getLogger(RootTable.class);
+
+  static final String MAX_WALS_KEY = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_MAX_WALS = 10;
+
+  static final String USE_HSYNC_KEY = "hbase.root.table.region.wal.hsync";
+
+  static final String RING_BUFFER_SLOT_COUNT = "hbase.root.table.region.maxwals";
+
+  private static final int DEFAULT_RING_BUFFER_SLOT_COUNT = 64;
+
+  static final String ROOT_TABLE_DIR = "RootTable";
+
+  static final String HFILECLEANER_PLUGINS = "hbase.root.table.region.hfilecleaner.plugins";
+
+  static final String FLUSH_SIZE_KEY = "hbase.root.table.region.flush.size";
+
+  static final long DEFAULT_FLUSH_SIZE = TableDescriptorBuilder.DEFAULT_MEMSTORE_FLUSH_SIZE;
+
+  static final String FLUSH_PER_CHANGES_KEY = "hbase.root.table.region.flush.per.changes";
+
+  private static final long DEFAULT_FLUSH_PER_CHANGES = 1_000_000;
+
+  static final String FLUSH_INTERVAL_MS_KEY = "hbase.root.table.region.flush.interval.ms";
+
+  // default to flush every 15 minutes, for safety
+  private static final long DEFAULT_FLUSH_INTERVAL_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final String COMPACT_MIN_KEY = "hbase.root.table.region.compact.min";
+
+  private static final int DEFAULT_COMPACT_MIN = 4;
+
+  static final String ROLL_PERIOD_MS_KEY = "hbase.root.table.region.walroll.period.ms";
+
+  private static final long DEFAULT_ROLL_PERIOD_MS = TimeUnit.MINUTES.toMillis(15);
+
+  static final TableName TABLE_NAME = TableName.valueOf("master:root");
+
+  private static final TableDescriptor TABLE_DESC = TableDescriptorBuilder.newBuilder(TABLE_NAME)
+    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(HConstants.CATALOG_FAMILY)
+      .setMaxVersions(HConstants.DEFAULT_HBASE_META_VERSIONS).setInMemory(true)
+      .setBlocksize(HConstants.DEFAULT_HBASE_META_BLOCK_SIZE).setBloomFilterType(BloomType.ROWCOL)
+      .setDataBlockEncoding(DataBlockEncoding.ROW_INDEX_V1).build())
+    .build();
+
+  private final Server server;
+
+  private final DirScanPool cleanerPool;
+
+  private LocalRegion region;
+
+  public RootTable(Server server, DirScanPool cleanerPool) {
+    this.server = server;
+    this.cleanerPool = cleanerPool;
+  }
+
+  public void initialize() throws IOException {
+    LOG.info("Initializing root table...");
+    LocalRegionParams params = new LocalRegionParams().server(server).regionDirName(ROOT_TABLE_DIR)

Review comment:
       Yes.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1746: HBASE-24388 Store the locations of meta regions in master local store

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-633781860






----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #1746: HBASE-24388 Store the locations of meta regions in master local store

Posted by GitBox <gi...@apache.org>.
Apache9 commented on pull request #1746:
URL: https://github.com/apache/hbase/pull/1746#issuecomment-634105560


   OK, good, all green. Any other concerns?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org