Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2020/06/18 00:51:55 UTC

[GitHub] [druid] maytasm opened a new pull request #10048: Coordinator loadstatus API full format does not consider Broadcast rules

maytasm opened a new pull request #10048:
URL: https://github.com/apache/druid/pull/10048


   Coordinator loadstatus API full format does not consider Broadcast rules
   
   ### Description
   
   Coordinator loadstatus API full format does not consider Broadcast rules. Currently, this API only considers the normal load rules and does not return counts of segments that should be loaded under Broadcast rules. This applies to both the coordinator loadstatus endpoint (/druid/coordinator/v1/loadstatus?full) and the new datasource loadstatus endpoint (/druid/coordinator/v1/datasources/{datasource}/loadstatus?full), since they share the same code path.
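   
   For reference, both endpoints are plain HTTP GETs against the Coordinator. A minimal sketch of querying the full format (the host, port, and example datasource name are placeholders, not values from this PR):
   
       import java.net.URI;
       import java.net.http.HttpClient;
       import java.net.http.HttpRequest;
       import java.net.http.HttpResponse;
   
       public class LoadStatusCheck
       {
         public static void main(String[] args) throws Exception
         {
           HttpClient client = HttpClient.newHttpClient();
   
           // Per-tier under-replication counts for every datasource
           HttpRequest clusterWide = HttpRequest.newBuilder()
               .uri(URI.create("http://localhost:8081/druid/coordinator/v1/loadstatus?full"))
               .build();
           System.out.println(client.send(clusterWide, HttpResponse.BodyHandlers.ofString()).body());
   
           // The same counts scoped to a single datasource
           HttpRequest perDataSource = HttpRequest.newBuilder()
               .uri(URI.create("http://localhost:8081/druid/coordinator/v1/datasources/wikipedia/loadstatus?full"))
               .build();
           System.out.println(client.send(perDataSource, HttpResponse.BodyHandlers.ofString()).body());
         }
       }
   
   Before this change, segments targeted only by Broadcast rules would not appear in either response.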
   
   This PR has:
   - [x] been self-reviewed.
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in [licenses.yaml](https://github.com/apache/druid/blob/master/licenses.yaml)
   - [ ] added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
   - [x] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for [code coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md) is met.
   - [ ] added integration tests.
   - [ ] been tested in a test Druid cluster.
   
   



[GitHub] [druid] maytasm commented on a change in pull request #10048: Coordinator loadstatus API full format does not consider Broadcast rules

Posted by GitBox <gi...@apache.org>.
maytasm commented on a change in pull request #10048:
URL: https://github.com/apache/druid/pull/10048#discussion_r442440375



##########
File path: server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
##########
@@ -280,20 +288,38 @@ public boolean isLeader()
       final List<Rule> rules = metadataRuleManager.getRulesWithDefault(segment.getDataSource());
 
       for (final Rule rule : rules) {
-        if (!(rule instanceof LoadRule && rule.appliesTo(segment, now))) {
+        if (!rule.appliesTo(segment, now)) {
           continue;
         }
 
-        ((LoadRule) rule)
-            .getTieredReplicants()
-            .forEach((final String tier, final Integer ruleReplicants) -> {
-              int currentReplicants = segmentReplicantLookup.getLoadedReplicants(segment.getId(), tier);
-              Object2LongMap<String> underReplicationPerDataSource = underReplicationCountsPerDataSourcePerTier
-                  .computeIfAbsent(tier, ignored -> new Object2LongOpenHashMap<>());
+        if (rule instanceof LoadRule) {
+          ((LoadRule) rule)
+              .getTieredReplicants()
+              .forEach((final String tier, final Integer ruleReplicants) -> {
+                int currentReplicants = segmentReplicantLookup.getLoadedReplicants(segment.getId(), tier);
+                Object2LongMap<String> underReplicationPerDataSource = underReplicationCountsPerDataSourcePerTier
+                    .computeIfAbsent(tier, ignored -> new Object2LongOpenHashMap<>());
+                ((Object2LongOpenHashMap<String>) underReplicationPerDataSource)
+                    .addTo(segment.getDataSource(), Math.max(ruleReplicants - currentReplicants, 0));
+              });
+        }
+
+        if (rule instanceof BroadcastDistributionRule) {

Review comment:
       Not right now. A new Rule subclass may not always need to be considered in this method. Also, I'm not sure how a test could automatically account for new Rule subclasses.
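
    For illustration, one possible shape for such a guard test, relying on the `@JsonSubTypes` registration on `Rule` to enumerate the known rule types. This is a hypothetical sketch (the class name and the assumption that every registered type is a LoadRule, BroadcastDistributionRule, or DropRule are mine), not the test that was eventually added:

        import com.fasterxml.jackson.annotation.JsonSubTypes;

        import org.apache.druid.server.coordinator.rules.BroadcastDistributionRule;
        import org.apache.druid.server.coordinator.rules.DropRule;
        import org.apache.druid.server.coordinator.rules.LoadRule;
        import org.apache.druid.server.coordinator.rules.Rule;
        import org.junit.Assert;
        import org.junit.Test;

        public class RuleSubtypeCoverageTest
        {
          @Test
          public void testEveryRegisteredRuleTypeIsAccountedFor()
          {
            // Rule registers its concrete subtypes for Jackson polymorphic
            // deserialization, so a brand-new subtype shows up here automatically.
            JsonSubTypes subTypes = Rule.class.getAnnotation(JsonSubTypes.class);
            Assert.assertNotNull(subTypes);
            for (JsonSubTypes.Type type : subTypes.value()) {
              Class<?> ruleClass = type.value();
              Assert.assertTrue(
                  "computeUnderReplicationCountsPerDataSourcePerTierForSegments may need updating for " + ruleClass,
                  LoadRule.class.isAssignableFrom(ruleClass)
                  || BroadcastDistributionRule.class.isAssignableFrom(ruleClass)
                  || DropRule.class.isAssignableFrom(ruleClass)
              );
            }
          }
        }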





[GitHub] [druid] jihoonson merged pull request #10048: Coordinator loadstatus API full format does not consider Broadcast rules

Posted by GitBox <gi...@apache.org>.
jihoonson merged pull request #10048:
URL: https://github.com/apache/druid/pull/10048


   



[GitHub] [druid] ccaominh commented on a change in pull request #10048: Coordinator loadstatus API full format does not consider Broadcast rules

Posted by GitBox <gi...@apache.org>.
ccaominh commented on a change in pull request #10048:
URL: https://github.com/apache/druid/pull/10048#discussion_r442415987



##########
File path: server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
##########
@@ -269,6 +270,13 @@ public boolean isLeader()
   )
   {
     final Map<String, Object2LongMap<String>> underReplicationCountsPerDataSourcePerTier = new HashMap<>();
+    final Set<String> decommissioningServers = getDynamicConfigs().getDecommissioningNodes();

Review comment:
       In a previous PR there was a discussion about why it's ok for `segmentReplicantLookup` to be stale in this method:  https://github.com/apache/druid/pull/9965/files#r440541949
   
   What do you think about having that explanation as a code comment for this method?
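
    For concreteness, the kind of method comment being suggested might look like the sketch below. The wording paraphrases the general staleness argument (segmentReplicantLookup is rebuilt once per coordinator run) and is an assumption, not text from the linked thread:

        // Hypothetical Javadoc for computeUnderReplicationCountsPerDataSourcePerTierForSegments:
        /**
         * segmentReplicantLookup is refreshed once per coordinator run, so the
         * counts computed here may lag the true cluster state between runs.
         * This is acceptable because the method backs informational load-status
         * APIs, whose callers only need an eventually consistent view.
         */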

##########
File path: server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
##########
@@ -280,20 +288,38 @@ public boolean isLeader()
       final List<Rule> rules = metadataRuleManager.getRulesWithDefault(segment.getDataSource());
 
       for (final Rule rule : rules) {
-        if (!(rule instanceof LoadRule && rule.appliesTo(segment, now))) {
+        if (!rule.appliesTo(segment, now)) {
           continue;
         }
 
-        ((LoadRule) rule)
-            .getTieredReplicants()
-            .forEach((final String tier, final Integer ruleReplicants) -> {
-              int currentReplicants = segmentReplicantLookup.getLoadedReplicants(segment.getId(), tier);
-              Object2LongMap<String> underReplicationPerDataSource = underReplicationCountsPerDataSourcePerTier
-                  .computeIfAbsent(tier, ignored -> new Object2LongOpenHashMap<>());
+        if (rule instanceof LoadRule) {
+          ((LoadRule) rule)
+              .getTieredReplicants()
+              .forEach((final String tier, final Integer ruleReplicants) -> {
+                int currentReplicants = segmentReplicantLookup.getLoadedReplicants(segment.getId(), tier);
+                Object2LongMap<String> underReplicationPerDataSource = underReplicationCountsPerDataSourcePerTier
+                    .computeIfAbsent(tier, ignored -> new Object2LongOpenHashMap<>());
+                ((Object2LongOpenHashMap<String>) underReplicationPerDataSource)
+                    .addTo(segment.getDataSource(), Math.max(ruleReplicants - currentReplicants, 0));
+              });
+        }
+
+        if (rule instanceof BroadcastDistributionRule) {

Review comment:
       If `Rule` subclasses are added in the future and should be considered in this method, is there a test that will fail?





[GitHub] [druid] ccaominh commented on a change in pull request #10048: Coordinator loadstatus API full format does not consider Broadcast rules

Posted by GitBox <gi...@apache.org>.
ccaominh commented on a change in pull request #10048:
URL: https://github.com/apache/druid/pull/10048#discussion_r442545687



##########
File path: server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
##########
@@ -280,20 +288,38 @@ public boolean isLeader()
       final List<Rule> rules = metadataRuleManager.getRulesWithDefault(segment.getDataSource());
 
       for (final Rule rule : rules) {
-        if (!(rule instanceof LoadRule && rule.appliesTo(segment, now))) {
+        if (!rule.appliesTo(segment, now)) {
           continue;
         }
 
-        ((LoadRule) rule)
-            .getTieredReplicants()
-            .forEach((final String tier, final Integer ruleReplicants) -> {
-              int currentReplicants = segmentReplicantLookup.getLoadedReplicants(segment.getId(), tier);
-              Object2LongMap<String> underReplicationPerDataSource = underReplicationCountsPerDataSourcePerTier
-                  .computeIfAbsent(tier, ignored -> new Object2LongOpenHashMap<>());
+        if (rule instanceof LoadRule) {
+          ((LoadRule) rule)
+              .getTieredReplicants()
+              .forEach((final String tier, final Integer ruleReplicants) -> {
+                int currentReplicants = segmentReplicantLookup.getLoadedReplicants(segment.getId(), tier);
+                Object2LongMap<String> underReplicationPerDataSource = underReplicationCountsPerDataSourcePerTier
+                    .computeIfAbsent(tier, ignored -> new Object2LongOpenHashMap<>());
+                ((Object2LongOpenHashMap<String>) underReplicationPerDataSource)
+                    .addTo(segment.getDataSource(), Math.max(ruleReplicants - currentReplicants, 0));
+              });
+        }
+
+        if (rule instanceof BroadcastDistributionRule) {

Review comment:
       Maybe adding some comments somewhere like `Rule` will be sufficient for now. I'm not sure how likely we are to add future `Rule`s, but if we do, I think there's a good chance we'll forget to update this method when it needs it.
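
    For illustration, such a note on `Rule` might read (hypothetical wording):

        // A possible implementation note on org.apache.druid.server.coordinator.rules.Rule:
        /**
         * When adding a new Rule subtype, check whether
         * DruidCoordinator#computeUnderReplicationCountsPerDataSourcePerTierForSegments
         * needs to handle it; that method currently special-cases LoadRule and
         * BroadcastDistributionRule.
         */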





[GitHub] [druid] maytasm commented on a change in pull request #10048: Coordinator loadstatus API full format does not consider Broadcast rules

Posted by GitBox <gi...@apache.org>.
maytasm commented on a change in pull request #10048:
URL: https://github.com/apache/druid/pull/10048#discussion_r442595148



##########
File path: server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
##########
@@ -280,20 +288,38 @@ public boolean isLeader()
       final List<Rule> rules = metadataRuleManager.getRulesWithDefault(segment.getDataSource());
 
       for (final Rule rule : rules) {
-        if (!(rule instanceof LoadRule && rule.appliesTo(segment, now))) {
+        if (!rule.appliesTo(segment, now)) {
           continue;
         }
 
-        ((LoadRule) rule)
-            .getTieredReplicants()
-            .forEach((final String tier, final Integer ruleReplicants) -> {
-              int currentReplicants = segmentReplicantLookup.getLoadedReplicants(segment.getId(), tier);
-              Object2LongMap<String> underReplicationPerDataSource = underReplicationCountsPerDataSourcePerTier
-                  .computeIfAbsent(tier, ignored -> new Object2LongOpenHashMap<>());
+        if (rule instanceof LoadRule) {
+          ((LoadRule) rule)
+              .getTieredReplicants()
+              .forEach((final String tier, final Integer ruleReplicants) -> {
+                int currentReplicants = segmentReplicantLookup.getLoadedReplicants(segment.getId(), tier);
+                Object2LongMap<String> underReplicationPerDataSource = underReplicationCountsPerDataSourcePerTier
+                    .computeIfAbsent(tier, ignored -> new Object2LongOpenHashMap<>());
+                ((Object2LongOpenHashMap<String>) underReplicationPerDataSource)
+                    .addTo(segment.getDataSource(), Math.max(ruleReplicants - currentReplicants, 0));
+              });
+        }
+
+        if (rule instanceof BroadcastDistributionRule) {

Review comment:
       https://github.com/apache/druid/pull/10054





[GitHub] [druid] clintropolis commented on a change in pull request #10048: Coordinator loadstatus API full format does not consider Broadcast rules

Posted by GitBox <gi...@apache.org>.
clintropolis commented on a change in pull request #10048:
URL: https://github.com/apache/druid/pull/10048#discussion_r442022548



##########
File path: server/src/test/java/org/apache/druid/server/coordinator/DruidCoordinatorTest.java
##########
@@ -550,6 +551,241 @@ public void testCoordinatorTieredRun() throws Exception
     EasyMock.verify(metadataRuleManager);
   }
 
+  @Test(timeout = 60_000L)
+  public void testComputeUnderReplicationCountsPerDataSourcePerTierForSegmentsWithBroadcastRule() throws Exception
+  {
+    final String dataSource = "dataSource";
+    final String hotTierName = "hot";
+    final String coldTierName = "cold";
+    final String tierName1 = "tier1";
+    final String tierName2 = "tier2";
+    final Rule broadcastDistributionRule = new ForeverBroadcastDistributionRule();
+    final String loadPathCold = "/druid/loadqueue/cold:1234";
+    final String loadPathBroker1 = "/druid/loadqueue/broker1:1234";
+    final String loadPathBroker2 = "/druid/loadqueue/broker2:1234";
+    final String loadPathPeon = "/druid/loadqueue/peon:1234";
+    final DruidServer hotServer = new DruidServer("hot", "hot", null, 5L, ServerType.HISTORICAL, hotTierName, 0);
+    final DruidServer coldServer = new DruidServer("cold", "cold", null, 5L, ServerType.HISTORICAL, coldTierName, 0);
+    final DruidServer brokerServer1 = new DruidServer("broker1", "broker1", null, 5L, ServerType.BROKER, tierName1, 0);
+    final DruidServer brokerServer2 = new DruidServer("broker2", "broker2", null, 5L, ServerType.BROKER, tierName2, 0);
+    final DruidServer peonServer = new DruidServer("peon", "peon", null, 5L, ServerType.INDEXER_EXECUTOR, tierName2, 0);
+
+    final Map<String, DataSegment> dataSegments = ImmutableMap.of(
+        "2018-01-02T00:00:00.000Z_2018-01-03T00:00:00.000Z",
+        new DataSegment(dataSource, Intervals.of("2018-01-02/P1D"), "v1", null, null, null, null, 0x9, 0),
+        "2018-01-03T00:00:00.000Z_2018-01-04T00:00:00.000Z",
+        new DataSegment(dataSource, Intervals.of("2018-01-03/P1D"), "v1", null, null, null, null, 0x9, 0),
+        "2017-01-01T00:00:00.000Z_2017-01-02T00:00:00.000Z",
+        new DataSegment(dataSource, Intervals.of("2017-01-01/P1D"), "v1", null, null, null, null, 0x9, 0)
+    );
+
+    final LoadQueuePeon loadQueuePeonCold = new CuratorLoadQueuePeon(
+        curator,
+        loadPathCold,
+        objectMapper,
+        Execs.scheduledSingleThreaded("coordinator_test_load_queue_peon_cold_scheduled-%d"),
+        Execs.singleThreaded("coordinator_test_load_queue_peon_cold-%d"),
+        druidCoordinatorConfig
+    );
+
+    final LoadQueuePeon loadQueuePeonBroker1 = new CuratorLoadQueuePeon(
+        curator,
+        loadPathBroker1,
+        objectMapper,
+        Execs.scheduledSingleThreaded("coordinator_test_load_queue_peon_broker1_scheduled-%d"),
+        Execs.singleThreaded("coordinator_test_load_queue_peon_broker1-%d"),
+        druidCoordinatorConfig
+    );
+
+    final LoadQueuePeon loadQueuePeonBroker2 = new CuratorLoadQueuePeon(
+        curator,
+        loadPathBroker2,
+        objectMapper,
+        Execs.scheduledSingleThreaded("coordinator_test_load_queue_peon_broker2_scheduled-%d"),
+        Execs.singleThreaded("coordinator_test_load_queue_peon_broker2-%d"),
+        druidCoordinatorConfig
+    );
+
+    final LoadQueuePeon loadQueuePeonPoenServer = new CuratorLoadQueuePeon(
+        curator,
+        loadPathPeon,
+        objectMapper,
+        Execs.scheduledSingleThreaded("coordinator_test_load_queue_peon_peon_scheduled-%d"),
+        Execs.singleThreaded("coordinator_test_load_queue_peon_peon-%d"),
+        druidCoordinatorConfig
+    );
+    final PathChildrenCache pathChildrenCacheCold = new PathChildrenCache(
+        curator,
+        loadPathCold,
+        true,
+        true,
+        Execs.singleThreaded("coordinator_test_path_children_cache_cold-%d")
+    );
+    final PathChildrenCache pathChildrenCacheBroker1 = new PathChildrenCache(
+        curator,
+        loadPathBroker1,
+        true,
+        true,
+        Execs.singleThreaded("coordinator_test_path_children_cache_broker1-%d")
+    );
+    final PathChildrenCache pathChildrenCacheBroker2 = new PathChildrenCache(
+        curator,
+        loadPathBroker2,
+        true,
+        true,
+        Execs.singleThreaded("coordinator_test_path_children_cache_broker2-%d")
+    );
+    final PathChildrenCache pathChildrenCachePeon = new PathChildrenCache(
+        curator,
+        loadPathPeon,
+        true,
+        true,
+        Execs.singleThreaded("coordinator_test_path_children_cache_peon-%d")
+    );
+
+    loadManagementPeons.putAll(ImmutableMap.of("hot", loadQueuePeon,
+                                               "cold", loadQueuePeonCold,
+                                               "broker1", loadQueuePeonBroker1,
+                                               "broker2", loadQueuePeonBroker2,
+                                               "peon", loadQueuePeonPoenServer));
+
+    loadQueuePeonCold.start();
+    loadQueuePeonBroker1.start();
+    loadQueuePeonBroker2.start();
+    loadQueuePeonPoenServer.start();
+    pathChildrenCache.start();
+    pathChildrenCacheCold.start();
+    pathChildrenCacheBroker1.start();
+    pathChildrenCacheBroker2.start();
+    pathChildrenCachePeon.start();
+
+    DruidDataSource[] druidDataSources = {new DruidDataSource(dataSource, Collections.emptyMap())};
+    dataSegments.values().forEach(druidDataSources[0]::addSegment);
+
+    setupSegmentsMetadataMock(druidDataSources[0]);
+
+    EasyMock.expect(metadataRuleManager.getRulesWithDefault(EasyMock.anyString()))
+            .andReturn(ImmutableList.of(broadcastDistributionRule)).atLeastOnce();
+    EasyMock.expect(metadataRuleManager.getAllRules())
+            .andReturn(ImmutableMap.of(dataSource, ImmutableList.of(broadcastDistributionRule))).atLeastOnce();
+
+    EasyMock.expect(serverInventoryView.getInventory())
+            .andReturn(ImmutableList.of(hotServer, coldServer, brokerServer1, brokerServer2, peonServer))
+            .atLeastOnce();
+    EasyMock.expect(serverInventoryView.isStarted()).andReturn(true).anyTimes();
+
+    EasyMock.replay(metadataRuleManager, serverInventoryView);
+
+    coordinator.start();
+    leaderAnnouncerLatch.await(); // Wait for this coordinator to become leader
+
+    final CountDownLatch assignSegmentLatchHot = new CountDownLatch(1);

Review comment:
       It's probably worth making a method that takes a `CountDownLatch` and a `DruidServer` and does the thing going on here (and in a few other tests)
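
    Something along these lines, perhaps. The method name, latch count, and the segment-loading simulation below are guesses at "the thing going on here", not the helper that was actually extracted:

        import java.util.concurrent.CountDownLatch;

        import org.apache.curator.framework.recipes.cache.PathChildrenCache;
        import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
        import org.apache.druid.client.DruidServer;
        import org.apache.druid.timeline.DataSegment;

        // Hypothetical helper for DruidCoordinatorTest
        private CountDownLatch createAssignSegmentLatch(
            PathChildrenCache cache,
            DruidServer server,
            Iterable<DataSegment> segments
        )
        {
          final CountDownLatch latch = new CountDownLatch(1);
          cache.getListenable().addListener((client, event) -> {
            // When the coordinator writes a load request into this server's
            // queue, simulate the server loading its segments and release the
            // waiting test thread.
            if (event.getType() == PathChildrenCacheEvent.Type.CHILD_ADDED && latch.getCount() > 0) {
              segments.forEach(server::addDataSegment);
              latch.countDown();
            }
          });
          return latch;
        }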





[GitHub] [druid] maytasm commented on a change in pull request #10048: Coordinator loadstatus API full format does not consider Broadcast rules

Posted by GitBox <gi...@apache.org>.
maytasm commented on a change in pull request #10048:
URL: https://github.com/apache/druid/pull/10048#discussion_r442068672



##########
File path: server/src/test/java/org/apache/druid/server/coordinator/DruidCoordinatorTest.java
##########
@@ -550,6 +551,241 @@ public void testCoordinatorTieredRun() throws Exception
     EasyMock.verify(metadataRuleManager);
   }
 
+  @Test(timeout = 60_000L)
+  public void testComputeUnderReplicationCountsPerDataSourcePerTierForSegmentsWithBroadcastRule() throws Exception
+  {
+    final String dataSource = "dataSource";
+    final String hotTierName = "hot";
+    final String coldTierName = "cold";
+    final String tierName1 = "tier1";
+    final String tierName2 = "tier2";
+    final Rule broadcastDistributionRule = new ForeverBroadcastDistributionRule();
+    final String loadPathCold = "/druid/loadqueue/cold:1234";
+    final String loadPathBroker1 = "/druid/loadqueue/broker1:1234";
+    final String loadPathBroker2 = "/druid/loadqueue/broker2:1234";
+    final String loadPathPeon = "/druid/loadqueue/peon:1234";
+    final DruidServer hotServer = new DruidServer("hot", "hot", null, 5L, ServerType.HISTORICAL, hotTierName, 0);
+    final DruidServer coldServer = new DruidServer("cold", "cold", null, 5L, ServerType.HISTORICAL, coldTierName, 0);
+    final DruidServer brokerServer1 = new DruidServer("broker1", "broker1", null, 5L, ServerType.BROKER, tierName1, 0);
+    final DruidServer brokerServer2 = new DruidServer("broker2", "broker2", null, 5L, ServerType.BROKER, tierName2, 0);
+    final DruidServer peonServer = new DruidServer("peon", "peon", null, 5L, ServerType.INDEXER_EXECUTOR, tierName2, 0);
+
+    final Map<String, DataSegment> dataSegments = ImmutableMap.of(
+        "2018-01-02T00:00:00.000Z_2018-01-03T00:00:00.000Z",
+        new DataSegment(dataSource, Intervals.of("2018-01-02/P1D"), "v1", null, null, null, null, 0x9, 0),
+        "2018-01-03T00:00:00.000Z_2018-01-04T00:00:00.000Z",
+        new DataSegment(dataSource, Intervals.of("2018-01-03/P1D"), "v1", null, null, null, null, 0x9, 0),
+        "2017-01-01T00:00:00.000Z_2017-01-02T00:00:00.000Z",
+        new DataSegment(dataSource, Intervals.of("2017-01-01/P1D"), "v1", null, null, null, null, 0x9, 0)
+    );
+
+    final LoadQueuePeon loadQueuePeonCold = new CuratorLoadQueuePeon(
+        curator,
+        loadPathCold,
+        objectMapper,
+        Execs.scheduledSingleThreaded("coordinator_test_load_queue_peon_cold_scheduled-%d"),
+        Execs.singleThreaded("coordinator_test_load_queue_peon_cold-%d"),
+        druidCoordinatorConfig
+    );
+
+    final LoadQueuePeon loadQueuePeonBroker1 = new CuratorLoadQueuePeon(
+        curator,
+        loadPathBroker1,
+        objectMapper,
+        Execs.scheduledSingleThreaded("coordinator_test_load_queue_peon_broker1_scheduled-%d"),
+        Execs.singleThreaded("coordinator_test_load_queue_peon_broker1-%d"),
+        druidCoordinatorConfig
+    );
+
+    final LoadQueuePeon loadQueuePeonBroker2 = new CuratorLoadQueuePeon(
+        curator,
+        loadPathBroker2,
+        objectMapper,
+        Execs.scheduledSingleThreaded("coordinator_test_load_queue_peon_broker2_scheduled-%d"),
+        Execs.singleThreaded("coordinator_test_load_queue_peon_broker2-%d"),
+        druidCoordinatorConfig
+    );
+
+    final LoadQueuePeon loadQueuePeonPoenServer = new CuratorLoadQueuePeon(
+        curator,
+        loadPathPeon,
+        objectMapper,
+        Execs.scheduledSingleThreaded("coordinator_test_load_queue_peon_peon_scheduled-%d"),
+        Execs.singleThreaded("coordinator_test_load_queue_peon_peon-%d"),
+        druidCoordinatorConfig
+    );
+    final PathChildrenCache pathChildrenCacheCold = new PathChildrenCache(
+        curator,
+        loadPathCold,
+        true,
+        true,
+        Execs.singleThreaded("coordinator_test_path_children_cache_cold-%d")
+    );
+    final PathChildrenCache pathChildrenCacheBroker1 = new PathChildrenCache(
+        curator,
+        loadPathBroker1,
+        true,
+        true,
+        Execs.singleThreaded("coordinator_test_path_children_cache_broker1-%d")
+    );
+    final PathChildrenCache pathChildrenCacheBroker2 = new PathChildrenCache(
+        curator,
+        loadPathBroker2,
+        true,
+        true,
+        Execs.singleThreaded("coordinator_test_path_children_cache_broker2-%d")
+    );
+    final PathChildrenCache pathChildrenCachePeon = new PathChildrenCache(
+        curator,
+        loadPathPeon,
+        true,
+        true,
+        Execs.singleThreaded("coordinator_test_path_children_cache_peon-%d")
+    );
+
+    loadManagementPeons.putAll(ImmutableMap.of("hot", loadQueuePeon,
+                                               "cold", loadQueuePeonCold,
+                                               "broker1", loadQueuePeonBroker1,
+                                               "broker2", loadQueuePeonBroker2,
+                                               "peon", loadQueuePeonPoenServer));
+
+    loadQueuePeonCold.start();
+    loadQueuePeonBroker1.start();
+    loadQueuePeonBroker2.start();
+    loadQueuePeonPoenServer.start();
+    pathChildrenCache.start();
+    pathChildrenCacheCold.start();
+    pathChildrenCacheBroker1.start();
+    pathChildrenCacheBroker2.start();
+    pathChildrenCachePeon.start();
+
+    DruidDataSource[] druidDataSources = {new DruidDataSource(dataSource, Collections.emptyMap())};
+    dataSegments.values().forEach(druidDataSources[0]::addSegment);
+
+    setupSegmentsMetadataMock(druidDataSources[0]);
+
+    EasyMock.expect(metadataRuleManager.getRulesWithDefault(EasyMock.anyString()))
+            .andReturn(ImmutableList.of(broadcastDistributionRule)).atLeastOnce();
+    EasyMock.expect(metadataRuleManager.getAllRules())
+            .andReturn(ImmutableMap.of(dataSource, ImmutableList.of(broadcastDistributionRule))).atLeastOnce();
+
+    EasyMock.expect(serverInventoryView.getInventory())
+            .andReturn(ImmutableList.of(hotServer, coldServer, brokerServer1, brokerServer2, peonServer))
+            .atLeastOnce();
+    EasyMock.expect(serverInventoryView.isStarted()).andReturn(true).anyTimes();
+
+    EasyMock.replay(metadataRuleManager, serverInventoryView);
+
+    coordinator.start();
+    leaderAnnouncerLatch.await(); // Wait for this coordinator to become leader
+
+    final CountDownLatch assignSegmentLatchHot = new CountDownLatch(1);

Review comment:
       Done





[GitHub] [druid] maytasm commented on a change in pull request #10048: Coordinator loadstatus API full format does not consider Broadcast rules

Posted by GitBox <gi...@apache.org>.
maytasm commented on a change in pull request #10048:
URL: https://github.com/apache/druid/pull/10048#discussion_r442437064



##########
File path: server/src/main/java/org/apache/druid/server/coordinator/DruidCoordinator.java
##########
@@ -269,6 +270,13 @@ public boolean isLeader()
   )
   {
     final Map<String, Object2LongMap<String>> underReplicationCountsPerDataSourcePerTier = new HashMap<>();
+    final Set<String> decommissioningServers = getDynamicConfigs().getDecommissioningNodes();

Review comment:
       Done




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org