Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2020/03/17 10:38:00 UTC

[GitHub] [druid] DixxieFlatline opened a new issue #9527: Second task hangs while ingesting from Kafka

URL: https://github.com/apache/druid/issues/9527
 
 
   Hello, I am using the dockerized version of Druid 0.17.0. I tried to set up Druid following the Docker tutorial and the quickstart guide, both with the default values proposed and with tweaked ones, without any positive result. This means five containers, one per service, with the coordinator and the overlord sharing a single container. Also, the number of tasks in the supervisor is set to 2 and the task duration to 1M to force the error sooner.
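   For reference, the relevant part of the supervisor spec would look roughly like this. This is only a sketch: the topic name `analytics` and `replicas: 1` come from the logs above, `taskCount: 2` and `taskDuration: "PT1M"` from the description, while the `bootstrap.servers` address is an assumption.
   ```json
   {
     "type": "kafka",
     "ioConfig": {
       "topic": "analytics",
       "consumerProperties": { "bootstrap.servers": "kafka:9092" },
       "taskCount": 2,
       "replicas": 1,
       "taskDuration": "PT1M"
     }
   }
   ```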
   
   The problem is the following: setting up a supervisor starts a new task that properly ingests the data coming from Kafka, but the second task created hangs in RUNNING status forever without ingesting any data. This leads to an UNHEALTHY_SUPERVISOR state because of 3 "Timeout waiting for task" errors, plus timeout exceptions on the broker node. Killing the hung task creates a new one that again ingests the data properly. I've just realized that the working tasks are located at 172.18.0.9:8100 and the failing ones at 172.18.0.9:8101.
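   As a workaround, the hung task can be killed through the overlord's task shutdown endpoint; the supervisor then spawns a replacement. A sketch, using the overlord address and a task id visible in the logs below (substitute the actual hung task's id):
   ```shell
   # Task id of the hung task (example value taken from the log below)
   TASK_ID="index_kafka_analytics_5fa1e1f7085073a_eacmkogj"
   OVERLORD="http://172.18.0.7:8081"
   # Ask the overlord to shut the task down
   curl -X POST "$OVERLORD/druid/indexer/v1/task/$TASK_ID/shutdown"
   ```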
   
   Coordinator-Overlord log:
   ```
   2020-03-17T10:22:59,765 DEBUG [qtp669132924-121] org.apache.druid.jetty.RequestLog - 172.18.0.4 GET //172.18.0.7:8081/druid/indexer/v1/tasks HTTP/1.1
   2020-03-17T10:22:59,978 INFO [KafkaSupervisor-analytics-Reporting-0] org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-4, groupId=kafka-supervisor-lbpfcima] Subscribed to partition(s): analytics-0
   2020-03-17T10:22:59,980 INFO [KafkaSupervisor-analytics-Reporting-0] org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-4, groupId=kafka-supervisor-lbpfcima] Resetting offset for partition analytics-0 to offset 152810.
   2020-03-17T10:23:01,713 INFO [KafkaSupervisor-analytics] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Task group [0] has run for [PT60S]
   2020-03-17T10:23:01,726 INFO [IndexTaskClient-analytics-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskClient - Task [index_kafka_analytics_cbfe5d63786d588_idbllidp] paused successfully
   2020-03-17T10:23:01,727 INFO [KafkaSupervisor-analytics-Worker-0] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Setting endOffsets for tasks in taskGroup [0] to {0=152889} and resuming
   2020-03-17T10:23:01,742 INFO [KafkaSupervisor-analytics] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - [analytics] supervisor is running.
   2020-03-17T10:23:01,742 INFO [KafkaSupervisor-analytics] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Creating new task group [0] for partitions [0]
   2020-03-17T10:23:01,742 INFO [KafkaSupervisor-analytics] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Number of tasks [0] does not match configured numReplicas [1] in task group [0], creating more tasks
   2020-03-17T10:23:01,743 INFO [KafkaSupervisor-analytics] org.apache.druid.indexing.overlord.MetadataTaskStorage - Inserting task index_kafka_analytics_5fa1e1f7085073a_eacmkogj with status: TaskStatus{id=index_kafka_analytics_5fa1e1f7085073a_eacmkogj, status=RUNNING, duration=-1, errorMsg=null}
   2020-03-17T10:23:01,745 INFO [KafkaSupervisor-analytics] org.apache.druid.indexing.overlord.TaskLockbox - Adding task[index_kafka_analytics_5fa1e1f7085073a_eacmkogj] to activeTasks
   2020-03-17T10:23:01,745 INFO [TaskQueue-Manager] org.apache.druid.indexing.overlord.TaskQueue - Asking taskRunner to run: index_kafka_analytics_5fa1e1f7085073a_eacmkogj
   2020-03-17T10:23:01,745 INFO [KafkaSupervisor-analytics] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - {id='analytics', generationTime=2020-03-17T10:23:01.745Z, payload=KafkaSupervisorReportPayload{dataSource='analytics', topic='analytics', partitions=1, replicas=1, durationSeconds=60, active=[], publishing=[{id='index_kafka_analytics_cbfe5d63786d588_idbllidp', startTime=2020-03-17T10:22:00.710Z, remainingSeconds=1799}], suspended=false, healthy=true, state=RUNNING, detailedState=RUNNING, recentErrors=[org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisorStateManager$SeekableStreamExceptionEvent@1386705, org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisorStateManager$SeekableStreamExceptionEvent@3e8840e8, org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisorStateManager$SeekableStreamExceptionEvent@644d9531]}}
   2020-03-17T10:23:01,745 INFO [TaskQueue-Manager] org.apache.druid.indexing.overlord.RemoteTaskRunner - Added pending task index_kafka_analytics_5fa1e1f7085073a_eacmkogj
   2020-03-17T10:23:01,746 INFO [rtr-pending-tasks-runner-0] org.apache.druid.indexing.overlord.RemoteTaskRunner - Coordinator asking Worker[172.18.0.9:8091] to add task[index_kafka_analytics_5fa1e1f7085073a_eacmkogj]
   2020-03-17T10:23:01,754 INFO [rtr-pending-tasks-runner-0] org.apache.druid.indexing.overlord.RemoteTaskRunner - Task index_kafka_analytics_5fa1e1f7085073a_eacmkogj switched from pending to running (on [172.18.0.9:8091])
   2020-03-17T10:23:01,772 INFO [Curator-PathChildrenCache-1] org.apache.druid.indexing.overlord.RemoteTaskRunner - Worker[172.18.0.9:8091] wrote RUNNING status for task [index_kafka_analytics_5fa1e1f7085073a_eacmkogj] on [TaskLocation{host='null', port=-1, tlsPort=-1}]
   2020-03-17T10:23:01,789 INFO [Curator-PathChildrenCache-1] org.apache.druid.indexing.overlord.RemoteTaskRunner - Worker[172.18.0.9:8091] wrote RUNNING status for task [index_kafka_analytics_5fa1e1f7085073a_eacmkogj] on [TaskLocation{host='172.18.0.9', port=8101, tlsPort=-1}]
   2020-03-17T10:23:02,351 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorSegmentInfoLoader - Found [17] used segments.
   2020-03-17T10:23:02,352 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.ReplicationThrottler - [_default_tier]: Replicant create queue is empty.
   2020-03-17T10:23:02,352 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z]! Expected Replicants[2]
   2020-03-17T10:23:02,352 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,352 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T07:00:00.000Z_2020-03-17T08:00:00.000Z_2020-03-17T07:00:00.843Z]! Expected Replicants[2]
   2020-03-17T10:23:02,352 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,352 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T06:00:00.000Z_2020-03-17T07:00:00.000Z_2020-03-17T06:00:00.108Z]! Expected Replicants[2]
   2020-03-17T10:23:02,352 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,352 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T05:00:00.000Z_2020-03-17T06:00:00.000Z_2020-03-17T05:00:00.300Z]! Expected Replicants[2]
   2020-03-17T10:23:02,352 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,352 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T04:00:00.000Z_2020-03-17T05:00:00.000Z_2020-03-17T04:00:00.270Z]! Expected Replicants[2]
   2020-03-17T10:23:02,352 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,352 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T03:00:00.000Z_2020-03-17T04:00:00.000Z_2020-03-17T03:00:00.022Z]! Expected Replicants[2]
   2020-03-17T10:23:02,352 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,352 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T02:00:00.000Z_2020-03-17T03:00:00.000Z_2020-03-17T02:00:00.244Z]! Expected Replicants[2]
   2020-03-17T10:23:02,352 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,352 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T01:00:00.000Z_2020-03-17T02:00:00.000Z_2020-03-17T01:00:00.023Z]! Expected Replicants[2]
   2020-03-17T10:23:02,352 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T00:00:00.000Z_2020-03-17T01:00:00.000Z_2020-03-17T00:00:00.208Z]! Expected Replicants[2]
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T23:00:00.000Z_2020-03-17T00:00:00.000Z_2020-03-16T23:00:00.157Z]! Expected Replicants[2]
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T22:00:00.000Z_2020-03-16T23:00:00.000Z_2020-03-16T22:00:00.518Z]! Expected Replicants[2]
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T21:00:00.000Z_2020-03-16T22:00:00.000Z_2020-03-16T21:00:00.202Z]! Expected Replicants[2]
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T20:00:00.000Z_2020-03-16T21:00:00.000Z_2020-03-16T20:00:00.136Z]! Expected Replicants[2]
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T19:00:00.000Z_2020-03-16T20:00:00.000Z_2020-03-16T19:00:00.036Z_1]! Expected Replicants[2]
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T19:00:00.000Z_2020-03-16T20:00:00.000Z_2020-03-16T19:00:00.036Z]! Expected Replicants[2]
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T18:00:00.000Z_2020-03-16T19:00:00.000Z_2020-03-16T18:13:58.452Z]! Expected Replicants[2]
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T17:00:00.000Z_2020-03-16T18:00:00.000Z_2020-03-16T18:13:57.707Z]! Expected Replicants[2]
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - Found 1 active servers, 0 decommissioning servers
   2020-03-17T10:23:02,353 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - [_default_tier]: insufficient active servers. Cannot balance.
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - [_default_tier] : Assigned 0 segments among 1 servers
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - Load Queues:
   2020-03-17T10:23:02,353 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - Server[172.18.0.8:8083, historical, _default_tier] has 0 left to load, 0 left to drop, 0 bytes queued, 538,057 bytes served.
   2020-03-17T10:23:02,543 INFO [qtp669132924-143] org.apache.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_kafka_analytics_cbfe5d63786d588_idbllidp]: SegmentTransactionalInsertAction{segmentsToBeOverwritten=null, segments=[DataSegment{binaryVersion=9, id=analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z_1, loadSpec={type=>local, path=>/opt/data/segments/analytics/2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z/2020-03-17T08:00:00.478Z/1/3992f6f7-57a1-4a00-8b5b-975c1edab6d1/index.zip}, dimensions=[domain, domainStatus, exchange, format, campaignId, creativeId], metrics=[total, pixels, bids, impressions, clicks, conversions, events, spend, revenue], shardSpec=NumberedShardSpec{partitionNum=1, partitions=0}, lastCompactionState=null, size=20204}, DataSegment{binaryVersion=9, id=analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z, loadSpec={type=>local, path=>/opt/data/segments/analytics/2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z/2020-03-17T10:00:00.037Z/0/6d630d93-0a2e-4fc9-b1ff-b61569b336e1/index.zip}, dimensions=[domain, domainStatus, exchange, format, campaignId, creativeId], metrics=[total, pixels, bids, impressions, clicks, conversions, events, spend, revenue], shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, lastCompactionState=null, size=20183}, DataSegment{binaryVersion=9, id=analytics_2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z_2020-03-17T09:00:00.517Z, loadSpec={type=>local, path=>/opt/data/segments/analytics/2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z/2020-03-17T09:00:00.517Z/0/8f539a0f-164e-497f-b30c-e00775cea59f/index.zip}, dimensions=[domain, domainStatus, exchange, format, campaignId, creativeId], metrics=[total, pixels, bids, impressions, clicks, conversions, events, spend, revenue], shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, lastCompactionState=null, size=35669}], 
startMetadata=KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamStartSequenceNumbers{stream='analytics', partitionSequenceNumberMap={0=135176}, exclusivePartitions=[]}}, endMetadata=KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamEndSequenceNumbers{stream='analytics', partitionSequenceNumberMap={0=152889}}}, dataSource=null}
   2020-03-17T10:23:02,546 INFO [qtp669132924-143] org.apache.druid.metadata.IndexerSQLMetadataStorageCoordinator - Updated metadata from[KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamEndSequenceNumbers{stream='analytics', partitionSequenceNumberMap={0=135176}}}] to[KafkaDataSourceMetadata{SeekableStreamStartSequenceNumbers=SeekableStreamEndSequenceNumbers{stream='analytics', partitionSequenceNumberMap={0=152889}}}].
   2020-03-17T10:23:02,547 INFO [qtp669132924-143] org.apache.druid.metadata.IndexerSQLMetadataStorageCoordinator - Published segment [analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z_1] to DB with used flag [true], json[{"dataSource":"analytics","interval":"2020-03-17T08:00:00.000Z/2020-03-17T09:00:00.000Z","version":"2020-03-17T08:00:00.478Z","loadSpec":{"type":"local","path":"/opt/data/segments/analytics/2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z/2020-03-17T08:00:00.478Z/1/3992f6f7-57a1-4a00-8b5b-975c1edab6d1/index.zip"},"dimensions":"domain,domainStatus,exchange,format,campaignId,creativeId","metrics":"total,pixels,bids,impressions,clicks,conversions,events,spend,revenue","shardSpec":{"type":"numbered","partitionNum":1,"partitions":0},"binaryVersion":9,"size":20204,"identifier":"analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z_1"}]
   2020-03-17T10:23:02,547 INFO [qtp669132924-143] org.apache.druid.metadata.IndexerSQLMetadataStorageCoordinator - Published segment [analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z] to DB with used flag [true], json[{"dataSource":"analytics","interval":"2020-03-17T10:00:00.000Z/2020-03-17T11:00:00.000Z","version":"2020-03-17T10:00:00.037Z","loadSpec":{"type":"local","path":"/opt/data/segments/analytics/2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z/2020-03-17T10:00:00.037Z/0/6d630d93-0a2e-4fc9-b1ff-b61569b336e1/index.zip"},"dimensions":"domain,domainStatus,exchange,format,campaignId,creativeId","metrics":"total,pixels,bids,impressions,clicks,conversions,events,spend,revenue","shardSpec":{"type":"numbered","partitionNum":0,"partitions":0},"binaryVersion":9,"size":20183,"identifier":"analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z"}]
   2020-03-17T10:23:02,548 INFO [qtp669132924-143] org.apache.druid.metadata.IndexerSQLMetadataStorageCoordinator - Published segment [analytics_2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z_2020-03-17T09:00:00.517Z] to DB with used flag [true], json[{"dataSource":"analytics","interval":"2020-03-17T09:00:00.000Z/2020-03-17T10:00:00.000Z","version":"2020-03-17T09:00:00.517Z","loadSpec":{"type":"local","path":"/opt/data/segments/analytics/2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z/2020-03-17T09:00:00.517Z/0/8f539a0f-164e-497f-b30c-e00775cea59f/index.zip"},"dimensions":"domain,domainStatus,exchange,format,campaignId,creativeId","metrics":"total,pixels,bids,impressions,clicks,conversions,events,spend,revenue","shardSpec":{"type":"numbered","partitionNum":0,"partitions":0},"binaryVersion":9,"size":35669,"identifier":"analytics_2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z_2020-03-17T09:00:00.517Z"}]
   2020-03-17T10:23:02,549 DEBUG [qtp669132924-143] org.apache.druid.jetty.RequestLog - 172.18.0.9 POST //172.18.0.7:8081/druid/indexer/v1/action HTTP/1.1
   2020-03-17T10:23:03,062 DEBUG [qtp669132924-147] org.apache.druid.jetty.RequestLog - 172.18.0.4 GET //172.18.0.7:8081/druid/indexer/v1/supervisor?system HTTP/1.1
   2020-03-17T10:23:04,982 DEBUG [qtp669132924-143] org.apache.druid.jetty.RequestLog - 172.18.0.4 GET //172.18.0.7:8081/druid/indexer/v1/tasks HTTP/1.1
   2020-03-17T10:23:06,417 INFO [qtp669132924-147] org.apache.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_kafka_analytics_5fa1e1f7085073a_eacmkogj]: SegmentAllocateAction{dataSource='analytics', timestamp=2020-03-17T10:23:01.738Z, queryGranularity={type=period, period=PT1M, timeZone=UTC, origin=null}, preferredSegmentGranularity={type=period, period=PT1H, timeZone=UTC, origin=null}, sequenceName='index_kafka_analytics_5fa1e1f7085073a_0', previousSegmentId='null', skipSegmentLineageCheck=true, shardSpecFactory=org.apache.druid.timeline.partition.NumberedShardSpecFactory@774f31f, lockGranularity=TIME_CHUNK}
   2020-03-17T10:23:06,420 INFO [qtp669132924-147] org.apache.druid.metadata.IndexerSQLMetadataStorageCoordinator - Allocated pending segment [analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z_1] for sequence[index_kafka_analytics_5fa1e1f7085073a_0] in DB
   2020-03-17T10:23:06,420 INFO [qtp669132924-147] org.apache.druid.indexing.overlord.TaskLockbox - Added task[index_kafka_analytics_5fa1e1f7085073a_eacmkogj] to TaskLock[TimeChunkLock{type=EXCLUSIVE, groupId='index_kafka_analytics', dataSource='analytics', interval=2020-03-17T10:00:00.000Z/2020-03-17T11:00:00.000Z, version='2020-03-17T10:22:02.104Z', priority=75, revoked=false}]
   2020-03-17T10:23:06,420 INFO [qtp669132924-147] org.apache.druid.indexing.overlord.MetadataTaskStorage - Adding lock on interval[2020-03-17T10:00:00.000Z/2020-03-17T11:00:00.000Z] version[2020-03-17T10:22:02.104Z] for task: index_kafka_analytics_5fa1e1f7085073a_eacmkogj
   2020-03-17T10:23:06,421 DEBUG [qtp669132924-147] org.apache.druid.jetty.RequestLog - 172.18.0.9 POST //172.18.0.7:8081/druid/indexer/v1/action HTTP/1.1
   2020-03-17T10:23:06,509 INFO [ServerInventoryView-0] org.apache.druid.client.BatchServerInventoryView - New Server[DruidServerMetadata{name='172.18.0.9:8101', hostAndPort='172.18.0.9:8101', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}]
   2020-03-17T10:23:06,511 INFO [NodeRoleWatcher[PEON]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Node[http://172.18.0.9:8101] of role[peon] detected.
   2020-03-17T10:23:06,680 DEBUG [qtp669132924-128] org.apache.druid.jetty.RequestLog - 172.18.0.4 GET //172.18.0.7:8081/druid/indexer/v1/supervisor?system HTTP/1.1
   2020-03-17T10:23:06,776 DEBUG [qtp669132924-121] org.apache.druid.jetty.RequestLog - 172.18.0.4 GET //172.18.0.7:8081/druid/indexer/v1/tasks HTTP/1.1
   2020-03-17T10:23:07,354 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorSegmentInfoLoader - Found [17] used segments.
   2020-03-17T10:23:07,354 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.ReplicationThrottler - [_default_tier]: Replicant create queue is empty.
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T07:00:00.000Z_2020-03-17T08:00:00.000Z_2020-03-17T07:00:00.843Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T06:00:00.000Z_2020-03-17T07:00:00.000Z_2020-03-17T06:00:00.108Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T05:00:00.000Z_2020-03-17T06:00:00.000Z_2020-03-17T05:00:00.300Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T04:00:00.000Z_2020-03-17T05:00:00.000Z_2020-03-17T04:00:00.270Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T03:00:00.000Z_2020-03-17T04:00:00.000Z_2020-03-17T03:00:00.022Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T02:00:00.000Z_2020-03-17T03:00:00.000Z_2020-03-17T02:00:00.244Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T01:00:00.000Z_2020-03-17T02:00:00.000Z_2020-03-17T01:00:00.023Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T00:00:00.000Z_2020-03-17T01:00:00.000Z_2020-03-17T00:00:00.208Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,355 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T23:00:00.000Z_2020-03-17T00:00:00.000Z_2020-03-16T23:00:00.157Z]! Expected Replicants[2]
   2020-03-17T10:23:07,355 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,356 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T22:00:00.000Z_2020-03-16T23:00:00.000Z_2020-03-16T22:00:00.518Z]! Expected Replicants[2]
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,356 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T21:00:00.000Z_2020-03-16T22:00:00.000Z_2020-03-16T21:00:00.202Z]! Expected Replicants[2]
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,356 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T20:00:00.000Z_2020-03-16T21:00:00.000Z_2020-03-16T20:00:00.136Z]! Expected Replicants[2]
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,356 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T19:00:00.000Z_2020-03-16T20:00:00.000Z_2020-03-16T19:00:00.036Z_1]! Expected Replicants[2]
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,356 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T19:00:00.000Z_2020-03-16T20:00:00.000Z_2020-03-16T19:00:00.036Z]! Expected Replicants[2]
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,356 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T18:00:00.000Z_2020-03-16T19:00:00.000Z_2020-03-16T18:13:58.452Z]! Expected Replicants[2]
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,356 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T17:00:00.000Z_2020-03-16T18:00:00.000Z_2020-03-16T18:13:57.707Z]! Expected Replicants[2]
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - Found 1 active servers, 0 decommissioning servers
   2020-03-17T10:23:07,356 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - [_default_tier]: insufficient active servers. Cannot balance.
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - [_default_tier] : Assigned 0 segments among 1 servers
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - Load Queues:
   2020-03-17T10:23:07,356 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - Server[172.18.0.8:8083, historical, _default_tier] has 0 left to load, 0 left to drop, 0 bytes queued, 538,057 bytes served.
   2020-03-17T10:23:07,979 DEBUG [qtp669132924-143] org.apache.druid.jetty.RequestLog - 172.18.0.4 GET //172.18.0.7:8081/druid/indexer/v1/supervisor?system HTTP/1.1
   2020-03-17T10:23:10,419 DEBUG [qtp669132924-147] org.apache.druid.jetty.RequestLog - 172.18.0.4 GET //172.18.0.7:8081/druid/indexer/v1/tasks HTTP/1.1
   2020-03-17T10:23:11,933 DEBUG [qtp669132924-128] org.apache.druid.jetty.RequestLog - 172.18.0.6 GET //172.18.0.7:8081/druid/coordinator/v1/rules HTTP/1.1
   2020-03-17T10:23:12,357 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorSegmentInfoLoader - Found [17] used segments.
   2020-03-17T10:23:12,357 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.ReplicationThrottler - [_default_tier]: Replicant create queue is empty.
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T07:00:00.000Z_2020-03-17T08:00:00.000Z_2020-03-17T07:00:00.843Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T06:00:00.000Z_2020-03-17T07:00:00.000Z_2020-03-17T06:00:00.108Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T05:00:00.000Z_2020-03-17T06:00:00.000Z_2020-03-17T05:00:00.300Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T04:00:00.000Z_2020-03-17T05:00:00.000Z_2020-03-17T04:00:00.270Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T03:00:00.000Z_2020-03-17T04:00:00.000Z_2020-03-17T03:00:00.022Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T02:00:00.000Z_2020-03-17T03:00:00.000Z_2020-03-17T02:00:00.244Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T01:00:00.000Z_2020-03-17T02:00:00.000Z_2020-03-17T01:00:00.023Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-17T00:00:00.000Z_2020-03-17T01:00:00.000Z_2020-03-17T00:00:00.208Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T23:00:00.000Z_2020-03-17T00:00:00.000Z_2020-03-16T23:00:00.157Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,358 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T22:00:00.000Z_2020-03-16T23:00:00.000Z_2020-03-16T22:00:00.518Z]! Expected Replicants[2]
   2020-03-17T10:23:12,358 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,359 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T21:00:00.000Z_2020-03-16T22:00:00.000Z_2020-03-16T21:00:00.202Z]! Expected Replicants[2]
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,359 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T20:00:00.000Z_2020-03-16T21:00:00.000Z_2020-03-16T20:00:00.136Z]! Expected Replicants[2]
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,359 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T19:00:00.000Z_2020-03-16T20:00:00.000Z_2020-03-16T19:00:00.036Z_1]! Expected Replicants[2]
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,359 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T19:00:00.000Z_2020-03-16T20:00:00.000Z_2020-03-16T19:00:00.036Z]! Expected Replicants[2]
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,359 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T18:00:00.000Z_2020-03-16T19:00:00.000Z_2020-03-16T18:13:58.452Z]! Expected Replicants[2]
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,359 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[analytics_2020-03-16T17:00:00.000Z_2020-03-16T18:00:00.000Z_2020-03-16T18:13:57.707Z]! Expected Replicants[2]
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.rules.LoadRule - Loading in progress, skipping drop until loading is complete
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - Found 1 active servers, 0 decommissioning servers
   2020-03-17T10:23:12,359 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorBalancer - [_default_tier]: insufficient active servers. Cannot balance.
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - [_default_tier] : Assigned 0 segments among 1 servers
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - Load Queues:
   2020-03-17T10:23:12,359 INFO [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorLogger - Server[172.18.0.8:8083, historical, _default_tier] has 0 left to load, 0 left to drop, 0 bytes queued, 538,057 bytes served.
   ```
   The last part above then repeats in a loop.
   
   Last part of the task log:
   ```
   2020-03-17T10:23:05,918 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.http.SegmentListerResource as a root resource class
   2020-03-17T10:23:05,920 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider as a provider class
   2020-03-17T10:23:05,920 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering com.fasterxml.jackson.jaxrs.smile.JacksonSmileProvider as a provider class
   2020-03-17T10:23:05,920 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.initialization.jetty.CustomExceptionMapper as a provider class
   2020-03-17T10:23:05,920 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.initialization.jetty.ForbiddenExceptionMapper as a provider class
   2020-03-17T10:23:05,920 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.initialization.jetty.BadRequestExceptionMapper as a provider class
   2020-03-17T10:23:05,920 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.StatusResource as a root resource class
   2020-03-17T10:23:05,922 INFO [main] com.sun.jersey.server.impl.application.WebApplicationImpl - Initiating Jersey application, version 'Jersey: 1.19.3 10/24/2016 03:43 PM'
   2020-03-17T10:23:05,969 INFO [task-runner-0-priority-0] org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.2.1
   2020-03-17T10:23:05,969 INFO [task-runner-0-priority-0] org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 55783d3133a5a49a
   2020-03-17T10:23:05,971 INFO [task-runner-0-priority-0] org.apache.druid.server.coordination.CuratorDataSegmentServerAnnouncer - Announcing self[DruidServerMetadata{name='172.18.0.9:8101', hostAndPort='172.18.0.9:8101', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}] at [/druid/announcements/172.18.0.9:8101]
   2020-03-17T10:23:05,980 INFO [task-runner-0-priority-0] org.apache.druid.curator.discovery.CuratorDruidNodeAnnouncer - Announced self [{"druidNode":{"service":"druid/middleManager","host":"172.18.0.9","bindOnHost":false,"plaintextPort":8101,"port":-1,"tlsPort":-1,"enablePlaintextPort":true,"enableTlsPort":false},"nodeType":"peon","services":{"dataNodeService":{"type":"dataNodeService","tier":"_default_tier","maxSize":0,"type":"indexer-executor","priority":0},"lookupNodeService":{"type":"lookupNodeService","lookupTier":"__default"}}}].
   2020-03-17T10:23:05,981 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.initialization.jetty.CustomExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T10:23:05,982 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.initialization.jetty.ForbiddenExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T10:23:05,982 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.initialization.jetty.BadRequestExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T10:23:05,983 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T10:23:05,989 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding com.fasterxml.jackson.jaxrs.smile.JacksonSmileProvider to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T10:23:06,034 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Initialized sequences: SequenceMetadata{sequenceId=0, sequenceName='index_kafka_analytics_5fa1e1f7085073a_0', assignments=[0], startOffsets={0=152889}, exclusiveStartPartitions=[], endOffsets={0=9223372036854775807}, sentinel=false, checkpointed=false}
   2020-03-17T10:23:06,036 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Adding partition[0], start[152889] -> end[9223372036854775807] to assignment.
   2020-03-17T10:23:06,038 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=kafka-supervisor-iejbaphc] Subscribed to partition(s): analytics-0
   2020-03-17T10:23:06,042 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Seeking partition[0] to[152889].
   2020-03-17T10:23:06,251 INFO [task-runner-0-priority-0] org.apache.kafka.clients.Metadata - Cluster ID: gcH_teSIS4mIpy5Y3asWNg
   2020-03-17T10:23:06,399 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.http.security.StateResourceFilter to GuiceInstantiatedComponentProvider
   2020-03-17T10:23:06,419 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.http.SegmentListerResource to GuiceManagedComponentProvider with the scope "PerRequest"
   2020-03-17T10:23:06,425 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.QueryResource to GuiceInstantiatedComponentProvider
   2020-03-17T10:23:06,430 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.segment.realtime.firehose.ChatHandlerResource to GuiceInstantiatedComponentProvider
   2020-03-17T10:23:06,434 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.http.security.ConfigResourceFilter to GuiceInstantiatedComponentProvider
   2020-03-17T10:23:06,438 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.query.lookup.LookupListeningResource to GuiceInstantiatedComponentProvider
   2020-03-17T10:23:06,440 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.query.lookup.LookupIntrospectionResource to GuiceInstantiatedComponentProvider
   2020-03-17T10:23:06,441 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.StatusResource to GuiceManagedComponentProvider with the scope "Undefined"
   2020-03-17T10:23:06,464 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@7254838{/,null,AVAILABLE}
   2020-03-17T10:23:06,469 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - New segment[analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z_1] for sequenceName[index_kafka_analytics_5fa1e1f7085073a_0].
   2020-03-17T10:23:06,476 INFO [main] org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@29fcfc54{HTTP/1.1,[http/1.1]}{0.0.0.0:8101}
   2020-03-17T10:23:06,476 INFO [main] org.eclipse.jetty.server.Server - Started @4701ms
   2020-03-17T10:23:06,476 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Starting lifecycle [module] stage [ANNOUNCEMENTS]
   2020-03-17T10:23:06,501 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Successfully started lifecycle [module]
   2020-03-17T10:23:06,524 INFO [task-runner-0-priority-0] org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z_1] at new path[/druid/segments/172.18.0.9:8101/172.18.0.9:8101_indexer-executor__default_tier_2020-03-17T10:23:06.522Z_baea44f5ec7a489f951125fc6f1b5e010]
   ```
   
   MiddleManager log:
   ```
   2020-03-17T10:23:01,765 INFO [WorkerTaskManager-NoticeHandler] org.apache.druid.indexing.worker.WorkerTaskManager - Task[index_kafka_analytics_5fa1e1f7085073a_eacmkogj] started.
   2020-03-17T10:23:01,768 INFO [forking-task-runner-1] org.apache.druid.indexing.overlord.ForkingTaskRunner - Running command: java -cp /tmp/conf/druid/cluster/_common:/tmp/conf/druid/cluster/data/middleManager:lib/google-http-client-1.22.0.jar:lib/jdbi-2.63.1.jar:lib/FastInfoset-1.2.15.jar:lib/jackson-databind-2.10.1.jar:lib/stax-ex-1.8.jar:lib/aether-api-0.9.0.M2.jar:lib/httpclient-4.5.10.jar:lib/jakarta.activation-api-1.2.1.jar:lib/jackson-dataformat-cbor-2.10.1.jar:lib/avatica-server-1.15.0.jar:lib/maven-aether-provider-3.1.1.jar:lib/RoaringBitmap-0.8.11.jar:lib/jakarta.xml.bind-api-2.3.2.jar:lib/httpcore-4.4.11.jar:lib/jetty-continuation-9.4.12.v20180830.jar:lib/netty-resolver-4.1.42.Final.jar:lib/istack-commons-runtime-3.0.7.jar:lib/javax.el-api-3.0.0.jar:lib/rhino-1.7.11.jar:lib/guice-multibindings-4.1.0.jar:lib/druid-gcp-common-0.17.0.jar:lib/antlr4-runtime-4.5.1.jar:lib/aws-java-sdk-core-1.11.199.jar:lib/druid-services-0.17.0.jar:lib/aether-util-0.9.0.M2.jar:lib/netty-codec-dns-4.1.42.Final.jar:lib/icu4j-55.1.jar:lib/commons-pool2-2.2.jar:lib/netty-codec-socks-4.1.42.Final.jar:lib/netty-transport-native-unix-common-4.1.42.Final.jar:lib/google-http-client-jackson2-1.22.0.jar:lib/config-magic-0.9.jar:lib/commons-compiler-3.0.11.jar:lib/aether-connector-file-0.9.0.M2.jar:lib/druid-indexing-service-0.17.0.jar:lib/lz4-java-1.6.0.jar:lib/calcite-core-1.21.0.jar:lib/jcodings-1.0.43.jar:lib/commons-logging-1.1.1.jar:lib/jetty-server-9.4.12.v20180830.jar:lib/javax.activation-api-1.2.0.jar:lib/aether-impl-0.9.0.M2.jar:lib/esri-geometry-api-2.2.0.jar:lib/jetty-servlet-9.4.12.v20180830.jar:lib/async-http-client-netty-utils-2.5.3.jar:lib/commons-compress-1.19.jar:lib/aether-spi-0.9.0.M2.jar:lib/jvm-attach-api-1.5.jar:lib/derby-10.14.2.0.jar:lib/druid-server-0.17.0.jar:lib/druid-aws-common-0.17.0.jar:lib/curator-recipes-4.1.0.jar:lib/netty-handler-4.1.42.Final.jar:lib/jcl-over-slf4j-1.7.12.jar:lib/jetty-proxy-9.4.12.v20180830.jar:lib/jetty-io-9.4.12.v20180830.jar:lib/zo
okeeper-3.4.14.jar:lib/jackson-jaxrs-base-2.10.1.jar:lib/validation-api-1.1.0.Final.jar:lib/wagon-provider-api-2.4.jar:lib/javax.inject-1.jar:lib/netty-codec-4.1.42.Final.jar:lib/commons-math3-3.6.1.jar:lib/classmate-1.1.0.jar:lib/netty-codec-http-4.1.42.Final.jar:lib/google-api-client-1.22.0.jar:lib/joni-2.1.27.jar:lib/spymemcached-2.12.3.jar:lib/async-http-client-2.5.3.jar:lib/commons-io-2.6.jar:lib/jackson-core-2.10.1.jar:lib/jetty-rewrite-9.4.12.v20180830.jar:lib/datasketches-memory-1.2.0-incubating.jar:lib/accessors-smart-1.2.jar:lib/druid-processing-0.17.0.jar:lib/curator-x-discovery-4.1.0.jar:lib/jetty-security-9.4.12.v20180830.jar:lib/commons-collections4-4.2.jar:lib/javax.activation-1.2.0.jar:lib/asm-commons-7.1.jar:lib/plexus-interpolation-1.19.jar:lib/druid-hll-0.17.0.jar:lib/jsr305-2.0.1.jar:lib/guava-16.0.1.jar:lib/netty-handler-proxy-4.1.42.Final.jar:lib/audience-annotations-0.5.0.jar:lib/asm-tree-7.1.jar:lib/jmespath-java-1.11.199.jar:lib/checker-qual-2.5.7.jar:lib/extendedset-0.17.0.jar:lib/jaxb-api-2.3.1.jar:lib/maven-artifact-3.6.0.jar:lib/log4j-api-2.8.2.jar:lib/netty-reactive-streams-2.0.0.jar:lib/airline-0.7.jar:lib/calcite-linq4j-1.21.0.jar:lib/derbyclient-10.14.2.0.jar:lib/jline-0.9.94.jar:lib/maven-settings-builder-3.1.1.jar:lib/log4j-1.2-api-2.8.2.jar:lib/jersey-servlet-1.19.3.jar:lib/maven-model-3.1.1.jar:lib/error_prone_annotations-2.3.2.jar:lib/netty-common-4.1.42.Final.jar:lib/derbynet-10.14.2.0.jar:lib/commons-net-3.6.jar:lib/asm-7.1.jar:lib/jackson-module-guice-2.10.1.jar:lib/maven-repository-metadata-3.1.1.jar:lib/jersey-core-1.19.3.jar:lib/netty-transport-native-epoll-4.1.42.Final-linux-x86_64.jar:lib/netty-transport-4.1.42.Final.jar:lib/jackson-jaxrs-smile-provider-2.10.1.jar:lib/javax.el-3.0.0.jar:lib/janino-3.0.11.jar:lib/json-smart-2.3.jar:lib/commons-lang3-3.8.1.jar:lib/aopalliance-1.0.jar:lib/disruptor-3.3.6.jar:lib/jackson-mapper-asl-1.9.13.jar:lib/jackson-jq-0.0.10.jar:lib/jackson-dataformat-smile-2.10.1.jar:lib/datasketches-
java-1.1.0-incubating.jar:lib/jna-4.5.1.jar:lib/reactive-streams-1.0.2.jar:lib/jersey-server-1.19.3.jar:lib/plexus-utils-3.0.24.jar:lib/jersey-guice-1.19.3.jar:lib/javax.servlet-api-3.1.0.jar:lib/log4j-core-2.8.2.jar:lib/aggdesigner-algorithm-6.0.jar:lib/okhttp-1.0.2.jar:lib/jackson-datatype-joda-2.10.1.jar:lib/txw2-2.3.1.jar:lib/jackson-annotations-2.10.1.jar:lib/compress-lzf-1.0.4.jar:lib/opencsv-4.6.jar:lib/fastutil-8.2.3.jar:lib/avatica-metrics-1.15.0.jar:lib/jetty-util-9.4.12.v20180830.jar:lib/guice-servlet-4.1.0.jar:lib/aether-connector-okhttp-0.0.9.jar:lib/netty-3.10.6.Final.jar:lib/avatica-core-1.15.0.jar:lib/aws-java-sdk-s3-1.11.199.jar:lib/xz-1.8.jar:lib/maven-model-builder-3.1.1.jar:lib/zstd-jni-1.3.3-1.jar:lib/hibernate-validator-5.2.5.Final.jar:lib/log4j-slf4j-impl-2.8.2.jar:lib/guice-4.1.0.jar:lib/metrics-core-4.0.0.jar:lib/jackson-module-jaxb-annotations-2.10.1.jar:lib/protobuf-java-3.11.0.jar:lib/sigar-1.6.5.132.jar:lib/jetty-client-9.4.12.v20180830.jar:lib/curator-client-4.1.0.jar:lib/aws-java-sdk-kms-1.11.199.jar:lib/jsr311-api-1.1.1.jar:lib/commons-dbcp2-2.0.1.jar:lib/jackson-core-asl-1.9.13.jar:lib/caffeine-2.8.0.jar:lib/log4j-jul-2.8.2.jar:lib/shims-0.8.11.jar:lib/netty-buffer-4.1.42.Final.jar:lib/curator-framework-4.1.0.jar:lib/commons-lang-2.6.jar:lib/aws-java-sdk-ec2-1.11.199.jar:lib/druid-console-0.17.0.jar:lib/commons-collections-3.2.2.jar:lib/jboss-logging-3.2.1.Final.jar:lib/commons-text-1.3.jar:lib/jackson-jaxrs-json-provider-2.10.1.jar:lib/netty-resolver-dns-4.1.42.Final.jar:lib/maven-settings-3.1.1.jar:lib/commons-beanutils-1.9.4.jar:lib/druid-core-0.17.0.jar:lib/druid-indexing-hadoop-0.17.0.jar:lib/commons-codec-1.13.jar:lib/jetty-http-9.4.12.v20180830.jar:lib/jackson-datatype-guava-2.10.1.jar:lib/slf4j-api-1.7.25.jar:lib/jaxb-runtime-2.3.1.jar:lib/joda-time-2.10.5.jar:lib/google-oauth-client-1.22.0.jar:lib/jetty-servlets-9.4.12.v20180830.jar:lib/json-path-2.3.0.jar:lib/tesla-aether-0.0.5.jar:lib/asm-analysis-7.1.jar:lib/ion-java-1.0.
2.jar:lib/druid-sql-0.17.0.jar: -server -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -server -Xmx1g -Xms1g -XX:MaxDirectMemorySize=3g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Ddruid.indexer.task.baseTaskDir=var/druid/task -Ddruid.host=172.18.0.9 -Ddruid.metadata.storage.host= -Ddruid.metadata.storage.connector.password=FoolishPassword -Ddruid.metadata.storage.connector.host=localhost -Ddruid.emitter.logging.logLevel=debug -Ddruid.emitter=noop -Ddruid.indexer.fork.property.druid.processing.buffer.sizeBytes=268435456 -Duser.timezone=UTC -Dfile.encoding.pkg=sun.io -Ddruid.storage.storageDirectory=/opt/data/segments -Ddruid.selectors.coordinator.serviceName=druid/coordinator -Ddruid.selectors.indexing.serviceName=druid/overlord -Ddruid.indexing.doubleStorage=double -Ddruid.lookup.enableLookupSyncOnStartup=false -Ddruid.server.http.numThreads=60 -Ddruid.worker.capacity=4 -Ddruid.metadata.storage.connector.port=1527 -Ddruid.processing.numMergeBuffers=2 -Ddruid.service=druid/middleManager -Ddruid.metadata.storage.connector.user=druid -Ddruid.metadata.storage.type=postgresql -Ddruid.metadata.storage.connector.connectURI=jdbc:postgresql://postgres:5432/druid -Ddruid.coordinator.balancer.strategy=cachingCost -Ddruid.plaintextPort=8091 -Djava.io.tmpdir=var/tmp -Ddruid.extensions.loadList=["druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage", "druid-kafka-indexing-service"] -Ddruid.sql.enable=true -Ddruid.startup.logging.logProperties=true -Ddruid.server.hiddenProperties=["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password"] -Ddruid.processing.numThreads=2 -Ddruid.zk.service.host=zookeeper -Ddruid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"] 
-Ddruid.indexer.logs.directory=/opt/data/indexing-logs -Ddruid.zk.paths.base=/druid -Dfile.encoding=UTF-8 -Ddruid.storage.type=local -Ddruid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp -Ddruid.indexer.logs.type=file -Ddruid.processing.buffer.sizeBytes=268435456 -Ddruid.metrics.emitter.dimension.dataSource=analytics -Ddruid.metrics.emitter.dimension.taskId=index_kafka_analytics_5fa1e1f7085073a_eacmkogj -Ddruid.metrics.emitter.dimension.taskType=index_kafka -Ddruid.host=172.18.0.9 -Ddruid.plaintextPort=8101 -Ddruid.tlsPort=-1 -Ddruid.task.executor.service=druid/middleManager -Ddruid.task.executor.host=172.18.0.9 -Ddruid.task.executor.plaintextPort=8091 -Ddruid.task.executor.enablePlaintextPort=true -Ddruid.task.executor.tlsPort=-1 -Ddruid.task.executor.enableTlsPort=false org.apache.druid.cli.Main internal peon var/druid/task/index_kafka_analytics_5fa1e1f7085073a_eacmkogj/task.json var/druid/task/index_kafka_analytics_5fa1e1f7085073a_eacmkogj/2447b825-9d47-4eee-9bec-3bba484e44d5/status.json var/druid/task/index_kafka_analytics_5fa1e1f7085073a_eacmkogj/2447b825-9d47-4eee-9bec-3bba484e44d5/report.json
   2020-03-17T10:23:01,771 INFO [forking-task-runner-1] org.apache.druid.indexing.overlord.ForkingTaskRunner - Logging task index_kafka_analytics_5fa1e1f7085073a_eacmkogj output to: var/druid/task/index_kafka_analytics_5fa1e1f7085073a_eacmkogj/log
   2020-03-17T10:24:01,381 INFO [forking-task-runner-0-[index_kafka_analytics_cbfe5d63786d588_idbllidp]] org.apache.druid.indexing.overlord.ForkingTaskRunner - Process exited with status[0] for task: index_kafka_analytics_cbfe5d63786d588_idbllidp
   2020-03-17T10:24:01,382 INFO [forking-task-runner-0] org.apache.druid.indexing.common.tasklogs.FileTaskLogs - Wrote task log to: /opt/data/indexing-logs/index_kafka_analytics_cbfe5d63786d588_idbllidp.log
   2020-03-17T10:24:01,382 INFO [forking-task-runner-0] org.apache.druid.indexing.common.tasklogs.FileTaskLogs - Wrote task report to: /opt/data/indexing-logs/index_kafka_analytics_cbfe5d63786d588_idbllidp.report.json
   2020-03-17T10:24:01,385 INFO [forking-task-runner-0] org.apache.druid.indexing.overlord.ForkingTaskRunner - Removing task directory: var/druid/task/index_kafka_analytics_cbfe5d63786d588_idbllidp
   2020-03-17T10:24:01,389 INFO [WorkerTaskManager-NoticeHandler] org.apache.druid.indexing.worker.WorkerTaskManager - Task [index_kafka_analytics_cbfe5d63786d588_idbllidp] completed with status [SUCCESS].
   ```
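   (Possibly related: the forked command above runs the second peon with `-Ddruid.plaintextPort=8101`, and the broker log below times out connecting to exactly that port. With `druid.worker.capacity=4` and Druid's default `druid.indexer.runner.startPort=8100`, peons bind 8100-8103, so all of those ports would need to be reachable between containers. A sketch of what the compose mapping would look like — the service name and published range here are assumptions for illustration, not my actual config:)
   ```
   # docker-compose sketch (service name "middlemanager" is assumed).
   # Publish the whole peon port range, not only the first slot:
   # capacity 4 starting at 8100 means peons bind 8100-8103.
   services:
     middlemanager:
       ports:
         - "8091:8091"            # MiddleManager itself
         - "8100-8103:8100-8103"  # peon (task) ports
   ```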
   Historical log:
   ```
   2020-03-17T10:22:40,370 DEBUG [qtp1500151620-129] org.apache.druid.jetty.RequestLog - 172.18.0.4 POST //172.18.0.8:8083/druid/v2/ HTTP/1.1
   2020-03-17T10:23:37,379 INFO [ZKCoordinator--2] org.apache.druid.server.coordination.SegmentLoadDropHandler - Loading segment analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z
   2020-03-17T10:23:37,392 INFO [ZKCoordinator--0] org.apache.druid.server.coordination.SegmentLoadDropHandler - Loading segment analytics_2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z_2020-03-17T09:00:00.517Z
   2020-03-17T10:23:37,394 INFO [ZKCoordinator--3] org.apache.druid.server.coordination.SegmentLoadDropHandler - Loading segment analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z_1
   2020-03-17T10:23:37,398 INFO [ZKCoordinator--2] org.apache.druid.segment.loading.LocalDataSegmentPuller - Unzipped 20183 bytes from [/opt/data/segments/analytics/2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z/2020-03-17T10:00:00.037Z/0/6d630d93-0a2e-4fc9-b1ff-b61569b336e1/index.zip] to [/opt/apache-druid-0.17.0/var/druid/segment-cache/analytics/2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z/2020-03-17T10:00:00.037Z/0]
   2020-03-17T10:23:37,400 INFO [ZKCoordinator--0] org.apache.druid.segment.loading.LocalDataSegmentPuller - Unzipped 35669 bytes from [/opt/data/segments/analytics/2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z/2020-03-17T09:00:00.517Z/0/8f539a0f-164e-497f-b30c-e00775cea59f/index.zip] to [/opt/apache-druid-0.17.0/var/druid/segment-cache/analytics/2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z/2020-03-17T09:00:00.517Z/0]
   2020-03-17T10:23:37,400 INFO [ZKCoordinator--3] org.apache.druid.segment.loading.LocalDataSegmentPuller - Unzipped 20204 bytes from [/opt/data/segments/analytics/2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z/2020-03-17T08:00:00.478Z/1/3992f6f7-57a1-4a00-8b5b-975c1edab6d1/index.zip] to [/opt/apache-druid-0.17.0/var/druid/segment-cache/analytics/2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z/2020-03-17T08:00:00.478Z/1]
   2020-03-17T10:23:37,403 INFO [ZKCoordinator--2] org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z] at existing path[/druid/segments/172.18.0.8:8083/172.18.0.8:8083_historical__default_tier_2020-03-16T19:14:31.847Z_cebf254ae5604230bb4c31b486a8a6421]
   2020-03-17T10:23:37,405 INFO [ZkCoordinator] org.apache.druid.server.coordination.ZkCoordinator - zNode[/druid/loadQueue/172.18.0.8:8083/analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z] was removed
   2020-03-17T10:23:37,405 INFO [ZKCoordinator--2] org.apache.druid.server.coordination.ZkCoordinator - Completed request [LOAD: analytics_2020-03-17T10:00:00.000Z_2020-03-17T11:00:00.000Z_2020-03-17T10:00:00.037Z]
   2020-03-17T10:23:37,406 INFO [ZKCoordinator--0] org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[analytics_2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z_2020-03-17T09:00:00.517Z] at existing path[/druid/segments/172.18.0.8:8083/172.18.0.8:8083_historical__default_tier_2020-03-16T19:14:31.847Z_cebf254ae5604230bb4c31b486a8a6421]
   2020-03-17T10:23:37,408 INFO [ZkCoordinator] org.apache.druid.server.coordination.ZkCoordinator - zNode[/druid/loadQueue/172.18.0.8:8083/analytics_2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z_2020-03-17T09:00:00.517Z] was removed
   2020-03-17T10:23:37,408 INFO [ZKCoordinator--0] org.apache.druid.server.coordination.ZkCoordinator - Completed request [LOAD: analytics_2020-03-17T09:00:00.000Z_2020-03-17T10:00:00.000Z_2020-03-17T09:00:00.517Z]
   2020-03-17T10:23:37,411 INFO [ZKCoordinator--3] org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z_1] at existing path[/druid/segments/172.18.0.8:8083/172.18.0.8:8083_historical__default_tier_2020-03-16T19:14:31.847Z_cebf254ae5604230bb4c31b486a8a6421]
   2020-03-17T10:23:37,421 INFO [ZKCoordinator--3] org.apache.druid.server.coordination.ZkCoordinator - Completed request [LOAD: analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z_1]
   2020-03-17T10:23:37,421 INFO [ZkCoordinator] org.apache.druid.server.coordination.ZkCoordinator - zNode[/druid/loadQueue/172.18.0.8:8083/analytics_2020-03-17T08:00:00.000Z_2020-03-17T09:00:00.000Z_2020-03-17T08:00:00.478Z_1] was removed
   2020-03-17T10:23:40,338 DEBUG [qtp1500151620-138] org.apache.druid.jetty.RequestLog - 172.18.0.4 POST //172.18.0.8:8083/druid/v2/ HTTP/1.1
   ```
   Broker log:
   ```
   2020-03-17T10:23:48,700 DEBUG [qtp1182469998-203] org.apache.druid.jetty.RequestLog - 172.18.0.6 POST //172.18.0.4:8082/druid/v2/sql HTTP/1.1
   2020-03-17T10:23:50,375 WARN [HttpClient-Netty-Boss-0] org.jboss.netty.channel.SimpleChannelUpstreamHandler - EXCEPTION, please implement org.jboss.netty.handler.codec.http.HttpContentDecompressor.exceptionCaught() for proper handling.
   org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /172.18.0.9:8101
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:139) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.10.6.Final.jar:?]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
   	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
   2020-03-17T10:23:50,986 DEBUG [qtp1182469998-147] org.apache.druid.jetty.RequestLog - 172.18.0.6 POST //172.18.0.4:8082/druid/v2/sql HTTP/1.1
   2020-03-17T10:23:53,575 WARN [HttpClient-Netty-Boss-0] org.jboss.netty.channel.SimpleChannelUpstreamHandler - EXCEPTION, please implement org.jboss.netty.handler.codec.http.HttpContentDecompressor.exceptionCaught() for proper handling.
   org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /172.18.0.9:8101
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:139) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.10.6.Final.jar:?]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
   	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
   2020-03-17T10:23:53,711 DEBUG [qtp1182469998-203] org.apache.druid.jetty.RequestLog - 172.18.0.6 POST //172.18.0.4:8082/druid/v2/sql HTTP/1.1
   2020-03-17T10:23:55,996 DEBUG [qtp1182469998-147] org.apache.druid.jetty.RequestLog - 172.18.0.6 POST //172.18.0.4:8082/druid/v2/sql HTTP/1.1
   2020-03-17T10:23:59,211 DEBUG [qtp1182469998-203] org.apache.druid.jetty.RequestLog - 172.18.0.6 POST //172.18.0.4:8082/druid/v2/sql HTTP/1.1
   2020-03-17T10:24:00,108 DEBUG [qtp1182469998-147] org.apache.druid.jetty.RequestLog - 172.18.0.6 POST //172.18.0.4:8082/druid/v2/sql HTTP/1.1
   2020-03-17T10:24:00,474 WARN [HttpClient-Netty-Boss-0] org.jboss.netty.channel.SimpleChannelUpstreamHandler - EXCEPTION, please implement org.jboss.netty.handler.codec.http.HttpContentDecompressor.exceptionCaught() for proper handling.
   org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /172.18.0.9:8101
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:139) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.10.6.Final.jar:?]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
   	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
   2020-03-17T10:24:00,476 WARN [ForkJoinPool-1-worker-9] org.apache.druid.client.JsonParserIterator - Query [049daad1-306d-4af6-a12c-f2365961d391] to host [172.18.0.9:8101] interrupted
   java.util.concurrent.ExecutionException: org.jboss.netty.channel.ChannelException: Faulty channel in resource pool
   	at com.google.common.util.concurrent.Futures$ImmediateFailedFuture.get(Futures.java:186) ~[guava-16.0.1.jar:?]
   	at com.google.common.util.concurrent.Futures$ImmediateFuture.get(Futures.java:122) ~[guava-16.0.1.jar:?]
   	at org.apache.druid.client.JsonParserIterator.init(JsonParserIterator.java:138) [druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.JsonParserIterator.hasNext(JsonParserIterator.java:95) [druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.BaseSequence.makeYielder(BaseSequence.java:89) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.BaseSequence.toYielder(BaseSequence.java:69) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.MappedSequence.toYielder(MappedSequence.java:49) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$ResultBatch.fromSequence(ParallelMergeCombiningSequence.java:847) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$SequenceBatcher.block(ParallelMergeCombiningSequence.java:897) [druid-core-0.17.0.jar:0.17.0]
   	at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313) [?:1.8.0_232]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$SequenceBatcher.getBatchYielder(ParallelMergeCombiningSequence.java:886) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$YielderBatchedResultsCursor.initialize(ParallelMergeCombiningSequence.java:993) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$PrepareMergeCombineInputsAction.compute(ParallelMergeCombiningSequence.java:702) [druid-core-0.17.0.jar:0.17.0]
   	at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) [?:1.8.0_232]
   	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [?:1.8.0_232]
   	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [?:1.8.0_232]
   	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [?:1.8.0_232]
   	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) [?:1.8.0_232]
   Caused by: org.jboss.netty.channel.ChannelException: Faulty channel in resource pool
   	at org.apache.druid.java.util.http.client.NettyHttpClient.go(NettyHttpClient.java:131) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.DirectDruidClient.run(DirectDruidClient.java:441) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.CachingClusteredClient$SpecificQueryRunnable.getSimpleServerResults(CachingClusteredClient.java:647) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.CachingClusteredClient$SpecificQueryRunnable.lambda$addSequencesFromServer$7(CachingClusteredClient.java:611) ~[druid-server-0.17.0.jar:0.17.0]
   	at java.util.TreeMap.forEach(TreeMap.java:1005) ~[?:1.8.0_232]
   	at org.apache.druid.client.CachingClusteredClient$SpecificQueryRunnable.addSequencesFromServer(CachingClusteredClient.java:593) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.CachingClusteredClient$SpecificQueryRunnable.lambda$run$0(CachingClusteredClient.java:300) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.LazySequence.toYielder(LazySequence.java:46) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.query.RetryQueryRunner$1.toYielder(RetryQueryRunner.java:97) ~[druid-processing-0.17.0.jar:0.17.0]
   	at org.apache.druid.common.guava.CombiningSequence.toYielder(CombiningSequence.java:79) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.MappedSequence.toYielder(MappedSequence.java:49) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.MappedSequence.toYielder(MappedSequence.java:49) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.WrappingSequence$2.get(WrappingSequence.java:88) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.WrappingSequence$2.get(WrappingSequence.java:84) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.query.CPUTimeMetricQueryRunner$1.wrap(CPUTimeMetricQueryRunner.java:74) ~[druid-processing-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.WrappingSequence.toYielder(WrappingSequence.java:83) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.Yielders.each(Yielders.java:32) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.server.QueryResource.doPost(QueryResource.java:219) ~[druid-server-0.17.0.jar:0.17.0]
   	at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source) ~[?:?]
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232]
   	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232]
   	at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) ~[jersey-server-1.19.3.jar:1.19.3]
   	at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) ~[jersey-servlet-1.19.3.jar:1.19.3]
   	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) ~[jersey-servlet-1.19.3.jar:1.19.3]
   	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) ~[jersey-servlet-1.19.3.jar:1.19.3]
   	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) ~[javax.servlet-api-3.1.0.jar:3.1.0]
   	at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:286) ~[guice-servlet-4.1.0.jar:?]
   	at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:276) ~[guice-servlet-4.1.0.jar:?]
   	at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:181) ~[guice-servlet-4.1.0.jar:?]
   	at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) ~[guice-servlet-4.1.0.jar:?]
   	at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85) ~[guice-servlet-4.1.0.jar:?]
   	at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:120) ~[guice-servlet-4.1.0.jar:?]
   	at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:135) ~[guice-servlet-4.1.0.jar:?]
   	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.apache.druid.server.security.PreResponseAuthorizationCheckFilter.doFilter(PreResponseAuthorizationCheckFilter.java:82) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.apache.druid.server.security.AllowOptionsResourceFilter.doFilter(AllowOptionsResourceFilter.java:75) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.apache.druid.server.security.AllowAllAuthenticator$1.doFilter(AllowAllAuthenticator.java:84) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.server.security.AuthenticationWrappingFilter.doFilter(AuthenticationWrappingFilter.java:59) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.apache.druid.server.security.SecuritySanityCheckFilter.doFilter(SecuritySanityCheckFilter.java:86) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) ~[jetty-servlet-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) ~[jetty-servlet-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1340) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473) ~[jetty-servlet-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1242) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:740) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:61) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:174) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.Server.handle(Server.java:503) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) ~[jetty-server-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305) ~[jetty-io-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) ~[jetty-io-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) ~[jetty-io-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) ~[jetty-util-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) ~[jetty-util-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) ~[jetty-util-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) ~[jetty-util-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) ~[jetty-util-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765) ~[jetty-util-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) ~[jetty-util-9.4.12.v20180830.jar:9.4.12.v20180830]
   	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_232]
   Caused by: org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /172.18.0.9:8101
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:139) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[netty-3.10.6.Final.jar:?]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_232]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_232]
   	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_232]
   2020-03-17T10:24:00,484 WARN [qtp1182469998-172[segmentMetadata_[analytics]_049daad1-306d-4af6-a12c-f2365961d391]] org.apache.druid.server.QueryLifecycle - Exception while processing queryId [049daad1-306d-4af6-a12c-f2365961d391] (QueryInterruptedException{msg=org.jboss.netty.channel.ChannelException: Faulty channel in resource pool, code=Unknown exception, class=java.util.concurrent.ExecutionException, host=172.18.0.9:8101})
   2020-03-17T10:24:00,484 DEBUG [qtp1182469998-172] org.apache.druid.jetty.RequestLog - 10.0.124.18 POST //broker:8082/druid/v2/ HTTP/1.1
   2020-03-17T10:24:00,926 INFO [NodeRoleWatcher[PEON]] org.apache.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider$NodeRoleWatcher - Node[http://172.18.0.9:8100] of role[peon] went offline.
   2020-03-17T10:24:00,927 INFO [ServerInventoryView-0] org.apache.druid.client.BatchServerInventoryView - Server Disappeared[DruidServerMetadata{name='172.18.0.9:8100', hostAndPort='172.18.0.9:8100', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}]
   2020-03-17T10:24:00,927 INFO [ServerInventoryView-0] org.apache.druid.client.BatchServerInventoryView - Server Disappeared[DruidServerMetadata{name='172.18.0.9:8100', hostAndPort='172.18.0.9:8100', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}]
   2020-03-17T10:24:03,675 WARN [HttpClient-Netty-Boss-0] org.jboss.netty.channel.SimpleChannelUpstreamHandler - EXCEPTION, please implement org.jboss.netty.handler.codec.http.HttpContentDecompressor.exceptionCaught() for proper handling.
   org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /172.18.0.9:8101
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:139) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.10.6.Final.jar:?]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
   	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
   2020-03-17T10:24:03,676 WARN [ForkJoinPool-1-worker-2] org.apache.druid.client.JsonParserIterator - Query [148ce17b-ebea-4ce4-925e-60917f7bd32b] to host [172.18.0.9:8101] interrupted
   java.util.concurrent.ExecutionException: org.jboss.netty.channel.ChannelException: Faulty channel in resource pool
   	at com.google.common.util.concurrent.Futures$ImmediateFailedFuture.get(Futures.java:186) ~[guava-16.0.1.jar:?]
   	at com.google.common.util.concurrent.Futures$ImmediateFuture.get(Futures.java:122) ~[guava-16.0.1.jar:?]
   	at org.apache.druid.client.JsonParserIterator.init(JsonParserIterator.java:138) [druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.JsonParserIterator.hasNext(JsonParserIterator.java:95) [druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.BaseSequence.makeYielder(BaseSequence.java:89) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.BaseSequence.toYielder(BaseSequence.java:69) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.MappedSequence.toYielder(MappedSequence.java:49) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$ResultBatch.fromSequence(ParallelMergeCombiningSequence.java:847) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$SequenceBatcher.block(ParallelMergeCombiningSequence.java:897) [druid-core-0.17.0.jar:0.17.0]
   	at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313) [?:1.8.0_232]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$SequenceBatcher.getBatchYielder(ParallelMergeCombiningSequence.java:886) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$YielderBatchedResultsCursor.initialize(ParallelMergeCombiningSequence.java:993) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$PrepareMergeCombineInputsAction.compute(ParallelMergeCombiningSequence.java:702) [druid-core-0.17.0.jar:0.17.0]
   	at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) [?:1.8.0_232]
   	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [?:1.8.0_232]
   	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [?:1.8.0_232]
   	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [?:1.8.0_232]
   	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) [?:1.8.0_232]
   Caused by: org.jboss.netty.channel.ChannelException: Faulty channel in resource pool
   	at org.apache.druid.java.util.http.client.NettyHttpClient.go(NettyHttpClient.java:131) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.DirectDruidClient.run(DirectDruidClient.java:441) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.CachingClusteredClient$SpecificQueryRunnable.getSimpleServerResults(CachingClusteredClient.java:647) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.CachingClusteredClient$SpecificQueryRunnable.lambda$addSequencesFromServer$7(CachingClusteredClient.java:611) ~[druid-server-0.17.0.jar:0.17.0]
   	at java.util.TreeMap.forEach(TreeMap.java:1005) ~[?:1.8.0_232]
   	at org.apache.druid.client.CachingClusteredClient$SpecificQueryRunnable.addSequencesFromServer(CachingClusteredClient.java:593) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.CachingClusteredClient$SpecificQueryRunnable.lambda$run$0(CachingClusteredClient.java:300) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.LazySequence.toYielder(LazySequence.java:46) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.query.RetryQueryRunner$1.toYielder(RetryQueryRunner.java:97) ~[druid-processing-0.17.0.jar:0.17.0]
   	at org.apache.druid.common.guava.CombiningSequence.toYielder(CombiningSequence.java:79) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.MappedSequence.toYielder(MappedSequence.java:49) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.MappedSequence.toYielder(MappedSequence.java:49) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.WrappingSequence$2.get(WrappingSequence.java:88) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.WrappingSequence$2.get(WrappingSequence.java:84) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.query.CPUTimeMetricQueryRunner$1.wrap(CPUTimeMetricQueryRunner.java:74) ~[druid-processing-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.WrappingSequence.toYielder(WrappingSequence.java:83) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.WrappingSequence$2.get(WrappingSequence.java:88) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.WrappingSequence$2.get(WrappingSequence.java:84) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.SequenceWrapper.wrap(SequenceWrapper.java:55) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.WrappingSequence.toYielder(WrappingSequence.java:83) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.Yielders.each(Yielders.java:32) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.sql.calcite.schema.DruidSchema.refreshSegmentsForDataSource(DruidSchema.java:517) ~[druid-sql-0.17.0.jar:0.17.0]
   	at org.apache.druid.sql.calcite.schema.DruidSchema.refreshSegments(DruidSchema.java:473) ~[druid-sql-0.17.0.jar:0.17.0]
   	at org.apache.druid.sql.calcite.schema.DruidSchema$2.run(DruidSchema.java:258) ~[druid-sql-0.17.0.jar:0.17.0]
   	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_232]
   	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_232]
   	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_232]
   	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_232]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_232]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_232]
   	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_232]
   Caused by: org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /172.18.0.9:8101
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:139) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.10.6.Final.jar:?]
   	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[netty-3.10.6.Final.jar:?]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_232]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_232]
   	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_232]
   2020-03-17T10:24:03,679 WARN [DruidSchema-Cache-0] org.apache.druid.server.QueryLifecycle - Exception while processing queryId [148ce17b-ebea-4ce4-925e-60917f7bd32b] (QueryInterruptedException{msg=org.jboss.netty.channel.ChannelException: Faulty channel in resource pool, code=Unknown exception, class=java.util.concurrent.ExecutionException, host=172.18.0.9:8101})
   2020-03-17T10:24:03,679 WARN [DruidSchema-Cache-0] org.apache.druid.sql.calcite.schema.DruidSchema - Metadata refresh failed, trying again soon.
   org.apache.druid.query.QueryInterruptedException: org.jboss.netty.channel.ChannelException: Faulty channel in resource pool
   	at org.apache.druid.client.JsonParserIterator.interruptQuery(JsonParserIterator.java:189) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.JsonParserIterator.init(JsonParserIterator.java:173) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.client.JsonParserIterator.hasNext(JsonParserIterator.java:95) ~[druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.BaseSequence.makeYielder(BaseSequence.java:89) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.BaseSequence.toYielder(BaseSequence.java:69) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.MappedSequence.toYielder(MappedSequence.java:49) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$ResultBatch.fromSequence(ParallelMergeCombiningSequence.java:847) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$SequenceBatcher.block(ParallelMergeCombiningSequence.java:897) ~[druid-core-0.17.0.jar:0.17.0]
   	at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313) ~[?:1.8.0_232]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$SequenceBatcher.getBatchYielder(ParallelMergeCombiningSequence.java:886) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$YielderBatchedResultsCursor.initialize(ParallelMergeCombiningSequence.java:993) ~[druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$PrepareMergeCombineInputsAction.compute(ParallelMergeCombiningSequence.java:702) ~[druid-core-0.17.0.jar:0.17.0]
   	at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) ~[?:1.8.0_232]
   	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) ~[?:1.8.0_232]
   	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) ~[?:1.8.0_232]
   	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) ~[?:1.8.0_232]
   	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) ~[?:1.8.0_232]
   Caused by: java.util.concurrent.ExecutionException: org.jboss.netty.channel.ChannelException: Faulty channel in resource pool
   	at com.google.common.util.concurrent.Futures$ImmediateFailedFuture.get(Futures.java:186) ~[guava-16.0.1.jar:?]
   	at com.google.common.util.concurrent.Futures$ImmediateFuture.get(Futures.java:122) ~[guava-16.0.1.jar:?]
   	at org.apache.druid.client.JsonParserIterator.init(JsonParserIterator.java:138) ~[druid-server-0.17.0.jar:0.17.0]
   ```
   
   By the way, ZooKeeper is also dockerized as a single instance with persistent volumes, same as Kafka.
   
   Thanks in advance.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[GitHub] [druid] DixxieFlatline commented on issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment

Posted by GitBox <gi...@apache.org>.
DixxieFlatline commented on issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment
URL: https://github.com/apache/druid/issues/9527#issuecomment-600189381
 
 
   I only had one ZooKeeper instance, so I've set up a cluster of 3, which leads to a new exception in the task log:
   ```
   2020-03-17T16:59:08,392 INFO [main] org.eclipse.jetty.server.session - DefaultSessionIdManager workerName=node0
   2020-03-17T16:59:08,393 INFO [main] org.eclipse.jetty.server.session - No SessionScavenger set, using defaults
   2020-03-17T16:59:08,394 INFO [main] org.eclipse.jetty.server.session - node0 Scavenging every 660000ms
   2020-03-17T16:59:08,450 INFO [task-runner-0-priority-0] org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.2.1
   2020-03-17T16:59:08,450 INFO [task-runner-0-priority-0] org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 55783d3133a5a49a
   2020-03-17T16:59:08,452 INFO [task-runner-0-priority-0] org.apache.druid.server.coordination.CuratorDataSegmentServerAnnouncer - Announcing self[DruidServerMetadata{name='172.18.0.10:8101', hostAndPort='172.18.0.10:8101', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}] at [/druid/announcements/172.18.0.10:8101]
   2020-03-17T16:59:08,457 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.http.SegmentListerResource as a root resource class
   2020-03-17T16:59:08,458 INFO [task-runner-0-priority-0] org.apache.druid.curator.discovery.CuratorDruidNodeAnnouncer - Announced self [{"druidNode":{"service":"druid/middleManager","host":"172.18.0.10","bindOnHost":false,"plaintextPort":8101,"port":-1,"tlsPort":-1,"enablePlaintextPort":true,"enableTlsPort":false},"nodeType":"peon","services":{"dataNodeService":{"type":"dataNodeService","tier":"_default_tier","maxSize":0,"type":"indexer-executor","priority":0},"lookupNodeService":{"type":"lookupNodeService","lookupTier":"__default"}}}].
   2020-03-17T16:59:08,459 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider as a provider class
   2020-03-17T16:59:08,459 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering com.fasterxml.jackson.jaxrs.smile.JacksonSmileProvider as a provider class
   2020-03-17T16:59:08,459 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.initialization.jetty.CustomExceptionMapper as a provider class
   2020-03-17T16:59:08,459 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.initialization.jetty.ForbiddenExceptionMapper as a provider class
   2020-03-17T16:59:08,459 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.initialization.jetty.BadRequestExceptionMapper as a provider class
   2020-03-17T16:59:08,459 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Registering org.apache.druid.server.StatusResource as a root resource class
   2020-03-17T16:59:08,461 INFO [main] com.sun.jersey.server.impl.application.WebApplicationImpl - Initiating Jersey application, version 'Jersey: 1.19.3 10/24/2016 03:43 PM'
   2020-03-17T16:59:08,500 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Initialized sequences: SequenceMetadata{sequenceId=0, sequenceName='index_kafka_analytics_d8c5f0c4519ee11_0', assignments=[0], startOffsets={0=8906}, exclusiveStartPartitions=[], endOffsets={0=9223372036854775807}, sentinel=false, checkpointed=false}
   2020-03-17T16:59:08,501 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Adding partition[0], start[8906] -> end[9223372036854775807] to assignment.
   2020-03-17T16:59:08,504 INFO [task-runner-0-priority-0] org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=kafka-supervisor-lhgmjjpk] Subscribed to partition(s): analytics-0
   2020-03-17T16:59:08,508 INFO [task-runner-0-priority-0] org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Seeking partition[0] to[8906].
   2020-03-17T16:59:08,522 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.initialization.jetty.CustomExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T16:59:08,523 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.initialization.jetty.ForbiddenExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T16:59:08,523 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.initialization.jetty.BadRequestExceptionMapper to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T16:59:08,524 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T16:59:08,530 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding com.fasterxml.jackson.jaxrs.smile.JacksonSmileProvider to GuiceManagedComponentProvider with the scope "Singleton"
   2020-03-17T16:59:08,652 INFO [task-runner-0-priority-0] org.apache.kafka.clients.Metadata - Cluster ID: -8JgS5vVQLeP_wB2EIOI9A
   2020-03-17T16:59:08,932 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.http.security.StateResourceFilter to GuiceInstantiatedComponentProvider
   2020-03-17T16:59:08,950 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - New segment[analytics_2020-03-17T16:00:00.000Z_2020-03-17T17:00:00.000Z_2020-03-17T16:58:03.837Z_1] for sequenceName[index_kafka_analytics_d8c5f0c4519ee11_0].
   2020-03-17T16:59:08,951 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.http.SegmentListerResource to GuiceManagedComponentProvider with the scope "PerRequest"
   2020-03-17T16:59:08,957 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.QueryResource to GuiceInstantiatedComponentProvider
   2020-03-17T16:59:08,962 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.segment.realtime.firehose.ChatHandlerResource to GuiceInstantiatedComponentProvider
   2020-03-17T16:59:08,965 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.http.security.ConfigResourceFilter to GuiceInstantiatedComponentProvider
   2020-03-17T16:59:08,969 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.query.lookup.LookupListeningResource to GuiceInstantiatedComponentProvider
   2020-03-17T16:59:08,971 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.query.lookup.LookupIntrospectionResource to GuiceInstantiatedComponentProvider
   2020-03-17T16:59:08,972 INFO [main] com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory - Binding org.apache.druid.server.StatusResource to GuiceManagedComponentProvider with the scope "Undefined"
   2020-03-17T16:59:09,002 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@5f0d8937{/,null,AVAILABLE}
   2020-03-17T16:59:09,004 INFO [task-runner-0-priority-0] org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[analytics_2020-03-17T16:00:00.000Z_2020-03-17T17:00:00.000Z_2020-03-17T16:58:03.837Z_1] at new path[/druid/segments/172.18.0.10:8101/172.18.0.10:8101_indexer-executor__default_tier_2020-03-17T16:59:09.000Z_cb6ca84f7c9b4469873c49965947c8320]
   2020-03-17T16:59:09,017 INFO [main] org.eclipse.jetty.server.AbstractConnector - Started ServerConnector@10745a02{HTTP/1.1,[http/1.1]}{0.0.0.0:8101}
   2020-03-17T16:59:09,017 INFO [main] org.eclipse.jetty.server.Server - Started @4764ms
   2020-03-17T16:59:09,018 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Starting lifecycle [module] stage [ANNOUNCEMENTS]
   2020-03-17T16:59:09,079 INFO [main] org.apache.druid.curator.announcement.Announcer - Problem creating parentPath[/druid/segments/172.18.0.10:8101], someone else created it first?
   org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /druid/segments/172.18.0.10:8101
   	at org.apache.zookeeper.KeeperException.create(KeeperException.java:122) ~[zookeeper-3.4.14.jar:3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf]
   	at org.apache.zookeeper.KeeperException.create(KeeperException.java:54) ~[zookeeper-3.4.14.jar:3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf]
   	at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:792) ~[zookeeper-3.4.14.jar:3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf]
   	at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1179) ~[curator-framework-4.1.0.jar:4.1.0]
   	at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1160) ~[curator-framework-4.1.0.jar:4.1.0]
   	at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:64) ~[curator-client-4.1.0.jar:?]
   	at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:100) ~[curator-client-4.1.0.jar:?]
   	at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:1157) ~[curator-framework-4.1.0.jar:4.1.0]
   	at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:607) ~[curator-framework-4.1.0.jar:4.1.0]
   	at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:597) ~[curator-framework-4.1.0.jar:4.1.0]
   	at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:575) ~[curator-framework-4.1.0.jar:4.1.0]
   	at org.apache.curator.framework.imps.CreateBuilderImpl$4.forPath(CreateBuilderImpl.java:463) ~[curator-framework-4.1.0.jar:4.1.0]
   	at org.apache.curator.framework.imps.CreateBuilderImpl$4.forPath(CreateBuilderImpl.java:393) ~[curator-framework-4.1.0.jar:4.1.0]
   	at org.apache.druid.curator.announcement.Announcer.createPath(Announcer.java:425) [druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.curator.announcement.Announcer.announce(Announcer.java:298) [druid-server-0.17.0.jar:0.17.0]
   	at org.apache.druid.curator.announcement.Announcer.start(Announcer.java:116) [druid-server-0.17.0.jar:0.17.0]
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_232]
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_232]
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232]
   	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232]
   	at org.apache.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler.start(Lifecycle.java:446) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.java.util.common.lifecycle.Lifecycle.start(Lifecycle.java:341) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.guice.LifecycleModule$2.start(LifecycleModule.java:143) [druid-core-0.17.0.jar:0.17.0]
   	at org.apache.druid.cli.GuiceRunnable.initLifecycle(GuiceRunnable.java:115) [druid-services-0.17.0.jar:0.17.0]
   	at org.apache.druid.cli.CliPeon.run(CliPeon.java:280) [druid-services-0.17.0.jar:0.17.0]
   	at org.apache.druid.cli.Main.main(Main.java:113) [druid-services-0.17.0.jar:0.17.0]
   2020-03-17T16:59:09,084 INFO [main] org.apache.druid.java.util.common.lifecycle.Lifecycle - Successfully started lifecycle [module]
   2020-03-17T17:00:00,046 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.BaseAppenderatorDriver - New segment[analytics_2020-03-17T17:00:00.000Z_2020-03-17T18:00:00.000Z_2020-03-17T17:00:00.034Z] for sequenceName[index_kafka_analytics_d8c5f0c4519ee11_0].
   2020-03-17T17:00:00,052 INFO [task-runner-0-priority-0] org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing segment[analytics_2020-03-17T17:00:00.000Z_2020-03-17T18:00:00.000Z_2020-03-17T17:00:00.034Z] at existing path[/druid/segments/172.18.0.10:8101/172.18.0.10:8101_indexer-executor__default_tier_2020-03-17T16:59:09.000Z_cb6ca84f7c9b4469873c49965947c8320]
   ```


[GitHub] [druid] DixxieFlatline closed issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment

Posted by GitBox <gi...@apache.org>.
DixxieFlatline closed issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment
URL: https://github.com/apache/druid/issues/9527
 
 
   



[GitHub] [druid] DixxieFlatline removed a comment on issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment

Posted by GitBox <gi...@apache.org>.
DixxieFlatline removed a comment on issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment
URL: https://github.com/apache/druid/issues/9527#issuecomment-600189381
 
 



[GitHub] [druid] DixxieFlatline commented on issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment

Posted by GitBox <gi...@apache.org>.
DixxieFlatline commented on issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment
URL: https://github.com/apache/druid/issues/9527#issuecomment-600238972
 
 
   So I've exposed port 8101 in the compose file and it worked. Problem solved.
   
   I've also grepped through the whole project and found this in the Dockerfile for the integration tests:
   ```
   # Expose ports:
   # - 8081, 8281: HTTP, HTTPS (coordinator)
   # - 8082, 8282: HTTP, HTTPS (broker)
   # - 8083, 8283: HTTP, HTTPS (historical)
   # - 8090, 8290: HTTP, HTTPS (overlord)
   # - 8091, 8291: HTTP, HTTPS (middlemanager)
   # - 8888-8891, 9088-9091: HTTP, HTTPS (routers)
   # - 3306: MySQL
   # - 2181 2888 3888: ZooKeeper
   # - 8100 8101 8102 8103 8104 8105 : peon ports
   # - 8300 8301 8302 8303 8304 8305 : peon HTTPS ports
   ```
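   The peon port range follows from the middlemanager configuration: peons are assigned sequential ports starting at `druid.indexer.runner.startPort` (default 8100), one per task slot, so the number of ports to expose tracks `druid.worker.capacity`. A sketch of the relevant `runtime.properties` (values are illustrative, not taken from this issue):
   ```
   # middleManager runtime.properties (illustrative values)
   druid.worker.capacity=4
   druid.indexer.runner.startPort=8100
   # => peons bind 8100..8103; expose at least that range in the compose file
   ```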
   IMO the Docker Getting Started section should be updated to reflect this possible issue, which can be avoided by noting that the middlemanager needs to expose these ports, one for each possible concurrent peon task. Will try to open a PR for this soon.


[GitHub] [druid] DixxieFlatline commented on issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment

Posted by GitBox <gi...@apache.org>.
DixxieFlatline commented on issue #9527: Second task hangs while ingesting from Kafka in Dockerized environment
URL: https://github.com/apache/druid/issues/9527#issuecomment-600227203
 
 
   I'm currently also trying with a ZK cluster, without success.
   
   For the record, I am using an overlay network so the containers can communicate.
