Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2020/10/21 00:06:53 UTC

[GitHub] [druid] technomage opened a new issue #10523: Historical server fails to load segments in kubernetes

technomage opened a new issue #10523:
URL: https://github.com/apache/druid/issues/10523


   We installed Druid using the incubator Helm chart.  It mounts a data volume for the historical and middle manager pods at /opt/druid/var/druid.  The historical server appears unable to access the segments.
   
   
   druid:
     enabled: true
     configVars:
       druid_worker_capacity: '20'
       druid_extensions_loadList: '["druid-kafka-indexing-service", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage"]'
     historical:
       config:
         druid_segmentCache_locations: '[{"path":"/opt/druid/var/druid/segment-cache","maxSize":300000000000}]'
       persistence:
         size: "12Gi"
     middleManager:
       persistence:
         size: "12Gi"
       config:
         druid_segmentCache_locations: '[{"path":"/opt/druid/var/druid/segment-cache","maxSize":300000000000}]'
         druid_indexer_runner_javaOptsArray: '["-server", "-Xms1g", "-Xmx1g", "-XX:MaxDirectMemorySize=500m", "-Duser.timezone=UTC", "-Dfile.encoding=UTF-8", "-XX:+ExitOnOutOfMemoryError", "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"]'
       resources:
         limits:
           cpu: 1000m
           memory: 2Gi
         requests:
           cpu: 500m
           memory: 1Gi
   
   
   
   ### Affected Version
   Druid 0.19.0 from Helm chart 0.2.13
   
   
   ### Description
   
   Trying to get a stable install for initial development work.  Real-time queries and access are working, but the historical server is not able to access the segments, so they go away once they are published from the middle manager.
   
   
   This is a single-node minikube environment that we are using for development work.  The segments use local file persistence; see the values config above.
   
   The following is from the historical pod log.  Note that the file references still point to /opt/apache-druid-0.19.0/var/druid even though the segment cache location was overridden to /opt/druid/var/druid to match the volume mounts.
   
   2020-10-20T23:57:55,979 INFO [ZKCoordinator--0] org.apache.druid.server.coordination.ZkCoordinator - Completed request [LOAD: jobs_1979-01-01T00:00:00.000Z_1980-01-01T00:00:00.000Z_2020-10-20T21:36:46.683Z]
   2020-10-20T23:57:55,979 INFO [ZkCoordinator] org.apache.druid.server.coordination.ZkCoordinator - zNode[/druid/loadQueue/172.17.0.34:8083/jobs_1979-01-01T00:00:00.000Z_1980-01-01T00:00:00.000Z_2020-10-20T21:36:46.683Z] was removed
   2020-10-20T23:57:55,979 INFO [ZKCoordinator--0] org.apache.druid.server.coordination.SegmentLoadDropHandler - Loading segment jobs_1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z_2020-10-20T21:36:50.665Z
   2020-10-20T23:57:55,980 WARN [ZKCoordinator--0] org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - No path to unannounce segment[jobs_1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z_2020-10-20T21:36:50.665Z]
   2020-10-20T23:57:55,980 INFO [ZKCoordinator--0] org.apache.druid.server.SegmentManager - Told to delete a queryable for a dataSource[jobs] that doesn't exist.
   2020-10-20T23:57:55,980 INFO [ZKCoordinator--0] org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/opt/druid/var/druid/segment-cache/jobs/1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z/2020-10-20T21:36:50.665Z/0]
   2020-10-20T23:57:55,980 INFO [ZKCoordinator--0] org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/opt/druid/var/druid/segment-cache/jobs/1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z/2020-10-20T21:36:50.665Z]
   2020-10-20T23:57:55,980 INFO [ZKCoordinator--0] org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/opt/druid/var/druid/segment-cache/jobs/1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z]
   2020-10-20T23:57:55,980 INFO [ZKCoordinator--0] org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/opt/druid/var/druid/segment-cache/jobs]
   2020-10-20T23:57:55,980 WARN [ZKCoordinator--0] org.apache.druid.server.coordination.SegmentLoadDropHandler - Unable to delete segmentInfoCacheFile[/opt/druid/var/druid/segment-cache/info_dir/jobs_1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z_2020-10-20T21:36:50.665Z]
   2020-10-20T23:57:55,980 ERROR [ZKCoordinator--0] org.apache.druid.server.coordination.SegmentLoadDropHandler - Failed to load segment for dataSource: {class=org.apache.druid.server.coordination.SegmentLoadDropHandler, exceptionType=class org.apache.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[jobs_1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z_2020-10-20T21:36:50.665Z], segment=DataSegment{binaryVersion=9, id=jobs_1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z_2020-10-20T21:36:50.665Z, loadSpec={type=>local, path=>/opt/apache-druid-0.19.0/var/druid/segments/jobs/1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z/2020-10-20T21:36:50.665Z/0/d5a5f72d-fccd-47cb-8976-7b187d167c65/index.zip}, dimensions=[finish, person, start, title], metrics=[count], shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, lastCompactionState=null, size=9315}}
   org.apache.druid.segment.loading.SegmentLoadingException: Exception loading segment[jobs_1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z_2020-10-20T21:36:50.665Z]
   	at org.apache.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:269) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.SegmentLoadDropHandler.addSegment(SegmentLoadDropHandler.java:313) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:61) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.ZkCoordinator.lambda$childAdded$2(ZkCoordinator.java:147) ~[druid-server-0.19.0.jar:0.19.0]
   	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_252]
   	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_252]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_252]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_252]
   	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
   Caused by: java.lang.IllegalArgumentException: Cannot construct instance of `org.apache.druid.segment.loading.LocalLoadSpec`, problem: [/opt/apache-druid-0.19.0/var/druid/segments/jobs/1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z/2020-10-20T21:36:50.665Z/0/d5a5f72d-fccd-47cb-8976-7b187d167c65/index.zip] does not exist
    at [Source: UNKNOWN; line: -1, column: -1]
   	at com.fasterxml.jackson.databind.ObjectMapper._convert(ObjectMapper.java:3922) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.ObjectMapper.convertValue(ObjectMapper.java:3853) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocation(SegmentLoaderLocalCacheManager.java:240) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocationWithStartMarker(SegmentLoaderLocalCacheManager.java:229) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadSegmentWithRetry(SegmentLoaderLocalCacheManager.java:190) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:162) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:129) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.getAdapter(SegmentManager.java:218) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.loadSegment(SegmentManager.java:177) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:265) ~[druid-server-0.19.0.jar:0.19.0]
   	... 8 more
   Caused by: com.fasterxml.jackson.databind.exc.ValueInstantiationException: Cannot construct instance of `org.apache.druid.segment.loading.LocalLoadSpec`, problem: [/opt/apache-druid-0.19.0/var/druid/segments/jobs/1978-01-01T00:00:00.000Z_1979-01-01T00:00:00.000Z/2020-10-20T21:36:50.665Z/0/d5a5f72d-fccd-47cb-8976-7b187d167c65/index.zip] does not exist
    at [Source: UNKNOWN; line: -1, column: -1]
   	at com.fasterxml.jackson.databind.exc.ValueInstantiationException.from(ValueInstantiationException.java:47) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.DeserializationContext.instantiationException(DeserializationContext.java:1732) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.wrapAsJsonMappingException(StdValueInstantiator.java:491) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.rewrapCtorProblem(StdValueInstantiator.java:514) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromObjectWith(StdValueInstantiator.java:285) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.ValueInstantiator.createFromObjectWith(ValueInstantiator.java:229) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.impl.PropertyBasedCreator.build(PropertyBasedCreator.java:198) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:488) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1287) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:326) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:194) ~[jackson-databind-2.10.2.jar:2.10.2]
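   
   My working guess is that the `path` in the loadSpec above is recorded by the indexing task at publish time from the deep-storage directory (which still defaults to the install location under /opt/apache-druid-0.19.0), rather than from the historical's segment-cache setting.  If that is right, the deep-storage directory presumably also needs to be overridden at the cluster level, something like this (untested sketch):
   
   druid:
     configVars:
       druid_storage_type: 'local'
       druid_storage_storageDirectory: '/opt/druid/var/druid/storage'
       druid_segmentCache_infoDir: '/opt/druid/var/druid/info'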
   




[GitHub] [druid] asdf2014 commented on issue #10523: Historical server fails to load segments in kubernetes

asdf2014 commented on issue #10523:
URL: https://github.com/apache/druid/issues/10523#issuecomment-713260793


   Hi, @technomage.  Welcome to the Apache Druid Helm chart.  @maver1ck, @AWaterColorPen, and I are the maintainers of this chart and will be happy to help you work through this.  Can you provide the complete `values.yaml` configuration file?




[GitHub] [druid] technomage closed issue #10523: Historical server fails to load segments in kubernetes

technomage closed issue #10523:
URL: https://github.com/apache/druid/issues/10523


   




[GitHub] [druid] AWaterColorPen commented on issue #10523: Historical server fails to load segments in kubernetes

AWaterColorPen commented on issue #10523:
URL: https://github.com/apache/druid/issues/10523#issuecomment-713275266


   Hi, @technomage.  This looks like the same issue as https://github.com/helm/charts/issues/22911, https://github.com/helm/charts/issues/23250, and https://github.com/helm/charts/issues/23201.
   The default `druid_storage_type` value is `local`; you have to configure deep storage for your Apache Druid cluster.
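   
   For example, pointing the cluster at S3 would look roughly like the sketch below (the bucket, base key, and credentials are placeholders to replace with your own, and `druid-s3-extensions` has to be added to the extensions load list):
   
   druid:
     configVars:
       druid_extensions_loadList: '["druid-s3-extensions", "druid-kafka-indexing-service", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage"]'
       druid_storage_type: 's3'
       druid_storage_bucket: 'my-druid-deep-storage'
       druid_storage_baseKey: 'druid/segments'
       druid_s3_accessKey: '<access-key>'
       druid_s3_secretKey: '<secret-key>'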




[GitHub] [druid] AWaterColorPen commented on issue #10523: Historical server fails to load segments in kubernetes

AWaterColorPen commented on issue #10523:
URL: https://github.com/apache/druid/issues/10523#issuecomment-714201892


   @technomage please change `druid_storage_type` from `local` to `s3` or `hdfs`.  `local` cannot work, since **historical** and **middle manager** run in different containers.
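   
   For instance, an HDFS setup would look roughly like the sketch below (the NameNode address and path are placeholders, and `druid-hdfs-storage` has to be on the extensions load list):
   
   druid:
     configVars:
       druid_extensions_loadList: '["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage"]'
       druid_storage_type: 'hdfs'
       druid_storage_storageDirectory: 'hdfs://namenode:8020/druid/segments'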




[GitHub] [druid] technomage commented on issue #10523: Historical server fails to load segments in kubernetes

technomage commented on issue #10523:
URL: https://github.com/apache/druid/issues/10523#issuecomment-713828524


   Full values overrides for the druid chart (a sub-chart of the top-level Helm chart):
   
   
   druid:
     enabled: true
     configVars:
       druid_worker_capacity: '20'
       druid_storage_type: 'local'
       druid_storage_storageDirectory: '/opt/druid/var/druid/storage'
       druid_segmentCache_infoDir: '/opt/druid/var/druid/info'
       druid_extensions_loadList: '["druid-kafka-indexing-service", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "postgresql-metadata-storage"]'
       druid_segmentCache_locations: '[{"path":"/opt/druid/var/druid/segment-cache","maxSize":300000000000}]'
     historical:
       config:
         druid_segmentCache_locations: '[{"path":"/opt/druid/var/druid/segment-cache","maxSize":300000000000}]'
       persistence:
         size: "12Gi"
     middleManager:
       persistence:
         size: "12Gi"
       resources:
         limits:
           cpu: 1000m
           memory: 2Gi
         requests:
           cpu: 500m
           memory: 1Gi
   




[GitHub] [druid] technomage commented on issue #10523: Historical server fails to load segments in kubernetes

technomage commented on issue #10523:
URL: https://github.com/apache/druid/issues/10523#issuecomment-713828041


   I applied the recommended storage configuration changes and I am still seeing similar errors.  Here is the log; full values to follow.
   
   (Log excerpt from the `druid` container of pod `cog-druid-historical-0` in the `default` namespace:)
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:129) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.getAdapter(SegmentManager.java:218) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.loadSegment(SegmentManager.java:177) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:265) ~[druid-server-0.19.0.jar:0.19.0]
   	... 8 more
   2020-10-21T19:32:52,646 INFO [ZkCoordinator] org.apache.druid.server.coordination.ZkCoordinator - zNode[/druid/loadQueue/172.17.0.29:8083/people_1972-01-01T00:00:00.000Z_1973-01-01T00:00:00.000Z_2020-10-21T18:30:30.853Z] was removed
   2020-10-21T19:32:52,646 INFO [ZKCoordinator--0] org.apache.druid.server.coordination.ZkCoordinator - Completed request [LOAD: people_1972-01-01T00:00:00.000Z_1973-01-01T00:00:00.000Z_2020-10-21T18:30:30.853Z]
   2020-10-21T19:32:52,650 INFO [ZKCoordinator--0] org.apache.druid.server.coordination.SegmentLoadDropHandler - Loading segment people_1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z_2020-10-21T18:30:31.071Z
   2020-10-21T19:32:52,650 WARN [ZKCoordinator--0] org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - No path to unannounce segment[people_1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z_2020-10-21T18:30:31.071Z]
   2020-10-21T19:32:52,650 INFO [ZKCoordinator--0] org.apache.druid.server.SegmentManager - Told to delete a queryable for a dataSource[people] that doesn't exist.
   2020-10-21T19:32:52,651 INFO [ZKCoordinator--0] org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/opt/druid/var/druid/segment-cache/people/1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z/2020-10-21T18:30:31.071Z/0]
   2020-10-21T19:32:52,651 INFO [ZKCoordinator--0] org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/opt/druid/var/druid/segment-cache/people/1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z/2020-10-21T18:30:31.071Z]
   2020-10-21T19:32:52,651 INFO [ZKCoordinator--0] org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/opt/druid/var/druid/segment-cache/people/1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z]
   2020-10-21T19:32:52,652 INFO [ZKCoordinator--0] org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/opt/druid/var/druid/segment-cache/people]
   2020-10-21T19:32:52,652 WARN [ZKCoordinator--0] org.apache.druid.server.coordination.SegmentLoadDropHandler - Unable to delete segmentInfoCacheFile[/opt/druid/var/druid/info/people_1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z_2020-10-21T18:30:31.071Z]
   2020-10-21T19:32:52,652 ERROR [ZKCoordinator--0] org.apache.druid.server.coordination.SegmentLoadDropHandler - Failed to load segment for dataSource: {class=org.apache.druid.server.coordination.SegmentLoadDropHandler, exceptionType=class org.apache.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[people_1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z_2020-10-21T18:30:31.071Z], segment=DataSegment{binaryVersion=9, id=people_1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z_2020-10-21T18:30:31.071Z, loadSpec={type=>local, path=>/opt/druid/var/druid/storage/people/1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z/2020-10-21T18:30:31.071Z/0/be3f658b-e69a-4451-9fad-1ab0e7f9d577/index.zip}, dimensions=[birthdate, blood_type, first_name, id, last_name, name, role], metrics=[count], shardSpec=NumberedShardSpec{partitionNum=0, partitions=0}, lastCompactionState=null, size=28754336}}
   org.apache.druid.segment.loading.SegmentLoadingException: Exception loading segment[people_1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z_2020-10-21T18:30:31.071Z]
   	at org.apache.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:269) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.SegmentLoadDropHandler.addSegment(SegmentLoadDropHandler.java:313) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:61) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.ZkCoordinator.lambda$childAdded$2(ZkCoordinator.java:147) ~[druid-server-0.19.0.jar:0.19.0]
   	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_252]
   	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_252]
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_252]
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_252]
   	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
   Caused by: java.lang.IllegalArgumentException: Cannot construct instance of `org.apache.druid.segment.loading.LocalLoadSpec`, problem: [/opt/druid/var/druid/storage/people/1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z/2020-10-21T18:30:31.071Z/0/be3f658b-e69a-4451-9fad-1ab0e7f9d577/index.zip] does not exist
    at [Source: UNKNOWN; line: -1, column: -1]
   	at com.fasterxml.jackson.databind.ObjectMapper._convert(ObjectMapper.java:3922) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.ObjectMapper.convertValue(ObjectMapper.java:3853) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocation(SegmentLoaderLocalCacheManager.java:240) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocationWithStartMarker(SegmentLoaderLocalCacheManager.java:229) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadSegmentWithRetry(SegmentLoaderLocalCacheManager.java:190) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:162) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:129) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.getAdapter(SegmentManager.java:218) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.loadSegment(SegmentManager.java:177) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:265) ~[druid-server-0.19.0.jar:0.19.0]
   	... 8 more
   Caused by: com.fasterxml.jackson.databind.exc.ValueInstantiationException: Cannot construct instance of `org.apache.druid.segment.loading.LocalLoadSpec`, problem: [/opt/druid/var/druid/storage/people/1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z/2020-10-21T18:30:31.071Z/0/be3f658b-e69a-4451-9fad-1ab0e7f9d577/index.zip] does not exist
    at [Source: UNKNOWN; line: -1, column: -1]
   	at com.fasterxml.jackson.databind.exc.ValueInstantiationException.from(ValueInstantiationException.java:47) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.DeserializationContext.instantiationException(DeserializationContext.java:1732) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.wrapAsJsonMappingException(StdValueInstantiator.java:491) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.rewrapCtorProblem(StdValueInstantiator.java:514) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromObjectWith(StdValueInstantiator.java:285) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.ValueInstantiator.createFromObjectWith(ValueInstantiator.java:229) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.impl.PropertyBasedCreator.build(PropertyBasedCreator.java:198) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:488) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1287) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:326) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:194) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:161) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:130) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:97) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:254) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:68) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.ObjectMapper._convert(ObjectMapper.java:3917) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.ObjectMapper.convertValue(ObjectMapper.java:3853) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocation(SegmentLoaderLocalCacheManager.java:240) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocationWithStartMarker(SegmentLoaderLocalCacheManager.java:229) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadSegmentWithRetry(SegmentLoaderLocalCacheManager.java:190) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:162) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:129) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.getAdapter(SegmentManager.java:218) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.loadSegment(SegmentManager.java:177) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:265) ~[druid-server-0.19.0.jar:0.19.0]
   	... 8 more
   Caused by: java.lang.IllegalArgumentException: [/opt/druid/var/druid/storage/people/1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z/2020-10-21T18:30:31.071Z/0/be3f658b-e69a-4451-9fad-1ab0e7f9d577/index.zip] does not exist
   	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:148) ~[guava-16.0.1.jar:?]
   	at org.apache.druid.segment.loading.LocalLoadSpec.<init>(LocalLoadSpec.java:51) ~[druid-server-0.19.0.jar:0.19.0]
   	at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) ~[?:?]
   	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_252]
   	at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_252]
   	at com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:124) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromObjectWith(StdValueInstantiator.java:283) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.ValueInstantiator.createFromObjectWith(ValueInstantiator.java:229) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.impl.PropertyBasedCreator.build(PropertyBasedCreator.java:198) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:488) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1287) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:326) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:194) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:161) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:130) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:97) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:254) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:68) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.ObjectMapper._convert(ObjectMapper.java:3917) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at com.fasterxml.jackson.databind.ObjectMapper.convertValue(ObjectMapper.java:3853) ~[jackson-databind-2.10.2.jar:2.10.2]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocation(SegmentLoaderLocalCacheManager.java:240) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocationWithStartMarker(SegmentLoaderLocalCacheManager.java:229) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.loadSegmentWithRetry(SegmentLoaderLocalCacheManager.java:190) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:162) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:129) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.getAdapter(SegmentManager.java:218) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.SegmentManager.loadSegment(SegmentManager.java:177) ~[druid-server-0.19.0.jar:0.19.0]
   	at org.apache.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:265) ~[druid-server-0.19.0.jar:0.19.0]
   	... 8 more
   2020-10-21T19:32:52,664 INFO [ZKCoordinator--0] org.apache.druid.server.coordination.ZkCoordinator - Completed request [LOAD: people_1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z_2020-10-21T18:30:31.071Z]
   2020-10-21T19:32:52,665 INFO [ZkCoordinator] org.apache.druid.server.coordination.ZkCoordinator - zNode[/druid/loadQueue/172.17.0.29:8083/people_1971-01-01T00:00:00.000Z_1972-01-01T00:00:00.000Z_2020-10-21T18:30:31.071Z] was removed
   




[GitHub] [druid] technomage commented on issue #10523: Historical server fails to load segments in kubernetes

technomage commented on issue #10523:
URL: https://github.com/apache/druid/issues/10523#issuecomment-714596796


   Thank you.  It was not clear that local was not workable given they are mounted to the same PVC.  




[GitHub] [druid] AWaterColorPen commented on issue #10523: Historical server fails to load segments in kubernetes

AWaterColorPen commented on issue #10523:
URL: https://github.com/apache/druid/issues/10523#issuecomment-715229588


   > Thank you. It was not clear that local was not workable given they are mounted to the same PVC.
   
   It is a known problem that `druid_storage_type: local` is not workable; we haven't found a better way yet.
   If the chart mounted **historical** and **middle manager** to the same PVC with the `ReadWriteMany` access mode by default, it would cause even more confusion.
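   
   For reference, doing that manually would mean pre-creating a shared claim along these lines and mounting it into both the historical and middle manager pods (a bare Kubernetes sketch; the claim name and size are made up, and the storage class must actually support `ReadWriteMany`):
   
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: druid-shared-deep-storage
   spec:
     accessModes:
       - ReadWriteMany   # both historical and middle manager pods would mount this same volume
     resources:
       requests:
         storage: 12Gi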
   
   

