Posted to notifications@skywalking.apache.org by GitBox <gi...@apache.org> on 2018/10/08 12:00:24 UTC

[GitHub] Mor-Walkme opened a new issue #1730: Collector can't connect to Elasticsearch

URL: https://github.com/apache/incubator-skywalking/issues/1730
 
 
   Hey,
   I am trying to set up a local SkyWalking environment on my laptop, consisting of the UI + Collector + Elasticsearch, and I can't get the collector to work with Elasticsearch.
   
   The error I am getting in collector.log:
   Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{UZP-ChuRSg25DF3ymCgniw}{localhost}{127.0.0.1:9300}]]
   	at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347)
   	at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245)
   	at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
   	at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:363)
   	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408)
   	at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1256)
   	at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
   	at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
   	at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:62)
   	at org.apache.skywalking.apm.collector.client.elasticsearch.ElasticSearchClient.isExistsIndex(ElasticSearchClient.java:145)
   	at org.apache.skywalking.apm.collector.storage.es.base.define.ElasticSearchStorageInstaller.isExists(ElasticSearchStorageInstaller.java:151)
   	at org.apache.skywalking.apm.collector.storage.StorageInstaller.install(StorageInstaller.java:52)
   	at org.apache.skywalking.apm.collector.storage.es.StorageModuleEsProvider.start(StorageModuleEsProvider.java:125)
   	at org.apache.skywalking.apm.collector.core.module.BootstrapFlow.start(BootstrapFlow.java:61)
   	at org.apache.skywalking.apm.collector.core.module.ModuleManager.init(ModuleManager.java:68)
   	at org.apache.skywalking.apm.collector.boot.CollectorBootStartUp.main(CollectorBootStartUp.java:45)
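   
   As a quick sanity check (assuming the ports published by the docker run command below), the HTTP root endpoint reports the cluster name the node is actually running with, which is what the collector's transport client has to match:
   
       # Check that Elasticsearch is reachable and which cluster name it reports
       curl http://localhost:9200/
       # If elasticsearch.yml is in effect, the response should contain
       # "cluster_name" : "CollectorDBCluster"; if it still says "elasticsearch",
       # the custom config is not being picked up.
   
       # Check that the transport port the collector connects to is listening
       # (assuming nc is available on the host)
       nc -z localhost 9300 && echo "9300 reachable"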
   
   
   I am running a local Elasticsearch instance in a Docker container. Here is my elasticsearch.yml:
   http.host: 0.0.0.0
   cluster.name: CollectorDBCluster
   node.name: TestNode
   thread_pool.bulk.queue_size: 1000
   discovery.type: single-node
   
   
   Elasticsearch version: 5.5
   How I run the container locally: docker run -p 9200:9200 -p 9300:9300 elasticsearch:5.5
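   
   For reference, that command does not mount the elasticsearch.yml above into the container, so it may not be applied at all. If it is supposed to be in effect, mounting it would look roughly like this (a sketch, assuming the official image keeps its config under /usr/share/elasticsearch/config):
   
       docker run -p 9200:9200 -p 9300:9300 \
         -v "$PWD/elasticsearch.yml":/usr/share/elasticsearch/config/elasticsearch.yml \
         elasticsearch:5.5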
   
   SkyWalking version: apache-skywalking-apm-incubating-5.0.0-RC2.tar.gz
   
   Here is my SkyWalking configuration file, application.yml:
   # Licensed to the Apache Software Foundation (ASF) under one
   # or more contributor license agreements.  See the NOTICE file
   # distributed with this work for additional information
   # regarding copyright ownership.  The ASF licenses this file
   # to you under the Apache License, Version 2.0 (the
   # "License"); you may not use this file except in compliance
   # with the License.  You may obtain a copy of the License at
   #
   #     http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing, software
   # distributed under the License is distributed on an "AS IS" BASIS,
   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   # See the License for the specific language governing permissions and
   # limitations under the License.
   
   #cluster:
   #  zookeeper:
   #    hostPort: localhost:2181
   #    sessionTimeout: 100000
   naming:
     jetty:
       #OS real network IP(binding required), for agent to find collector cluster
       host: localhost
       port: 10800
       contextPath: /
   cache:
   #  guava:
     caffeine:
   remote:
     gRPC:
       # OS real network IP(binding required), for collector nodes communicate with each other in cluster. collectorN --(gRPC) --> collectorM
       host: localhost
       port: 11800
   agent_gRPC:
     gRPC:
       #OS real network IP(binding required), for agent to uplink data(trace/metrics) to collector. agent--(gRPC)--> collector
       host: localhost
       port: 11800
       # Set these two setting to open ssl
       #sslCertChainFile: $path
       #sslPrivateKeyFile: $path
   
       # Set your own token to active auth
       #authentication: xxxxxx
   agent_jetty:
     jetty:
       # OS real network IP(binding required), for agent to uplink data(trace/metrics) to collector through HTTP. agent--(HTTP)--> collector
       # SkyWalking native Java/.Net/node.js agents don't use this.
       # Open this for other implementor.
       host: localhost
       port: 12800
       contextPath: /
   analysis_register:
     default:
   analysis_jvm:
     default:
   analysis_segment_parser:
     default:
       bufferFilePath: ../buffer/
       bufferOffsetMaxFileSize: 10M
       bufferSegmentMaxFileSize: 500M
       bufferFileCleanWhenRestart: true
   ui:
     jetty:
       # Stay in `localhost` if UI starts up in default mode.
       # Change it to OS real network IP(binding required), if deploy collector in different machine.
       host: localhost
       port: 12800
       contextPath: /
   storage:
     elasticsearch:
       clusterName: CollectorDBCluster
       clusterTransportSniffer: false
       clusterNodes: localhost:9300
       indexShardsNumber: 2
       indexReplicasNumber: 0
       highPerformanceMode: true
       # Batch process setting, refer to https://www.elastic.co/guide/en/elasticsearch/client/java-api/5.5/java-docs-bulk-processor.html
       bulkActions: 2000 # Execute the bulk every 2000 requests
       bulkSize: 20 # flush the bulk every 20mb
       flushInterval: 10 # flush the bulk every 10 seconds whatever the number of requests
       concurrentRequests: 2 # the number of concurrent requests
       # Set a timeout on metric data. After the timeout has expired, the metric data will automatically be deleted.
       traceDataTTL: 90 # Unit is minute
       minuteMetricDataTTL: 90 # Unit is minute
       hourMetricDataTTL: 36 # Unit is hour
       dayMetricDataTTL: 45 # Unit is day
       monthMetricDataTTL: 18 # Unit is month
   #storage:
   #  h2:
   #    url: jdbc:h2:~/memorydb
   #    userName: sa
   configuration:
     default:
       #namespace: xxxxx
       # alarm threshold
       applicationApdexThreshold: 2000
       serviceErrorRateThreshold: 10.00
       serviceAverageResponseTimeThreshold: 2000
       instanceErrorRateThreshold: 10.00
       instanceAverageResponseTimeThreshold: 2000
       applicationErrorRateThreshold: 10.00
       applicationAverageResponseTimeThreshold: 2000
       # thermodynamic
       thermodynamicResponseTimeStep: 50
       thermodynamicCountOfResponseTimeSteps: 40
       # max collection's size of worker cache collection, setting it smaller when collector OutOfMemory crashed.
       workerCacheMaxSize: 10000
   #receiver_zipkin:
   #  default:
   #    host: localhost
   #    port: 9411
   #    contextPath: /
   
   Do you know what I have missed to make this work?
   
   Thanks,
   Mor.
