Posted to commits@camel.apache.org by GitBox <gi...@apache.org> on 2021/08/05 20:34:58 UTC
[GitHub] [camel-kafka-connector] arjun180 removed a comment on issue #1241: Endpoint doesn't get resolved when using Camel Splunk HEC Kafka connector
arjun180 removed a comment on issue #1241:
URL: https://github.com/apache/camel-kafka-connector/issues/1241#issuecomment-893675388
Thanks @oscerd . I changed the Splunk URL to `myserver.com:<port>` and I don't get the URL error anymore. I also removed the parameters `camel.sink.endpoint.bodyOnly: false` and `camel.sink.endpoint.headersOnly: false` because they weren't valid.
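For context, after those changes the sink connector configuration would look roughly like this. This is only a sketch: the connector name, topic, and property names follow the usual camel-kafka-connector conventions but are assumptions here, and `<port>`/`<token>` are placeholders, not real values.
```
# Hypothetical camel-splunk-hec sink connector config (names and values are illustrative)
name=splunk-sink-connector
connector.class=org.apache.camel.kafkaconnector.splunkhec.CamelSplunkhecSinkConnector
topics=my-topic
# splunk-hec endpoint path parameters: splunk-hec:<splunkURL>/<token>
camel.sink.path.splunkURL=myserver.com:<port>
camel.sink.path.token=<token>
```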
Now I get:
```
2021-08-05 17:41:04,041 ERROR Failed delivery for (MessageId: 6413467DD725059-0000000000000000 on ExchangeId: 6413467DD725059-0000000000000000). Exhausted after delivery attempt: 1 caught: java.lang.RuntimeException: HTTP/1.1 503 Service Unavailable: Back-end server is at capacity
Message History (complete message history is disabled)
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[route6 ] [route6 ] [ ] [ 400]
...
[route6 ] [toD6 ] [splunk-hec:myserver:8088/<token>?htt] [ 0]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
(org.apache.camel.processor.errorhandler.DefaultErrorHandler) [task-thread-splunk-sink-connector-0]
java.lang.RuntimeException: HTTP/1.1 503 Service Unavailable: Back-end server is at capacity
at org.apache.camel.component.splunkhec.SplunkHECProducer.process(SplunkHECProducer.java:86)
at org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)
at org.apache.camel.processor.SendDynamicProcessor.lambda$process$0(SendDynamicProcessor.java:197)
at org.apache.camel.support.cache.DefaultProducerCache.doInAsyncProducer(DefaultProducerCache.java:318)
at org.apache.camel.processor.SendDynamicProcessor.process(SendDynamicProcessor.java:182)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:439)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:181)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:62)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:167)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:388)
at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:96)
at org.apache.camel.impl.engine.SharedCamelInternalProcessor.process(SharedCamelInternalProcessor.java:217)
at org.apache.camel.impl.engine.SharedCamelInternalProcessor$1.process(SharedCamelInternalProcessor.java:111)
at org.apache.camel.impl.engine.DefaultAsyncProcessorAwaitManager.process(DefaultAsyncProcessorAwaitManager.java:83)
at org.apache.camel.impl.engine.SharedCamelInternalProcessor.process(SharedCamelInternalProcessor.java:108)
at org.apache.camel.support.cache.DefaultProducerCache.send(DefaultProducerCache.java:190)
at org.apache.camel.impl.engine.DefaultProducerTemplate.send(DefaultProducerTemplate.java:176)
at org.apache.camel.impl.engine.DefaultProducerTemplate.send(DefaultProducerTemplate.java:148)
at org.apache.camel.kafkaconnector.CamelSinkTask.put(CamelSinkTask.java:194)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
```
However, I curled the Splunk URL from the Kafka Connect pod to check whether there was an issue, and it came back healthy. The `Service Unavailable: Back-end server is at capacity` error usually points to a problem with the Splunk load balancer, but that doesn't seem to be the case here.
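One way to rule out the HEC endpoint itself from the Connect pod is to hit Splunk's HEC health endpoint and then push a minimal test event through the same token the connector uses. This is a sketch; `myserver`, the port, and `<token>` are placeholders taken from the route shown in the stack trace above.
```
# Check HEC health (the health endpoint does not require a token)
curl -k "https://myserver:8088/services/collector/health"

# Send a minimal test event using the connector's token
curl -k "https://myserver:8088/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "connector smoke test"}'
```
If the health check succeeds but the test event also returns a 503, the problem is between the pod and the indexer tier rather than in the connector configuration.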
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: commits-unsubscribe@camel.apache.org
For queries about this service, please contact Infrastructure at:
users@infra.apache.org