Posted to notifications@skywalking.apache.org by GitBox <gi...@apache.org> on 2021/03/03 05:15:56 UTC

[GitHub] [skywalking] libinglong edited a comment on pull request #5775: Fix deadlock problem when using elasticsearch-client-7.0.0

libinglong edited a comment on pull request #5775:
URL: https://github.com/apache/skywalking/pull/5775#issuecomment-789437376


   > > "Grpc server thread pool is full, rejecting the task"
   > 
   > Thanks for submitting this. Why does it cause this error log? Our production environment processes tens of billions of data records every day, and we have never seen such logs.
   
   I found many of these logs in the OAP log.
   The thread dump shows that every thread in grpcServerPool is BLOCKED, waiting on the BulkProcessor monitor.
   
   ```text
   "grpcServerPool-1-thread-14" #360 prio=5 os_prio=0 tid=0x00007f1bb8041800 nid=0x104d6 waiting for monitor entry [0x00007f219d2f1000]
      java.lang.Thread.State: BLOCKED (on object monitor)
   	at org.elasticsearch.action.bulk.BulkProcessor.internalAdd(BulkProcessor.java:319)
   	- waiting to lock <0x00000000e2c38d28> (a org.elasticsearch.action.bulk.BulkProcessor)
   	at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:304)
   	at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:290)
   	at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.BatchProcessEsDAO.asynchronous(BatchProcessEsDAO.java:56)
   ```
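   
   The contention pattern behind that trace can be sketched in a few lines. This is a hypothetical model, not SkyWalking or Elasticsearch code: `FakeBulkProcessor`, `ContentionSketch`, and `demonstrate` are illustrative names. It assumes only what the trace shows, namely that `BulkProcessor.add()` is a `synchronized` entry point, so if one thread blocks while holding that monitor (e.g. a flush waiting on an in-flight bulk request), every gRPC worker calling `add()` parks as BLOCKED and the pool fills up.
   
   ```java
   import java.util.concurrent.CountDownLatch;
   
   public class ContentionSketch {
   
       static class FakeBulkProcessor {
           // Stand-in for BulkProcessor.add(): a synchronized entry point,
           // so all callers serialize on this object's monitor.
           synchronized void add(Runnable body) {
               body.run();
           }
       }
   
       static boolean demonstrate() throws InterruptedException {
           FakeBulkProcessor bulk = new FakeBulkProcessor();
           CountDownLatch lockHeld = new CountDownLatch(1);
           CountDownLatch release = new CountDownLatch(1);
   
           // "Flusher" takes the monitor and then blocks while holding it,
           // mimicking a flush that waits for an in-flight bulk request.
           Thread flusher = new Thread(() -> bulk.add(() -> {
               lockHeld.countDown();
               try { release.await(); } catch (InterruptedException ignored) { }
           }), "flusher");
           flusher.start();
           lockHeld.await();
   
           // Simulated grpcServerPool workers: each calls add() and
           // immediately contends for the same monitor.
           Thread[] workers = new Thread[4];
           for (int i = 0; i < workers.length; i++) {
               workers[i] = new Thread(() -> bulk.add(() -> { }),
                       "grpcServerPool-1-thread-" + (i + 1));
               workers[i].start();
           }
   
           // Poll until every worker reports BLOCKED (on object monitor),
           // the same state the thread dump above shows.
           boolean allBlocked = false;
           long deadline = System.currentTimeMillis() + 5000;
           while (!allBlocked && System.currentTimeMillis() < deadline) {
               allBlocked = true;
               for (Thread t : workers) {
                   allBlocked &= t.getState() == Thread.State.BLOCKED;
               }
               Thread.sleep(5);
           }
           for (Thread t : workers) {
               System.out.println(t.getName() + ": " + t.getState());
           }
   
           release.countDown();            // let the "flush" finish
           for (Thread t : workers) t.join();
           flusher.join();
           return allBlocked;
       }
   
       public static void main(String[] args) throws InterruptedException {
           demonstrate();
       }
   }
   ```
   
   Running this prints each simulated pool thread in the BLOCKED state while the flusher holds the monitor, which matches the dump: once all grpcServerPool threads are stuck here, the server has no free workers and logs "thread pool is full, rejecting the task".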
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org