Posted to yarn-issues@hadoop.apache.org by "Rohith Sharma K S (JIRA)" <ji...@apache.org> on 2018/08/17 13:08:00 UTC

[jira] [Commented] (YARN-8679) [ATSv2] If HBase cluster is down, high chances that NM ContainerManager dispatcher get blocked

    [ https://issues.apache.org/jira/browse/YARN-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583915#comment-16583915 ] 

Rohith Sharma K S commented on YARN-8679:
-----------------------------------------

In one of our clusters we observed that events queued in the NM JVM kept growing until the process exited with an OOM error. A heap dump showed a large number of events piled up in the NM ContainerManager dispatcher queue.
Analyzing the thread dump shows that the NM ContainerManager dispatcher was blocked, which caused this pile-up.
{noformat}
"NM ContainerManager dispatcher" #149 prio=5 os_prio=0 tid=0x00007f85caf21800 nid=0x124a65 waiting for monitor entry [0x00007f8596e2d000]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService.initializeContainer(PerNodeTimelineCollectorsAuxService.java:159)
	- waiting to lock <0x00000000c05b04d8> (a java.util.concurrent.ConcurrentHashMap)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.handle(AuxServices.java:380)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.handle(AuxServices.java:65)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
	at java.lang.Thread.run(Thread.java:745)


"pool-10-thread-1" #378 prio=5 os_prio=0 tid=0x00007f85b4498000 nid=0x124e00 waiting on condition [0x00007f858deaf000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for  <0x00000000c0a06080> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
	at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
	at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:820)
	at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:732)
	at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:281)
	at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:236)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:321)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:202)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:170)
	at org.apache.hadoop.yarn.server.timelineservice.storage.common.TypedBufferedMutator.mutate(TypedBufferedMutator.java:54)
	at org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.store(ColumnRWHelper.java:153)
	at org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.store(ColumnRWHelper.java:107)
	at org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.store(HBaseTimelineWriterImpl.java:375)
	at org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:192)
	at org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector.writeTimelineEntities(TimelineCollector.java:164)
	at org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector.putEntitiesAsync(TimelineCollector.java:196)
	at org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollectorWithAgg$AppLevelAggregator.aggregate(AppLevelTimelineCollectorWithAgg.java:134)
	at org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollectorWithAgg$AppLevelAggregator.access$100(AppLevelTimelineCollectorWithAgg.java:112)
	at org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollectorWithAgg.serviceStop(AppLevelTimelineCollectorWithAgg.java:103)
	at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
	- locked <0x00000000e77351b8> (a java.lang.Object)
	at org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorManager.remove(TimelineCollectorManager.java:190)
	- locked <0x00000000e7734f58> (a org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollectorWithAgg)
	at org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService.removeApplication(PerNodeTimelineCollectorsAuxService.java:144)
	at org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService$1.run(PerNodeTimelineCollectorsAuxService.java:202)
	- locked <0x00000000c05b04d8> (a java.util.concurrent.ConcurrentHashMap)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

{noformat}
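To make the lock interaction in the dump easier to see: the pool thread ("pool-10-thread-1") removes a collector while holding the collectors-map monitor (0x00000000c05b04d8), its HBase BufferedMutator flush hangs because the backend is down, and the dispatcher thread then blocks trying to enter the same monitor in initializeContainer(). Below is a minimal, self-contained sketch of that interaction — all class and method names are hypothetical, and the "outside the lock" variant only illustrates one possible fix direction, not the actual patch:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical, simplified model of the two threads in the dump:
// "collectorsLock" stands in for the ConcurrentHashMap monitor, and the
// hanging Runnable stands in for the HBase BufferedMutator flush.
public class CollectorLockSketch {

    static final Object collectorsLock = new Object();

    // Pattern seen in the dump: the slow stop/flush runs while holding the monitor.
    static void removeHoldingLock(Runnable slowFlush) {
        synchronized (collectorsLock) {
            slowFlush.run(); // can hang indefinitely when HBase is down
        }
    }

    // One possible fix direction: guard only the in-memory removal and run
    // the slow flush after releasing the monitor.
    static void removeOutsideLock(Runnable slowFlush) {
        synchronized (collectorsLock) {
            // fast, in-memory collector removal would happen here
        }
        slowFlush.run(); // still hangs, but without blocking other monitor users
    }

    // Returns true if a "dispatcher" thread can enter the monitor while the
    // remover's flush is stuck; false means the dispatcher is blocked.
    static boolean dispatcherUnblocked(boolean flushInsideLock) {
        CountDownLatch flushStarted = new CountDownLatch(1);
        CountDownLatch releaseFlush = new CountDownLatch(1);
        Runnable hangingFlush = () -> {
            flushStarted.countDown();
            try {
                releaseFlush.await(); // simulate a flush stuck on a dead HBase
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread remover = new Thread(() -> {
            if (flushInsideLock) {
                removeHoldingLock(hangingFlush);
            } else {
                removeOutsideLock(hangingFlush);
            }
        });
        boolean unblocked = false;
        try {
            remover.start();
            flushStarted.await(); // wait until the flush is definitely stuck
            CountDownLatch entered = new CountDownLatch(1);
            Thread dispatcher = new Thread(() -> {
                synchronized (collectorsLock) { // initializeContainer() analogue
                    entered.countDown();
                }
            });
            dispatcher.start();
            unblocked = entered.await(500, TimeUnit.MILLISECONDS);
            releaseFlush.countDown(); // un-stick the flush so threads can exit
            remover.join();
            dispatcher.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return unblocked;
    }
}
```

With the flush inside the monitor, the dispatcher stays blocked for as long as the flush hangs, which matches the BLOCKED state in the dump; with the flush moved outside, the dispatcher enters the monitor immediately even though the flush is still stuck.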

> [ATSv2] If HBase cluster is down, high chances that NM ContainerManager dispatcher get blocked
> ----------------------------------------------------------------------------------------------
>
>                 Key: YARN-8679
>                 URL: https://issues.apache.org/jira/browse/YARN-8679
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Rohith Sharma K S
>            Assignee: Rohith Sharma K S
>            Priority: Major
>
> It is observed that if the ATSv2 backend is down and the client waits for a few minutes, the NM ContainerManager dispatcher thread gets blocked. As a result, NM container operations are stuck in event processing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org