Posted to mapreduce-user@hadoop.apache.org by "John (Youngseok) Yang" <jo...@gmail.com> on 2014/10/21 03:05:57 UTC

YARN rack-specific, relax_locality=false container request does not respond

Hello everyone,

My rack-specific (relax_locality=false) container request does not get
any response from the YARN RM.

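For reference, the request is built roughly as follows (a minimal sketch
using the values from the client-side log below; the amrmClientAsync
variable and the surrounding code are illustrative, not my actual AM code):

  import org.apache.hadoop.yarn.api.records.Priority;
  import org.apache.hadoop.yarn.api.records.Resource;
  import org.apache.hadoop.yarn.client.api.AMRMClient;

  // Strict rack-level request: no specific nodes, rack "/rack2", and
  // relaxLocality=false so the scheduler must not fall back off-rack.
  Priority priority = Priority.newInstance(1);
  Resource capability = Resource.newInstance(100, 1);
  AMRMClient.ContainerRequest request = new AMRMClient.ContainerRequest(
      capability,
      null,                       // nodes: none
      new String[] { "/rack2" },  // racks
      priority,
      false);                     // relaxLocality
  amrmClientAsync.addContainerRequest(request);  // AMRMClientAsync instance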

Since I found no reports of this on the web, I am assuming that the
problem is in my cluster setup or my YARN AM code.

I’d like to know if anyone else has experienced the same problem, or if you
know what might be the cause of this.

I’ve attached the logs below:

- Rack configuration: “/rack2”, the rack I am requesting, is properly
configured

- Client-side log: AMRMClientAsync calls allocate() properly, passing an
“ask” for “/rack2”

- RM-side log: the RM receives the request properly, but *somehow fails
to resolve the rack at each nodeUpdate* (never mind the vCores:0 in the
resource limit: I am using DefaultResourceCalculator, which only looks at
memory); a diagnostic sketch follows this list

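If it helps, a diagnostic along these lines (a hypothetical sketch; the
ResolveRack class is mine) should show which rack the RM-side RackResolver
assigns to a NodeManager host. Run on the RM host with the RM's
core-site.xml/yarn-site.xml on the classpath; printing /default-rack
instead of /rack2 would suggest the RM is not applying the topology
mapping to NM hosts:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.yarn.conf.YarnConfiguration;
  import org.apache.hadoop.yarn.util.RackResolver;

  public class ResolveRack {
    public static void main(String[] args) {
      // Picks up core-site.xml/yarn-site.xml from the classpath.
      Configuration conf = new YarnConfiguration();
      RackResolver.init(conf);
      // Resolve an NM hostname the same way the RM does at scheduling time.
      System.out.println(
          RackResolver.resolve("node01-OptiPlex-9020").getNetworkLocation());
    }
  }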

I would appreciate any advice or suggestions.

Best regards,

John

Rack configuration

john@master-OptiPlex-9020:~$ hdfs dfsadmin -printTopology

Rack: /rack1
   147.46.241.117:50010 (node04-OptiPlex-9020)
   147.46.241.125:50010 (node03-OptiPlex-9020)

Rack: /rack2
   147.46.241.128:50010 (node02-OptiPlex-9020)
   147.46.241.129:50010 (node01-OptiPlex-9020)

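Note: as far as I understand, printTopology only reflects DataNode
registrations on the NameNode; the RM resolves NodeManager hostnames
separately, through the topology mapping in its own configuration, e.g.
net.topology.script.file.name. Below is a quick, hypothetical check (the
CheckTopologyConf class is mine) that the configuration the RM loads
actually carries the mapping:

  import org.apache.hadoop.conf.Configuration;

  public class CheckTopologyConf {
    public static void main(String[] args) {
      // Run on the RM host with the RM's core-site.xml on the classpath.
      // A null result means hosts would fall back to /default-rack (unless
      // a different DNSToSwitchMapping implementation is configured).
      Configuration conf = new Configuration();
      System.out.println(conf.get("net.topology.script.file.name"));
    }
  }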

Client-side log

2014-10-21 09:23:55,951 FINEST hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | 46: Call -> master-OptiPlex-9020/147.46.241.143:8030:
allocate {ask { priority { priority: 1 } resource_name: "*" capability {
memory: 100 virtual_cores: 1 } num_containers: 1 relax_locality: false }
ask { priority { priority: 1 } resource_name: "/rack2" capability { memory:
100 virtual_cores: 1 } num_containers: 1 relax_locality: true }
blacklist_request { } response_id: 1 progress: 0.0}

2014-10-21 09:23:55,952 FINE hadoop.ipc.Client.run IPC Parameter Sending
Thread #0 | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop sending #10

2014-10-21 09:23:55,953 FINE hadoop.ipc.Client.receiveRpcResponse IPC
Client (929159917) connection to master-OptiPlex-9020/147.46.241.143:8030
from hadoop | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop got value #10

2014-10-21 09:23:55,953 FINE hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | Call: allocate took 3ms

2014-10-21 09:23:55,953 FINEST hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | 46: Response <- master-OptiPlex-9020/
147.46.241.143:8030: allocate {response_id: 2 limit { memory: 54272
virtual_cores: 0 } num_cluster_nodes: 4}

2014-10-21 09:23:56,954 FINEST hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | 46: Call -> master-OptiPlex-9020/147.46.241.143:8030:
allocate {blacklist_request { } response_id: 2 progress: 0.0}

2014-10-21 09:23:56,955 FINE hadoop.ipc.Client.run IPC Parameter Sending
Thread #0 | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop sending #11

2014-10-21 09:23:56,957 FINE hadoop.ipc.Client.receiveRpcResponse IPC
Client (929159917) connection to master-OptiPlex-9020/147.46.241.143:8030
from hadoop | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop got value #11

2014-10-21 09:23:56,958 FINE hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | Call: allocate took 4ms

2014-10-21 09:23:56,959 FINEST hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | 46: Response <- master-OptiPlex-9020/
147.46.241.143:8030: allocate {response_id: 3 limit { memory: 54272
virtual_cores: 0 } num_cluster_nodes: 4}

2014-10-21 09:23:57,960 FINEST hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | 46: Call -> master-OptiPlex-9020/147.46.241.143:8030:
allocate {blacklist_request { } response_id: 3 progress: 0.0}

2014-10-21 09:23:57,961 FINE hadoop.ipc.Client.run IPC Parameter Sending
Thread #0 | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop sending #12

2014-10-21 09:23:57,963 FINE hadoop.ipc.Client.receiveRpcResponse IPC
Client (929159917) connection to master-OptiPlex-9020/147.46.241.143:8030
from hadoop | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop got value #12

2014-10-21 09:23:57,964 FINE hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | Call: allocate took 3ms

2014-10-21 09:23:57,965 FINEST hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | 46: Response <- master-OptiPlex-9020/
147.46.241.143:8030: allocate {response_id: 4 limit { memory: 54272
virtual_cores: 0 } num_cluster_nodes: 4}

2014-10-21 09:23:58,965 FINEST hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | 46: Call -> master-OptiPlex-9020/147.46.241.143:8030:
allocate {blacklist_request { } response_id: 4 progress: 0.0}

2014-10-21 09:23:58,966 FINE hadoop.ipc.Client.run IPC Parameter Sending
Thread #0 | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop sending #13

2014-10-21 09:23:58,967 FINE hadoop.ipc.Client.receiveRpcResponse IPC
Client (929159917) connection to master-OptiPlex-9020/147.46.241.143:8030
from hadoop | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop got value #13

2014-10-21 09:23:58,967 FINE hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | Call: allocate took 2ms

2014-10-21 09:23:58,967 FINEST hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | 46: Response <- master-OptiPlex-9020/
147.46.241.143:8030: allocate {response_id: 5 limit { memory: 54272
virtual_cores: 0 } num_cluster_nodes: 4}

2014-10-21 09:23:59,968 FINEST hadoop.ipc.ProtobufRpcEngine.invoke AMRM
Heartbeater thread | 46: Call -> master-OptiPlex-9020/147.46.241.143:8030:
allocate {blacklist_request { } response_id: 5 progress: 0.0}

2014-10-21 09:23:59,969 FINE hadoop.ipc.Client.run IPC Parameter Sending
Thread #0 | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop sending #14

2014-10-21 09:23:59,971 FINE hadoop.ipc.Client.receiveRpcResponse IPC
Client (929159917) connection to master-OptiPlex-9020/147.46.241.143:8030
from hadoop | IPC Client (929159917) connection to master-OptiPlex-9020/
147.46.241.143:8030 from hadoop got value #14

RM-side log

2014-10-21 09:23:29,688 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 37 on 8030:
org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from
147.46.241.125:54909 Call#10 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER

2014-10-21 09:23:29,688 DEBUG
org.apache.hadoop.security.UserGroupInformation: PrivilegedAction
as:appattempt_1413810448987_0003_000001 (auth:TOKEN)
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

2014-10-21 09:23:29,688 DEBUG org.apache.hadoop.yarn.event.AsyncDispatcher:
Dispatching the event
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptStatusupdateEvent.EventType:
STATUS_UPDATE

2014-10-21 09:23:29,688 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl:
Processing event for appattempt_1413810448987_0003_000001 of type
STATUS_UPDATE

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
allocate: pre-update
applicationAttemptId=appattempt_1413810448987_0003_000001
application=org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp@49f586fe

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
0, Capability: <memory:1024, vCores:1>, # Containers: 0, Location: *, Relax
Locality: true}

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo:
update: application=application_1413810448987_0003 request={Priority: 1,
Capability: <memory:1024, vCores:1>, # Containers: 1, Location: *, Relax
Locality: false}

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager:
User john added to activeUsers, currently: 1

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
allocate: post-update

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
0, Capability: <memory:1024, vCores:1>, # Containers: 0, Location: *, Relax
Locality: true}

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: /rack2,
Relax Locality: true}

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: *, Relax
Locality: false}

2014-10-21 09:23:29,689 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
allocate: applicationAttemptId=appattempt_1413810448987_0003_000001 #ask=2

2014-10-21 09:23:29,689 INFO org.apache.hadoop.ipc.Server: Served: allocate
queueTime= 0 procesingTime= 1

2014-10-21 09:23:29,689 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 37 on 8030: responding to
org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from
147.46.241.125:54909 Call#10 Retry#0

2014-10-21 09:23:29,689 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 37 on 8030: responding to
org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate from
147.46.241.125:54909 Call#10 Retry#0 Wrote 44 bytes.

2014-10-21 09:23:29,784 DEBUG org.apache.hadoop.ipc.Server:  got #40539

2014-10-21 09:23:29,784 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 25 on 8025:
org.apache.hadoop.yarn.server.api.ResourceTrackerPB.nodeHeartbeat from
147.46.241.125:58217 Call#40539 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER

2014-10-21 09:23:29,784 DEBUG
org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hadoop
(auth:SIMPLE)
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

2014-10-21 09:23:29,784 INFO org.apache.hadoop.ipc.Server: Served:
nodeHeartbeat queueTime= 0 procesingTime= 0

2014-10-21 09:23:29,784 DEBUG org.apache.hadoop.yarn.event.AsyncDispatcher:
Dispatching the event
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType:
STATUS_UPDATE

2014-10-21 09:23:29,784 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 25 on 8025: responding to
org.apache.hadoop.yarn.server.api.ResourceTrackerPB.nodeHeartbeat from
147.46.241.125:58217 Call#40539 Retry#0

2014-10-21 09:23:29,784 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Processing
node03-OptiPlex-9020:44234 of type STATUS_UPDATE

2014-10-21 09:23:29,784 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 25 on 8025: responding to
org.apache.hadoop.yarn.server.api.ResourceTrackerPB.nodeHeartbeat from
147.46.241.125:58217 Call#40539 Retry#0 Wrote 43 bytes.

2014-10-21 09:23:29,784 DEBUG org.apache.hadoop.yarn.event.AsyncDispatcher:
Dispatching the event
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType:
NODE_UPDATE

2014-10-21 09:23:29,784 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
nodeUpdate: node03-OptiPlex-9020:44234 clusterResources: <memory:56000,
vCores:32>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Node being looked for scheduling node03-OptiPlex-9020:44234
availableResource: <memory:12976, vCores:7>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Trying to schedule on node: node03-OptiPlex-9020, available: <memory:12976,
vCores:7>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Trying to assign containers to child-queue of root

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
printChildQueues - queue: root child-queues: root.default(0.018285714),

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Trying to assign to queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:1024,
vCores:1>usedCapacity=0.018285714, absoluteUsedCapacity=0.018285714,
numApps=1, numContainers=1

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
assignContainers: node=node03-OptiPlex-9020 #applications=1

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
pre-assignContainers for application application_1413810448987_0003

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
0, Capability: <memory:1024, vCores:1>, # Containers: 0, Location: *, Relax
Locality: true}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: /rack2,
Relax Locality: true}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: *, Relax
Locality: false}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
User limit computation for john in queue default userLimit=100
userLimitFactor=1.0 required: <memory:1024, vCores:1> consumed:
<memory:1024, vCores:1> limit: <memory:56320, vCores:1> queueCapacity:
<memory:56320, vCores:1> qconsumed: <memory:1024, vCores:1>
currentCapacity: <memory:56320, vCores:1> activeUsers: 1 clusterCapacity:
<memory:56000, vCores:32>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Headroom calculation for user john:  userLimit=<memory:56320, vCores:1>
queueMaxCap=<memory:55296, vCores:1> consumed=<memory:1024, vCores:1>
headroom=<memory:54272, vCores:0>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
post-assignContainers for application application_1413810448987_0003

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
0, Capability: <memory:1024, vCores:1>, # Containers: 0, Location: *, Relax
Locality: true}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: /rack2,
Relax Locality: true}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: *, Relax
Locality: false}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Assigned to queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:1024,
vCores:1>usedCapacity=0.018285714, absoluteUsedCapacity=0.018285714,
numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.ipc.Server:  got #40542

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.ipc.Server:  got #40513

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 10 on 8025:
org.apache.hadoop.yarn.server.api.ResourceTrackerPB.nodeHeartbeat from
147.46.241.128:44582 Call#40542 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hadoop
(auth:SIMPLE)
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 19 on 8025:
org.apache.hadoop.yarn.server.api.ResourceTrackerPB.nodeHeartbeat from
147.46.241.117:44989 Call#40513 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hadoop
(auth:SIMPLE)
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

2014-10-21 09:23:29,785 INFO org.apache.hadoop.ipc.Server: Served:
nodeHeartbeat queueTime= 0 procesingTime= 0

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 10 on 8025: responding to
org.apache.hadoop.yarn.server.api.ResourceTrackerPB.nodeHeartbeat from
147.46.241.128:44582 Call#40542 Retry#0

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.yarn.event.AsyncDispatcher:
Dispatching the event
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType:
STATUS_UPDATE

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 10 on 8025: responding to
org.apache.hadoop.yarn.server.api.ResourceTrackerPB.nodeHeartbeat from
147.46.241.128:44582 Call#40542 Retry#0 Wrote 43 bytes.

2014-10-21 09:23:29,785 INFO org.apache.hadoop.ipc.Server: Served:
nodeHeartbeat queueTime= 0 procesingTime= 0

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 19 on 8025: responding to
org.apache.hadoop.yarn.server.api.ResourceTrackerPB.nodeHeartbeat from
147.46.241.117:44989 Call#40513 Retry#0

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.ipc.Server: IPC Server
handler 19 on 8025: responding to
org.apache.hadoop.yarn.server.api.ResourceTrackerPB.nodeHeartbeat from
147.46.241.117:44989 Call#40513 Retry#0 Wrote 43 bytes.

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Processing
node02-OptiPlex-9020:44674 of type STATUS_UPDATE

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.yarn.event.AsyncDispatcher:
Dispatching the event
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType:
STATUS_UPDATE

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Processing
node04-OptiPlex-9020:33962 of type STATUS_UPDATE

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.yarn.event.AsyncDispatcher:
Dispatching the event
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType:
NODE_UPDATE

2014-10-21 09:23:29,785 DEBUG org.apache.hadoop.yarn.event.AsyncDispatcher:
Dispatching the event
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType:
NODE_UPDATE

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
nodeUpdate: node02-OptiPlex-9020:44674 clusterResources: <memory:56000,
vCores:32>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Node being looked for scheduling node02-OptiPlex-9020:44674
availableResource: <memory:14000, vCores:8>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Trying to schedule on node: node02-OptiPlex-9020, available: <memory:14000,
vCores:8>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Trying to assign containers to child-queue of root

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
printChildQueues - queue: root child-queues: root.default(0.018285714),

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Trying to assign to queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:1024,
vCores:1>usedCapacity=0.018285714, absoluteUsedCapacity=0.018285714,
numApps=1, numContainers=1

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
assignContainers: node=node02-OptiPlex-9020 #applications=1

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
pre-assignContainers for application application_1413810448987_0003

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
0, Capability: <memory:1024, vCores:1>, # Containers: 0, Location: *, Relax
Locality: true}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: /rack2,
Relax Locality: true}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: *, Relax
Locality: false}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
User limit computation for john in queue default userLimit=100
userLimitFactor=1.0 required: <memory:1024, vCores:1> consumed:
<memory:1024, vCores:1> limit: <memory:56320, vCores:1> queueCapacity:
<memory:56320, vCores:1> qconsumed: <memory:1024, vCores:1>
currentCapacity: <memory:56320, vCores:1> activeUsers: 1 clusterCapacity:
<memory:56000, vCores:32>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
Headroom calculation for user john:  userLimit=<memory:56320, vCores:1>
queueMaxCap=<memory:55296, vCores:1> consumed=<memory:1024, vCores:1>
headroom=<memory:54272, vCores:0>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
post-assignContainers for application application_1413810448987_0003

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
0, Capability: <memory:1024, vCores:1>, # Containers: 0, Location: *, Relax
Locality: true}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003
headRoom=<memory:54272, vCores:0> currentConsumption=1024

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: /rack2,
Relax Locality: true}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
showRequests: application=application_1413810448987_0003 request={Priority:
1, Capability: <memory:1024, vCores:1>, # Containers: 1, Location: *, Relax
Locality: false}

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Assigned to queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:1024,
vCores:1>usedCapacity=0.018285714, absoluteUsedCapacity=0.018285714,
numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
nodeUpdate: node04-OptiPlex-9020:33962 clusterResources: <memory:56000,
vCores:32>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Node being looked for scheduling node04-OptiPlex-9020:33962
availableResource: <memory:14000, vCores:8>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Trying to schedule on node: node04-OptiPlex-9020, available: <memory:14000,
vCores:8>

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Trying to assign containers to child-queue of root

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
printChildQueues - queue: root child-queues: root.default(0.018285714),

2014-10-21 09:23:29,785 DEBUG
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
Trying to assign to queue: root.default stats: default: capacity=1.0,
absoluteCapacity=1.0, usedResources=<memory:1024,
vCores:1>usedCapacity=0.018285714, absoluteUsedCapacity=0.018285714,
numApps=1, numContainers=1