Posted to hdfs-user@hadoop.apache.org by Smita Deshpande <sm...@cumulus-systems.com> on 2014/09/16 11:30:09 UTC

About extra containers being allocated in the distributed shell example.

Hi,
                In the YARN distributed shell example, I am setting up my container request to the RM using the following method (I am asking for 9 containers here):
  private ContainerRequest setupContainerAskForRM(Resource capability) {}
                But when the RMCallbackHandler actually allocates the containers in the following callback, I am getting 23 containers:
  @Override
         public void onContainersAllocated(List<Container> allocatedContainers) {}
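For reference, this is roughly how I understand the two pieces fitting together in a distributed-shell-style ApplicationMaster on the Hadoop 2.x AMRMClientAsync API. It is only a simplified sketch, not my actual code; the class name, field names and resource sizes are placeholders.

import java.util.List;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SimplifiedAM implements AMRMClientAsync.CallbackHandler {

  private AMRMClientAsync<ContainerRequest> amRMClient;
  private final int numTotalContainers = 9;  // what I actually ask for

  public void run() throws Exception {
    amRMClient = AMRMClientAsync.createAMRMClientAsync(1000, this);
    amRMClient.init(new YarnConfiguration());
    amRMClient.start();
    // registerApplicationMaster(...) omitted for brevity

    // One ContainerRequest per container, 9 in total.
    Resource capability = Resource.newInstance(1024, 1);
    for (int i = 0; i < numTotalContainers; i++) {
      amRMClient.addContainerRequest(setupContainerAskForRM(capability));
    }
  }

  // Same role as setupContainerAskForRM in the distributed shell example.
  private ContainerRequest setupContainerAskForRM(Resource capability) {
    return new ContainerRequest(capability, null, null, Priority.newInstance(0));
  }

  @Override
  public void onContainersAllocated(List<Container> allocatedContainers) {
    // This is where I sometimes see more containers (23) than I asked for (9).
    for (Container container : allocatedContainers) {
      // launch the shell command on the container ...
    }
  }

  // Remaining callbacks left empty to keep the sketch short.
  @Override public void onContainersCompleted(List<ContainerStatus> statuses) { }
  @Override public void onShutdownRequest() { }
  @Override public void onNodesUpdated(List<NodeReport> updatedNodes) { }
  @Override public void onError(Throwable e) { }
  @Override public float getProgress() { return 0; }
}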

I am getting extra containers, which expire after 600 seconds.
Will these extra containers, which are not doing any work, cause any performance problems in my application?

At one point in my application, 12K out of 19K containers expired because they were never used. Can anybody suggest a workaround for this, or is it a bug?
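(I assume the 600 seconds comes from the RM's container allocation expiry interval, yarn.resourcemanager.rm.container-allocation.expiry-interval-ms, which I believe defaults to 600000 ms.)

The only workaround I can think of is to remove the matching request once a container is allocated, and to release any container beyond what I still need, roughly as in the sketch below (again simplified, with placeholder names, inside the same class as the sketch above). Is this the right approach, or should the client library be doing this for me?

  // Added to the sketch above; needs java.util.concurrent.atomic.AtomicInteger.
  private final AtomicInteger numAllocatedContainers = new AtomicInteger();

  @Override
  public void onContainersAllocated(List<Container> allocatedContainers) {
    for (Container container : allocatedContainers) {
      // Tell the client this ask is satisfied so it is not re-sent to the RM.
      amRMClient.removeContainerRequest(
          setupContainerAskForRM(Resource.newInstance(1024, 1)));

      if (numAllocatedContainers.incrementAndGet() > numTotalContainers) {
        // Surplus container: give it back instead of letting it expire.
        amRMClient.releaseAssignedContainer(container.getId());
      } else {
        // launch the shell command on this container ...
      }
    }
  }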

-Smita