Posted to user@storm.apache.org by Jerry Peng <je...@gmail.com> on 2017/05/05 13:54:40 UTC

Re: Unexpected behavior with RAS enabled

Hello Arshad,

In versions before 1.1.0 (including 1.0.3), the Resource Aware
Scheduler aims to pack topologies as tightly as possible in order to
reduce the number of nodes used and increase hardware utilization.
Thus, the behavior you described is expected.  However, in Storm 1.1.0
and higher, a new scheduling strategy is implemented that only packs
executors tightly on a per-topology basis.  Within a single topology,
executors are still packed onto as few nodes as possible to reduce
network latency, but packing is no longer done across multiple
topologies.  In your environment, for example, this new strategy would
spread your topologies out more evenly across your nodes.  You can
read more about the updated scheduling strategy for RAS here:

http://storm.apache.org/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html
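
For completeness, these resource settings can also be made per
topology in code instead of storm.yaml.  The snippet below is only a
rough sketch against the Storm 1.x Java API; the components come from
Storm's bundled test classes, and the resource numbers are made up
purely for illustration:

    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.testing.TestWordCounter;
    import org.apache.storm.testing.TestWordSpout;
    import org.apache.storm.topology.TopologyBuilder;

    public class RasDemo {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();

            // Per-component resource requests (memory in MB, CPU in
            // percent of a core); the numbers are illustrative only.
            builder.setSpout("spout", new TestWordSpout(), 2)
                   .setMemoryLoad(512.0)         // on-heap MB
                   .setCPULoad(50.0);
            builder.setBolt("count", new TestWordCounter(), 4)
                   .shuffleGrouping("spout")
                   .setMemoryLoad(256.0, 128.0)  // on-heap MB, off-heap MB
                   .setCPULoad(25.0);

            Config conf = new Config();
            // Per-topology override of the scheduling strategy; this is
            // the same key as topology.scheduler.strategy in storm.yaml.
            conf.put("topology.scheduler.strategy",
                "org.apache.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy");

            StormSubmitter.submitTopology("ras-demo", conf,
                builder.createTopology());
        }
    }

Component-level requests set this way take precedence, for that
topology, over the topology.component.* defaults in storm.yaml.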

Hope this helps!

On Thu, Apr 27, 2017 at 10:26 AM, arshad matin <ar...@gmail.com> wrote:
>
> I need some help regarding Apache Storm 1.0.3 with the Resource Aware
> Scheduler enabled. I have a Storm cluster with 2 supervisors. Each
> supervisor has 20 slots, 8000 CPU, and 100 GB memory. I submitted my
> first topology and it went to supervisor 1. Let's say it uses 2 slots,
> 1000 CPU, and 2 GB memory. Now I am submitting another topology with
> the same usage. This topology should go to supervisor 2, as it has more
> available resources than supervisor 1, but what actually happened is
> that the new topology also went to supervisor 1.
>
>
> The same happened with the 3rd topology. All topologies go to the 1st
> supervisor until it is exhausted of slots, CPU, or memory.
>
> Is this behavior expected, or am I missing something in the
> configuration?
>
>
> Version : 1.0.3
>
> Storm.yaml
> ==========
> storm.scheduler: "org.apache.storm.scheduler.resource.ResourceAwareScheduler"
> supervisor.memory.capacity.mb: 100000.0
> supervisor.cpu.capacity: 8000.0
> topology.component.resources.onheap.memory.mb: 10.2
> topology.component.resources.offheap.memory.mb: 10.2
> topology.component.cpu.pcore.percent: 10.00
> topology.worker.max.heap.size.mb: 2048.0
> topology.priority: 29
> topology.scheduler.strategy: "org.apache.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy"
> resource.aware.scheduler.eviction.strategy: "org.apache.storm.scheduler.resource.strategies.eviction.DefaultEvictionStrategy"
> resource.aware.scheduler.priority.strategy: "org.apache.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy"
>