Posted to dev@brooklyn.apache.org by Svetoslav Neykov <sv...@cloudsoftcorp.com> on 2017/01/11 12:08:21 UTC
[PROPOSAL] Controlling effectors concurrency
## Problem
The current model in Brooklyn for executing effectors is to run them in parallel, without regard for already-running instances of the same effector. This makes certain classes of YAML blueprints harder to write - use cases which need to limit the number of concurrent executions. Currently this gets worked around on a per-blueprint basis, shifting the burden of synchronizing/locking onto the blueprint, which has limited means to do it.
Some concrete examples:
* A haproxy blueprint which needs to have at most one "update configuration" effector running - solved in bash by using flock
https://github.com/brooklyncentral/clocker/blob/9d3487198f426e8ebc6efeee94af3dc50383fa71/common/catalog/common/haproxy.bom
* Some clusters have a limit on how many members can join at a time (Cassandra notably)
* A DNS blueprint needs to make sure that updates to the records happen sequentially so no records get lost
* To avoid API rate limits in certain services we need to limit how many operations we do at any moment - say we want to limit provisioning of entities, but not installing/launching them.
A first step in solving the above has been made in https://github.com/apache/brooklyn-server/pull/443 which adds "maxConcurrentChildCommands" to the DynamicCluster operations (start, resize, stop). This allows us to limit how many entities get created/destroyed by the cluster in parallel. The goal of this proposal is to extend it by making it possible to apply finer grained limits (say just on the launch step of the start effector) and to make it more general (not just start/stop in cluster but any effector).
## Proposed solution
Add functionality which allows external code (e.g. adjuncts) to plug into the lifecycle of entities **synchronously** and influence their behaviour. This will allow us to influence the execution of effectors on entities and for this particular proposal to block execution until some condition is met.
## Possible approaches (alternatives)
### Effector execution notifications
Provide the functionality to subscribe callbacks to be called when an effector is about to execute on an entity. The callback has the ability to mutate the effector, for example by adding a wrapper task to enforce certain concurrency limits. A simpler alternative would be to add pre- and post-execution callbacks. For this to be useful we need to split big effectors into smaller pieces - for example, the start effector would be a composition of provision, install, customize and launch effectors.
The reason not to work at the task level is that tasks are anonymous so we can't really subscribe to them. To do that we'd need to add identifiers to them which essentially turns them into effectors.
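To make the callback idea concrete, here is a minimal Java sketch of how such a subscription could bound concurrency. The `ConcurrencyLimiter` type and its `wrap` method are purely illustrative - nothing like this exists in Brooklyn today - but they show the intended mechanics: the callback wraps the effector body so a shared semaphore gates how many instances run at once.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

// Illustrative only: a callback that wraps an effector body so a shared
// semaphore bounds how many invocations run concurrently. The names here
// are hypothetical, not part of the Brooklyn API.
class ConcurrencyLimiter {
    private final Semaphore permits;

    ConcurrencyLimiter(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Returns a wrapped effector body: blocks until a permit is available
    // and always releases it, even if the body throws.
    <T> Callable<T> wrap(Callable<T> effectorBody) {
        return () -> {
            permits.acquire();
            try {
                return effectorBody.call();
            } finally {
                permits.release();
            }
        };
    }
}
```

The same wrapper could be registered once (e.g. by a cluster-level adjunct) and applied to every member's effector, giving cluster-wide limits without each blueprint implementing its own locking.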
### Add hooks to the existing effectors
We could add fixed pre and post hooks to the start/stop effectors which execute callbacks synchronously at key points around tasks.
--
Both of the above will allow us to plug additional logic into the lifecycle of entities, making it possible to block execution. For clusters we'd plug into the members' lifecycle and provide cluster-wide limits (say a semaphore shared by the members). For more complex scenarios we could name the synchronising entity explicitly, for example to block execution until a step in a separate entity is complete (say registering DNS records after provisioning but before launch application-wide).
## Examples
Here are some concrete examples which give you a taste of what it would look like (thanks Geoff for sharing these)
### Limit the number of entities starting at any moment in the cluster (but provision them in parallel)
services:
- type: cluster
  brooklyn.enrichers:
  ### plugs into the lifecycle-provided callbacks and limits how many tasks can execute in parallel after provisioning the machines
  ### by convention concurrency is counted down at the last stage if not explicitly defined
  - type: org.apache.brooklyn.enricher.stock.LimitGroupTasksSemaphore
    brooklyn.config:
      stage: post.provisioning
      parallel.operation.size: auto # meaning the whole cluster; or could be an integer, e.g. 10 for 10-at-a-time
  brooklyn.config:
    initialSize: 50
  memberSpec:
    $brooklyn:entitySpec:
      type: cluster-member
---
### Use a third entity to control the concurrency
brooklyn.catalog:
  items:
  - id: provisionBeforeInstallCluster
    version: 1.0.0
    item:
      type: cluster
      id: cluster
      brooklyn.parameters:
      - name: initial.cluster.size
        description: Initial Cluster Size
        default: 50
      brooklyn.config:
        initialSize: $brooklyn:config("initial.cluster.size")
      memberSpec:
        $brooklyn:entitySpec:
          type: cluster-member
          brooklyn.enrichers:
          - type: org.apache.brooklyn.enricher.stock.AquirePermissionToProceed
            brooklyn.config:
              stage: post.provisioning
              ### Delegate the concurrency decisions to the referee entity
              authorisor: $brooklyn:entity("referee")
      brooklyn.children:
      - type: org.apache.brooklyn.entity.TaskRegulationSemaphore
        id: referee
        brooklyn.config:
          initial.value: $brooklyn:entity("cluster").config("initial.cluster.size") # or 1 for sequential execution
---
Some thoughts from Alex from previous discussions on how this would look in YOML with initd-style effectors:
I’d like to have a semaphore on normal nodes cluster and for the launch step each node acquires that semaphore, releasing when confirmed joined. i could see a task you set in yaml eg if using the initdish idea
035-pre-launch-get-semaphore: { acquire-semaphore: { scope: $brooklyn:parent(), name: "node-launch" } }
040-launch: { ssh: "service cassandra start" }
045-confirm-service-up: { wait: { sensor: service.inCluster, timeout: 20m } }
050-finish-release-semaphore: semaphore-release
tasks of type acquire-semaphore would use (create if needed) a named semaphore against the given entity … but somehow we need to say when it should automatically be released (eg on failure) in addition to explicit release (the 050 which assumes some scope, not sure how/if to implement that)
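One way to get the automatic release Alex asks for is a scope object that tracks acquired semaphores and frees any still held when the scope is torn down, whether the effector succeeded, failed, or was cancelled. The sketch below is hypothetical Java, not Brooklyn code; the explicit `release` corresponds to the 050 step, while `close` is the safety net that runs on failure paths:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

// Hypothetical sketch: pairs the explicit acquire/release tasks with an
// automatic cleanup so a semaphore is never leaked when the effector
// fails between the acquire step and the release step.
class SemaphoreScope implements AutoCloseable {
    private final Deque<Semaphore> held = new ArrayDeque<>();

    void acquire(Semaphore s) throws InterruptedException {
        s.acquire();
        held.push(s);                       // remember for automatic release
    }

    void release(Semaphore s) {
        if (held.remove(s)) s.release();    // explicit release (the 050 step)
    }

    @Override
    public void close() {                   // runs on success, failure, or cancel
        while (!held.isEmpty()) held.pop().release();
    }
}
```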
---
Thanks to Geoff who shared his thoughts on the subject, with this post based on them.
Svet.
Re: [PROPOSAL] Controlling effectors concurrency
Posted by Svetoslav Neykov <sv...@cloudsoftcorp.com>.
Thanks everyone for your opinions.
I still think it's valuable to have the "tasks callback" mechanism, but will pitch it again in another context in the future.
For now I settled on extending the latch functionality in SoftwareProcess to limit the parallelism on the immediate task it's guarding. The PR is available at https://github.com/apache/brooklyn-server/pull/520 and any feedback is welcome. Longer term we'll be able to express this better with the tasks based approach discussed below.
Svet.
> On 12.01.2017 г., at 11:49, Aled Sage <al...@gmail.com> wrote:
>
> Hi all,
>
> Great discussion - really like the direction this is going of using tasks in yaml blueprints.
>
> However, it feels like we've launched into discussing a complex use-case (concurrency control) without having first discussed what yaml blueprints would look like for simpler tasks (*). I suggest we discuss that in another proposal thread, and get agreement of what the YAML should look like. Let's focus on the yaml blueprint author - i.e. reach agreement on some yaml examples, without worrying too much about how it's implemented under the covers.
>
> I'd like the examples to show an entire blueprint - including defining effectors, sensors, feeds, etc.
>
> ---
>
> Svet, for the "callback behaviour for effectors" I think that would be a powerful/advanced feature. If multiple callbacks were registered (e.g. in the super-type and in the sub-type, when extending existing blueprints), the order the callbacks are called could get tricky for blueprint authors. Maybe we live with that. Maybe the task-based approach will give a more elegant way to control it, making the callbacks redundant.
>
> If it helps to solve your immediate problem cleanly, I'm fine with us adding support for it. But I worry that we'd want to deprecate it shortly after task-based yaml is supported.
>
> Aled
>
> (*) Sorry if I've forgotten about some previous discussion on the mailing list - if I have, then please point us at it again!
>
>
>
> On 11/01/2017 17:45, Svetoslav Neykov wrote:
>> Sam I think that's a great approach to composing tasks and mixing them with semaphores.
>>
>>> sem --id brooklyn --fg "$REMOVE_OLD_CMD ; $ADD_NEW_CMD"
>> Alex, I like that, gives you much cleaner semantics than flock.
>>
>>> effectors:
>>>   start:
>>>     my-pre-launch:
>>>       task: { acquire-semaphore: ... }
>>>       run-before: launch
>>
>> Geoff, I have some reservations re init.d style naming as well. Mostly due to being too fragile when having multiple layers of blueprints. I'm going to suggest one more alternative, somewhat similar to before/after. Instead of declaring when to run a task (before/after another task), just override the parent task, do the work you need to and reference the parent task explicitly.
>>
>> effectors:
>>   start:
>>     launch: [my-custom-launch, $super.launch]
>>     my-custom-launch:
>>     - ssh: ....
>>
>> Anyway this is getting sidetracked. I agree that the task based approach is a nice place to be longer term. Might go with extending the latches shorter term - need to think some more about it.
>>
>> What do people think about introducing the callback behaviour for effectors though? I see it as an orthogonal concept to what's possible with composing tasks. It's a middle ground for implementing the semaphores where the latches are too limited and "semaphores as tasks" are still not here. Will make life easier for some more complex scenarios where one needs to sync with/influence other entities (examples in previous emails).
>>
>> Svet.
>>
>>
>>
>>> On 11.01.2017 г., at 18:50, Alex Heneveld <al...@cloudsoftcorp.com> wrote:
>>>
>>>
>>> Svet-
>>>
>>> On 11/01/2017 15:55, Svetoslav Neykov wrote:
>>>>> PS - as a near-term option if needed we could extend SoftwareProcess LATCH to do something special if the config/sensor it is given is a "Semaphore" type
>>>> What do you think the behaviour should be here - releasing the semaphore after the corresponding step completes or at the end of the wrapping effector? I think this is defined by the blueprint authors. And making this configurable adds even more complexity. Instead could invest in developing the above functionality.
>>> I'd probably release the latch semaphore right after the phase but I agree it's arbitrary and don't like that. Only suggested it if we need something very quick as it's isolated and probably fairly easy in the code.
>>>
>>> Note code exists and is run implicitly I think during the Install step (otherwise things like yum complain), along with I think the class is called SemaphoreWithOwner.
>>>
>>> I'd LOVE to invest in the task-based functionality. Code-wise it's not that far away with the type registry and YOML, but it needs a couple more people to become familiar with it!
>>>
>>> Lastly I meant to say -- someone mentioned flock but the bash *sem* command (aka parallel) is awesome though not as well known as it should be. This is an example used in zookeeper:
>>>
>>>
>>> sudo yum install parallel
>>>
>>> REMOVE_OLD_CMD="sed -i /server.*=/d zoo.cfg"
>>> ADD_NEW_CMD="cat >> zoo.cfg << EOF
>>> server.0=NEW
>>> server.1=NEW
>>> EOF
>>> "
>>>
>>> sem --id brooklyn --fg "$REMOVE_OLD_CMD ; $ADD_NEW_CMD"
>>>
>>>
>>> It runs the commands in the quotes on the last line while acquiring the semaphore with the given name "brooklyn" on the machine. You can configure semaphore counts and timeouts too.
>>>
>>> Best
>>> Alex
>>>
>
Re: [PROPOSAL] Controlling effectors concurrency
Posted by Aled Sage <al...@gmail.com>.
Hi all,
Great discussion - really like the direction this is going of using
tasks in yaml blueprints.
However, it feels like we've launched into discussing a complex use-case
(concurrency control) without having first discussed what yaml
blueprints would look like for simpler tasks (*). I suggest we discuss
that in another proposal thread, and get agreement of what the YAML
should look like. Let's focus on the yaml blueprint author - i.e. reach
agreement on some yaml examples, without worrying too much about how
it's implemented under the covers.
I'd like the examples to show an entire blueprint - including defining
effectors, sensors, feeds, etc.
---
Svet, for the "callback behaviour for effectors" I think that would be a
powerful/advanced feature. If multiple callbacks were registered (e.g.
in the super-type and in the sub-type, when extending existing
blueprints), the order the callbacks are called could get tricky for
blueprint authors. Maybe we live with that. Maybe the task-based
approach will give a more elegant way to control it, making the
callbacks redundant.
If it helps to solve your immediate problem cleanly, I'm fine with us
adding support for it. But I worry that we'd want to deprecate it
shortly after task-based yaml is supported.
Aled
(*) Sorry if I've forgotten about some previous discussion on the
mailing list - if I have, then please point us at it again!
On 11/01/2017 17:45, Svetoslav Neykov wrote:
> Sam I think that's a great approach to composing tasks and mixing them with semaphores.
>
>> sem --id brooklyn --fg "$REMOVE_OLD_CMD ; $ADD_NEW_CMD"
> Alex, I like that, gives you much cleaner semantics than flock.
>
>> effectors:
>>   start:
>>     my-pre-launch:
>>       task: { acquire-semaphore: ... }
>>       run-before: launch
>
> Geoff, I have some reservations re init.d style naming as well. Mostly due to being too fragile when having multiple layers of blueprints. I'm going to suggest one more alternative, somewhat similar to before/after. Instead of declaring when to run a task (before/after another task), just override the parent task, do the work you need to and reference the parent task explicitly.
>
> effectors:
>   start:
>     launch: [my-custom-launch, $super.launch]
>     my-custom-launch:
>     - ssh: ....
>
> Anyway this is getting sidetracked. I agree that the task based approach is a nice place to be longer term. Might go with extending the latches shorter term - need to think some more about it.
>
> What do people think about introducing the callback behaviour for effectors though? I see it as an orthogonal concept to what's possible with composing tasks. It's a middle ground for implementing the semaphores where the latches are too limited and "semaphores as tasks" are still not here. Will make life easier for some more complex scenarios where one needs to sync with/influence other entities (examples in previous emails).
>
> Svet.
>
>
>
>> On 11.01.2017 г., at 18:50, Alex Heneveld <al...@cloudsoftcorp.com> wrote:
>>
>>
>> Svet-
>>
>> On 11/01/2017 15:55, Svetoslav Neykov wrote:
>>>> PS - as a near-term option if needed we could extend SoftwareProcess LATCH to do something special if the config/sensor it is given is a "Semaphore" type
>>> What do you think the behaviour should be here - releasing the semaphore after the corresponding step completes or at the end of the wrapping effector? I think this is defined by the blueprint authors. And making this configurable adds even more complexity. Instead could invest in developing the above functionality.
>> I'd probably release the latch semaphore right after the phase but I agree it's arbitrary and don't like that. Only suggested it if we need something very quick as it's isolated and probably fairly easy in the code.
>>
>> Note code exists and is run implicitly I think during the Install step (otherwise things like yum complain), along with I think the class is called SemaphoreWithOwner.
>>
>> I'd LOVE to invest in the task-based functionality. Code-wise it's not that far away with the type registry and YOML, but it needs a couple more people to become familiar with it!
>>
>> Lastly I meant to say -- someone mentioned flock but the bash *sem* command (aka parallel) is awesome though not as well known as it should be. This is an example used in zookeeper:
>>
>>
>> sudo yum install parallel
>>
>> REMOVE_OLD_CMD="sed -i /server.*=/d zoo.cfg"
>> ADD_NEW_CMD="cat >> zoo.cfg << EOF
>> server.0=NEW
>> server.1=NEW
>> EOF
>> "
>>
>> sem --id brooklyn --fg "$REMOVE_OLD_CMD ; $ADD_NEW_CMD"
>>
>>
>> It runs the commands in the quotes on the last line while acquiring the semaphore with the given name "brooklyn" on the machine. You can configure semaphore counts and timeouts too.
>>
>> Best
>> Alex
>>
Re: [PROPOSAL] Controlling effectors concurrency
Posted by Svetoslav Neykov <sv...@cloudsoftcorp.com>.
Sam I think that's a great approach to composing tasks and mixing them with semaphores.
> sem --id brooklyn --fg "$REMOVE_OLD_CMD ; $ADD_NEW_CMD"
Alex, I like that, gives you much cleaner semantics than flock.
> effectors:
>   start:
>     my-pre-launch:
>       task: { acquire-semaphore: ... }
>       run-before: launch
Geoff, I have some reservations re init.d style naming as well. Mostly due to being too fragile when having multiple layers of blueprints. I'm going to suggest one more alternative, somewhat similar to before/after. Instead of declaring when to run a task (before/after another task), just override the parent task, do the work you need to and reference the parent task explicitly.
effectors:
  start:
    launch: [my-custom-launch, $super.launch]
    my-custom-launch:
    - ssh: ....
Anyway this is getting sidetracked. I agree that the task based approach is a nice place to be longer term. Might go with extending the latches shorter term - need to think some more about it.
What do people think about introducing the callback behaviour for effectors though? I see it as an orthogonal concept to what's possible with composing tasks. It's a middle ground for implementing the semaphores where the latches are too limited and "semaphores as tasks" are still not here. Will make life easier for some more complex scenarios where one needs to sync with/influence other entities (examples in previous emails).
Svet.
> On 11.01.2017 г., at 18:50, Alex Heneveld <al...@cloudsoftcorp.com> wrote:
>
>
> Svet-
>
> On 11/01/2017 15:55, Svetoslav Neykov wrote:
>>> PS - as a near-term option if needed we could extend SoftwareProcess LATCH to do something special if the config/sensor it is given is a "Semaphore" type
>> What do you think the behaviour should be here - releasing the semaphore after the corresponding step completes or at the end of the wrapping effector? I think this is defined by the blueprint authors. And making this configurable adds even more complexity. Instead could invest in developing the above functionality.
>
> I'd probably release the latch semaphore right after the phase but I agree it's arbitrary and don't like that. Only suggested it if we need something very quick as it's isolated and probably fairly easy in the code.
>
> Note code exists and is run implicitly I think during the Install step (otherwise things like yum complain), along with I think the class is called SemaphoreWithOwner.
>
> I'd LOVE to invest in the task-based functionality. Code-wise it's not that far away with the type registry and YOML, but it needs a couple more people to become familiar with it!
>
> Lastly I meant to say -- someone mentioned flock but the bash *sem* command (aka parallel) is awesome though not as well known as it should be. This is an example used in zookeeper:
>
>
> sudo yum install parallel
>
> REMOVE_OLD_CMD="sed -i /server.*=/d zoo.cfg"
> ADD_NEW_CMD="cat >> zoo.cfg << EOF
> server.0=NEW
> server.1=NEW
> EOF
> "
>
> sem --id brooklyn --fg "$REMOVE_OLD_CMD ; $ADD_NEW_CMD"
>
>
> It runs the commands in the quotes on the last line while acquiring the semaphore with the given name "brooklyn" on the machine. You can configure semaphore counts and timeouts too.
>
> Best
> Alex
>
Re: [PROPOSAL] Controlling effectors concurrency
Posted by Alex Heneveld <al...@cloudsoftcorp.com>.
Svet-
On 11/01/2017 15:55, Svetoslav Neykov wrote:
>> PS - as a near-term option if needed we could extend SoftwareProcess LATCH to do something special if the config/sensor it is given is a "Semaphore" type
> What do you think the behaviour should be here - releasing the semaphore after the corresponding step completes or at the end of the wrapping effector? I think this is defined by the blueprint authors. And making this configurable adds even more complexity. Instead could invest in developing the above functionality.
I'd probably release the latch semaphore right after the phase but I
agree it's arbitrary and don't like that. Only suggested it if we need
something very quick as it's isolated and probably fairly easy in the code.
Note code exists and is run implicitly I think during the Install step
(otherwise things like yum complain), along with I think the class is
called SemaphoreWithOwner.
I'd LOVE to invest in the task-based functionality. Code-wise it's not
that far away with the type registry and YOML, but it needs a couple
more people to become familiar with it!
Lastly I meant to say -- someone mentioned flock but the bash *sem*
command (aka parallel) is awesome though not as well known as it should
be. This is an example used in zookeeper:
sudo yum install parallel
REMOVE_OLD_CMD="sed -i /server.*=/d zoo.cfg"
ADD_NEW_CMD="cat >> zoo.cfg << EOF
server.0=NEW
server.1=NEW
EOF
"
sem --id brooklyn --fg "$REMOVE_OLD_CMD ; $ADD_NEW_CMD"
It runs the commands in the quotes on the last line while acquiring the
semaphore with the given name "brooklyn" on the machine. You can
configure semaphore counts and timeouts too.
Best
Alex
Re: [PROPOSAL] Controlling effectors concurrency
Posted by Alex Heneveld <al...@cloudsoftcorp.com>.
Sam- Really nice!
Two thoughts. First, I think the default interpretation of a word should probably be a task in the registry rather than an effector. These can be extended, and could even be scoped to a catalog bundle/set (so eg "install" would define how to install something, and could be used in several blueprints in a group; "provision" is probably from core Brooklyn however). Most other tasks -- such as invoking an effector (or executing an ssh command or a rest call etc) -- should specify a task type eg `{ type: effector, name: launch }` (optionally passing args in that case). Some optional shorthands are that a single key could treat the key as the type (so one could write `{ effector: launch }` or `{ ssh: service start mysql }`), and if the item is a list, treat it as a sequence of task definitions (as you've done).
Secondly I had imagined various complex scopes for mutexes to ensure
automatic release but I wasn't enamored of that approach. Your mutexed /
subtasks is much cleaner. There may be times when the semaphore should
survive the task -- so worth having the explicit "acquire-semaphore" task
-- but agree w Svet and you the normal pattern should be simpler. To
elaborate on your launch example something like:
launch:
- ssh: yum install app
- with-semaphore:
    semaphore: $brooklyn:attributeWhenReady("semaphore.launch")
    subtasks:
    - ssh: service start app
    - wait-for-running
not completely sure where the semaphore itself lives -- the above is
assuming we add a new config/sensor type similar to "port", so that the
semaphore is guaranteed to be populated as a sensor, but a parent can
specify a shared semaphore as config if desired.
("with-semaphore" as a type could be a simple extension of a "try-finally"
task which has a "pre", "main", and "finally" block, with semantics that
"finally" is always invoked, even if main is cancelled or fails ... or
alternatively the semaphore class could be "owner-aware" so if someone else
requests it, it will check whether the owners are still alive, so if the
owner is a task and that task is cancelled, it acts as if the owner had
cleared it)
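The "owner-aware" variant in the last parenthetical could look roughly like the Java sketch below, where a new requester first checks whether the current owner is still alive and reclaims the permit if not. This is an illustrative assumption, not the real SemaphoreWithOwner class, and it is deliberately simplified (a real implementation would have to close the race between the liveness check and the acquire):

```java
import java.util.concurrent.Semaphore;
import java.util.function.BooleanSupplier;

// Hypothetical sketch of an "owner-aware" semaphore: if the holder's task
// has died or been cancelled, a new requester reclaims the permit instead
// of blocking forever. Names and structure are illustrative only.
class OwnerAwareSemaphore {
    private final Semaphore permit = new Semaphore(1);
    private volatile BooleanSupplier ownerAlive = () -> false;

    void acquire(BooleanSupplier callerAlive) throws InterruptedException {
        synchronized (this) {
            // Reclaim the permit if the previous owner is no longer alive.
            if (permit.availablePermits() == 0 && !ownerAlive.getAsBoolean()) {
                permit.release();
            }
        }
        permit.acquire();
        ownerAlive = callerAlive;
    }

    void release() {
        ownerAlive = () -> false;
        permit.release();
    }

    int availablePermits() {
        return permit.availablePermits();
    }
}
```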
Best
Alex
On 11 January 2017 at 16:20, Sam Corbett <sa...@cloudsoftcorp.com>
wrote:
> +1 to Alex's core suggestion. It would be a powerful extension to
> Brooklyn. We should not force blueprint authors into arbitrary structures
> that were only a good fit for someone else's scenario in the past.
>
> To lob another suggestion into the mix, I imagine writing:
>
> effectors:
>   # default named phases, not needed normally.
>   start: [provision, install, customise, launch]
>   # launch phase overridden to wrap latter half with mutex handling
>   launch:
>   - ssh-task-1
>   - mutexed:
>       subtasks:
>       - ssh-task-2
>       - wait-for-running
>
> This implies that each of the phases of start is another effector and that
> they are configurable objects. The `mutexed` task cleans up properly when
> its subtasks fail.
>
> Sam
>
>
>
> On 11/01/2017 15:55, Svetoslav Neykov wrote:
>
>> Alex,
>>
>> I like the task based approach and agree it's something we need to push
>> for. I'm not convinced that locking-like behaviour should be represented as
>> tasks though. It brings procedural style flavour to concurrency - for
>> example in your original example there's a separate tasks for acquiring and
>> releasing the lock. They could easily get misplaced, overwritten,
>> forgotten, etc. Not clear how releasing it works in case of errors.
>> What I'd prefer is more declarative approach where the tasks are grouped
>> and the locking requirements are applied on the group. At worst referencing
>> a start task and optionally an end task (with the default of the parent
>> task existing).
>>
>> Plugging into the lifecycle of entities has other use-cases. No matter
>> how an entity is defined - whether using the current monolithic START
>> effector or one composited of smaller tasks - there's no way to be notified
>> when its tasks get executed. Some concrete examples - the
>> "SystemServiceEnricher" which looks for the "launch" task and can be
>> applied on any "SoftwareProcessEnity"; An entity which needs to do some
>> cleanup based on the shutdown of another entity (DNS blueprints); latching
>> at arbitrary points during entity lifecycle; etc.
>>
>> One of the alternatives I mention in the original email (Effector
>> execution notifications) is a step in the "task based" approach direction.
>> It's still in Java, but we are splitting the monolith effectors into
>> smaller building blocks, which could in future get reused in YAML.
>>
>> So to summarise - +1 for tasks as building blocks, but still need
>> visibility into the executing blocks.
>>
>> PS - as a near-term option if needed we could extend SoftwareProcess
>>> LATCH to do something special if the config/sensor it is given is a
>>> "Semaphore" type
>>>
>> What do you think the behaviour should be here - releasing the semaphore
>> after the corresponding step completes or at the end of the wrapping
>> effector? I think this is defined by the blueprint authors. And making this
>> configurable adds even more complexity. Instead could invest in developing
>> the above functionality.
>>
>> Svet.
>>
>>
>>
>>> On 11.01.2017 г., at 16:25, Alex Heneveld <alex.heneveld@cloudsoftcorp.com> wrote:
>>>
>>>
>>> svet, all,
>>>
>>> lots of good points.
>>>
>>> the idea of "lifecycle phases" in our software process entities has
>>> grown to be something of a monster, in my opinion. they started as a small
>>> set of conventions but they've grown to the point where it's the primary
>>> hook for yaml and people are wanting "pre.pre.install".
>>>
>>> it's (a) misguided since for any value of N, N phases is too few and (b)
>>> opaque coming from a java superclass.
>>>
>>> a lot of things rely on the current SoftwareProcess so not saying we
>>> kill it, but for the community it would be much healthier (in terms of
>>> consumers) to allow multiple strategies and especially focus on *the
>>> reusability of tasks in YAML* -- eg "provision" or "install template files"
>>> or "create run dir" -- so people can write rich blueprints that don't
>>> require magic lifecycle phases from superclasses.
>>>
>>> the "init.d" numbered approach svet cites is one way these are wired
>>> together, with extensions (child types) able to insert new numbered steps
>>> or override with the number and label. but that would be one strategy,
>>> there might be simpler ones that are just a list, or other sequencing
>>> strategies like precondition/action/postcondition or
>>> runs-before/runs-after (where to extend something, you can say
>>> `my-pre-customize: { do: something, run-before: launch, run-after:
>>> customize }`).
>>>
>>> we're not sure exactly how that would look but wrt mutex logic the idea
>>> that adjuncts plug into the entity lifecycles feels wrong, like it's
>>> pushing for more standardisation in lifecycle phases where things can plug
>>> in. whereas with a task approach we can have effector definitions being
>>> explicit about synchronization, which i think is better. and if they want
>>> to make that configurable/pluggable they can do this and be explicit about
>>> how that is done (for instance it might take a config param, even a config
>>> param (or child or relation) which is an entity and call an effector on
>>> that if set).
>>>
>>> concretely in the examples i'm saying instead of ENRICHER APPROACH
>>>
>>>   memberSpec:
>>>     $brooklyn:entitySpec:
>>>       type: cluster-member
>>>       brooklyn.enrichers:
>>>       - type: org.apache.brooklyn.enricher.stock.AquirePermissionToProceed
>>>         brooklyn.config:
>>>           stage: post.provisioning
>>>           # lifecycle stage "post.provisioning" is part of cluster-member and
>>>           # the enricher above understands those stages
>>>
>>> we concentrate on a way to define tasks and extend them, so that we could instead have a TASK APPROACH:
>>>
>>>   memberSpec:
>>>     $brooklyn:entitySpec:
>>>       type: cluster-member
>>>       effectors:
>>>         start:
>>>           035-pre-launch-get-semaphore: { acquire-semaphore: ... }
>>>           # assume 040-launch is defined in the parent "start" yaml defn
>>>           # using a new hypothetical "initd" yaml-friendly task factory,
>>>           # acquire-semaphore is a straightforward task;
>>>           # scope can come from parent/ancestor task,
>>>           # also wanted to ensure mutex is not kept on errors
>>>
>>> or
>>>
>>>   memberSpec:
>>>     $brooklyn:entitySpec:
>>>       type: cluster-member
>>>       effectors:
>>>         start:
>>>           my-pre-launch:
>>>             task: { acquire-semaphore: ... }
>>>             run-before: launch
>>>             run-after: customize
>>>           # launch and customize defined in the parent "start" yaml defn,
>>>           # using a new hypothetical "ordered-labels" yaml-friendly task factory;
>>>           # acquire-semaphore and scope are as in the "initd" example
>>>
>>>
>>> both approaches need a little bit of time to get your head around the
>>> new concepts but the latter approach is much more powerful and general.
>>> there's a lot TBD but the point i'm making is that *if we make tasks easier
>>> to work with in yaml, it becomes more natural to express concurrency
>>> control as tasks*.
>>>
>>>
>>> PS - as a near-term option if needed we could extend SoftwareProcess
>>> LATCH to do something special if the config/sensor it is given is a
>>> "Semaphore" type
>>>
>>> best
>>> alex
>>>
>>>
>
Re: [PROPOSAL] Controlling effectors concurrency
Posted by Geoff Macartney <ge...@cloudsoftcorp.com>.
What about the case, though, where you want not to override the launch but
add 'mixin' behaviour?
I guess this is where Alex's 'run-before' etc. comes in (I'm not so keen on
the "initd" style of notation).
For run-before etc. to work you'd need to be able to have names for each
member of a sequence of tasks, but you practically have that above anyway,
if you make the effectors a list of key/action pairs, i.e. instead of
effectors:
start:
035-pre-launch-get-semaphore: { acquire-semaphore: { scope:
$brooklyn:parent(), name: "node-launch" } }
040-launch: { ssh: "service cassandra start" }
045-confirm-service-up: { wait: { sensor: service.inCluster, timeout:
20m } }
050-finish-release-semaphore: semaphore-release
you have
effectors:
  start:
    - pre-launch-get-semaphore: { acquire-semaphore: { scope: $brooklyn:parent(), name: "node-launch" } }
    - launch: { ssh: "service cassandra start" }
    - confirm-service-up: { wait: { sensor: service.inCluster, timeout: 20m } }
    - finish-release-semaphore: semaphore-release
and then a child "mixes-in" via:
effectors:
  start:
    my-pre-launch:
      task: { acquire-semaphore: ... }
      run-before: launch
But I think this still needs a lot of thought, especially if you are going
to have an entity inheriting from a whole hierarchy of super entities -
whose "launch" are you talking about above? How is this big graph of tasks
for each entity going to be ordered?
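[Editorial sketch: Geoff's ordering question can be answered with a plain topological sort over the run-before/run-after constraints. The task names and the `order` helper below are hypothetical illustrations, not Brooklyn API.]

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Parent-defined sequence: each task runs before the next.
parent = ["pre-launch-get-semaphore", "launch",
          "confirm-service-up", "finish-release-semaphore"]

# Child mix-in, as in Geoff's example: my-pre-launch has run-before: launch.
mixins = {"my-pre-launch": {"run-before": "launch"}}

def order(parent_seq, mixins):
    ts = TopologicalSorter()
    # Chain the parent's tasks in their declared order:
    # each task depends on the one before it.
    for before, after in zip(parent_seq, parent_seq[1:]):
        ts.add(after, before)
    # Apply the mix-in constraints against the named parent tasks.
    for name, c in mixins.items():
        if c.get("run-after") in parent_seq:
            ts.add(name, c["run-after"])        # name depends on run-after
        if c.get("run-before") in parent_seq:
            ts.add(c["run-before"], name)       # run-before depends on name
    return list(ts.static_order())

steps = order(parent, mixins)
```

A cycle (e.g. two mix-ins each declaring run-before the other's anchor in conflicting ways) would raise `graphlib.CycleError`, which is one concrete answer to "how is this big graph going to be ordered": it isn't, and the blueprint is rejected at validation time.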
Re: [PROPOSAL] Controlling effectors concurrency
Posted by Sam Corbett <sa...@cloudsoftcorp.com>.
+1 to Alex's core suggestion. It would be a powerful extension to
Brooklyn. We should not force blueprint authors into arbitrary
structures that were only a good fit for someone else's scenario in the
past.
To lob another suggestion into the mix, I imagine writing:
effectors:
  # default named phases, not needed normally.
  start: [provision, install, customise, launch]
  # launch phase overridden to wrap latter half with mutex handling
  launch:
    - ssh-task-1
    - mutexed:
        subtasks:
          - ssh-task-2
          - wait-for-running
This implies that each of the phases of start is another effector and
that they are configurable objects. The `mutexed` task cleans up
properly when its subtasks fail.
Sam
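[Editorial sketch: the key property of Sam's `mutexed` wrapper is that the lock is always released, even when a subtask fails. A minimal illustration; `MutexedTask` and the task names are hypothetical, not Brooklyn API.]

```python
import threading

class MutexedTask:
    """Run subtasks while holding a lock; release on any exit path."""
    def __init__(self, lock, subtasks):
        self.lock = lock
        self.subtasks = subtasks

    def run(self):
        with self.lock:            # acquired here, released on normal or
            for task in self.subtasks:   # exceptional exit alike
                task()             # an exception propagates; lock still freed

launch_lock = threading.Lock()
done = []

def ssh_task_2(): done.append("ssh-task-2")
def wait_for_running(): done.append("wait-for-running")

MutexedTask(launch_lock, [ssh_task_2, wait_for_running]).run()
```

The `with` block is the whole design: "cleans up properly when its subtasks fail" falls out of scoping the mutex to the task group rather than having separate acquire/release tasks that can be misplaced.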
Re: [PROPOSAL] Controlling effectors concurrency
Posted by Svetoslav Neykov <sv...@cloudsoftcorp.com>.
Alex,
I like the task-based approach and agree it's something we need to push for. I'm not convinced that locking-like behaviour should be represented as tasks though. It brings a procedural flavour to concurrency - for example, in your original example there are separate tasks for acquiring and releasing the lock. They could easily get misplaced, overwritten, forgotten, etc. It's also not clear how the release works in case of errors.
What I'd prefer is a more declarative approach where the tasks are grouped and the locking requirements are applied to the group - at worst referencing a start task and optionally an end task (with the parent task as the default).
Plugging into the lifecycle of entities has other use-cases. No matter how an entity is defined - whether using the current monolithic START effector or one composed of smaller tasks - there's no way to be notified when its tasks get executed. Some concrete examples: the "SystemServiceEnricher", which looks for the "launch" task and can be applied to any "SoftwareProcessEntity"; an entity which needs to do some cleanup based on the shutdown of another entity (DNS blueprints); latching at arbitrary points during the entity lifecycle; etc.
One of the alternatives I mention in the original email (effector execution notifications) is a step in the direction of the task-based approach. It's still in Java, but we'd be splitting the monolithic effectors into smaller building blocks, which could in future be reused in YAML.
So to summarise: +1 for tasks as building blocks, but we still need visibility into the executing blocks.
> PS - as a near-term option if needed we could extend SoftwareProcess LATCH to do something special if the config/sensor it is given is a "Semaphore" type
What do you think the behaviour should be here - releasing the semaphore after the corresponding step completes, or at the end of the wrapping effector? I think this should be decided by the blueprint authors, and making it configurable adds even more complexity. Instead we could invest in developing the above functionality.
Svet.
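[Editorial sketch: the two release policies Svet contrasts differ only in how far the acquire/release scope extends. A minimal illustration with a stdlib semaphore; the step names and `held` helper are hypothetical, not Brooklyn API.]

```python
import threading
from contextlib import contextmanager

sem = threading.Semaphore(2)   # e.g. at most 2 members launching at once
events = []

@contextmanager
def held(semaphore):
    semaphore.acquire()
    try:
        yield
    finally:
        semaphore.release()    # released even if the guarded step fails

def start_release_after_step():
    # Option 1: hold the semaphore only across the latched step.
    with held(sem):
        events.append("launch")
    events.append("post-launch")   # runs without the semaphore

def start_release_at_end():
    # Option 2: hold it until the wrapping effector finishes.
    with held(sem):
        events.append("launch")
        events.append("post-launch")

start_release_after_step()
start_release_at_end()
```

Either way the `finally` guarantees release on errors; the open question in the thread is purely where the scope boundary should sit, which is why making it a blueprint-author decision (rather than a LATCH config knob) is attractive.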
Re: [PROPOSAL] Controlling effectors concurrency
Posted by Alex Heneveld <al...@cloudsoftcorp.com>.
svet, all,
lots of good points.
the idea of "lifecycle phases" in our software process entities has
grown to be something of a monster, in my opinion. they started as a
small set of conventions but they've grown to the point where it's the
primary hook for yaml and people are wanting "pre.pre.install".
it's (a) misguided since for any value of N, N phases is too few and (b)
opaque coming from a java superclass.
a lot of things rely on the current SoftwareProcess so not saying we
kill it, but for the community it would be much healthier (in terms of
consumers) to allow multiple strategies and especially focus on *the
reusability of tasks in YAML* -- eg "provision" or "install template
files" or "create run dir" -- so people can write rich blueprints that
don't require magic lifecycle phases from superclasses.
the "init.d" numbered approach svet cites is one way these are wired
together, with extensions (child types) able to insert new numbered
steps or override with the number and label. but that would be one
strategy, there might be simpler ones that are just a list, or other
sequencing strategies like precondition/action/postcondition or
runs-before/runs-after (where to extend something, you can say
`my-pre-customize: { do: something, run-before: launch, run-after:
customize }`).
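[Editorial sketch: the "init.d" numbered merge described above - a child type overrides a parent step by reusing its number+label, or inserts a new number between existing ones. `merge_steps` and the step definitions are hypothetical illustrations, not Brooklyn API.]

```python
def merge_steps(parent, child):
    """Merge numbered task maps: same "NNN-label" key overrides,
    a new key is inserted; execution order is by key (numeric prefix)."""
    merged = dict(parent)
    merged.update(child)
    return dict(sorted(merged.items()))

parent_start = {
    "040-launch": "ssh: service cassandra start",
    "050-confirm-service-up": "wait: { sensor: service.isUp }",
}
child_start = {
    "035-pre-launch-get-semaphore": "acquire-semaphore: ...",  # inserted
    "040-launch": "ssh: service cassandra start -R",           # overridden
}

steps = merge_steps(parent_start, child_start)
```

The zero-padded numbers make plain lexical sort give the intended order, which is exactly the init.d convention; the label part keeps override-by-key readable.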
we're not sure exactly how that would look but wrt mutex logic the idea
that adjuncts plug into the entity lifecycles feels wrong, like it's
pushing for more standardisation in lifecycle phases where things can
plug in. whereas with a task approach we can have effector definitions
being explicit about synchronization, which i think is better. and if
they want to make that configurable/pluggable they can do this and be
explicit about how that is done (for instance it might take a config
param, even a config param (or child or relation) which is an entity and
call an effector on that if set).
concretely in the examples i'm saying instead of ENRICHER APPROACH
memberSpec:
  $brooklyn:entitySpec:
    type: cluster-member
    brooklyn.enrichers:
      - type: org.apache.brooklyn.enricher.stock.AquirePermissionToProceed
        brooklyn.config:
          stage: post.provisioning
        # lifecycle stage "post.provisioning" is part of cluster-member and
        # the enricher above understands those stages
we concentrate on a way to define tasks and extend them, so that we
could instead have a TASK APPROACH:
memberSpec:
  $brooklyn:entitySpec:
    type: cluster-member
    effectors:
      start:
        035-pre-launch-get-semaphore: { acquire-semaphore: ... }
        # assume 040-launch is defined in the parent "start" yaml defn
        # using a new hypothetical "initd" yaml-friendly task factory,
        # acquire-semaphore is a straightforward task;
        # scope can come from parent/ancestor task,
        # also wanted to ensure mutex is not kept on errors
or
memberSpec:
  $brooklyn:entitySpec:
    type: cluster-member
    effectors:
      start:
        my-pre-launch:
          task: { acquire-semaphore: ... }
          run-before: launch
          run-after: customize
        # launch and customize defined in the parent "start" yaml defn,
        # using a new hypothetical "ordered-labels" yaml-friendly task factory;
        # acquire-semaphore and scope are as in the "initd" example
both approaches need a little bit of time to get your head around the
new concepts but the latter approach is much more powerful and general.
there's a lot TBD but the point i'm making is that *if we make tasks
easier to work with in yaml, it becomes more natural to express
concurrency control as tasks*.
PS - as a near-term option if needed we could extend SoftwareProcess
LATCH to do something special if the config/sensor it is given is a
"Semaphore" type
best
alex