Posted to dev@ariatosca.apache.org by mx...@apache.org on 2017/06/14 14:33:27 UTC

[1/5] incubator-ariatosca git commit: ARIA-166 Update README file [Forced Update!]

Repository: incubator-ariatosca
Updated Branches:
  refs/heads/ARIA-278-Remove-core-tasks fc05b65d4 -> 546e7af7a (forced update)


ARIA-166 Update README file


Project: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/commit/22f6e9ef
Tree: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/tree/22f6e9ef
Diff: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/diff/22f6e9ef

Branch: refs/heads/ARIA-278-Remove-core-tasks
Commit: 22f6e9efd5300f33bee51a1c1622c22b1531bbf5
Parents: 5afa2f7
Author: Ran Ziv <ra...@gigaspaces.com>
Authored: Thu Jun 8 18:26:29 2017 +0300
Committer: Ran Ziv <ra...@gigaspaces.com>
Committed: Mon Jun 12 19:08:44 2017 +0300

----------------------------------------------------------------------
 README.md | 231 +++++++++++++++++++--------------------------------------
 1 file changed, 75 insertions(+), 156 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/22f6e9ef/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index e534645..6aee414 100644
--- a/README.md
+++ b/README.md
@@ -1,201 +1,120 @@
 ARIA
 ====
 
-[![Build Status](https://travis-ci.org/apache/incubator-ariatosca.svg?branch=master)](https://travis-ci.org/apache/incubator-ariatosca)
-[![Appveyor Build Status](https://ci.appveyor.com/api/projects/status/ltv89jk63ahiu306?svg=true)](https://ci.appveyor.com/project/ApacheSoftwareFoundation/incubator-ariatosca/history)
-[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
+[![Build Status](https://img.shields.io/travis/apache/incubator-ariatosca/master.svg)](https://travis-ci.org/apache/incubator-ariatosca)
+[![Appveyor Build Status](https://img.shields.io/appveyor/ci/ApacheSoftwareFoundation/incubator-ariatosca/master.svg)](https://ci.appveyor.com/project/ApacheSoftwareFoundation/incubator-ariatosca/history)
+[![License](https://img.shields.io/github/license/apache/incubator-ariatosca.svg)](http://www.apache.org/licenses/LICENSE-2.0)
+[![PyPI release](https://img.shields.io/pypi/v/ariatosca.svg)](https://pypi.python.org/pypi/ariatosca)
+![Python Versions](https://img.shields.io/pypi/pyversions/ariatosca.svg)
+![Wheel](https://img.shields.io/pypi/wheel/ariatosca.svg)
+![Contributors](https://img.shields.io/github/contributors/apache/incubator-ariatosca.svg)
+[![Open Pull Requests](https://img.shields.io/github/issues-pr/apache/incubator-ariatosca.svg)](https://github.com/apache/incubator-ariatosca/pulls)
+[![Closed Pull Requests](https://img.shields.io/github/issues-pr-closed-raw/apache/incubator-ariatosca.svg)](https://github.com/apache/incubator-ariatosca/pulls?q=is%3Apr+is%3Aclosed)
 
 
-[ARIA](http://ariatosca.org/) is a minimal TOSCA orchestrator, as well as a platform for building
-TOSCA-based products. Its features can be accessed via a well-documented Python API.
+What is ARIA?
+----------------
 
-On its own, ARIA provides built-in tools for blueprint validation and for creating ready-to-run
-service instances. 
+[ARIA](http://ariatosca.incubator.apache.org/) is an open-source, [TOSCA](https://www.oasis-open.org/committees/tosca/)-based, lightweight library and CLI for orchestration, designed both for standalone use and for consumption by projects building TOSCA-based solutions for resource and service orchestration.
 
-ARIA adheres strictly and meticulously to the
-[TOSCA Simple Profile v1.0 cos01 specification](http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/cos01/TOSCA-Simple-Profile-YAML-v1.0-cos01.html),
-providing state-of-the-art validation at seven different levels:
+ARIA can be utilized by any organization that wants to implement TOSCA-based orchestration in its solutions, whether for a multi-cloud enterprise application or for an NFV or SDN solution spanning multiple virtual infrastructure managers.
 
-<ol start="0">
-<li>Platform errors. E.g. network, hardware, or even an internal bug in ARIA (let us know,
-	please!).</li>
-<li>Syntax and format errors. E.g. non-compliant YAML, XML, JSON.</li>
-<li>Field validation. E.g. assigning a string where an integer is expected, using a list instead of
-	a dict.</li>
-<li>Relationships between fields within a type. This is "grammar" as it applies to rules for
-    setting the values of fields in relation to each other.</li>
-<li>Relationships between types. E.g. referring to an unknown type, causing a type inheritance
-    loop.</li>
-<li>Topology. These errors happen if requirements and capabilities cannot be matched in order to
-	assemble a valid topology.</li>
-<li>External dependencies. These errors happen if requirement/capability matching fails due to
-    external resources missing, e.g. the lack of a valid virtual machine, API credentials, etc.
-    </li> 
-</ol>
+With ARIA, you can utilize TOSCA's cloud portability out of the box to develop, test and run your applications, from template to deployment.
 
-Validation errors include a plain English message and when relevant the exact location (file, row,
-column) of the data the caused the error.
+ARIA is an incubation project under the [Apache Software Foundation](https://www.apache.org/).
 
-The ARIA API documentation always links to the relevant section of the specification, and likewise
-we provide an annotated version of the specification that links back to the API documentation.
 
+Installation
+----------------
 
-Quick Start
------------
+ARIA is [available on PyPI](https://pypi.python.org/pypi/ariatosca).    
 
-You need Python 2.6 or 2.7. Python 3+ is not currently supported.
+To install ARIA directly from PyPI (using a `wheel`), use:
 
-To install, we recommend using [pip](https://pip.pypa.io/) and a
-[virtualenv](https://virtualenv.pypa.io/en/stable/).
+    pip install aria
 
-In Debian-based systems:
 
-	sudo apt install python-setuptools
-	sudo -H easy_install pip
-	sudo -H pip install virtualenv
-	virtualenv env
+To install ARIA from source, download the source tarball from [PyPI](https://pypi.python.org/pypi/ariatosca),
+extract it, and then run the following from inside the extracted directory:
 
-Or in Archlinux-based systems:
+    pip install .
 
-	pacman -S python2 python-setuptools python-pip
-	pip install virtualenv
-	virtualenv env -p $(type -p python2)
+The source package comes along with relevant examples, documentation,
+`requirements.txt` (for installing the exact frozen dependency versions against which ARIA was tested), and more.
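+
+For example, a typical source installation that first installs the frozen
+dependency versions might look like this (a sketch, assuming `requirements.txt`
+sits at the top of the extracted directory):
+
+    pip install -r requirements.txt
+    pip install .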
 
-To install the latest development snapshot of ARIA:
+Note that for the `pip install` commands mentioned above, you must either run as a privileged user or work inside a virtualenv.
 
-	. env/bin/activate
-	pip install git+http://git-wip-us.apache.org/repos/asf/incubator-ariatosca.git
+ARIA itself ships as a `wheel` compatible with all platforms.
+Some of its dependencies, however, may require compilation on a given platform, and therefore some system packages may be required as well.
 
-To test it, let's create a service instance from a TOSCA blueprint:
+On Ubuntu or other Debian-based systems:
 
-	aria parse blueprints/tosca/node-cellar/node-cellar.yaml
-	
-You can also get it in JSON or YAML formats:
+	sudo apt install python-setuptools python-dev build-essential libssl-dev libffi-dev
 
-	aria parse blueprints/tosca/node-cellar/node-cellar.yaml --json
+On Archlinux:
 
-Or get an overview of the relationship graph:
+	sudo pacman -S python-setuptools
 
-	aria parse blueprints/tosca/node-cellar/node-cellar.yaml --graph
 
-You can provide inputs as JSON, overriding default values provided in the blueprint
+ARIA requires Python 2.6/2.7. Python 3+ is currently not supported.
 
-	aria parse blueprints/tosca/node-cellar/node-cellar.yaml --inputs='{"openstack_credential": {"user": "username"}}'
 
-Instead of providing them explicitly, you can also provide them in a file or URL, in either JSON or
-YAML. If you do so, the value must end in ".json" or ".yaml":
+Getting Started
+---------------
 
-	aria parse blueprints/tosca/node-cellar/node-cellar.yaml --inputs=blueprints/tosca/node-cellar/inputs.yaml
+This section describes how to run a simple "Hello World" example.
 
+First, store the "hello world" service template with ARIA and give it a name (e.g. `my-service-template`):
 
-CLI
----
+	aria service-templates store examples/hello-world/helloworld.yaml my-service-template
+	
+Now create a service based on this service template and name it (e.g. `my-service`):
+	
+	aria services create my-service -t my-service-template
+	
+Finally, start an `install` workflow execution on `my-service` like so:
 
-Though ARIA is fully exposed as an API, it also comes with a CLI tool to allow you to work from the
-shell:
+	aria executions start install -s my-service
 
-	aria parse blueprints/tosca/node-cellar/node-cellar.yaml instance
+You should now have a simple web server running on your local machine.
+You can try visiting http://localhost:9090 to view your deployed application.
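+
+For example, you can also check that the server is up from the command line
+(a quick sketch, assuming `curl` is installed):
+
+	curl http://localhost:9090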
 
-The `parse` command supports the following directives to create variations of the default consumer
-chain:
+To uninstall and clean your environment, follow these steps:
 
-* `presentation`: emits a colorized textual representation of the Python presentation classes
-   wrapping the blueprint.
-* `model`: emits a colorized textual representation of the complete service model derived from the
-   validated blueprint. This includes all the node templates, with their requirements satisfied at
-   the level of relating to other node templates.
-* `types`: emits a colorized textual representation of the type hierarchies.
-* `instance`: **this is the default command**; emits a colorized textual representation of a
-   service instance instantiated from the service model. Here the node templates are each used to
-   create one or more nodes, with the appropriate relationships between them. Note that every time
-   you run this consumer, you will get a different set of node IDs. Use `--graph` to see just the
-   node relationship graph.
-   
-For all these commands, you can also use `--json` or `--yaml` flags to emit in those formats.
+    aria executions start uninstall -s my-service
+    aria services delete my-service
+    aria service-templates delete my-service-template
 
-Additionally, The CLI tool lets you specify the complete classname of your own custom consumer to
-chain at the end of the default consumer chain, after `instance`.
 
-Your custom consumer can be an entry point into a powerful TOSCA-based tool or application, such as
-an orchestrator, a graphical modeling tool, etc.
+Contribution
+------------
 
+You are welcome and encouraged to participate and contribute to the ARIA project.
 
-Development
------------
+Please see our guide to [Contributing to ARIA](https://cwiki.apache.org/confluence/display/ARIATOSCA/Contributing+to+ARIA).
 
-Instead of installing with `pip`, it would be easier to work directly with the source files:
+Feel free to also provide feedback on the mailing lists (see [Resources](#user-content-resources) section).
 
-	pip install virtualenv
-	virtualenv env
-	. env/bin/activate
-	git clone http://git-wip-us.apache.org/repos/asf/incubator-ariatosca.git ariatosca
-	cd ariatosca
-	pip install -e .
 
-To run tests:
+Resources
+---------
 
-	pip install tox
-	tox
+* [ARIA homepage](http://ariatosca.incubator.apache.org/)
+* [ARIA wiki](https://cwiki.apache.org/confluence/display/AriaTosca)
+* [Issue tracker](https://issues.apache.org/jira/browse/ARIA)
 
-Here's a quick example of using the API to parse YAML text into a service instance:
+* Dev mailing list: dev@ariatosca.incubator.apache.org
+* User mailing list: user@ariatosca.incubator.apache.org
 
-	from aria import install_aria_extensions
-	from aria.parser.consumption import ConsumptionContext, ConsumerChain, Read, Validate, Model, Instance
-	from aria.parser.loading import LiteralLocation
-	
-	def parse_text(payload, file_search_paths=[]):
-	    context = ConsumptionContext()
-	    context.presentation.location = LiteralLocation(payload)
-	    context.loading.file_search_paths += file_search_paths
-	    ConsumerChain(context, (Read, Validate, Model, Instance)).consume()
-	    if not context.validation.dump_issues():
-	        return context.modeling.instance
-	    return None
-	
-	install_aria_extensions()
-
-	print parse_text("""
-	tosca_definitions_version: tosca_simple_yaml_1_0
-	topology_template:
-	  node_templates:
-	    MyNode:
-	      type: tosca.nodes.Compute 
-	""")
-
-
-Parser API Architecture
------------------------
-
-ARIA's parsing engine comprises individual "consumers" (in the `aria.parser.consumption` package)
-that do things with blueprints. When chained together, each performs a different task, adds its own
-validations, and can provide its own output.
-
-Parsing happens in five phases, represented in five packages:
-
-* `aria.parser.loading`: Loaders are used to read the TOSCA data, usually as text. For example
-  UriTextLoader will load text from URIs (including files).
-* `aria.parser.reading`: Readers convert data from the loaders into agnostic raw data. For
-  example, `YamlReader` converts YAML text into Python dicts, lists, and primitives.
-* `aria.parser.presentation`: Presenters wrap the agnostic raw data in a nice
-  Python facade (a "presentation") that makes it much easier to work with the data, including
-  utilities for validation, querying, etc. Note that presenters are _wrappers_: the agnostic raw
-  data is always maintained intact, and can always be accessed directly or written back to files.
-* `aria.parser.modeling.model`: Here the topology is normalized into a coherent structure of
-  node templates, requirements, and capabilities. Types are inherited and properties are assigned.
-  The service model is a _new_ structure, which is not mapped to the YAML. In fact, it is possible
-  to generate the model programmatically, or from a DSL parser other than TOSCA.
-* `aria.parser.modeling.instance`: The service instance is an instantiated service model. Node
-  templates turn into node instances (with unique IDs), and requirements are satisfied by matching
-  them to capabilities. This is where level 5 validation errors are detected (see above).
-
-The phases do not have to be used in order. Indeed, consumers do not have to be used at all: ARIA
-can be used to _produce_ blueprints. For example, it is possible to fill in the
-`aria.parser.presentation` classes programmatically, in Python, and then write the presentation
-to a YAML file as compliant TOSCA. The same technique can be used to convert from one DSL (consume
-it) to another (write it).
-
-The term "agnostic raw data" (ARD?) appears often in the documentation. It denotes data structures
-comprising _only_ Python dicts, lists, and primitives, such that they can always be converted to and
-from language-agnostic formats such as YAML, JSON, and XML. A considerable effort has been made to
-conserve the agnostic raw data at all times. Thus, though ARIA makes good use of the dynamic power
-of Python, you will _always_ be able to use ARIA with other systems.
+Subscribe by sending a mail to `<group>-subscribe@ariatosca.incubator.apache.org` (e.g. `dev-subscribe@ariatosca.incubator.apache.org`).
+For more information on how to subscribe to Apache mailing lists, see [here](https://www.apache.org/foundation/mailinglists.html).
+
+For past correspondence, see the [dev mailing list archive](http://mail-archives.apache.org/mod_mbox/incubator-ariatosca-dev/).
+
+
+License
+-------
+ARIA is licensed under the [Apache License 2.0](https://github.com/apache/incubator-ariatosca/blob/master/LICENSE).


[2/5] incubator-ariatosca git commit: ARIA-275 Update NFV profile to csd04

Posted by mx...@apache.org.
ARIA-275 Update NFV profile to csd04

This update was done according to:
http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html

We resolved some inconsistencies between csd04 and the TOSCA spec, and within csd04 itself. Wherever we resolved
such inconsistencies, we added a detailed comment describing our reasoning.


Project: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/commit/1e883c57
Tree: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/tree/1e883c57
Diff: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/diff/1e883c57

Branch: refs/heads/ARIA-278-Remove-core-tasks
Commit: 1e883c57abb733b10e13f0b7005cf564886d3fb1
Parents: 22f6e9e
Author: Avia Efrat <av...@gigaspaces.com>
Authored: Sun Jun 4 22:11:10 2017 +0300
Committer: Avia Efrat <av...@gigaspaces.com>
Committed: Mon Jun 12 22:24:04 2017 +0300

----------------------------------------------------------------------
 .../profiles/tosca-simple-1.0/artifacts.yaml    |   8 +-
 .../profiles/tosca-simple-1.0/capabilities.yaml |   2 +-
 .../profiles/tosca-simple-1.0/data.yaml         |   2 +-
 .../profiles/tosca-simple-1.0/groups.yaml       |   2 +-
 .../profiles/tosca-simple-1.0/interfaces.yaml   |   2 +-
 .../profiles/tosca-simple-1.0/nodes.yaml        |   2 +-
 .../profiles/tosca-simple-1.0/policies.yaml     |  10 +-
 .../tosca-simple-1.0/relationships.yaml         |   2 +-
 .../tosca-simple-nfv-1.0/artifacts.yaml         |  84 +++++
 .../tosca-simple-nfv-1.0/capabilities.yaml      |  99 ++----
 .../profiles/tosca-simple-nfv-1.0/data.yaml     | 305 ++++++++++++++---
 .../profiles/tosca-simple-nfv-1.0/groups.yaml   |  56 ----
 .../profiles/tosca-simple-nfv-1.0/nodes.yaml    | 323 ++++++++++++-------
 .../tosca-simple-nfv-1.0/relationships.yaml     |  37 +--
 .../tosca-simple-nfv-1.0.yaml                   |   2 +-
 15 files changed, 604 insertions(+), 332 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/artifacts.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/artifacts.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/artifacts.yaml
index af99340..cfb0df5 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/artifacts.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/artifacts.yaml
@@ -17,7 +17,7 @@ artifact_types:
 
   tosca.artifacts.Root:
     _extensions:
-      shorthand_name: Root # ARIA NOTE: ommitted in the spec
+      shorthand_name: Root # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Root
       specification: tosca-simple-1.0
       specification_section: 5.3.1
@@ -41,7 +41,7 @@ artifact_types:
   
   tosca.artifacts.Deployment:
     _extensions:
-      shorthand_name: Deployment # ARIA NOTE: ommitted in the spec
+      shorthand_name: Deployment # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Deployment
       specification: tosca-simple-1.0
       specification_section: 5.3.3.1
@@ -67,7 +67,7 @@ artifact_types:
   
   tosca.artifacts.Deployment.Image.VM:
     _extensions:
-      shorthand_name: Deployment.VM # ARIA NOTE: ommitted in the spec
+      shorthand_name: Deployment.VM # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Deployment.VM
       specification: tosca-simple-1.0
       specification_section: 5.3.3.4
@@ -85,7 +85,7 @@ artifact_types:
   
   tosca.artifacts.Implementation:
     _extensions:
-      shorthand_name: Implementation # ARIA NOTE: ommitted in the spec
+      shorthand_name: Implementation # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Implementation
       specification: tosca-simple-1.0
       specification_section: 5.3.4.1

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/capabilities.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/capabilities.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/capabilities.yaml
index 0b81a16..30abe10 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/capabilities.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/capabilities.yaml
@@ -17,7 +17,7 @@ capability_types:
 
   tosca.capabilities.Root:
     _extensions:
-      shorthand_name: Root # ARIA NOTE: ommitted in the spec
+      shorthand_name: Root # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Root
       specification: tosca-simple-1.0
       specification_section: 5.4.1

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/data.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/data.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/data.yaml
index 5210aa0..771a969 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/data.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/data.yaml
@@ -95,7 +95,7 @@ data_types:
 
   tosca.datatypes.Root:
     _extensions:
-      shorthand_name: Root # ARIA NOTE: ommitted in the spec
+      shorthand_name: Root # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Root
       specification: tosca-simple-1.0
       specification_section: 5.2.1

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/groups.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/groups.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/groups.yaml
index 31cfc55..66cc25f 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/groups.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/groups.yaml
@@ -17,7 +17,7 @@ group_types:
 
   tosca.groups.Root:
     _extensions:
-      shorthand_name: Root # ARIA NOTE: ommitted in the spec
+      shorthand_name: Root # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Root
       specification: tosca-simple-1.0
       specification_section: 5.9.1

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/interfaces.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/interfaces.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/interfaces.yaml
index 1e83ef9..473bd98 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/interfaces.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/interfaces.yaml
@@ -17,7 +17,7 @@ interface_types:
 
   tosca.interfaces.Root:
     _extensions:
-      shorthand_name: Root # ARIA NOTE: ommitted in the spec
+      shorthand_name: Root # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Root
       specification: tosca-simple-1.0
       specification_section: 5.7.3

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/nodes.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/nodes.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/nodes.yaml
index bb33b6f..1d2fe90 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/nodes.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/nodes.yaml
@@ -214,7 +214,7 @@ node_types:
   
   tosca.nodes.DBMS:
     _extensions:
-      shorthand_name: DBMS # ARIA NOTE: ommitted in the spec
+      shorthand_name: DBMS # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:DBMS
       specification: tosca-simple-1.0
       specification_section: 5.8.6

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/policies.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/policies.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/policies.yaml
index 015d2b0..c65e38b 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/policies.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/policies.yaml
@@ -17,7 +17,7 @@ policy_types:
 
   tosca.policies.Root:
     _extensions:
-      shorthand_name: Root # ARIA NOTE: ommitted in the spec
+      shorthand_name: Root # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Root
       specification: tosca-simple-1.0
       specification_section: 5.10.1
@@ -27,7 +27,7 @@ policy_types:
   
   tosca.policies.Placement:
     _extensions:
-      shorthand_name: Placement # ARIA NOTE: ommitted in the spec
+      shorthand_name: Placement # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Placement
       specification: tosca-simple-1.0
       specification_section: 5.10.2
@@ -38,7 +38,7 @@ policy_types:
   
   tosca.policies.Scaling:
     _extensions:
-      shorthand_name: Scaling # ARIA NOTE: ommitted in the spec
+      shorthand_name: Scaling # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Scaling
       specification: tosca-simple-1.0
       specification_section: 5.10.3
@@ -49,7 +49,7 @@ policy_types:
   
   tosca.policies.Update:
     _extensions:
-      shorthand_name: Update # ARIA NOTE: ommitted in the spec
+      shorthand_name: Update # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Update
       specification: tosca-simple-1.0
       specification_section: 5.10.4
@@ -60,7 +60,7 @@ policy_types:
   
   tosca.policies.Performance:
     _extensions:
-      shorthand_name: Performance # ARIA NOTE: ommitted in the spec
+      shorthand_name: Performance # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Performance
       specification: tosca-simple-1.0
       specification_section: 5.10.5

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/relationships.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/relationships.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/relationships.yaml
index 6ea4d12..b9d3176 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/relationships.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-1.0/relationships.yaml
@@ -17,7 +17,7 @@ relationship_types:
 
   tosca.relationships.Root:
     _extensions:
-      shorthand_name: Root # ARIA NOTE: ommitted in the spec
+      shorthand_name: Root # ARIA NOTE: omitted in the spec
       type_qualified_name: tosca:Root
       specification: tosca-simple-1.0
       specification_section: 5.6.1

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/artifacts.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/artifacts.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/artifacts.yaml
new file mode 100644
index 0000000..2427d9f
--- /dev/null
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/artifacts.yaml
@@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+artifact_types:
+
+  tosca.artifacts.nfv.SwImage:
+    _extensions:
+      shorthand_name: SwImage
+      type_qualified_name: tosca:SwImage
+      specification: tosca-simple-nfv-1.0
+      specification_section: 5.4.1
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896067'
+    derived_from: tosca.artifacts.Deployment.Image
+    properties:
+      name:
+        description: >-
+          Name of this software image.
+        type: string
+        required: true
+      version:
+        description: >-
+          Version of this software image.
+        type: string
+        required: true
+      checksum:
+        description: >-
+          Checksum of the software image file.
+        type: string
+      container_format:
+        description: >-
+          The container format describes the container file format in which the software image is
+          provided.
+        type: string
+        required: true
+      disk_format:
+        description: >-
+          The disk format of a software image is the format of the underlying disk image.
+        type: string
+        required: true
+      min_disk:
+        description: >-
+          The minimal disk size requirement for this software image.
+        type: scalar-unit.size
+        required: true
+      min_ram:
+        description: >-
+          The minimal RAM requirement for this software image.
+        type: scalar-unit.size
+        required: false
+      size: # ARIA NOTE: section [5.4.1.1 Properties] calls this field 'Size'
+        description: >-
+          The size of this software image.
+        type: scalar-unit.size
+        required: true
+      sw_image:
+        description: >-
+          A reference to the actual software image within the VNF Package, or a URL.
+        type: string
+        required: true
+      operating_system:
+        description: >-
+          Identifies the operating system used in the software image.
+        type: string
+        required: false
+      supported_virtualization_environment:
+        description: >-
+          Identifies the virtualization environments (e.g. hypervisor) compatible with this software
+          image.
+        type: list
+        entry_schema:
+          type: string
+        required: false
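+
+# Illustrative (non-normative) sketch of how a node template might attach this
+# artifact type; the node name and image file below are hypothetical:
+#
+#   node_templates:
+#     my_vdu:
+#       type: tosca.nodes.nfv.VDU.Compute
+#       artifacts:
+#         sw_image:
+#           type: tosca.artifacts.nfv.SwImage
+#           file: images/my-vnf.qcow2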

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/capabilities.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/capabilities.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/capabilities.yaml
index 6bc6b67..7b6363f 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/capabilities.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/capabilities.yaml
@@ -15,58 +15,13 @@
 
 capability_types:
 
-  tosca.capabilities.Compute.Container.Architecture:
-    _extensions:
-      shorthand_name: Compute.Container.Architecture
-      type_qualified_name: tosca:Compute.Container.Architecture
-      specification: tosca-simple-nfv-1.0
-      specification_section: 8.2.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#DEFN_TYPE_CAPABILITIES_CONTAINER'
-    description: >-
-      Enhance compute architecture capability that needs to be typically use for performance sensitive NFV workloads.
-    derived_from: tosca.capabilities.Container
-    properties:
-      mem_page_size:
-        description: >-
-          Describe page size of the VM:
-  
-          * small page size is typically 4KB
-          * large page size is typically 2MB
-          * any page size maps to system default
-          * custom MB value: sets TLB size to this specific value
-        type: string
-        # ARIA NOTE: seems wrong in the spec
-        #constraints:
-        #  - [ normal, huge ]
-      cpu_allocation:
-        description: >-
-          Describes CPU allocation requirements like dedicated CPUs (cpu pinning), socket count, thread count, etc.
-        type: tosca.datatypes.compute.Container.Architecture.CPUAllocation
-        required: false
-      numa_node_count:
-        description: >-
-          Specifies the symmetric count of NUMA nodes to expose to the VM. vCPU and Memory equally split across this number of
-          NUMA.
-  
-          NOTE: the map of numa_nodes should not be specified.
-        type: integer
-        required: false 
-      numa_nodes:
-        description: >-
-          Asymmetric allocation of vCPU and Memory across the specific NUMA nodes (CPU sockets and memory banks).
-  
-          NOTE: symmetric numa_node_count should not be specified.
-        type: map
-        entry_schema: tosca.datatypes.compute.Container.Architecture.NUMA
-        required: false
-
   tosca.capabilities.nfv.VirtualBindable:
     _extensions:
       shorthand_name: VirtualBindable
       type_qualified_name: tosca:VirtualBindable
       specification: tosca-simple-nfv-1.0
-      specification_section: 8.2.2
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc419290220'
+      specification_section: 5.5.1
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896069'
     description: >-
       A node type that includes the VirtualBindable capability indicates that it can be pointed by
       tosca.relationships.nfv.VirtualBindsTo relationship type.
@@ -77,33 +32,39 @@ capability_types:
       shorthand_name: Metric
       type_qualified_name: tosca:Metric
       specification: tosca-simple-nfv-1.0
-      specification_section: 8.2.3
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc418607874'
+      specification_section: 5.5.2
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896070'
     description: >-
       A node type that includes the Metric capability indicates that it can be monitored using an nfv.relationships.Monitor
       relationship type.
     derived_from: tosca.capabilities.Endpoint
 
-  tosca.capabilities.nfv.Forwarder:
+  tosca.capabilities.nfv.VirtualCompute:
     _extensions:
-      shorthand_name: Forwarder
-      type_qualified_name: tosca:Forwarder
+      shorthand_name: VirtualCompute
+      type_qualified_name: tosca:VirtualCompute
       specification: tosca-simple-nfv-1.0
-      specification_section: 10.3.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc447714718'
-    description: >-
-      A node type that includes the Forwarder capability indicates that it can be pointed by tosca.relationships.nfv.FowardsTo
-      relationship type.
+      specification_section: 5.5.3
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896071'
     derived_from: tosca.capabilities.Root
-
-  tosca.capabilities.nfv.VirtualLinkable:
-    _extensions:
-      shorthand_name: VirtualLinkable
-      type_qualified_name: tosca:VirtualLinkable
-      specification: tosca-simple-nfv-1.0
-      specification_section: 11.3.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc447714735'
-    description: >-
-      A node type that includes the VirtualLinkable capability indicates that it can be pointed by
-      tosca.relationships.nfv.VirtualLinksTo relationship type.
-    derived_from: tosca.capabilities.Node
+    properties:
+      requested_additional_capabilities:
+        # ARIA NOTE: in section [5.5.3.1 Properties] the name of this property is
+        # "request_additional_capabilities", and its type is not a map, but
+        # tosca.datatypes.nfv.RequestedAdditionalCapability
+        description: >-
+          Describes additional capabilities for a particular VDU.
+        type: map
+        entry_schema:
+           type: tosca.datatypes.nfv.RequestedAdditionalCapability
+        required: false
+      virtual_memory:
+        description: >-
+          Describes virtual memory of the virtualized compute.
+        type: tosca.datatypes.nfv.VirtualMemory
+        required: true
+      virtual_cpu:
+        description: >-
+          Describes virtual CPU(s) of the virtualized compute.
+        type: tosca.datatypes.nfv.VirtualCpu
+        required: true
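+
+  # Illustrative (non-normative) sketch of assigning this capability's
+  # properties in a node template, assuming the node type exposes it as a
+  # `virtual_compute` capability; the node name and values are hypothetical:
+  #
+  #   my_vdu:
+  #     type: tosca.nodes.nfv.VDU.Compute
+  #     capabilities:
+  #       virtual_compute:
+  #         properties:
+  #           virtual_memory:
+  #             virtual_mem_size: 4 GB
+  #           virtual_cpu:
+  #             num_virtual_cpu: 2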

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/data.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/data.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/data.yaml
index 89e3565..889dcf7 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/data.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/data.yaml
@@ -15,77 +15,304 @@
 
 data_types:
 
-  tosca.datatypes.compute.Container.Architecture.CPUAllocation:
+  tosca.datatypes.nfv.L2AddressData:
+    # TBD
     _extensions:
-      shorthand_name: Container.Architecture.CPUAllocation # seems to be a mistake in the spec; the norm is to add a "Container.Architecture." prefix
-      type_qualified_name: tosca:Container.Architecture.CPUAllocation
+      shorthand_name: L2AddressData
+      type_qualified_name: tosca:L2AddressData
       specification: tosca-simple-nfv-1.0
-      specification_section: 8.3.1
+      specification_section: 5.3.1
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896055'
+
+  tosca.datatypes.nfv.L3AddressData:
+    _extensions:
+      shorthand_name: L3AddressData
+      type_qualified_name: tosca:L3AddressData
+      specification: tosca-simple-nfv-1.0
+      specification_section: 5.3.2
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896056'
     description: >-
-      Granular CPU allocation requirements for NFV workloads.
+      The L3AddressData type is a complex TOSCA data type used to describe the L3AddressData
+      information element as defined in [ETSI GS NFV-IFA 011]; it provides information on the IP
+      addresses to be assigned to the connection point instantiated from the parent Connection
+      Point Descriptor.
     derived_from: tosca.datatypes.Root
     properties:
-      cpu_affinity:
+      ip_address_assignment:
+        description: >-
+          Specifies whether address assignment is the responsibility of the management and
+          orchestration function. If set to True, the management and orchestration function is
+          responsible.
+        type: boolean
+        required: true
+      floating_ip_activated:
+        description: Specify if the floating IP scheme is activated on the Connection Point or not.
+        type: boolean
+        required: true
+      ip_address_type:
         description: >-
-          Describes whether vCPU need to be pinned to dedicated CPU core or shared dynamically.
+          Defines the address type. The address type should be aligned with the address type
+          supported by the layer_protocol properties of the parent VnfExtCpd.
         type: string
+        required: false
         constraints:
-          - valid_values: [ shared, dedicated ]
+          - valid_values: [ ipv4, ipv6 ]
+      number_of_ip_address:
+        description: >-
+          Minimum number of IP addresses to be assigned.
+        type: integer
         required: false
-      thread_allocation:
+
+  tosca.datatypes.nfv.AddressData:
+    _extensions:
+      shorthand_name: AddressData
+      type_qualified_name: tosca:AddressData
+      specification: tosca-simple-nfv-1.0
+      specification_section: 5.3.3
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896057'
+    description: >-
+      The AddressData type is a complex TOSCA data type used to describe the AddressData
+      information element as defined in [ETSI GS NFV-IFA 011]; it provides information on the
+      addresses to be assigned to the connection point(s) instantiated from a Connection Point
+      Descriptor.
+    derived_from: tosca.datatypes.Root
+    properties:
+      address_type:
         description: >-
-          Describe thread allocation requirement.
+          Describes the type of the address to be assigned to the connection point instantiated from
+          the parent Connection Point Descriptor. The content type shall be aligned with the address
+          type supported by the layerProtocol property of the parent Connection Point Descriptor.
         type: string
+        required: true
         constraints:
-          - valid_values: [ avoid, isolate, separate, prefer ]
+          - valid_values: [ mac_address, ip_address ]
+      l2_address_data:
+        # Shall be present when the addressType is mac_address.
+        description: >-
+          Provides the information on the MAC addresses to be assigned to the connection point(s)
+          instantiated from the parent Connection Point Descriptor.
+        type: tosca.datatypes.nfv.L2AddressData # Empty in "GS NFV IFA011 V0.7.3"
         required: false
-      socket_count:
+      l3_address_data:
+        # Shall be present when the addressType is ip_address.
         description: >-
-          Number of CPU sockets.
-        type: integer
+          Provides the information on the IP addresses to be assigned to the connection point
+          instantiated from the parent Connection Point Descriptor.
+        type: tosca.datatypes.nfv.L3AddressData
         required: false
-      core_count:
+
+  tosca.datatypes.nfv.VirtualNetworkInterfaceRequirements:
+    _extensions:
+      shorthand_name: VirtualNetworkInterfaceRequirements
+      type_qualified_name: tosca:VirtualNetworkInterfaceRequirements
+      specification: tosca-simple-nfv-1.0
+      specification_section: 5.3.4
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896058'
+    description: >-
+      The VirtualNetworkInterfaceRequirements type is a complex TOSCA data type used to describe
+      the VirtualNetworkInterfaceRequirements information element as defined in
+      [ETSI GS NFV-IFA 011]; it provides the information needed to specify requirements on a
+      virtual network interface realising the CPs instantiated from this CPD.
+    derived_from: tosca.datatypes.Root
+    properties:
+      name:
         description: >-
-          Number of cores per socket.
-        type: integer
+          Provides a human readable name for the requirement.
+        type: string
         required: false
-      thread_count:
+      description:
         description: >-
-          Number of threads per core.
-        type: integer
+          Provides a human readable description for the requirement.
+        type: string
         required: false
+      support_mandatory:
+        description: >-
+          Indicates whether fulfilling the constraint is mandatory (TRUE) for successful operation
+          or desirable (FALSE).
+        type: boolean
+        required: false
+      requirement:
+        description: >-
+          Specifies a requirement such as the support of SR-IOV, a particular data plane
+          acceleration library, an API to be exposed by a NIC, etc.
+        type: string # ARIA NOTE: the spec says "not specified", but TOSCA requires a type
+        required: true
 
-  tosca.datatypes.compute.Container.Architecture.NUMA:
+  tosca.datatypes.nfv.ConnectivityType:
     _extensions:
-      shorthand_name: Container.Architecture.NUMA # ARIA NOTE: seems to be a mistake in the spec; the norm is to add a "Container.Architecture." prefix
-      type_qualified_name: tosca:Container.Architecture.NUMA
+      shorthand_name: ConnectivityType
+      type_qualified_name: tosca:ConnectivityType
       specification: tosca-simple-nfv-1.0
-      specification_section: 8.3.2
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc447714697'
+      specification_section: 5.3.5
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896059'
     description: >-
-      Granular Non-Uniform Memory Access (NUMA) topology requirements for NFV workloads.
+      The TOSCA ConnectivityType type is a complex TOSCA data type used to describe the
+      ConnectivityType information element as defined in [ETSI GS NFV-IFA 011].
     derived_from: tosca.datatypes.Root
     properties:
-      id:
+      layer_protocol:
         description: >-
-          CPU socket identifier.
-        type: integer
+          Identifies the protocol this VL gives access to (ethernet, mpls, odu2, ipv4, ipv6,
+          pseudo_wire).
+        type: string
+        required: true
         constraints:
-          - greater_or_equal: 0
+          - valid_values: [ ethernet, mpls, odu2, ipv4, ipv6, pseudo_wire ]
+      flow_pattern:
+        description: >-
+          Identifies the flow pattern of the connectivity (Line, Tree, Mesh).
+        type: string
         required: false
-      vcpus:
+
+  tosca.datatypes.nfv.RequestedAdditionalCapability:
+    _extensions:
+      shorthand_name: RequestedAdditionalCapability
+      type_qualified_name: tosca:RequestedAdditionalCapability
+      specification: tosca-simple-nfv-1.0
+      specification_section: 5.3.6
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896060'
+    description: >-
+      RequestedAdditionalCapability describes an additional capability for a particular VDU.
+    derived_from: tosca.datatypes.Root
+    properties:
+      request_additional_capability_name:
         description: >-
-          List of specific host cpu numbers within a NUMA socket complex.
-  
-          TODO: need a new base type, with non-overlapping, positive value validation (exclusivity),
+          Identifies a requested additional capability for the VDU.
+        type: string
+        required: true
+      support_mandatory:
+        description: >-
+          Indicates whether the requested additional capability is mandatory for successful
+          operation.
+        type: string
+        required: true
+      min_requested_additional_capability_version:
+        description: >-
+          Identifies the minimum version of the requested additional capability.
+        type: string
+        required: false
+      preferred_requested_additional_capability_version:
+        description: >-
+          Identifies the preferred version of the requested additional capability.
+        type: string
+        required: false
+      target_performance_parameters:
+        description: >-
+          Identifies specific attributes, dependent on the requested additional capability type.
         type: map
         entry_schema:
-          type: integer
+          type: string
+        required: true
+
+  tosca.datatypes.nfv.VirtualMemory:
+    _extensions:
+      shorthand_name: VirtualMemory
+      type_qualified_name: tosca:VirtualMemory
+      specification: tosca-simple-nfv-1.0
+      specification_section: 5.3.7
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896061'
+    description: >-
+      VirtualMemory describes virtual memory for a particular VDU.
+    derived_from: tosca.datatypes.Root
+    properties:
+      virtual_mem_size:
+        description: Amount of virtual memory.
+        type: scalar-unit.size
+        required: true
+      virtual_mem_oversubscription_policy:
+        description: >-
+          The memory core oversubscription policy in terms of virtual memory to physical memory on
+          the platform. The cardinality can be 0 during the allocation request, if no particular
+          value is requested.
+        type: string
         required: false
-      mem_size:
+      numa_enabled:
         description: >-
-          Size of memory allocated from this NUMA memory bank.
-        type: scalar-unit.size
+          It specifies the memory allocation to be cognisant of the relevant process/core
+          allocation. The cardinality can be 0 during the allocation request, if no particular value
+          is requested.
+        type: boolean
+        required: false
+
+  tosca.datatypes.nfv.VirtualCpu:
+    _extensions:
+      shorthand_name: VirtualCpu
+      type_qualified_name: tosca:VirtualCpu
+      specification: tosca-simple-nfv-1.0
+      specification_section: 5.3.8
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896062'
+    description: >-
+      VirtualCpu describes the virtual CPU(s) for a particular VDU.
+    derived_from: tosca.datatypes.Root
+    properties:
+      cpu_architecture:
+        description: >-
+          CPU architecture type. Examples are x86, ARM.
+        type: string
+        required: false
+      num_virtual_cpu:
+        description: >-
+          Number of virtual CPUs.
+        type: integer
+        required: true
+      virtual_cpu_clock:
+        description: >-
+          Minimum virtual CPU clock rate.
+        type: scalar-unit.frequency
+        required: false
+      virtual_cpu_oversubscription_policy:
+        description: >-
+          CPU core oversubscription policy.
+        type: string
+        required: false
+      virtual_cpu_pinning:
+        description: >-
+          The virtual CPU pinning configuration for the virtualized compute resource.
+        type: tosca.datatypes.nfv.VirtualCpuPinning
+        required: false
+
+  tosca.datatypes.nfv.VirtualCpuPinning:
+    _extensions:
+      shorthand_name: VirtualCpuPinning
+      type_qualified_name: tosca:VirtualCpuPinning
+      specification: tosca-simple-nfv-1.0
+      specification_section: 5.3.9
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896064'
+    description: >-
+      VirtualCpuPinning describes CPU pinning configuration for a particular CPU.
+    derived_from: tosca.datatypes.Root
+    properties:
+      cpu_pinning_policy:
+        description: >-
+          Indicates the policy for CPU pinning.
+        type: string
         constraints:
-          - greater_or_equal: 0 MB
+          - valid_values: [ static, dynamic ]
+        required: false
+      cpu_pinning_map:
+        description: >-
+          If cpuPinningPolicy is defined as "static", the cpuPinningMap provides the map of pinning
+          virtual CPU cores to physical CPU cores/threads.
+        type: map
+        entry_schema:
+          type: string
+        required: false
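+        # Illustrative (non-normative) example of a static pinning policy; the
+        # spec does not fix the map's key/value format, so the values below are
+        # hypothetical:
+        #
+        #   cpu_pinning_policy: static
+        #   cpu_pinning_map:
+        #     vcpu_0: physical_core_0
+        #     vcpu_1: physical_core_1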
+
+  tosca.datatypes.nfv.VnfcConfigurableProperties:
+    _extensions:
+      shorthand_name: VnfcconfigurableProperties
+      type_qualified_name: tosca:VnfcconfigurableProperties
+      specification: tosca-simple-nfv-1.0
+      specification_section: 5.3.10
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896065'
+    # ARIA NOTE: description is mangled in spec
+    description: >-
+      VnfcConfigurableProperties describes additional configurable properties of a VNFC.
+    derived_from: tosca.datatypes.Root
+    properties:
+      additional_vnfc_configurable_properties:
+        description: >-
+          Describes additional configuration for VNFC.
+        type: map
+        entry_schema:
+          type: string
         required: false

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/groups.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/groups.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/groups.yaml
deleted file mode 100644
index 5eb87c8..0000000
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/groups.yaml
+++ /dev/null
@@ -1,56 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-group_types:
-
-  tosca.groups.nfv.VNFFG:
-    _extensions:
-      shorthand_name: VNFFG # ARIA NOTE: the spec must be mistaken here, says "VL"
-      type_qualified_name: tosca:VNFFG
-      specification: tosca-simple-nfv-1.0
-      specification_section: 10.6.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc447714727'
-    description: >-
-      The NFV VNFFG group type represents a logical VNF forwarding graph entity as defined by [ETSI GS NFV-MAN 001 v1.1.1].
-    derived_from: tosca.groups.Root
-    properties:
-      vendor:
-        description: >-
-          Specify the vendor generating this VNFFG.
-        type: string
-      version:
-        description: >-
-          Specify the identifier (e.g. name), version, and description of service this VNFFG is describing.
-        type: string
-      number_of_endpoints:
-        description: >-
-          Count of the external endpoints included in this VNFFG, to form an index.
-        type: integer
-      dependent_virtual_link:
-        description: >-
-          Reference to a list of VLD used in this Forwarding Graph.
-        type: list
-        entry_schema: string
-      connection_point:
-        description: >-
-          Reference to Connection Points forming the VNFFG.
-        type: list
-        entry_schema: string
-      constituent_vnfs:
-        description: >-
-          Reference to a list of VNFD used in this VNF Forwarding Graph.
-        type: list
-        entry_schema: string
-    members: [ tosca.nodes.nfv.FP ]

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/nodes.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/nodes.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/nodes.yaml
index 0dfe38d..73f0ecd 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/nodes.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/nodes.yaml
@@ -15,169 +15,246 @@
 
 node_types:
 
-  tosca.nodes.nfv.VNF:
+  tosca.nodes.nfv.VDU.Compute:
     _extensions:
-      shorthand_name: VNF # ARIA NOTE: ommitted in the spec
-      type_qualified_name: tosca:VNF
+      shorthand_name: VDU.Compute
+      type_qualified_name: tosca:VDU.Compute
       specification: tosca-simple-nfv-1.0
-      specification_section: 8.5.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc379455076'
+      specification_section: 5.9.2
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896079'
     description: >-
-      The NFV VNF Node Type represents a Virtual Network Function as defined by [ETSI GS NFV-MAN 001 v1.1.1]. It is the default
-      type that all other VNF Node Types derive from. This allows for all VNF nodes to have a consistent set of features for
-      modeling and management (e.g., consistent definitions for requirements, capabilities and lifecycle interfaces).
-    derived_from: tosca.nodes.Root
+      The TOSCA nfv.VDU.Compute node type represents the virtual compute part of a VDU entity,
+      which mainly describes the deployment and operational behavior of a VNF component (VNFC),
+      as defined by [ETSI NFV IFA011].
+    derived_from: tosca.nodes.Compute
     properties:
-      id:
+      name:
         description: >-
-          ID of this VNF.
+          Human readable name of the VDU.
         type: string
-      vendor:
+        required: true
+      description:
         description: >-
-          Name of the vendor who generate this VNF.
+          Human readable description of the VDU.
         type: string
-      version:
+        required: true
+      boot_order:
+        description: >-
+          The key indicates the boot index (lowest index defines highest boot priority).
+          The value references a descriptor from which a valid boot device is created, e.g. a
+          VirtualStorageDescriptor from which a VirtualStorage instance is created. If no boot
+          order is defined, the default boot order defined in the VIM or NFVI shall be used.
+        type: list # ARIA NOTE: an explicit index (boot index) is unnecessary, contrary to IFA011
+        entry_schema:
+          type: string
+        required: false
+      nfvi_constraints:
+        description: >-
+          Describes constraints on the NFVI for the VNFC instance(s) created from this VDU.
+          For example, aspects of a secure hosting environment for the VNFC instance that involve
+          additional entities or processes. More software images can be attached to the
+          virtualization container using virtual_storage.
+        type: list
+        entry_schema:
+          type: string
+        required: false
+      configurable_properties:
         description: >-
-          Version of the software for this VNF.
+          Describes the configurable properties of all VNFC instances based on this VDU.
+        type: map
+        entry_schema:
+          type: tosca.datatypes.nfv.VnfcConfigurableProperties
+        required: true
+    attributes:
+      # ARIA NOTE: The attributes are only described in section [5.9.2.5 Definition], but are not
+      # mentioned in section [5.9.2.2 Attributes]. Additionally, it does not seem to make sense to
+      # deprecate inherited attributes, as it breaks the inheritance contract.
+      private_address:
         type: string
-    requirements:
-      - virtual_link:
-          capability: tosca.capabilities.nfv.VirtualLinkable
-          relationship: tosca.relationships.nfv.VirtualLinksTo
-
-  tosca.nodes.nfv.VDU:
-    _extensions:
-      shorthand_name: VDU
-      type_qualified_name: tosca:VDU
-      specification: tosca-simple-nfv-1.0
-      specification_section: 8.5.2
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc419290242'
-    description: >-
-      The NFV vdu node type represents a logical vdu entity as defined by [ETSI GS NFV-MAN 001 v1.1.1].
-    derived_from: tosca.nodes.Root
+        status: deprecated
+      public_address:
+        type: string
+        status: deprecated
+      networks:
+        type: map
+        entry_schema:
+          type: tosca.datatypes.network.NetworkInfo
+        status: deprecated
+      ports:
+        type: map
+        entry_schema:
+          type: tosca.datatypes.network.PortInfo
+        status: deprecated
     capabilities:
-      nfv_compute:
-        type: tosca.capabilities.Compute.Container.Architecture
+      virtual_compute:
+        description: >-
+          Describes virtual compute resources capabilities.
+        type: tosca.capabilities.nfv.VirtualCompute
       virtual_binding:
+        description: >-
+          Defines ability of VirtualBindable.
         type: tosca.capabilities.nfv.VirtualBindable
       monitoring_parameter:
+        # ARIA NOTE: commented out in 5.9.2.5
+        description: >-
+          Monitoring parameter, which can be tracked for a VNFC based on this VDU. Examples include:
+          memory-consumption, CPU-utilisation, bandwidth-consumption, VNFC downtime, etc.
         type: tosca.capabilities.nfv.Metric
+    #requirements:
+      # ARIA NOTE: virtual_storage is TBD
+      
+      # ARIA NOTE: csd04 attempts to deprecate the inherited local_storage requirement, but this
+      # is not possible in TOSCA
+    artifacts:
+      sw_image:
+        description: >-
+          Describes the software image which is directly loaded on the virtualization container
+          realizing this virtual storage.
+        file: '' # ARIA NOTE: missing value even though it is required in TOSCA
+        type: tosca.artifacts.nfv.SwImage
 
-  tosca.nodes.nfv.CP:
+  tosca.nodes.nfv.VDU.VirtualStorage:
     _extensions:
-      shorthand_name: CP
-      type_qualified_name: tosca:CP
+      shorthand_name: VirtualStorage # ARIA NOTE: seems wrong in spec
+      type_qualified_name: tosca:VirtualStorage # ARIA NOTE: seems wrong in spec
       specification: tosca-simple-nfv-1.0
-      specification_section: 8.5.3
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc419290245'
+      specification_section: 5.9.3
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896080'
     description: >-
-      The NFV CP node represents a logical connection point entity as defined by [ETSI GS NFV-MAN 001 v1.1.1]. A connection point
-      may be, for example, a virtual port, a virtual NIC address, a physical port, a physical NIC address or the endpoint of an IP
-      VPN enabling network connectivity. It is assumed that each type of connection point will be modeled using subtypes of the CP
-      type.
-    derived_from: tosca.nodes.network.Port
+      The NFV VirtualStorage node type represents a virtual storage entity, which describes the
+      deployment and operational behavior of virtual storage resources, as defined by
+      [ETSI NFV IFA011].
+    derived_from: tosca.nodes.Root
     properties:
-      type:
+      type_of_storage:
         description: >-
-          This may be, for example, a virtual port, a virtual NIC address, a SR-IOV port, a physical port, a physical NIC address
-          or the endpoint of an IP VPN enabling network connectivity.
+          Type of virtualized storage resource.
         type: string
-      anti_spoof_protection:
+        required: true
+      size_of_storage:
         description: >-
-          Indicates of whether anti-spoofing rule need to be enabled for this vNIC. This is applicable only when CP type is virtual
-          NIC (vPort).
+          Size of virtualized storage resource (in GB).
+        type: scalar-unit.size
+        required: true
+      rdma_enabled:
+        description: >-
+          Indicates whether the storage supports RDMA.
         type: boolean
         required: false
-    attributes:
-      address:
+    artifacts:
+      sw_image:
         description: >-
-          The actual virtual NIC address that is been assigned when instantiating the connection point.
-        type: string
-    requirements:
-      - virtual_link:
-          capability: tosca.capabilities.nfv.VirtualLinkable
-          relationship: tosca.relationships.nfv.VirtualLinksTo
-      - virtual_binding:
-          capability: tosca.capabilities.nfv.VirtualBindable
-          relationship: tosca.relationships.nfv.VirtualBindsTo
+          Describes the software image which is directly loaded on the virtualization container
+          realizing this virtual storage.
+        file: '' # ARIA NOTE: missing in spec
+        type: tosca.artifacts.nfv.SwImage
 
-  tosca.nodes.nfv.FP:
+  tosca.nodes.nfv.Cpd:
     _extensions:
-      shorthand_name: FP # ARIA NOTE: the spec must be mistaken here, says "VL"
-      type_qualified_name: tosca:FP
+      shorthand_name: Cpd
+      type_qualified_name: tosca:Cpd
       specification: tosca-simple-nfv-1.0
-      specification_section: 10.5.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc447714722'
+      specification_section: 5.9.4
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896081'
     description: >-
-      The NFV FP node type represents a logical network forwarding path entity as defined by [ETSI GS NFV-MAN 001 v1.1.1].
+      The TOSCA nfv.Cpd node represents network connectivity to a compute resource or a VL as defined
+      by [ETSI GS NFV-IFA 011]. This is an abstract type used as the parent of the various Cpd types.
     derived_from: tosca.nodes.Root
     properties:
-      policy:
+      layer_protocol:
         description: >-
-          A policy or rule to apply to the NFP
+          Identifies which protocol the connection point uses for connectivity purposes.
         type: string
+        constraints:
+          - valid_values: [ ethernet, mpls, odu2, ipv4, ipv6, pseudo_wire ]
         required: false
-    requirements:
-      - forwarder:
-          capability: tosca.capabilities.nfv.Forwarder
-
-  #
-  # Virtual link
-  #
-
-  tosca.nodes.nfv.VL:
-    _extensions:
-      shorthand_name: VL
-      type_qualified_name: tosca:VL
-      specification: tosca-simple-nfv-1.0
-      specification_section: 9.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc419290251'
-    description: >-
-      The NFV VL node type represents a logical virtual link entity as defined by [ETSI GS NFV-MAN 001 v1.1.1]. It is the default
-      type from which all other virtual link types derive.
-    derived_from: tosca.nodes.network.Network
-    properties:
-      vendor:
+      role: # Name in ETSI NFV IFA011 v0.7.3 cpRole
         description: >-
-          Vendor generating this VLD.
+          Identifies the role of the port in the context of the traffic flow patterns in the VNF or
+          parent NS. For example, a VNF with a tree flow pattern within the VNF will have legal
+          cpRoles of ROOT and LEAF.
         type: string
-    capabilities:
-      virtual_linkable:
-        type: tosca.capabilities.nfv.VirtualLinkable        
-
-  tosca.nodes.nfv.VL.ELine:
-    _extensions:
-      shorthand_name: VL.ELine # ARIA NOTE: ommitted in the spec
-      type_qualified_name: tosca:VL.ELine
-      specification: tosca-simple-nfv-1.0
-      specification_section: 9.2
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc419290256'
-    description: >-
-      The NFV VL.ELine node represents an E-Line virtual link entity.
-    derived_from: tosca.nodes.nfv.VL  
-    capabilities:
-      virtual_linkable:
-        type: tosca.capabilities.nfv.VirtualLinkable
-        occurrences: [ 2, UNBOUNDED ] # ARIA NOTE: the spec is wrong here, must be a range
+        constraints:
+          - valid_values: [ root, leaf ]
+        required: false
+      description:
+        description: >-
+          Provides human-readable information on the purpose of the connection point
+          (e.g. connection point for control plane traffic).
+        type: string
+        required: false
+      address_data:
+        description: >-
+          Provides information on the addresses to be assigned to the connection point(s) instantiated
+          from this Connection Point Descriptor.
+        type: list
+        entry_schema:
+          type: tosca.datatypes.nfv.AddressData
+        required: false
 
-  tosca.nodes.nfv.VL.ELAN:
+  tosca.nodes.nfv.VduCpd:
     _extensions:
-      shorthand_name: VL.ELAN # ARIA NOTE: ommitted in the spec
-      type_qualified_name: tosca:VL.ELAN
-      specification: tosca-simple-nfv-1.0
-      specification_section: 9.3
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc419290257'
+       shorthand_name: VduCpd
+       type_qualified_name: tosca:VduCpd
+       specification: tosca-simple-nfv-1.0
+       specification_section: 5.9.5
+       specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896082'
     description: >-
-      The NFV VL.ELan node represents an E-LAN virtual link entity.
-    derived_from: tosca.nodes.network.Network
+      The TOSCA nfv.VduCpd node type represents a type of TOSCA Cpd node and describes network
+      connectivity between a VNFC instance (based on this VDU) and an internal VL as defined by
+      [ETSI GS NFV-IFA 011].
+    derived_from: tosca.nodes.nfv.Cpd
+    properties:
+      bitrate_requirement:
+        description: >-
+          Bitrate requirement on this connection point.
+        type: integer
+        required: false
+      virtual_network_interface_requirements:
+        description: >-
+          Specifies requirements on a virtual network interface realising the CPs instantiated from
+          this CPD.
+        type: list
+        entry_schema:
+          type: VirtualNetworkInterfaceRequirements
+        required: false
+    requirements:
+     # ARIA NOTE: seems to be a leftover from csd03
+     # - virtual_link:
+     #     description: Describes the requirements for linking to virtual link
+     #     capability: tosca.capabilities.nfv.VirtualLinkable
+     #     relationship: tosca.relationships.nfv.VirtualLinksTo
+     #     node: tosca.nodes.nfv.VnfVirtualLinkDesc
+      - virtual_binding:
+          capability: tosca.capabilities.nfv.VirtualBindable
+          relationship: tosca.relationships.nfv.VirtualBindsTo
+          node: tosca.nodes.nfv.VDU.Compute # ARIA NOTE: seems wrong in spec
 
-  tosca.nodes.nfv.VL.ETree:
+  tosca.nodes.nfv.VnfVirtualLinkDesc:
     _extensions:
-      shorthand_name: VL.ETree # ARIA NOTE: ommitted in the spec
-      type_qualified_name: tosca:VL.ETree
-      specification: tosca-simple-nfv-1.0
-      specification_section: 9.4
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc419290258'
+       shorthand_name: VnfVirtualLinkDesc
+       type_qualified_name: tosca:VnfVirtualLinkDesc
+       specification: tosca-simple-nfv-1.0
+       specification_section: 5.9.6
+       specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896083'
     description: >-
-      The NFV VL.ETree node represents an E-Tree virtual link entity.
-    derived_from: tosca.nodes.nfv.VL
+      The TOSCA nfv.VnfVirtualLinkDesc node type represents a logical internal virtual link as
+      defined by [ETSI GS NFV-IFA 011].
+    derived_from: tosca.nodes.Root
+    properties:
+      connectivity_type:
+        description: >-
+          Specifies the protocol exposed by the VL and the flow pattern supported by the VL.
+        type: tosca.datatypes.nfv.ConnectivityType
+        required: true
+      description:
+        description: >-
+          Provides human-readable information on the purpose of the VL (e.g. control plane traffic).
+        type: string
+        required: false
+      test_access:
+        description: >-
+          Test access facilities available on the VL (e.g. none, passive, monitoring, or active
+          (intrusive) loopbacks at endpoints).
+        type: string
+        required: false

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/relationships.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/relationships.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/relationships.yaml
index b745735..4cf99a2 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/relationships.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/relationships.yaml
@@ -20,45 +20,24 @@ relationship_types:
       shorthand_name: VirtualBindsTo
       type_qualified_name: tosca:VirtualBindsTo
       specification: tosca-simple-nfv-1.0
-      specification_section: 8.4.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc419290234'
+      specification_section: 5.7.1
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896074'
     description: >-
       This relationship type represents an association relationship between VDU and CP node types.
     derived_from: tosca.relationships.DependsOn
     valid_target_types: [ tosca.capabilities.nfv.VirtualBindable ]
 
+  # ARIA NOTE: csd04 lacks the definition of tosca.relationships.nfv.Monitor (the derived_from and
+  # valid_target_types), so we are using the definition in csd03 section 8.4.2.
   tosca.relationships.nfv.Monitor:
     _extensions:
       shorthand_name: Monitor
       type_qualified_name: tosca:Monitor
       specification: tosca-simple-nfv-1.0
-      specification_section: 8.4.2
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc418607880'
+      specification_section: 5.7.2
+      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd04/tosca-nfv-v1.0-csd04.html#_Toc482896075'
     description: >-
-      This relationship type represents an association relationship to the Metric capability of VDU node types.
+      This relationship type represents an association relationship to the Metric capability of VDU
+      node types.
     derived_from: tosca.relationships.ConnectsTo
     valid_target_types: [ tosca.capabilities.nfv.Metric ]
-
-  tosca.relationships.nfv.ForwardsTo:
-    _extensions:
-      shorthand_name: ForwardsTo
-      type_qualified_name: tosca:ForwardsTo
-      specification: tosca-simple-nfv-1.0
-      specification_section: 10.4.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc447714720'
-    description: >-
-      This relationship type represents a traffic flow between two connection point node types.
-    derived_from: tosca.relationships.Root
-    valid_target_types: [ tosca.capabilities.nfv.Forwarder ]
-
-  tosca.relationships.nfv.VirtualLinksTo:
-    _extensions:
-      shorthand_name: VirtualLinksTo
-      type_qualified_name: tosca:VirtualLinksTo
-      specification: tosca-simple-nfv-1.0
-      specification_section: 11.4.1
-      specification_url: 'http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/csd03/tosca-nfv-v1.0-csd03.html#_Toc447714737'
-    description: >-
-      This relationship type represents an association relationship between VNFs and VL node types.
-    derived_from: tosca.relationships.DependsOn
-    valid_target_types: [ tosca.capabilities.nfv.VirtualLinkable ]

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/1e883c57/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/tosca-simple-nfv-1.0.yaml
----------------------------------------------------------------------
diff --git a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/tosca-simple-nfv-1.0.yaml b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/tosca-simple-nfv-1.0.yaml
index 911ff3b..764c739 100644
--- a/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/tosca-simple-nfv-1.0.yaml
+++ b/extensions/aria_extension_tosca/profiles/tosca-simple-nfv-1.0/tosca-simple-nfv-1.0.yaml
@@ -14,8 +14,8 @@
 # limitations under the License.
 
 imports:
+  - artifacts.yaml
   - capabilities.yaml
   - data.yaml
-  - groups.yaml
   - nodes.yaml
   - relationships.yaml


[4/5] incubator-ariatosca git commit: wip

Posted by mx...@apache.org.
wip


Project: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/commit/907ed6eb
Tree: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/tree/907ed6eb
Diff: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/diff/907ed6eb

Branch: refs/heads/ARIA-278-Remove-core-tasks
Commit: 907ed6eb5d7364573ef8143a646471b5546a7f8e
Parents: 2149a5e
Author: max-orlov <ma...@gigaspaces.com>
Authored: Sun Jun 11 19:05:35 2017 +0300
Committer: max-orlov <ma...@gigaspaces.com>
Committed: Wed Jun 14 17:29:12 2017 +0300

----------------------------------------------------------------------
 aria/modeling/mixins.py                         |   4 +-
 aria/modeling/orchestration.py                  | 150 +++++++---
 aria/orchestrator/context/operation.py          |   3 +
 aria/orchestrator/workflow_runner.py            | 118 +++++++-
 aria/orchestrator/workflows/core/__init__.py    |   2 +-
 aria/orchestrator/workflows/core/_task.py       | 267 ++++++++++++++++++
 aria/orchestrator/workflows/core/engine.py      |  67 +++--
 .../workflows/core/events_handler.py            |  77 +++---
 aria/orchestrator/workflows/core/task.py        | 271 -------------------
 aria/orchestrator/workflows/core/translation.py | 109 --------
 aria/orchestrator/workflows/events_logging.py   |  25 +-
 aria/orchestrator/workflows/executor/base.py    |  32 ++-
 aria/orchestrator/workflows/executor/process.py |  18 +-
 aria/orchestrator/workflows/executor/thread.py  |  18 +-
 tests/orchestrator/context/__init__.py          |   8 +-
 tests/orchestrator/context/test_operation.py    |  26 +-
 tests/orchestrator/context/test_serialize.py    |   8 +-
 .../orchestrator/execution_plugin/test_local.py |  11 +-
 .../orchestrator/workflows/core/test_engine.py  |  14 +-
 .../orchestrator/workflows/core/test_events.py  |  10 +-
 .../test_task_graph_into_execution_graph.py     |  52 ++--
 .../orchestrator/workflows/executor/__init__.py |  58 ++--
 .../workflows/executor/test_executor.py         |  95 +++----
 .../workflows/executor/test_process_executor.py |   1 -
 24 files changed, 751 insertions(+), 693 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/modeling/mixins.py
----------------------------------------------------------------------
diff --git a/aria/modeling/mixins.py b/aria/modeling/mixins.py
index c98a866..31675fe 100644
--- a/aria/modeling/mixins.py
+++ b/aria/modeling/mixins.py
@@ -18,14 +18,12 @@ classes:
     * ModelMixin - abstract model implementation.
     * ModelIDMixin - abstract model implementation with IDs.
 """
-
 from sqlalchemy.ext import associationproxy
 from sqlalchemy import (
     Column,
     Integer,
     Text,
-    PickleType
-)
+    PickleType)
 
 from ..parser.consumption import ConsumptionContext
 from ..utils import console, collections, caching, formatting

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/modeling/orchestration.py
----------------------------------------------------------------------
diff --git a/aria/modeling/orchestration.py b/aria/modeling/orchestration.py
index 995c8c2..c0b7f04 100644
--- a/aria/modeling/orchestration.py
+++ b/aria/modeling/orchestration.py
@@ -21,9 +21,10 @@ classes:
 """
 
 # pylint: disable=no-self-argument, no-member, abstract-method
-
+from contextlib import contextmanager
 from datetime import datetime
 
+from networkx import DiGraph
 from sqlalchemy import (
     Column,
     Integer,
@@ -34,19 +35,19 @@ from sqlalchemy import (
     String,
     Float,
     orm,
-)
+    PickleType)
 from sqlalchemy.ext.associationproxy import association_proxy
 from sqlalchemy.ext.declarative import declared_attr
 
 from ..orchestrator.exceptions import (TaskAbortException, TaskRetryException)
-from .mixins import ModelMixin, ParameterMixin
+from . import mixins
 from . import (
     relationship,
     types as modeling_types
 )
 
 
-class ExecutionBase(ModelMixin):
+class ExecutionBase(mixins.ModelMixin):
     """
     Execution model representation.
     """
@@ -152,7 +153,7 @@ class ExecutionBase(ModelMixin):
         )
 
 
-class PluginBase(ModelMixin):
+class PluginBase(mixins.ModelMixin):
     """
     An installed plugin.
 
@@ -213,7 +214,7 @@ class PluginBase(ModelMixin):
     uploaded_at = Column(DateTime, nullable=False, index=True)
 
 
-class TaskBase(ModelMixin):
+class TaskBase(mixins.ModelMixin):
     """
     Represents the smallest unit of stateful execution in ARIA. The task state includes inputs,
     outputs, as well as an atomic status, ensuring that the task can only be running once at any
@@ -257,10 +258,25 @@ class TaskBase(ModelMixin):
 
     __tablename__ = 'task'
 
-    __private_fields__ = ['node_fk',
-                          'relationship_fk',
-                          'plugin_fk',
-                          'execution_fk']
+    __private_fields__ = ['dependency_operation_task_fk', 'dependency_stub_task_fk', 'node_fk',
+                          'relationship_fk', 'plugin_fk', 'execution_fk']
+
+
+    START_WORKFLOW = 'start_workflow'
+    END_WORKFLOW = 'end_workflow'
+    START_SUBWORKFLOW = 'start_subworkflow'
+    END_SUBWORKFLOW = 'end_subworkflow'
+    STUB = 'stub'
+    CONDITIONAL = 'conditional'
+
+    STUB_TYPES = (
+        START_WORKFLOW,
+        START_SUBWORKFLOW,
+        END_WORKFLOW,
+        END_SUBWORKFLOW,
+        STUB,
+        CONDITIONAL,
+    )
 
     PENDING = 'pending'
     RETRYING = 'retrying'
@@ -276,10 +292,27 @@ class TaskBase(ModelMixin):
         SUCCESS,
         FAILED,
     )
-
     INFINITE_RETRIES = -1
 
     @declared_attr
+    def execution(cls):
+        return relationship.many_to_one(cls, 'execution')
+
+    @declared_attr
+    def execution_fk(cls):
+        return relationship.foreign_key('execution', nullable=True)
+
+    status = Column(Enum(*STATES, name='status'), default=PENDING)
+    due_at = Column(DateTime, nullable=False, index=True, default=datetime.utcnow)
+    started_at = Column(DateTime, default=None)
+    ended_at = Column(DateTime, default=None)
+    attempts_count = Column(Integer, default=1)
+    api_id = Column(String)
+
+    _executor = Column(PickleType)
+    _context_cls = Column(PickleType)
+
+    @declared_attr
     def logs(cls):
         return relationship.one_to_many(cls, 'log')
 
@@ -296,10 +329,6 @@ class TaskBase(ModelMixin):
         return relationship.many_to_one(cls, 'plugin')
 
     @declared_attr
-    def execution(cls):
-        return relationship.many_to_one(cls, 'execution')
-
-    @declared_attr
     def arguments(cls):
         return relationship.one_to_many(cls, 'argument', dict_key='name')
 
@@ -307,19 +336,10 @@ class TaskBase(ModelMixin):
     max_attempts = Column(Integer, default=1)
     retry_interval = Column(Float, default=0)
     ignore_failure = Column(Boolean, default=False)
+    interface_name = Column(String)
+    operation_name = Column(String)
 
-    # State
-    status = Column(Enum(*STATES, name='status'), default=PENDING)
-    due_at = Column(DateTime, nullable=False, index=True, default=datetime.utcnow())
-    started_at = Column(DateTime, default=None)
-    ended_at = Column(DateTime, default=None)
-    attempts_count = Column(Integer, default=1)
-
-    def has_ended(self):
-        return self.status in (self.SUCCESS, self.FAILED)
-
-    def is_waiting(self):
-        return self.status in (self.PENDING, self.RETRYING)
+    stub_type = Column(Enum(*STUB_TYPES))
 
     @property
     def actor(self):
@@ -351,10 +371,6 @@ class TaskBase(ModelMixin):
     def plugin_fk(cls):
         return relationship.foreign_key('plugin', nullable=True)
 
-    @declared_attr
-    def execution_fk(cls):
-        return relationship.foreign_key('execution', nullable=True)
-
     # endregion
 
     # region association proxies
@@ -376,14 +392,6 @@ class TaskBase(ModelMixin):
 
     # endregion
 
-    @classmethod
-    def for_node(cls, actor, **kwargs):
-        return cls(node=actor, **kwargs)
-
-    @classmethod
-    def for_relationship(cls, actor, **kwargs):
-        return cls(relationship=actor, **kwargs)
-
     @staticmethod
     def abort(message=None):
         raise TaskAbortException(message)
@@ -392,8 +400,68 @@ class TaskBase(ModelMixin):
     def retry(message=None, retry_interval=None):
         raise TaskRetryException(message, retry_interval=retry_interval)
 
+    @declared_attr
+    def dependency_fk(cls):
+        return relationship.foreign_key('task', nullable=True)
 
-class LogBase(ModelMixin):
+    @declared_attr
+    def dependencies(cls):
+        # symmetric relationship causes funky graphs
+        return relationship.one_to_many_self(cls, 'dependency_fk')
+
+    def has_ended(self):
+        if self.stub_type is not None:
+            return self.status == self.SUCCESS
+        else:
+            return self.status in (self.SUCCESS, self.FAILED)
+
+    def is_waiting(self):
+        if self.stub_type:
+            return not self.has_ended()
+        else:
+            return self.status in (self.PENDING, self.RETRYING)
+
+    @classmethod
+    def from_api_task(cls, api_task, executor, **kwargs):
+        from aria.orchestrator import context
+        instantiation_kwargs = {}
+
+        if hasattr(api_task.actor, 'outbound_relationships'):
+            context_cls = context.operation.NodeOperationContext
+            instantiation_kwargs['node'] = api_task.actor
+        elif hasattr(api_task.actor, 'source_node'):
+            context_cls = context.operation.RelationshipOperationContext
+            instantiation_kwargs['relationship'] = api_task.actor
+        else:
+            raise RuntimeError('No operation context could be created for {actor.model_cls}'
+                               .format(actor=api_task.actor))
+
+        instantiation_kwargs.update(
+            {
+                'name': api_task.name,
+                'status': cls.PENDING,
+                'max_attempts': api_task.max_attempts,
+                'retry_interval': api_task.retry_interval,
+                'ignore_failure': api_task.ignore_failure,
+                'execution': api_task._workflow_context.execution,
+                'interface_name': api_task.interface_name,
+                'operation_name': api_task.operation_name,
+
+                # Only non-stub tasks have these fields
+                'plugin': api_task.plugin,
+                'function': api_task.function,
+                'arguments': api_task.arguments,
+                'api_id': api_task.id,
+                '_context_cls': context_cls,
+                '_executor': executor,
+            })
+
+        instantiation_kwargs.update(**kwargs)
+
+        return cls(**instantiation_kwargs)
+
+
+class LogBase(mixins.ModelMixin):
 
     __tablename__ = 'log'
 
@@ -435,7 +503,7 @@ class LogBase(ModelMixin):
         return '{name}: {self.msg}'.format(name=name, self=self)
 
 
-class ArgumentBase(ParameterMixin):
+class ArgumentBase(mixins.ParameterMixin):
 
     __tablename__ = 'argument'
 

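A note on the from_api_task factory above: it selects the operation context class by duck typing on the actor rather than by isinstance checks. Below is a minimal, self-contained sketch of that dispatch; Node and Relationship here are hypothetical stand-ins, not the ARIA model classes.

    # A sketch of the duck-typed dispatch in TaskBase.from_api_task.
    # Node and Relationship are hypothetical stand-ins for the ARIA models.
    class Node(object):
        outbound_relationships = ()

    class Relationship(object):
        source_node = None

    def context_kind(actor):
        # an actor exposing outbound_relationships is a node actor; one
        # exposing source_node is a relationship actor; anything else fails
        if hasattr(actor, 'outbound_relationships'):
            return 'node'
        elif hasattr(actor, 'source_node'):
            return 'relationship'
        raise RuntimeError('No operation context could be created for {0}'.format(actor))

    assert context_kind(Node()) == 'node'
    assert context_kind(Relationship()) == 'relationship'
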
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/context/operation.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/context/operation.py b/aria/orchestrator/context/operation.py
index efdc04d..7477912 100644
--- a/aria/orchestrator/context/operation.py
+++ b/aria/orchestrator/context/operation.py
@@ -58,6 +58,9 @@ class BaseOperationContext(common.BaseContext):
             self._thread_local.task = self.model.task.get(self._task_id)
         return self._thread_local.task
 
+    def update_task(self):
+        self.model.task.update(self.task)
+
     @property
     def plugin_workdir(self):
         """

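The new update_task helper above pushes the context's cached task object back through model storage, persisting any mutations made on ctx.task. A minimal sketch of the same fetch-mutate-update round trip, with a hypothetical in-memory store standing in for ARIA's model storage:

    # A sketch of the fetch-mutate-update round trip behind ctx.update_task().
    # Task and FakeTaskStore are hypothetical, not ARIA classes.
    class Task(object):
        def __init__(self, id):
            self.id = id
            self.status = 'pending'

    class FakeTaskStore(object):
        def __init__(self):
            self._records = {}

        def get(self, task_id):
            return self._records[task_id]

        def update(self, task):
            self._records[task.id] = task

    store = FakeTaskStore()
    store.update(Task('t1'))

    task = store.get('t1')   # analogous to ctx.task: fetched once and cached
    task.status = 'started'  # a local mutation only
    store.update(task)       # analogous to ctx.update_task(): persist it
    assert store.get('t1').status == 'started'
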
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflow_runner.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflow_runner.py b/aria/orchestrator/workflow_runner.py
index 848c59b..f09cb79 100644
--- a/aria/orchestrator/workflow_runner.py
+++ b/aria/orchestrator/workflow_runner.py
@@ -21,11 +21,15 @@ import os
 import sys
 from datetime import datetime
 
+from networkx import DiGraph
+
 from . import exceptions
 from .context.workflow import WorkflowContext
 from .workflows import builtin
 from .workflows.core.engine import Engine
 from .workflows.executor.process import ProcessExecutor
+from .workflows.executor.base import StubTaskExecutor
+from .workflows.api import task
 from ..modeling import models
 from ..modeling import utils as modeling_utils
 from ..utils.imports import import_fullname
@@ -80,15 +84,21 @@ class WorkflowRunner(object):
             task_max_attempts=task_max_attempts,
             task_retry_interval=task_retry_interval)
 
+        # Set a default executor if none was provided
+        executor = executor or ProcessExecutor(plugin_manager=plugin_manager)
+
         # transforming the execution inputs to dict, to pass them to the workflow function
         execution_inputs_dict = dict(inp.unwrapped for inp in self.execution.inputs.values())
+
         self._tasks_graph = workflow_fn(ctx=workflow_context, **execution_inputs_dict)
+        construct_execution_tasks(self.execution, self._tasks_graph, executor.__class__)
 
-        executor = executor or ProcessExecutor(plugin_manager=plugin_manager)
-        self._engine = Engine(
-            executor=executor,
-            workflow_context=workflow_context,
-            tasks_graph=self._tasks_graph)
+        # Update the state
+        self._model_storage.execution.update(execution)
+
+        self._engine = Engine(executor=executor,
+                              workflow_context=workflow_context,
+                              execution_graph=get_execution_graph(self.execution))
 
     @property
     def execution_id(self):
@@ -166,3 +176,101 @@ class WorkflowRunner(object):
                     self._workflow_name, workflow.function))
 
         return workflow_fn
+
+
+def get_execution_graph(execution):
+    graph = DiGraph()
+    for task in execution.tasks:
+        for dependency in task.dependencies:
+            graph.add_edge(dependency, task)
+
+    return graph
+
+
+def construct_execution_tasks(execution,
+                              task_graph,
+                              default_executor,
+                              stub_executor=StubTaskExecutor,
+                              start_stub_type=models.Task.START_WORKFLOW,
+                              end_stub_type=models.Task.END_WORKFLOW,
+                              depends_on=()):
+    """
+    Translates the user's task graph into Task models attached to the execution
+    :param task_graph: the user's task graph
+    :param start_stub_type: internal use
+    :param end_stub_type: internal use
+    :param depends_on: internal use
+    """
+    depends_on = list(depends_on)
+
+    # Insert start marker
+    start_task = models.Task(api_id=_start_graph_suffix(task_graph.id),
+                             _executor=stub_executor,
+                             execution=execution,
+                             stub_type=start_stub_type,
+                             dependencies=depends_on)
+
+    for api_task in task_graph.topological_order(reverse=True):
+        operation_dependencies = _get_tasks_from_dependencies(
+            execution, task_graph.get_dependencies(api_task), [start_task])
+
+        if isinstance(api_task, task.OperationTask):
+            models.Task.from_api_task(api_task=api_task,
+                                      executor=default_executor,
+                                      dependencies=operation_dependencies)
+
+        elif isinstance(api_task, task.WorkflowTask):
+            # Build the graph recursively while adding start and end markers
+            construct_execution_tasks(
+                execution=execution,
+                task_graph=api_task,
+                default_executor=default_executor,
+                stub_executor=stub_executor,
+                start_stub_type=models.Task.START_SUBWORKFLOW,
+                end_stub_type=models.Task.END_SUBWORKFLOW,
+                depends_on=operation_dependencies
+            )
+        elif isinstance(api_task, task.StubTask):
+            models.Task(api_id=api_task.id,
+                        _executor=stub_executor,
+                        execution=execution,
+                        stub_type=models.Task.STUB,
+                        dependencies=operation_dependencies)
+        else:
+            raise RuntimeError('Unsupported task type: {0}'.format(type(api_task).__name__))
+
+    # Insert end marker
+    models.Task(api_id=_end_graph_suffix(task_graph.id),
+                _executor=stub_executor,
+                execution=execution,
+                stub_type=end_stub_type,
+                dependencies=_get_non_dependent_tasks(execution) or [start_task])
+
+
+def _start_graph_suffix(api_id):
+    return '{0}-Start'.format(api_id)
+
+
+def _end_graph_suffix(api_id):
+    return '{0}-End'.format(api_id)
+
+
+def _get_non_dependent_tasks(execution):
+    dependency_tasks = set()
+    for task in execution.tasks:
+        dependency_tasks.update(task.dependencies)
+    return list(set(execution.tasks) - set(dependency_tasks))
+
+
+def _get_tasks_from_dependencies(execution, dependencies, default=()):
+        """
+        Returns task list from dependencies.
+        """
+        tasks = []
+        for dependency in dependencies:
+            if getattr(dependency, 'actor', False):
+                dependency_name = dependency.id
+            else:
+                dependency_name = _end_graph_suffix(dependency.id)
+            tasks.extend(task for task in execution.tasks if task.api_id == dependency_name)
+        return tasks or default

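construct_execution_tasks above persists the translated tasks with dependency links, and get_execution_graph then rebuilds a networkx DiGraph whose edges point from a dependency to its dependent, so a node with no incoming edges has no unmet dependencies. A minimal sketch of that wiring, with a hypothetical SimpleTask standing in for the persisted Task model:

    # A sketch of the dependency graph built by get_execution_graph.
    # SimpleTask is a hypothetical stand-in for the persisted Task model.
    from networkx import DiGraph

    class SimpleTask(object):
        def __init__(self, api_id, dependencies=()):
            self.api_id = api_id
            self.dependencies = list(dependencies)

    start = SimpleTask('demo-Start')
    work = SimpleTask('operation', dependencies=[start])
    end = SimpleTask('demo-End', dependencies=[work])

    graph = DiGraph()
    for task in (start, work, end):
        for dependency in task.dependencies:
            graph.add_edge(dependency, task)  # edges point dependency -> dependent

    # a node with no incoming edges has no unmet dependencies
    ready = [t.api_id for t in graph.nodes() if graph.in_degree(t) == 0]
    assert ready == ['demo-Start']
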
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/core/__init__.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/core/__init__.py b/aria/orchestrator/workflows/core/__init__.py
index e377153..81db43f 100644
--- a/aria/orchestrator/workflows/core/__init__.py
+++ b/aria/orchestrator/workflows/core/__init__.py
@@ -17,4 +17,4 @@
 Core for the workflow execution mechanism
 """
 
-from . import task, translation, engine
+from . import engine

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/core/_task.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/core/_task.py b/aria/orchestrator/workflows/core/_task.py
new file mode 100644
index 0000000..399c177
--- /dev/null
+++ b/aria/orchestrator/workflows/core/_task.py
@@ -0,0 +1,267 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Workflow tasks
+"""
+
+from contextlib import contextmanager
+from datetime import datetime
+from functools import (
+    partial,
+    wraps,
+)
+
+
+from ....modeling import models
+from ...context import operation as operation_context
+from .. import exceptions
+
+
+def _locked(func=None):
+    if func is None:
+        return partial(_locked, func=_locked)
+
+    @wraps(func)
+    def _wrapper(self, value, **kwargs):
+        if self._update_fields is None:
+            raise exceptions.TaskException('Task is not in update mode')
+        return func(self, value, **kwargs)
+    return _wrapper
+
+
+class BaseTask(object):
+    """
+    Base class for Task objects
+    """
+
+    def __init__(self, id, executor, *args, **kwargs):
+        super(BaseTask, self).__init__(*args, **kwargs)
+        self._id = id
+        self._executor = executor
+
+    def execute(self):
+        return self._executor.execute(self)
+
+    @property
+    def id(self):
+        """
+        :return: the task's id
+        """
+        return self._id
+
+
+class StubTask(BaseTask):
+    """
+    Base stub task for marker user tasks that only mark the start/end of a workflow
+    or sub-workflow
+    """
+    STARTED = models.Task.STARTED
+    SUCCESS = models.Task.SUCCESS
+
+    def __init__(self, *args, **kwargs):
+        super(StubTask, self).__init__(*args, **kwargs)
+        self.status = models.Task.PENDING
+        self.due_at = datetime.utcnow()
+
+
+class StartWorkflowTask(StubTask):
+    """
+    Task marking a workflow start
+    """
+    pass
+
+
+class EndWorkflowTask(StubTask):
+    """
+    Task marking a workflow end
+    """
+    pass
+
+
+class StartSubWorkflowTask(StubTask):
+    """
+    Task marking a subworkflow start
+    """
+    pass
+
+
+class EndSubWorkflowTask(StubTask):
+    """
+    Task marking a subworkflow end
+    """
+    pass
+
+
+class OperationTask(BaseTask):
+    """
+    Operation task
+    """
+    def __init__(self, api_task, *args, **kwargs):
+        # If no executor is provided, we infer that this is an empty task which does not need to be
+        # executed.
+        super(OperationTask, self).__init__(id=api_task.id, *args, **kwargs)
+        self._workflow_context = api_task._workflow_context
+        self.interface_name = api_task.interface_name
+        self.operation_name = api_task.operation_name
+        model_storage = api_task._workflow_context.model
+
+        actor = getattr(api_task.actor, '_wrapped', api_task.actor)
+
+        base_task_model = model_storage.task.model_cls
+        if isinstance(actor, models.Node):
+            context_cls = operation_context.NodeOperationContext
+            create_task_model = base_task_model.for_node
+        elif isinstance(actor, models.Relationship):
+            context_cls = operation_context.RelationshipOperationContext
+            create_task_model = base_task_model.for_relationship
+        else:
+            raise RuntimeError('No operation context could be created for {actor.model_cls}'
+                               .format(actor=actor))
+
+        task_model = create_task_model(
+            name=api_task.name,
+            actor=actor,
+            status=base_task_model.PENDING,
+            max_attempts=api_task.max_attempts,
+            retry_interval=api_task.retry_interval,
+            ignore_failure=api_task.ignore_failure,
+            execution=self._workflow_context.execution,
+
+            # Only non-stub tasks have these fields
+            plugin=api_task.plugin,
+            function=api_task.function,
+            arguments=api_task.arguments
+        )
+        self._workflow_context.model.task.put(task_model)
+
+        self._ctx = context_cls(name=api_task.name,
+                                model_storage=self._workflow_context.model,
+                                resource_storage=self._workflow_context.resource,
+                                service_id=self._workflow_context._service_id,
+                                task_id=task_model.id,
+                                actor_id=actor.id,
+                                execution_id=self._workflow_context._execution_id,
+                                workdir=self._workflow_context._workdir)
+        self._task_id = task_model.id
+        self._update_fields = None
+
+    @contextmanager
+    def _update(self):
+        """
+        A context manager which puts the task into update mode, enabling fields update.
+        :yields: None
+        """
+        self._update_fields = {}
+        try:
+            yield
+            for key, value in self._update_fields.items():
+                setattr(self.model_task, key, value)
+            self.model_task = self.model_task
+        finally:
+            self._update_fields = None
+
+    @property
+    def model_task(self):
+        """
+        Returns the task model in storage
+        :return: task in storage
+        """
+        return self._workflow_context.model.task.get(self._task_id)
+
+    @model_task.setter
+    def model_task(self, value):
+        self._workflow_context.model.task.put(value)
+
+    @property
+    def context(self):
+        """
+        The context of the operation
+        :return: the operation context
+        """
+        return self._ctx
+
+    @property
+    def status(self):
+        """
+        Returns the task status
+        :return: task status
+        """
+        return self.model_task.status
+
+    @status.setter
+    @_locked
+    def status(self, value):
+        self._update_fields['status'] = value
+
+    @property
+    def started_at(self):
+        """
+        Returns when the task started
+        :return: when task started
+        """
+        return self.model_task.started_at
+
+    @started_at.setter
+    @_locked
+    def started_at(self, value):
+        self._update_fields['started_at'] = value
+
+    @property
+    def ended_at(self):
+        """
+        Returns when the task ended
+        :return: when task ended
+        """
+        return self.model_task.ended_at
+
+    @ended_at.setter
+    @_locked
+    def ended_at(self, value):
+        self._update_fields['ended_at'] = value
+
+    @property
+    def attempts_count(self):
+        """
+        Returns the attempts count for the task
+        :return: attempts count
+        """
+        return self.model_task.attempts_count
+
+    @attempts_count.setter
+    @_locked
+    def attempts_count(self, value):
+        self._update_fields['attempts_count'] = value
+
+    @property
+    def due_at(self):
+        """
+        Returns the minimum datetime in which the task can be executed
+        :return: eta
+        """
+        return self.model_task.due_at
+
+    @due_at.setter
+    @_locked
+    def due_at(self, value):
+        self._update_fields['due_at'] = value
+
+    def __getattr__(self, attr):
+        try:
+            return getattr(self.model_task, attr)
+        except AttributeError:
+            return super(OperationTask, self).__getattribute__(attr)

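The _locked decorator and _update context manager above implement a guarded-update protocol: property setters may only run inside an _update block, where they buffer writes into _update_fields; on a clean exit the buffer is flushed onto model_task, and the self-assignment of model_task routes through its setter to persist the result through storage. A minimal sketch of the same pattern around a hypothetical Counter class:

    # A sketch of the guarded-update pattern used by OperationTask.
    # Counter is a hypothetical example class, not part of ARIA.
    from contextlib import contextmanager
    from functools import wraps

    def locked(func):
        @wraps(func)
        def wrapper(self, value):
            if self._update_fields is None:
                raise RuntimeError('not in update mode')
            return func(self, value)
        return wrapper

    class Counter(object):
        def __init__(self):
            self._stored = 0
            self._update_fields = None

        @contextmanager
        def update(self):
            self._update_fields = {}
            try:
                yield
                # flush buffered writes only on a clean exit from the block
                self._stored = self._update_fields.get('value', self._stored)
            finally:
                self._update_fields = None

        @property
        def value(self):
            return self._stored

        @value.setter
        @locked
        def value(self, value):
            self._update_fields['value'] = value

    c = Counter()
    with c.update():
        c.value = 5       # buffered; would raise outside an update() block
    assert c.value == 5
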
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/core/engine.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/core/engine.py b/aria/orchestrator/workflows/core/engine.py
index 3a96804..e1b6412 100644
--- a/aria/orchestrator/workflows/core/engine.py
+++ b/aria/orchestrator/workflows/core/engine.py
@@ -20,15 +20,12 @@ The workflow engine. Executes workflows
 import time
 from datetime import datetime
 
-import networkx
-
 from aria import logger
 from aria.modeling import models
 from aria.orchestrator import events
+from aria.orchestrator.context import operation
 
 from .. import exceptions
-from . import task as engine_task
-from . import translation
 # Import required so all signals are registered
 from . import events_handler  # pylint: disable=unused-import
 
@@ -38,13 +35,11 @@ class Engine(logger.LoggerMixin):
     The workflow engine. Executes workflows
     """
 
-    def __init__(self, executor, workflow_context, tasks_graph, **kwargs):
+    def __init__(self, executor, workflow_context, execution_graph, **kwargs):
         super(Engine, self).__init__(**kwargs)
         self._workflow_context = workflow_context
-        self._execution_graph = networkx.DiGraph()
-        translation.build_execution_graph(task_graph=tasks_graph,
-                                          execution_graph=self._execution_graph,
-                                          default_executor=executor)
+        self._executors = {executor.__class__: executor}
+        self._execution_graph = execution_graph
 
     def execute(self):
         """
@@ -79,43 +74,57 @@ class Engine(logger.LoggerMixin):
         will be modified to 'cancelled' directly.
         """
         events.on_cancelling_workflow_signal.send(self._workflow_context)
+        self._workflow_context.execution = self._workflow_context.execution
 
     def _is_cancel(self):
-        return self._workflow_context.execution.status in (models.Execution.CANCELLING,
-                                                           models.Execution.CANCELLED)
+        execution = self._workflow_context.model.execution.update(self._workflow_context.execution)
+        return execution.status in (models.Execution.CANCELLING, models.Execution.CANCELLED)
 
     def _executable_tasks(self):
         now = datetime.utcnow()
-        return (task for task in self._tasks_iter()
-                if task.is_waiting() and
-                task.due_at <= now and
-                not self._task_has_dependencies(task))
+        return (
+            task for task in self._tasks_iter()
+            if task.is_waiting() and task.due_at <= now and not self._task_has_dependencies(task)
+        )
 
     def _ended_tasks(self):
-        return (task for task in self._tasks_iter() if task.has_ended())
+        return (task for task in self._tasks_iter()
+                if task.has_ended() and task in self._execution_graph)
 
     def _task_has_dependencies(self, task):
-        return len(self._execution_graph.pred.get(task.id, {})) > 0
+        return task.dependencies and any(d in self._execution_graph for d in task.dependencies)
 
     def _all_tasks_consumed(self):
         return len(self._execution_graph.node) == 0
 
     def _tasks_iter(self):
-        for _, data in self._execution_graph.nodes_iter(data=True):
-            task = data['task']
-            if isinstance(task, engine_task.OperationTask):
-                if not task.model_task.has_ended():
-                    self._workflow_context.model.task.refresh(task.model_task)
-            yield task
-
-    @staticmethod
-    def _handle_executable_task(task):
-        if isinstance(task, engine_task.OperationTask):
+        for task in self._workflow_context.execution.tasks:
+            yield self._workflow_context.model.task.refresh(task)
+
+    def _handle_executable_task(self, task):
+        if not task.stub_type:
             events.sent_task_signal.send(task)
-        task.execute()
+
+        if task._executor not in self._executors:
+            self._executors[task._executor] = task._executor()
+        executor = self._executors[task._executor]
+
+        context_cls = task._context_cls or operation.BaseOperationContext
+        op_ctx = context_cls(
+            model_storage=self._workflow_context.model,
+            resource_storage=self._workflow_context.resource,
+            workdir=self._workflow_context._workdir,
+            task_id=task.id,
+            actor_id=task.actor.id if task.actor else None,
+            service_id=task.execution.service.id,
+            execution_id=task.execution.id,
+            name=task.name
+        )
+
+        executor.execute(op_ctx)
 
     def _handle_ended_tasks(self, task):
         if task.status == models.Task.FAILED and not task.ignore_failure:
             raise exceptions.ExecutorException('Workflow failed')
         else:
-            self._execution_graph.remove_node(task.id)
+            self._execution_graph.remove_node(task)

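In the engine above, a task is executable when it is still waiting, its due_at has arrived, and no dependency of it remains in the execution graph; cancellation works by re-assigning workflow_context.execution, which presumably persists the updated status through the context's setter. A minimal sketch of the executable-task filter, with hypothetical task stubs and the dependency check simplified to the task's own list:

    # A sketch of the engine's executable-task filter; StubbedTask is a
    # hypothetical stand-in, and the dependency check is simplified: the
    # real engine tests dependencies against the remaining execution graph.
    from datetime import datetime, timedelta

    class StubbedTask(object):
        def __init__(self, due_in=0, waiting=True, dependencies=()):
            self.due_at = datetime.utcnow() + timedelta(seconds=due_in)
            self._waiting = waiting
            self.dependencies = list(dependencies)

        def is_waiting(self):
            return self._waiting

    def executable_tasks(tasks):
        now = datetime.utcnow()
        return [task for task in tasks
                if task.is_waiting() and task.due_at <= now and not task.dependencies]

    done = StubbedTask(waiting=False)
    ready = StubbedTask()
    blocked = StubbedTask(dependencies=[done])
    deferred = StubbedTask(due_in=60)
    assert executable_tasks([done, ready, blocked, deferred]) == [ready]
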
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/core/events_handler.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/core/events_handler.py b/aria/orchestrator/workflows/core/events_handler.py
index 669fb43..8b217f5 100644
--- a/aria/orchestrator/workflows/core/events_handler.py
+++ b/aria/orchestrator/workflows/core/events_handler.py
@@ -31,50 +31,47 @@ from ... import exceptions
 
 @events.sent_task_signal.connect
 def _task_sent(task, *args, **kwargs):
-    with task._update():
-        task.status = task.SENT
+    task.status = task.SENT
 
 
 @events.start_task_signal.connect
-def _task_started(task, *args, **kwargs):
-    with task._update():
-        task.started_at = datetime.utcnow()
-        task.status = task.STARTED
-    _update_node_state_if_necessary(task, is_transitional=True)
+def _task_started(ctx, *args, **kwargs):
+    ctx.task.started_at = datetime.utcnow()
+    ctx.task.status = ctx.task.STARTED
+    _update_node_state_if_necessary(ctx, is_transitional=True)
 
 
 @events.on_failure_task_signal.connect
-def _task_failed(task, exception, *args, **kwargs):
-    with task._update():
-        should_retry = all([
-            not isinstance(exception, exceptions.TaskAbortException),
-            task.attempts_count < task.max_attempts or task.max_attempts == task.INFINITE_RETRIES,
-            # ignore_failure check here means the task will not be retries and it will be marked
-            # as failed. The engine will also look at ignore_failure so it won't fail the
-            # workflow.
-            not task.ignore_failure
-        ])
-        if should_retry:
-            retry_interval = None
-            if isinstance(exception, exceptions.TaskRetryException):
-                retry_interval = exception.retry_interval
-            if retry_interval is None:
-                retry_interval = task.retry_interval
-            task.status = task.RETRYING
-            task.attempts_count += 1
-            task.due_at = datetime.utcnow() + timedelta(seconds=retry_interval)
-        else:
-            task.ended_at = datetime.utcnow()
-            task.status = task.FAILED
+def _task_failed(ctx, exception, *args, **kwargs):
+    should_retry = all([
+        not isinstance(exception, exceptions.TaskAbortException),
+        ctx.task.attempts_count < ctx.task.max_attempts or
+        ctx.task.max_attempts == ctx.task.INFINITE_RETRIES,
+        # ignore_failure check here means the task will not be retried and it will be marked
+        # as failed. The engine will also look at ignore_failure so it won't fail the
+        # workflow.
+        not ctx.task.ignore_failure
+    ])
+    if should_retry:
+        retry_interval = None
+        if isinstance(exception, exceptions.TaskRetryException):
+            retry_interval = exception.retry_interval
+        if retry_interval is None:
+            retry_interval = ctx.task.retry_interval
+        ctx.task.status = ctx.task.RETRYING
+        ctx.task.attempts_count += 1
+        ctx.task.due_at = datetime.utcnow() + timedelta(seconds=retry_interval)
+    else:
+        ctx.task.ended_at = datetime.utcnow()
+        ctx.task.status = ctx.task.FAILED
 
 
 @events.on_success_task_signal.connect
-def _task_succeeded(task, *args, **kwargs):
-    with task._update():
-        task.ended_at = datetime.utcnow()
-        task.status = task.SUCCESS
+def _task_succeeded(ctx, *args, **kwargs):
+    ctx.task.ended_at = datetime.utcnow()
+    ctx.task.status = ctx.task.SUCCESS
 
-    _update_node_state_if_necessary(task)
+    _update_node_state_if_necessary(ctx)
 
 
 @events.start_workflow_signal.connect
@@ -133,17 +130,17 @@ def _workflow_cancelling(workflow_context, *args, **kwargs):
         workflow_context.execution = execution
 
 
-def _update_node_state_if_necessary(task, is_transitional=False):
+def _update_node_state_if_necessary(ctx, is_transitional=False):
     # TODO: this is not the right way to check! the interface name is arbitrary
     # and also will *never* be the type name
-    model_task = task.model_task
-    node = model_task.node if model_task is not None else None
+    node = ctx.task.node if ctx.task is not None else None
     if (node is not None) and \
-        (task.interface_name in ('Standard', 'tosca.interfaces.node.lifecycle.Standard')):
-        state = node.determine_state(op_name=task.operation_name, is_transitional=is_transitional)
+        (ctx.task.interface_name in ('Standard', 'tosca.interfaces.node.lifecycle.Standard')):
+        state = node.determine_state(op_name=ctx.task.operation_name,
+                                     is_transitional=is_transitional)
         if state:
             node.state = state
-            task.context.model.node.update(node)
+            ctx.model.node.update(node)
 
 
 def _log_tried_to_cancel_execution_but_it_already_ended(workflow_context, status):

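The failure handler above boils down to one decision: abort exceptions and ignore_failure fail the task immediately, anything else is rescheduled until max_attempts runs out (with INFINITE_RETRIES lifting the cap), using the exception's retry interval when it carries one. A standalone sketch of that decision with stand-in names (the dict and the sentinel here are illustrative, not the patch's models):

    from datetime import datetime, timedelta

    INFINITE_RETRIES = -1  # stand-in sentinel; the patch uses task.INFINITE_RETRIES

    class TaskAbortException(Exception):
        pass

    class TaskRetryException(Exception):
        def __init__(self, message, retry_interval=None):
            super(TaskRetryException, self).__init__(message)
            self.retry_interval = retry_interval

    def resolve_failure(task, exception):
        """Return ('retrying', due_at) or ('failed', ended_at) for a failed task."""
        retriable = (
            not isinstance(exception, TaskAbortException) and
            (task['attempts_count'] < task['max_attempts'] or
             task['max_attempts'] == INFINITE_RETRIES) and
            not task['ignore_failure'])
        if not retriable:
            return 'failed', datetime.utcnow()
        interval = getattr(exception, 'retry_interval', None)
        if interval is None:
            interval = task['retry_interval']
        return 'retrying', datetime.utcnow() + timedelta(seconds=interval)

    # e.g. a second attempt becomes due 30 seconds from now:
    print(resolve_failure(
        {'attempts_count': 1, 'max_attempts': 3,
         'ignore_failure': False, 'retry_interval': 30},
        TaskRetryException('busy')))
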
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/core/task.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/core/task.py b/aria/orchestrator/workflows/core/task.py
deleted file mode 100644
index d732f09..0000000
--- a/aria/orchestrator/workflows/core/task.py
+++ /dev/null
@@ -1,271 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Workflow tasks
-"""
-
-from contextlib import contextmanager
-from datetime import datetime
-from functools import (
-    partial,
-    wraps,
-)
-
-
-from ....modeling import models
-from ...context import operation as operation_context
-from .. import exceptions
-
-
-def _locked(func=None):
-    if func is None:
-        return partial(_locked, func=_locked)
-
-    @wraps(func)
-    def _wrapper(self, value, **kwargs):
-        if self._update_fields is None:
-            raise exceptions.TaskException('Task is not in update mode')
-        return func(self, value, **kwargs)
-    return _wrapper
-
-
-class BaseTask(object):
-    """
-    Base class for Task objects
-    """
-
-    def __init__(self, id, executor, *args, **kwargs):
-        super(BaseTask, self).__init__(*args, **kwargs)
-        self._id = id
-        self._executor = executor
-
-    def execute(self):
-        return self._executor.execute(self)
-
-    @property
-    def id(self):
-        """
-        :return: the task's id
-        """
-        return self._id
-
-
-class StubTask(BaseTask):
-    """
-    Base stub task for marker user tasks that only mark the start/end of a workflow
-    or sub-workflow
-    """
-    STARTED = models.Task.STARTED
-    SUCCESS = models.Task.SUCCESS
-
-    def __init__(self, *args, **kwargs):
-        super(StubTask, self).__init__(*args, **kwargs)
-        self.status = models.Task.PENDING
-        self.due_at = datetime.utcnow()
-
-    def has_ended(self):
-        return self.status == self.SUCCESS
-
-    def is_waiting(self):
-        return not self.has_ended()
-
-
-class StartWorkflowTask(StubTask):
-    """
-    Task marking a workflow start
-    """
-    pass
-
-
-class EndWorkflowTask(StubTask):
-    """
-    Task marking a workflow end
-    """
-    pass
-
-
-class StartSubWorkflowTask(StubTask):
-    """
-    Task marking a subworkflow start
-    """
-    pass
-
-
-class EndSubWorkflowTask(StubTask):
-    """
-    Task marking a subworkflow end
-    """
-    pass
-
-
-class OperationTask(BaseTask):
-    """
-    Operation task
-    """
-    def __init__(self, api_task, *args, **kwargs):
-        # If no executor is provided, we infer that this is an empty task which does not need to be
-        # executed.
-        super(OperationTask, self).__init__(id=api_task.id, *args, **kwargs)
-        self._workflow_context = api_task._workflow_context
-        self.interface_name = api_task.interface_name
-        self.operation_name = api_task.operation_name
-        model_storage = api_task._workflow_context.model
-
-        actor = getattr(api_task.actor, '_wrapped', api_task.actor)
-
-        base_task_model = model_storage.task.model_cls
-        if isinstance(actor, models.Node):
-            context_cls = operation_context.NodeOperationContext
-            create_task_model = base_task_model.for_node
-        elif isinstance(actor, models.Relationship):
-            context_cls = operation_context.RelationshipOperationContext
-            create_task_model = base_task_model.for_relationship
-        else:
-            raise RuntimeError('No operation context could be created for {actor.model_cls}'
-                               .format(actor=actor))
-
-        task_model = create_task_model(
-            name=api_task.name,
-            actor=actor,
-            status=base_task_model.PENDING,
-            max_attempts=api_task.max_attempts,
-            retry_interval=api_task.retry_interval,
-            ignore_failure=api_task.ignore_failure,
-            execution=self._workflow_context.execution,
-
-            # Only non-stub tasks have these fields
-            plugin=api_task.plugin,
-            function=api_task.function,
-            arguments=api_task.arguments
-        )
-        self._workflow_context.model.task.put(task_model)
-
-        self._ctx = context_cls(name=api_task.name,
-                                model_storage=self._workflow_context.model,
-                                resource_storage=self._workflow_context.resource,
-                                service_id=self._workflow_context._service_id,
-                                task_id=task_model.id,
-                                actor_id=actor.id,
-                                execution_id=self._workflow_context._execution_id,
-                                workdir=self._workflow_context._workdir)
-        self._task_id = task_model.id
-        self._update_fields = None
-
-    @contextmanager
-    def _update(self):
-        """
-        A context manager which puts the task into update mode, enabling fields update.
-        :yields: None
-        """
-        self._update_fields = {}
-        try:
-            yield
-            for key, value in self._update_fields.items():
-                setattr(self.model_task, key, value)
-            self.model_task = self.model_task
-        finally:
-            self._update_fields = None
-
-    @property
-    def model_task(self):
-        """
-        Returns the task model in storage
-        :return: task in storage
-        """
-        return self._workflow_context.model.task.get(self._task_id)
-
-    @model_task.setter
-    def model_task(self, value):
-        self._workflow_context.model.task.put(value)
-
-    @property
-    def context(self):
-        """
-        Contexts for the operation
-        :return:
-        """
-        return self._ctx
-
-    @property
-    def status(self):
-        """
-        Returns the task status
-        :return: task status
-        """
-        return self.model_task.status
-
-    @status.setter
-    @_locked
-    def status(self, value):
-        self._update_fields['status'] = value
-
-    @property
-    def started_at(self):
-        """
-        Returns when the task started
-        :return: when task started
-        """
-        return self.model_task.started_at
-
-    @started_at.setter
-    @_locked
-    def started_at(self, value):
-        self._update_fields['started_at'] = value
-
-    @property
-    def ended_at(self):
-        """
-        Returns when the task ended
-        :return: when task ended
-        """
-        return self.model_task.ended_at
-
-    @ended_at.setter
-    @_locked
-    def ended_at(self, value):
-        self._update_fields['ended_at'] = value
-
-    @property
-    def attempts_count(self):
-        """
-        Returns the attempts count for the task
-        :return: attempts count
-        """
-        return self.model_task.attempts_count
-
-    @attempts_count.setter
-    @_locked
-    def attempts_count(self, value):
-        self._update_fields['attempts_count'] = value
-
-    @property
-    def due_at(self):
-        """
-        Returns the minimum datetime in which the task can be executed
-        :return: eta
-        """
-        return self.model_task.due_at
-
-    @due_at.setter
-    @_locked
-    def due_at(self, value):
-        self._update_fields['due_at'] = value
-
-    def __getattr__(self, attr):
-        try:
-            return getattr(self.model_task, attr)
-        except AttributeError:
-            return super(OperationTask, self).__getattribute__(attr)

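Worth noting before it disappears: the deleted OperationTask mediated every attribute write through an explicit "update mode", buffering field changes and flushing them to storage in a single write-back on exit. The pattern in isolation, as a generic sketch with a plain dict standing in for model storage (none of these names are ARIA's):

    from contextlib import contextmanager

    class GuardedRecord(object):
        """Buffers field writes and flushes them in one write-back,
        like the deleted wrapper's _update()/_locked pair."""

        def __init__(self, store, key):
            self._store = store          # a dict standing in for model storage
            self._key = key
            self._update_fields = None   # None == not in update mode

        def set_field(self, name, value):
            if self._update_fields is None:
                raise RuntimeError('record is not in update mode')
            self._update_fields[name] = value

        @contextmanager
        def update(self):
            self._update_fields = {}
            try:
                yield self
                record = dict(self._store[self._key])
                record.update(self._update_fields)
                self._store[self._key] = record   # single flush to storage
            finally:
                self._update_fields = None

    store = {'task-1': {'status': 'pending'}}
    record = GuardedRecord(store, 'task-1')
    with record.update() as r:
        r.set_field('status', 'started')
    assert store['task-1']['status'] == 'started'
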
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/core/translation.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/core/translation.py b/aria/orchestrator/workflows/core/translation.py
deleted file mode 100644
index fec108b..0000000
--- a/aria/orchestrator/workflows/core/translation.py
+++ /dev/null
@@ -1,109 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Translation of user graph's API to the execution graph
-"""
-
-from .. import api
-from ..executor import base
-from . import task as core_task
-
-
-def build_execution_graph(
-        task_graph,
-        execution_graph,
-        default_executor,
-        start_cls=core_task.StartWorkflowTask,
-        end_cls=core_task.EndWorkflowTask,
-        depends_on=()):
-    """
-    Translates the user graph to the execution graph
-    :param task_graph: The user's graph
-    :param workflow_context: The workflow
-    :param execution_graph: The execution graph that is being built
-    :param start_cls: internal use
-    :param end_cls: internal use
-    :param depends_on: internal use
-    """
-    # Insert start marker
-    start_task = start_cls(id=_start_graph_suffix(task_graph.id), executor=base.StubTaskExecutor())
-    _add_task_and_dependencies(execution_graph, start_task, depends_on)
-
-    for api_task in task_graph.topological_order(reverse=True):
-        dependencies = task_graph.get_dependencies(api_task)
-        operation_dependencies = _get_tasks_from_dependencies(
-            execution_graph, dependencies, default=[start_task])
-
-        if isinstance(api_task, api.task.OperationTask):
-            operation_task = core_task.OperationTask(api_task, executor=default_executor)
-            _add_task_and_dependencies(execution_graph, operation_task, operation_dependencies)
-        elif isinstance(api_task, api.task.WorkflowTask):
-            # Build the graph recursively while adding start and end markers
-            build_execution_graph(
-                task_graph=api_task,
-                execution_graph=execution_graph,
-                default_executor=default_executor,
-                start_cls=core_task.StartSubWorkflowTask,
-                end_cls=core_task.EndSubWorkflowTask,
-                depends_on=operation_dependencies
-            )
-        elif isinstance(api_task, api.task.StubTask):
-            stub_task = core_task.StubTask(id=api_task.id, executor=base.StubTaskExecutor())
-            _add_task_and_dependencies(execution_graph, stub_task, operation_dependencies)
-        else:
-            raise RuntimeError('Undefined state')
-
-    # Insert end marker
-    workflow_dependencies = _get_tasks_from_dependencies(
-        execution_graph,
-        _get_non_dependency_tasks(task_graph),
-        default=[start_task])
-    end_task = end_cls(id=_end_graph_suffix(task_graph.id), executor=base.StubTaskExecutor())
-    _add_task_and_dependencies(execution_graph, end_task, workflow_dependencies)
-
-
-def _add_task_and_dependencies(execution_graph, operation_task, operation_dependencies=()):
-    execution_graph.add_node(operation_task.id, task=operation_task)
-    for dependency in operation_dependencies:
-        execution_graph.add_edge(dependency.id, operation_task.id)
-
-
-def _get_tasks_from_dependencies(execution_graph, dependencies, default=()):
-    """
-    Returns task list from dependencies.
-    """
-    tasks = []
-    for dependency in dependencies:
-        if isinstance(dependency, (api.task.OperationTask, api.task.StubTask)):
-            dependency_id = dependency.id
-        else:
-            dependency_id = _end_graph_suffix(dependency.id)
-        tasks.append(execution_graph.node[dependency_id]['task'])
-    return tasks or default
-
-
-def _start_graph_suffix(id):
-    return '{0}-Start'.format(id)
-
-
-def _end_graph_suffix(id):
-    return '{0}-End'.format(id)
-
-
-def _get_non_dependency_tasks(graph):
-    for task in graph.tasks:
-        if len(list(graph.get_dependents(task))) == 0:
-            yield task

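The graph building deleted here moves into workflow_runner; the test changes later in this patch all repeat the same replacement sequence, collected below for orientation. This is a sketch of that usage, assuming a prepared workflow context, an API task graph and an executor exactly as those tests have them:

    from aria.orchestrator import workflow_runner
    from aria.orchestrator.workflows.core import engine

    def run(workflow_context, graph, executor):
        # Derive and persist execution tasks from the API-level graph
        workflow_runner.construct_execution_tasks(
            workflow_context.execution, graph, executor.__class__)
        # Self-assignment re-stores the execution (flushing the new tasks),
        # as the updated tests do
        workflow_context.execution = workflow_context.execution
        execution_graph = workflow_runner.get_execution_graph(
            workflow_context.execution)
        engine.Engine(executor, workflow_context, execution_graph).execute()
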
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/events_logging.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/events_logging.py b/aria/orchestrator/workflows/events_logging.py
index 036c1f7..4cee867 100644
--- a/aria/orchestrator/workflows/events_logging.py
+++ b/aria/orchestrator/workflows/events_logging.py
@@ -34,31 +34,32 @@ def _get_task_name(task):
 
 
 @events.start_task_signal.connect
-def _start_task_handler(task, **kwargs):
+def _start_task_handler(ctx, **kwargs):
     # If the task has no function this is an empty task.
-    if task.function:
+    if ctx.task.function:
         suffix = 'started...'
-        logger = task.context.logger.info
+        logger = ctx.logger.info
     else:
         suffix = 'has no implementation'
-        logger = task.context.logger.debug
+        logger = ctx.logger.debug
 
     logger('{name} {task.interface_name}.{task.operation_name} {suffix}'.format(
-        name=_get_task_name(task), task=task, suffix=suffix))
+        name=_get_task_name(ctx.task), task=ctx.task, suffix=suffix))
+
 
 @events.on_success_task_signal.connect
-def _success_task_handler(task, **kwargs):
-    if not task.function:
+def _success_task_handler(ctx, **kwargs):
+    if not ctx.task.function:
         return
-    task.context.logger.info('{name} {task.interface_name}.{task.operation_name} successful'
-                             .format(name=_get_task_name(task), task=task))
+    ctx.logger.info('{name} {task.interface_name}.{task.operation_name} successful'
+                    .format(name=_get_task_name(ctx.task), task=ctx.task))
 
 
 @events.on_failure_task_signal.connect
-def _failure_operation_handler(task, traceback, **kwargs):
-    task.context.logger.error(
+def _failure_operation_handler(ctx, traceback, **kwargs):
+    ctx.logger.error(
         '{name} {task.interface_name}.{task.operation_name} failed'
-        .format(name=_get_task_name(task), task=task), extra=dict(traceback=traceback)
+        .format(name=_get_task_name(ctx.task), task=ctx.task), extra=dict(traceback=traceback)
     )
 
 

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/executor/base.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/executor/base.py b/aria/orchestrator/workflows/executor/base.py
index 7fece6f..fc4b800 100644
--- a/aria/orchestrator/workflows/executor/base.py
+++ b/aria/orchestrator/workflows/executor/base.py
@@ -28,19 +28,20 @@ class BaseExecutor(logger.LoggerMixin):
     def _execute(self, task):
         raise NotImplementedError
 
-    def execute(self, task):
+    def execute(self, ctx):
         """
         Execute a task
         :param task: task to execute
         """
-        if task.function:
-            self._execute(task)
+        ctx.update_task()
+        if ctx.task.function:
+            self._execute(ctx)
         else:
             # In this case the task is missing a function. This task still gets to an
             # executor, but since there is nothing to run, we by default simply skip the execution
             # itself.
-            self._task_started(task)
-            self._task_succeeded(task)
+            self._task_started(ctx)
+            self._task_succeeded(ctx)
 
     def close(self):
         """
@@ -49,18 +50,19 @@ class BaseExecutor(logger.LoggerMixin):
         pass
 
     @staticmethod
-    def _task_started(task):
-        events.start_task_signal.send(task)
+    def _task_started(ctx):
+        events.start_task_signal.send(ctx)
+        ctx.update_task()
 
-    @staticmethod
-    def _task_failed(task, exception, traceback=None):
-        events.on_failure_task_signal.send(task, exception=exception, traceback=traceback)
+    def _task_failed(self, ctx, exception, traceback=None):
+        events.on_failure_task_signal.send(ctx, exception=exception, traceback=traceback)
+        ctx.update_task()
 
-    @staticmethod
-    def _task_succeeded(task):
-        events.on_success_task_signal.send(task)
+    def _task_succeeded(self, ctx):
+        events.on_success_task_signal.send(ctx)
+        ctx.update_task()
 
 
 class StubTaskExecutor(BaseExecutor):                                                               # pylint: disable=abstract-method
-    def execute(self, task):
-        task.status = task.SUCCESS
+    def execute(self, ctx, *args, **kwargs):
+        ctx.task.status = ctx.task.SUCCESS

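Under the reworked contract, execute() receives the operation context rather than a task wrapper, so a custom executor only implements _execute(ctx) and reports outcomes through the signal helpers. A minimal synchronous sketch along the lines of ThreadExecutor's loop, using the same imports the patch uses elsewhere (illustrative, not shipped code):

    import sys

    from aria.utils import imports
    from aria.orchestrator.workflows import exceptions
    from aria.orchestrator.workflows.executor import base

    class InlineExecutor(base.BaseExecutor):
        """Runs the operation function synchronously in the calling thread."""

        def _execute(self, ctx):
            self._task_started(ctx)
            try:
                task_func = imports.load_attribute(ctx.task.function)
                arguments = dict(arg.unwrapped for arg in ctx.task.arguments.values())
                task_func(ctx=ctx, **arguments)
                self._task_succeeded(ctx)
            except BaseException as e:
                self._task_failed(
                    ctx, exception=e,
                    traceback=exceptions.get_exception_as_string(*sys.exc_info()))
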
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/executor/process.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/executor/process.py b/aria/orchestrator/workflows/executor/process.py
index 634f1f2..8518b33 100644
--- a/aria/orchestrator/workflows/executor/process.py
+++ b/aria/orchestrator/workflows/executor/process.py
@@ -113,17 +113,17 @@ class ProcessExecutor(base.BaseExecutor):
         self._server_socket.close()
         self._listener_thread.join(timeout=60)
 
-    def _execute(self, task):
+    def _execute(self, ctx):
         self._check_closed()
-        self._tasks[task.id] = task
+        self._tasks[ctx.task.id] = ctx
 
         # Temporary file used to pass arguments to the started subprocess
         file_descriptor, arguments_json_path = tempfile.mkstemp(prefix='executor-', suffix='.json')
         os.close(file_descriptor)
         with open(arguments_json_path, 'wb') as f:
-            f.write(pickle.dumps(self._create_arguments_dict(task)))
+            f.write(pickle.dumps(self._create_arguments_dict(ctx)))
 
-        env = self._construct_subprocess_env(task=task)
+        env = self._construct_subprocess_env(task=ctx.task)
         # Asynchronously start the operation in a subprocess
         subprocess.Popen(
             '{0} {1} {2}'.format(sys.executable, __file__, arguments_json_path),
@@ -137,13 +137,13 @@ class ProcessExecutor(base.BaseExecutor):
         if self._stopped:
             raise RuntimeError('Executor closed')
 
-    def _create_arguments_dict(self, task):
+    def _create_arguments_dict(self, ctx):
         return {
-            'task_id': task.id,
-            'function': task.function,
-            'operation_arguments': dict(arg.unwrapped for arg in task.arguments.values()),
+            'task_id': ctx.task.id,
+            'function': ctx.task.function,
+            'operation_arguments': dict(arg.unwrapped for arg in ctx.task.arguments.values()),
             'port': self._server_port,
-            'context': task.context.serialization_dict,
+            'context': ctx.serialization_dict,
         }
 
     def _construct_subprocess_env(self, task):

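The subprocess handoff above is a generic serialize-to-tempfile pattern: pickle the payload to a file, close the descriptor, and let the child reopen it by path. The same idea in isolation (self-contained, with a throwaway child program rather than the executor's entry point):

    import os
    import pickle
    import subprocess
    import sys
    import tempfile

    def spawn_with_arguments(arguments):
        """Hand a picklable payload to a child process via a temp file."""
        fd, path = tempfile.mkstemp(prefix='executor-', suffix='.json')
        os.close(fd)                       # reopened by path, as the patch does
        with open(path, 'wb') as f:
            f.write(pickle.dumps(arguments))
        child = ('import pickle, sys; '
                 'print(pickle.load(open(sys.argv[1], "rb")))')
        return subprocess.Popen([sys.executable, '-c', child, path])

    spawn_with_arguments({'task_id': 1, 'function': 'package.module.func'}).wait()
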
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/aria/orchestrator/workflows/executor/thread.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/executor/thread.py b/aria/orchestrator/workflows/executor/thread.py
index 56a56a5..8c447b6 100644
--- a/aria/orchestrator/workflows/executor/thread.py
+++ b/aria/orchestrator/workflows/executor/thread.py
@@ -46,8 +46,8 @@ class ThreadExecutor(BaseExecutor):
             thread.start()
             self._pool.append(thread)
 
-    def _execute(self, task):
-        self._queue.put(task)
+    def _execute(self, ctx):
+        self._queue.put(ctx)
 
     def close(self):
         self._stopped = True
@@ -57,15 +57,15 @@ class ThreadExecutor(BaseExecutor):
     def _processor(self):
         while not self._stopped:
             try:
-                task = self._queue.get(timeout=1)
-                self._task_started(task)
+                ctx = self._queue.get(timeout=1)
+                self._task_started(ctx)
                 try:
-                    task_func = imports.load_attribute(task.function)
-                    arguments = dict(arg.unwrapped for arg in task.arguments.values())
-                    task_func(ctx=task.context, **arguments)
-                    self._task_succeeded(task)
+                    task_func = imports.load_attribute(ctx.task.function)
+                    arguments = dict(arg.unwrapped for arg in ctx.task.arguments.values())
+                    task_func(ctx=ctx, **arguments)
+                    self._task_succeeded(ctx)
                 except BaseException as e:
-                    self._task_failed(task,
+                    self._task_failed(ctx,
                                       exception=e,
                                       traceback=exceptions.get_exception_as_string(*sys.exc_info()))
             # Daemon threads

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/context/__init__.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/context/__init__.py b/tests/orchestrator/context/__init__.py
index 4fde0a7..cb282a3 100644
--- a/tests/orchestrator/context/__init__.py
+++ b/tests/orchestrator/context/__init__.py
@@ -16,6 +16,7 @@
 import sys
 
 from aria.orchestrator.workflows.core import engine
+from aria.orchestrator import workflow_runner
 
 
 def op_path(func, module_path=None):
@@ -25,5 +26,10 @@ def op_path(func, module_path=None):
 
 def execute(workflow_func, workflow_context, executor):
     graph = workflow_func(ctx=workflow_context)
-    eng = engine.Engine(executor=executor, workflow_context=workflow_context, tasks_graph=graph)
+
+    workflow_runner.construct_execution_tasks(
+        workflow_context.execution, graph, executor.__class__)
+    workflow_context.execution = workflow_context.execution
+    execution_graph = workflow_runner.get_execution_graph(workflow_context.execution)
+    eng = engine.Engine(executor, workflow_context, execution_graph)
+
     eng.execute()

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/context/test_operation.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/context/test_operation.py b/tests/orchestrator/context/test_operation.py
index 3dcfaa2..f654fe5 100644
--- a/tests/orchestrator/context/test_operation.py
+++ b/tests/orchestrator/context/test_operation.py
@@ -50,21 +50,12 @@ def ctx(tmpdir):
 
 
 @pytest.fixture
-def process_executor():
-    ex = process.ProcessExecutor(**dict(python_path=tests.ROOT_DIR))
-    try:
-        yield ex
-    finally:
-        ex.close()
-
-
-@pytest.fixture
 def thread_executor():
-    ex = thread.ThreadExecutor()
+    result = thread.ThreadExecutor()
     try:
-        yield ex
+        yield result
     finally:
-        ex.close()
+        result.close()
 
 
 @pytest.fixture
@@ -266,12 +257,12 @@ def test_plugin_workdir(ctx, thread_executor, tmpdir):
     (process.ProcessExecutor, {'python_path': [tests.ROOT_DIR]}),
 ])
 def executor(request):
-    executor_cls, executor_kwargs = request.param
-    result = executor_cls(**executor_kwargs)
+    ex_cls, kwargs = request.param
+    ex = ex_cls(**kwargs)
     try:
-        yield result
+        yield ex
     finally:
-        result.close()
+        ex.close()
 
 
 def test_node_operation_logging(ctx, executor):
@@ -304,7 +295,6 @@ def test_node_operation_logging(ctx, executor):
                 arguments=arguments
             )
         )
-
     execute(workflow_func=basic_workflow, workflow_context=ctx, executor=executor)
     _assert_loggins(ctx, arguments)
 
@@ -418,7 +408,7 @@ def _assert_loggins(ctx, arguments):
     assert len(executions) == 1
     execution = executions[0]
 
-    tasks = ctx.model.task.list()
+    tasks = ctx.model.task.list(filters={'stub_type': None})
     assert len(tasks) == 1
     task = tasks[0]
     assert len(task.logs) == 4

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/context/test_serialize.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/context/test_serialize.py b/tests/orchestrator/context/test_serialize.py
index 0919e81..aa19f56 100644
--- a/tests/orchestrator/context/test_serialize.py
+++ b/tests/orchestrator/context/test_serialize.py
@@ -18,7 +18,7 @@ import pytest
 from aria.orchestrator.workflows import api
 from aria.orchestrator.workflows.core import engine
 from aria.orchestrator.workflows.executor import process
-from aria.orchestrator import workflow, operation
+from aria.orchestrator import workflow, operation, workflow_runner
 import tests
 from tests import mock
 from tests import storage
@@ -33,6 +33,12 @@ def test_serialize_operation_context(context, executor, tmpdir):
     test_file.write(TEST_FILE_CONTENT)
     resource = context.resource
     resource.service_template.upload(TEST_FILE_ENTRY_ID, str(test_file))
+    graph = _mock_workflow(ctx=context)  # pylint: disable=no-value-for-parameter
+    workflow_runner.construct_execution_tasks(context.execution, graph, executor.__class__)
+    context.execution = context.execution
+    execution_graph = workflow_runner.get_execution_graph(context.execution)
+    eng = engine.Engine(executor, context, execution_graph)
+    eng.execute()
 
     node = context.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
     plugin = mock.models.create_plugin()

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/execution_plugin/test_local.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/execution_plugin/test_local.py b/tests/orchestrator/execution_plugin/test_local.py
index f667460..99a0cb6 100644
--- a/tests/orchestrator/execution_plugin/test_local.py
+++ b/tests/orchestrator/execution_plugin/test_local.py
@@ -19,7 +19,7 @@ import os
 import pytest
 
 from aria import workflow
-from aria.orchestrator import events
+from aria.orchestrator import events, workflow_runner
 from aria.orchestrator.workflows import api
 from aria.orchestrator.workflows.exceptions import ExecutorException
 from aria.orchestrator.exceptions import TaskAbortException, TaskRetryException
@@ -500,10 +500,11 @@ if __name__ == '__main__':
                 arguments=arguments))
             return graph
         tasks_graph = mock_workflow(ctx=workflow_context)  # pylint: disable=no-value-for-parameter
-        eng = engine.Engine(
-            executor=executor,
-            workflow_context=workflow_context,
-            tasks_graph=tasks_graph)
+        workflow_runner.construct_execution_tasks(
+            workflow_context.execution, tasks_graph, executor.__class__)
+        workflow_context.execution = workflow_context.execution
+        execution_graph = workflow_runner.get_execution_graph(workflow_context.execution)
+        eng = engine.Engine(executor, workflow_context, execution_graph)
         eng.execute()
         return workflow_context.model.node.get_by_name(
             mock.models.DEPENDENCY_NODE_NAME).attributes

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/workflows/core/test_engine.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/core/test_engine.py b/tests/orchestrator/workflows/core/test_engine.py
index 0438544..8bcf01e 100644
--- a/tests/orchestrator/workflows/core/test_engine.py
+++ b/tests/orchestrator/workflows/core/test_engine.py
@@ -24,6 +24,7 @@ from aria.orchestrator import (
     operation,
 )
 from aria.modeling import models
+from aria.orchestrator import workflow_runner
 from aria.orchestrator.workflows import (
     api,
     exceptions,
@@ -50,9 +51,13 @@ class BaseTest(object):
     @staticmethod
     def _engine(workflow_func, workflow_context, executor):
         graph = workflow_func(ctx=workflow_context)
+        execution = workflow_context.execution
+        workflow_runner.construct_execution_tasks(execution, graph, executor.__class__)
+        workflow_context.execution = execution
+
         return engine.Engine(executor=executor,
                              workflow_context=workflow_context,
-                             tasks_graph=graph)
+                             execution_graph=workflow_runner.get_execution_graph(execution))
 
     @staticmethod
     def _create_interface(ctx, func, arguments=None):
@@ -98,9 +103,10 @@ class BaseTest(object):
 
     @pytest.fixture(autouse=True)
     def signals_registration(self, ):
-        def sent_task_handler(*args, **kwargs):
-            calls = global_test_holder.setdefault('sent_task_signal_calls', 0)
-            global_test_holder['sent_task_signal_calls'] = calls + 1
+        def sent_task_handler(task, *args, **kwargs):
+            if task.stub_type is None:
+                calls = global_test_holder.setdefault('sent_task_signal_calls', 0)
+                global_test_holder['sent_task_signal_calls'] = calls + 1
 
         def start_workflow_handler(workflow_context, *args, **kwargs):
             workflow_context.states.append('start')

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/workflows/core/test_events.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/core/test_events.py b/tests/orchestrator/workflows/core/test_events.py
index 6d542e9..92582a9 100644
--- a/tests/orchestrator/workflows/core/test_events.py
+++ b/tests/orchestrator/workflows/core/test_events.py
@@ -15,6 +15,7 @@
 
 import pytest
 
+from aria.orchestrator import workflow_runner
 from tests import mock, storage
 from aria.modeling.service_instance import NodeBase
 from aria.orchestrator.decorators import operation, workflow
@@ -112,13 +113,14 @@ def run_operation_on_node(ctx, op_name, interface_name):
         operation_name=op_name,
         operation_kwargs=dict(function='{name}.{func.__name__}'.format(name=__name__, func=func)))
     node.interfaces[interface.name] = interface
+    workflow_runner.construct_execution_tasks(
+        ctx.execution, single_operation_workflow(
+            ctx=ctx, node=node, interface_name=interface_name, op_name=op_name), ThreadExecutor)
+    ctx.execution = ctx.execution
 
     eng = engine.Engine(executor=ThreadExecutor(),
                         workflow_context=ctx,
-                        tasks_graph=single_operation_workflow(ctx=ctx,  # pylint: disable=no-value-for-parameter
-                                                              node=node,
-                                                              interface_name=interface_name,
-                                                              op_name=op_name))
+                        execution_graph=workflow_runner.get_execution_graph(ctx.execution))
     eng.execute()
     return node
 

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/workflows/core/test_task_graph_into_execution_graph.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/core/test_task_graph_into_execution_graph.py b/tests/orchestrator/workflows/core/test_task_graph_into_execution_graph.py
index 5dd2855..aebae38 100644
--- a/tests/orchestrator/workflows/core/test_task_graph_into_execution_graph.py
+++ b/tests/orchestrator/workflows/core/test_task_graph_into_execution_graph.py
@@ -13,12 +13,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from networkx import topological_sort, DiGraph
-
-from aria.orchestrator import context
-from aria.orchestrator.workflows import api, core
+from networkx import topological_sort
+
+from aria.modeling import models
+from aria.orchestrator import (
+    context,
+    workflow_runner
+)
+from aria.orchestrator.workflows import api
 from aria.orchestrator.workflows.executor import base
-
 from tests import mock
 from tests import storage
 
@@ -65,10 +68,12 @@ def test_task_graph_into_execution_graph(tmpdir):
     test_task_graph.add_dependency(simple_after_task, inner_task_graph)
 
     # Direct check
-    execution_graph = DiGraph()
-    core.translation.build_execution_graph(task_graph=test_task_graph,
-                                           execution_graph=execution_graph,
-                                           default_executor=base.StubTaskExecutor())
+    execution = task_context.model.execution.list()[0]
+
+    workflow_runner.construct_execution_tasks(execution, test_task_graph, base.StubTaskExecutor)
+    task_context.execution = execution
+
+    execution_graph = workflow_runner.get_execution_graph(execution)
     execution_tasks = topological_sort(execution_graph)
 
     assert len(execution_tasks) == 7
@@ -83,30 +88,23 @@ def test_task_graph_into_execution_graph(tmpdir):
         '{0}-End'.format(test_task_graph.id)
     ]
 
-    assert expected_tasks_names == execution_tasks
-
-    assert isinstance(_get_task_by_name(execution_tasks[0], execution_graph),
-                      core.task.StartWorkflowTask)
-
-    _assert_execution_is_api_task(_get_task_by_name(execution_tasks[1], execution_graph),
-                                  simple_before_task)
-    assert isinstance(_get_task_by_name(execution_tasks[2], execution_graph),
-                      core.task.StartSubWorkflowTask)
+    assert expected_tasks_names == [t.api_id for t in execution_tasks]
+    assert all(isinstance(task, models.Task) for task in execution_tasks)
+    execution_tasks = iter(execution_tasks)
 
-    _assert_execution_is_api_task(_get_task_by_name(execution_tasks[3], execution_graph),
-                                  inner_task)
-    assert isinstance(_get_task_by_name(execution_tasks[4], execution_graph),
-                      core.task.EndSubWorkflowTask)
+    assert next(execution_tasks).stub_type == models.Task.START_WORKFLOW
+    _assert_execution_is_api_task(next(execution_tasks), simple_before_task)
+    assert next(execution_tasks).stub_type == models.Task.START_SUBWROFKLOW
+    _assert_execution_is_api_task(next(execution_tasks), inner_task)
+    assert next(execution_tasks).stub_type == models.Task.END_SUBWORKFLOW
+    _assert_execution_is_api_task(next(execution_tasks), simple_after_task)
+    assert next(execution_tasks).stub_type == models.Task.END_WORKFLOW
 
-    _assert_execution_is_api_task(_get_task_by_name(execution_tasks[5], execution_graph),
-                                  simple_after_task)
-    assert isinstance(_get_task_by_name(execution_tasks[6], execution_graph),
-                      core.task.EndWorkflowTask)
     storage.release_sqlite_storage(task_context.model)
 
 
 def _assert_execution_is_api_task(execution_task, api_task):
-    assert execution_task.id == api_task.id
+    assert execution_task.api_id == api_task.id
     assert execution_task.name == api_task.name
     assert execution_task.function == api_task.function
     assert execution_task.actor == api_task.actor

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/workflows/executor/__init__.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/executor/__init__.py b/tests/orchestrator/workflows/executor/__init__.py
index ac6d325..b8032b7 100644
--- a/tests/orchestrator/workflows/executor/__init__.py
+++ b/tests/orchestrator/workflows/executor/__init__.py
@@ -12,68 +12,44 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import uuid
 import logging
-from collections import namedtuple
-from contextlib import contextmanager
 
 import aria
-from aria.modeling import models
-
-
-class MockTask(object):
-
-    INFINITE_RETRIES = models.Task.INFINITE_RETRIES
-
-    def __init__(self, function, arguments=None, plugin=None, storage=None):
-        self.function = self.name = function
-        self.plugin_fk = plugin.id if plugin else None
-        self.plugin = plugin or None
-        self.arguments = arguments or {}
-        self.states = []
-        self.exception = None
-        self.id = str(uuid.uuid4())
-        self.logger = logging.getLogger()
-        self.context = MockContext(storage)
-        self.attempts_count = 1
-        self.max_attempts = 1
-        self.ignore_failure = False
-        self.interface_name = 'interface_name'
-        self.operation_name = 'operation_name'
-        self.actor = namedtuple('actor', 'name')(name='actor_name')
-        self.model_task = None
-
-        for state in models.Task.STATES:
-            setattr(self, state.upper(), state)
-
-    @contextmanager
-    def _update(self):
-        yield self
 
 
 class MockContext(object):
 
-    def __init__(self, storage=None):
+    def __init__(self, storage=None, task_id=None, **kwargs):
         self.logger = logging.getLogger('mock_logger')
-        self.task = type('SubprocessMockTask', (object, ), {'plugin': None})
         self.model = storage
+        if task_id is None and storage is not None:
+            # no existing row was handed over, so create a fresh task
+            task = storage.task.model_cls(**kwargs)
+            self.model.task.put(task)
+            task_id = task.id
+        self._task_id = task_id
+        self.states = []
+        self.exception = None
+
+    @property
+    def task(self):
+        return self.model.task.get(self._task_id)
 
     @property
     def serialization_dict(self):
         if self.model:
-            return {'context': self.model.serialization_dict, 'context_cls': self.__class__}
+            context = self.model.serialization_dict
+            context['task_id'] = self._task_id
+            return {'context': context, 'context_cls': self.__class__}
         else:
-            return {'context_cls': self.__class__, 'context': {}}
+            return {'context_cls': self.__class__, 'context': {'task_id': self._task_id}}
 
     def __getattr__(self, item):
         return None
 
     @classmethod
-    def instantiate_from_dict(cls, **kwargs):
+    def instantiate_from_dict(cls, task_id, **kwargs):
         if kwargs:
-            return cls(storage=aria.application_model_storage(**kwargs))
+            return cls(task_id=task_id, storage=aria.application_model_storage(**kwargs))
         else:
-            return cls()
+            return cls(task_id=task_id)
 
     @staticmethod
     def close():

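The reworked MockContext no longer caches a fake task object; it stores a real task row and re-reads it from model storage on every access, so state changes made by the executor under test are always observed. The fetch-on-access idea in isolation (plain dict standing in for model storage):

    class FetchOnAccess(object):
        """Re-reads the task row on every access instead of caching it."""

        def __init__(self, store, task_id):
            self._store = store            # dict standing in for model storage
            self._task_id = task_id

        @property
        def task(self):
            return self._store[self._task_id]

    store = {1: {'status': 'pending'}}
    ctx = FetchOnAccess(store, 1)
    store[1] = {'status': 'started'}          # an executor updates storage...
    assert ctx.task['status'] == 'started'    # ...and the context sees it
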
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/workflows/executor/test_executor.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/executor/test_executor.py b/tests/orchestrator/workflows/executor/test_executor.py
index 3079c60..410a982 100644
--- a/tests/orchestrator/workflows/executor/test_executor.py
+++ b/tests/orchestrator/workflows/executor/test_executor.py
@@ -17,6 +17,8 @@
 import pytest
 import retrying
 
+from tests import mock, storage
+
 try:
     import celery as _celery
     app = _celery.Celery()
@@ -25,7 +27,6 @@ except ImportError:
     _celery = None
     app = None
 
-import aria
 from aria.modeling import models
 from aria.orchestrator import events
 from aria.orchestrator.workflows.executor import (
@@ -35,41 +36,50 @@ from aria.orchestrator.workflows.executor import (
 )
 
 import tests
-from . import MockTask
 
 
 def _get_function(func):
     return '{module}.{func.__name__}'.format(module=__name__, func=func)
 
 
-def execute_and_assert(executor, storage=None):
+def execute_and_assert(executor, ctx):
+    node = ctx.model.node.list()[0]
+    execution = ctx.model.execution.list()[0]
     expected_value = 'value'
-    successful_task = MockTask(_get_function(mock_successful_task), storage=storage)
-    failing_task = MockTask(_get_function(mock_failing_task), storage=storage)
-    task_with_inputs = MockTask(_get_function(mock_task_with_input),
-                                arguments={'input': models.Argument.wrap('input', 'value')},
-                                storage=storage)
-
-    for task in [successful_task, failing_task, task_with_inputs]:
-        executor.execute(task)
+    successful_ctx = models.Task(function=_get_function(mock_successful_task),
+                                 node=node, _executor=executor, execution=execution)
+    failing_ctx = models.Task(
+        function=_get_function(mock_failing_task), node=node, _executor=executor, execution=execution)
+    ctx_with_inputs = models.Task(
+        node=node,
+        function=_get_function(mock_task_with_input),
+        arguments={'input': models.Argument.wrap('input', 'value')},
+        _executor=executor,
+        execution=execution)
+
+    ctx.model.execution.update(execution)
+
+    for op_ctx in [successful_ctx, failing_ctx, ctx_with_inputs]:
+        op_ctx.states = []
+        op_ctx.execute(ctx)
 
     @retrying.retry(stop_max_delay=10000, wait_fixed=100)
     def assertion():
-        assert successful_task.states == ['start', 'success']
-        assert failing_task.states == ['start', 'failure']
-        assert task_with_inputs.states == ['start', 'failure']
-        assert isinstance(failing_task.exception, MockException)
-        assert isinstance(task_with_inputs.exception, MockException)
-        assert task_with_inputs.exception.message == expected_value
+        assert successful_ctx.states == ['start', 'success']
+        assert failing_ctx.states == ['start', 'failure']
+        assert ctx_with_inputs.states == ['start', 'failure']
+        assert isinstance(failing_ctx.exception, MockException)
+        assert isinstance(ctx_with_inputs.exception, MockException)
+        assert ctx_with_inputs.exception.message == expected_value
     assertion()
 
 
-def test_thread_execute(thread_executor):
-    execute_and_assert(thread_executor)
+def test_thread_execute(thread_executor, ctx):
+    execute_and_assert(thread_executor, ctx)
 
 
-def test_process_execute(process_executor, storage):
-    execute_and_assert(process_executor, storage)
+def test_process_execute(process_executor, ctx):
+    execute_and_assert(process_executor, ctx)
 
 
 def mock_successful_task(**_):
@@ -94,25 +104,16 @@ class MockException(Exception):
 
 
 @pytest.fixture
-def storage(tmpdir):
-    return aria.application_model_storage(
-        aria.storage.sql_mapi.SQLAlchemyModelAPI,
-        initiator_kwargs=dict(base_dir=str(tmpdir))
-    )
-
-
-@pytest.fixture(params=[
-    (thread.ThreadExecutor, {'pool_size': 1}),
-    (thread.ThreadExecutor, {'pool_size': 2}),
-    # subprocess needs to load a tests module so we explicitly add the root directory as if
-    # the project has been installed in editable mode
-    # (celery.CeleryExecutor, {'app': app})
-])
-def thread_executor(request):
-    executor_cls, executor_kwargs = request.param
-    result = executor_cls(**executor_kwargs)
-    yield result
-    result.close()
+def ctx(tmpdir):
+    context = mock.context.simple(str(tmpdir))
+    context.states = []
+    yield context
+    storage.release_sqlite_storage(context.model)
+
+
+@pytest.fixture
+def thread_executor():
+    return thread.ThreadExecutor
 
 
 @pytest.fixture
@@ -124,15 +125,15 @@ def process_executor():
 
 @pytest.fixture(autouse=True)
 def register_signals():
-    def start_handler(task, *args, **kwargs):
-        task.states.append('start')
+    def start_handler(ctx, *args, **kwargs):
+        ctx.states.append('start')
 
-    def success_handler(task, *args, **kwargs):
-        task.states.append('success')
+    def success_handler(ctx, *args, **kwargs):
+        ctx.states.append('success')
 
-    def failure_handler(task, exception, *args, **kwargs):
-        task.states.append('failure')
-        task.exception = exception
+    def failure_handler(ctx, exception, *args, **kwargs):
+        ctx.states.append('failure')
+        ctx.exception = exception
 
     events.start_task_signal.connect(start_handler)
     events.on_success_task_signal.connect(success_handler)

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/907ed6eb/tests/orchestrator/workflows/executor/test_process_executor.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/executor/test_process_executor.py b/tests/orchestrator/workflows/executor/test_process_executor.py
index 058190e..bca2ea3 100644
--- a/tests/orchestrator/workflows/executor/test_process_executor.py
+++ b/tests/orchestrator/workflows/executor/test_process_executor.py
@@ -30,7 +30,6 @@ from tests.fixtures import (  # pylint: disable=unused-import
     plugin_manager,
     fs_model as model
 )
-from . import MockTask
 
 
 class TestProcessExecutor(object):


[5/5] incubator-ariatosca git commit: deleted _task and fixed some tests

Posted by mx...@apache.org.
deleted _task and fixed some tests


Project: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/commit/546e7af7
Tree: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/tree/546e7af7
Diff: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/diff/546e7af7

Branch: refs/heads/ARIA-278-Remove-core-tasks
Commit: 546e7af7ae648c9e89a6b0c9d2ef565740a07136
Parents: 907ed6e
Author: max-orlov <ma...@gigaspaces.com>
Authored: Wed Jun 14 17:33:15 2017 +0300
Committer: max-orlov <ma...@gigaspaces.com>
Committed: Wed Jun 14 17:33:15 2017 +0300

----------------------------------------------------------------------
 aria/orchestrator/workflows/core/_task.py       | 267 -------------------
 tests/orchestrator/execution_plugin/test_ssh.py |  11 +-
 2 files changed, 6 insertions(+), 272 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/546e7af7/aria/orchestrator/workflows/core/_task.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/core/_task.py b/aria/orchestrator/workflows/core/_task.py
deleted file mode 100644
index 399c177..0000000
--- a/aria/orchestrator/workflows/core/_task.py
+++ /dev/null
@@ -1,267 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Workflow tasks
-"""
-
-from contextlib import contextmanager
-from datetime import datetime
-from functools import (
-    partial,
-    wraps,
-)
-
-
-from ....modeling import models
-from ...context import operation as operation_context
-from .. import exceptions
-
-
-def _locked(func=None):
-    if func is None:
-        return partial(_locked, func=_locked)
-
-    @wraps(func)
-    def _wrapper(self, value, **kwargs):
-        if self._update_fields is None:
-            raise exceptions.TaskException('Task is not in update mode')
-        return func(self, value, **kwargs)
-    return _wrapper
-
-
-class BaseTask(object):
-    """
-    Base class for Task objects
-    """
-
-    def __init__(self, id, executor, *args, **kwargs):
-        super(BaseTask, self).__init__(*args, **kwargs)
-        self._id = id
-        self._executor = executor
-
-    def execute(self):
-        return self._executor.execute(self)
-
-    @property
-    def id(self):
-        """
-        :return: the task's id
-        """
-        return self._id
-
-
-class StubTask(BaseTask):
-    """
-    Base stub task for marker user tasks that only mark the start/end of a workflow
-    or sub-workflow
-    """
-    STARTED = models.Task.STARTED
-    SUCCESS = models.Task.SUCCESS
-
-    def __init__(self, *args, **kwargs):
-        super(StubTask, self).__init__(*args, **kwargs)
-        self.status = models.Task.PENDING
-        self.due_at = datetime.utcnow()
-
-
-
-
-class StartWorkflowTask(StubTask):
-    """
-    Task marking a workflow start
-    """
-    pass
-
-
-class EndWorkflowTask(StubTask):
-    """
-    Task marking a workflow end
-    """
-    pass
-
-
-class StartSubWorkflowTask(StubTask):
-    """
-    Task marking a subworkflow start
-    """
-    pass
-
-
-class EndSubWorkflowTask(StubTask):
-    """
-    Task marking a subworkflow end
-    """
-    pass
-
-
-class OperationTask(BaseTask):
-    """
-    Operation task
-    """
-    def __init__(self, api_task, *args, **kwargs):
-        # If no executor is provided, we infer that this is an empty task which does not need to be
-        # executed.
-        super(OperationTask, self).__init__(id=api_task.id, *args, **kwargs)
-        self._workflow_context = api_task._workflow_context
-        self.interface_name = api_task.interface_name
-        self.operation_name = api_task.operation_name
-        model_storage = api_task._workflow_context.model
-
-        actor = getattr(api_task.actor, '_wrapped', api_task.actor)
-
-        base_task_model = model_storage.task.model_cls
-        if isinstance(actor, models.Node):
-            context_cls = operation_context.NodeOperationContext
-            create_task_model = base_task_model.for_node
-        elif isinstance(actor, models.Relationship):
-            context_cls = operation_context.RelationshipOperationContext
-            create_task_model = base_task_model.for_relationship
-        else:
-            raise RuntimeError('No operation context could be created for {actor.model_cls}'
-                               .format(actor=actor))
-
-        task_model = create_task_model(
-            name=api_task.name,
-            actor=actor,
-            status=base_task_model.PENDING,
-            max_attempts=api_task.max_attempts,
-            retry_interval=api_task.retry_interval,
-            ignore_failure=api_task.ignore_failure,
-            execution=self._workflow_context.execution,
-
-            # Only non-stub tasks have these fields
-            plugin=api_task.plugin,
-            function=api_task.function,
-            arguments=api_task.arguments
-        )
-        self._workflow_context.model.task.put(task_model)
-
-        self._ctx = context_cls(name=api_task.name,
-                                model_storage=self._workflow_context.model,
-                                resource_storage=self._workflow_context.resource,
-                                service_id=self._workflow_context._service_id,
-                                task_id=task_model.id,
-                                actor_id=actor.id,
-                                execution_id=self._workflow_context._execution_id,
-                                workdir=self._workflow_context._workdir)
-        self._task_id = task_model.id
-        self._update_fields = None
-
-    @contextmanager
-    def _update(self):
-        """
-        A context manager which puts the task into update mode, enabling fields update.
-        :yields: None
-        """
-        self._update_fields = {}
-        try:
-            yield
-            for key, value in self._update_fields.items():
-                setattr(self.model_task, key, value)
-            self.model_task = self.model_task
-        finally:
-            self._update_fields = None
-
-    @property
-    def model_task(self):
-        """
-        Returns the task model in storage
-        :return: task in storage
-        """
-        return self._workflow_context.model.task.get(self._task_id)
-
-    @model_task.setter
-    def model_task(self, value):
-        self._workflow_context.model.task.put(value)
-
-    @property
-    def context(self):
-        """
-        Contexts for the operation
-        :return:
-        """
-        return self._ctx
-
-    @property
-    def status(self):
-        """
-        Returns the task status
-        :return: task status
-        """
-        return self.model_task.status
-
-    @status.setter
-    @_locked
-    def status(self, value):
-        self._update_fields['status'] = value
-
-    @property
-    def started_at(self):
-        """
-        Returns when the task started
-        :return: when task started
-        """
-        return self.model_task.started_at
-
-    @started_at.setter
-    @_locked
-    def started_at(self, value):
-        self._update_fields['started_at'] = value
-
-    @property
-    def ended_at(self):
-        """
-        Returns when the task ended
-        :return: when task ended
-        """
-        return self.model_task.ended_at
-
-    @ended_at.setter
-    @_locked
-    def ended_at(self, value):
-        self._update_fields['ended_at'] = value
-
-    @property
-    def attempts_count(self):
-        """
-        Returns the attempts count for the task
-        :return: attempts count
-        """
-        return self.model_task.attempts_count
-
-    @attempts_count.setter
-    @_locked
-    def attempts_count(self, value):
-        self._update_fields['attempts_count'] = value
-
-    @property
-    def due_at(self):
-        """
-        Returns the minimum datetime in which the task can be executed
-        :return: eta
-        """
-        return self.model_task.due_at
-
-    @due_at.setter
-    @_locked
-    def due_at(self, value):
-        self._update_fields['due_at'] = value
-
-    def __getattr__(self, attr):
-        try:
-            return getattr(self.model_task, attr)
-        except AttributeError:
-            return super(OperationTask, self).__getattribute__(attr)

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/546e7af7/tests/orchestrator/execution_plugin/test_ssh.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/execution_plugin/test_ssh.py b/tests/orchestrator/execution_plugin/test_ssh.py
index 8c4dd2d..e3ba2c4 100644
--- a/tests/orchestrator/execution_plugin/test_ssh.py
+++ b/tests/orchestrator/execution_plugin/test_ssh.py
@@ -25,7 +25,7 @@ from fabric.contrib import files
 from fabric import context_managers
 
 from aria.modeling import models
-from aria.orchestrator import events
+from aria.orchestrator import events, workflow_runner
 from aria.orchestrator import workflow
 from aria.orchestrator.workflows import api
 from aria.orchestrator.workflows.executor import process
@@ -254,10 +254,11 @@ class TestWithActualSSHServer(object):
             graph.sequence(*ops)
             return graph
         tasks_graph = mock_workflow(ctx=self._workflow_context)  # pylint: disable=no-value-for-parameter
-        eng = engine.Engine(
-            executor=self._executor,
-            workflow_context=self._workflow_context,
-            tasks_graph=tasks_graph)
+        workflow_runner.construct_execution_tasks(
+            self._workflow_context.execution, tasks_graph, self._executor.__class__)
+        self._workflow_context.execution = self._workflow_context.execution
+        execution_graph = workflow_runner.get_execution_graph(self._workflow_context.execution)
+        eng = engine.Engine(self._executor, self._workflow_context, execution_graph)
         eng.execute()
         return self._workflow_context.model.node.get_by_name(
             mock.models.DEPENDENCY_NODE_NAME).attributes
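
The hunk above replaces direct Engine construction with the new
workflow_runner helpers: the API tasks graph is first persisted as execution
tasks, then an execution graph is derived for the engine. The self-assignment
of self._workflow_context.execution looks redundant but appears intended to
fire the context's execution setter so the updated model is stored. A minimal
sketch of the resulting flow, assuming the signatures shown in this diff and
that workflow_context, tasks_graph and executor are already in scope:

    from aria.orchestrator import workflow_runner
    from aria.orchestrator.workflows.core import engine

    # Persist the API tasks graph as execution tasks for this execution.
    workflow_runner.construct_execution_tasks(
        workflow_context.execution, tasks_graph, executor.__class__)
    # Re-assignment triggers the execution setter, flushing the update to storage.
    workflow_context.execution = workflow_context.execution
    # Derive the executable graph and hand it to the engine.
    execution_graph = workflow_runner.get_execution_graph(workflow_context.execution)
    engine.Engine(executor, workflow_context, execution_graph).execute()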


[3/5] incubator-ariatosca git commit: ARIA-276 Support model instrumentation for workflows

Posted by mx...@apache.org.
ARIA-276 Support model instrumentation for workflows


Project: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/commit/2149a5ee
Tree: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/tree/2149a5ee
Diff: http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/diff/2149a5ee

Branch: refs/heads/ARIA-278-Remove-core-tasks
Commit: 2149a5ee0c4656a253f54db20f279197961588c1
Parents: 1e883c5
Author: max-orlov <ma...@gigaspaces.com>
Authored: Thu Jun 8 09:52:31 2017 +0300
Committer: max-orlov <ma...@gigaspaces.com>
Committed: Tue Jun 13 14:34:41 2017 +0300

----------------------------------------------------------------------
 aria/orchestrator/context/common.py             |   7 +
 aria/orchestrator/context/operation.py          |   7 -
 aria/orchestrator/decorators.py                 |   5 +-
 aria/orchestrator/workflows/api/task.py         |   2 -
 aria/orchestrator/workflows/core/task.py        |  12 +-
 aria/storage/collection_instrumentation.py      |  46 +--
 .../context/test_collection_instrumentation.py  | 325 -------------------
 .../context/test_context_instrumentation.py     | 108 ++++++
 tests/orchestrator/context/test_serialize.py    |  20 +-
 tests/orchestrator/context/test_workflow.py     |  93 ++++--
 .../orchestrator/execution_plugin/test_local.py |  26 +-
 tests/orchestrator/execution_plugin/test_ssh.py |  50 +--
 .../workflows/builtin/test_execute_operation.py |   9 +-
 .../orchestrator/workflows/core/test_engine.py  |  88 +++--
 .../executor/test_process_executor_extension.py |  24 +-
 .../test_process_executor_tracked_changes.py    |  26 +-
 .../storage/test_collection_instrumentation.py  | 257 +++++++++++++++
 17 files changed, 627 insertions(+), 478 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/aria/orchestrator/context/common.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/context/common.py b/aria/orchestrator/context/common.py
index c98e026..f4df317 100644
--- a/aria/orchestrator/context/common.py
+++ b/aria/orchestrator/context/common.py
@@ -36,6 +36,13 @@ class BaseContext(object):
     Base context object for workflow and operation
     """
 
+    INSTRUMENTATION_FIELDS = (
+        modeling.models.Node.attributes,
+        modeling.models.Node.properties,
+        modeling.models.NodeTemplate.attributes,
+        modeling.models.NodeTemplate.properties
+    )
+
     class PrefixedLogger(object):
         def __init__(self, base_logger, task_id=None):
             self._logger = base_logger

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/aria/orchestrator/context/operation.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/context/operation.py b/aria/orchestrator/context/operation.py
index af7220d..efdc04d 100644
--- a/aria/orchestrator/context/operation.py
+++ b/aria/orchestrator/context/operation.py
@@ -29,13 +29,6 @@ class BaseOperationContext(common.BaseContext):
     Context object used during operation creation and execution
     """
 
-    INSTRUMENTATION_FIELDS = (
-        aria.modeling.models.Node.attributes,
-        aria.modeling.models.Node.properties,
-        aria.modeling.models.NodeTemplate.attributes,
-        aria.modeling.models.NodeTemplate.properties
-    )
-
     def __init__(self, task_id, actor_id, **kwargs):
         self._task_id = task_id
         self._actor_id = actor_id

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/aria/orchestrator/decorators.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/decorators.py b/aria/orchestrator/decorators.py
index 80f6962..389bfb8 100644
--- a/aria/orchestrator/decorators.py
+++ b/aria/orchestrator/decorators.py
@@ -49,8 +49,9 @@ def workflow(func=None, suffix_template=''):
         workflow_parameters.setdefault('ctx', ctx)
         workflow_parameters.setdefault('graph', task_graph.TaskGraph(workflow_name))
         validate_function_arguments(func, workflow_parameters)
-        with context.workflow.current.push(ctx):
-            func(**workflow_parameters)
+        with ctx.model.instrument(*ctx.INSTRUMENTATION_FIELDS):
+            with context.workflow.current.push(ctx):
+                func(**workflow_parameters)
         return workflow_parameters['graph']
     return _wrapper
 

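With this change the @workflow decorator enters ctx.model.instrument(...) for
the context's whole INSTRUMENTATION_FIELDS set (node and node template
attributes and properties), so collection writes made inside a workflow
function are now tracked and persisted, where previously this happened only in
operation contexts. A short sketch of the pattern this enables, modeled on the
new test_attribute_consumption test further down; the node name is
illustrative:

    from aria import workflow

    @workflow
    def basic_workflow(ctx, **_):
        node = ctx.model.node.get_by_name('my_node')  # illustrative name
        node.attributes['new_key'] = 'new_value'      # tracked and stored
        node.attributes['key2'] = 'changed_value'     # tracked update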
http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/aria/orchestrator/workflows/api/task.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/api/task.py b/aria/orchestrator/workflows/api/task.py
index bcba56e..ca125a8 100644
--- a/aria/orchestrator/workflows/api/task.py
+++ b/aria/orchestrator/workflows/api/task.py
@@ -108,8 +108,6 @@ class OperationTask(BaseTask):
                 ``interface_name`` and ``operation_name`` to not refer to an operation on the actor
         """
 
-        assert isinstance(actor, (models.Node, models.Relationship))
-
         # Creating OperationTask directly should raise an error when there is no
         # interface/operation.
         if not has_operation(actor, interface_name, operation_name):

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/aria/orchestrator/workflows/core/task.py
----------------------------------------------------------------------
diff --git a/aria/orchestrator/workflows/core/task.py b/aria/orchestrator/workflows/core/task.py
index 72d83ea..d732f09 100644
--- a/aria/orchestrator/workflows/core/task.py
+++ b/aria/orchestrator/workflows/core/task.py
@@ -124,20 +124,22 @@ class OperationTask(BaseTask):
         self.operation_name = api_task.operation_name
         model_storage = api_task._workflow_context.model
 
+        actor = getattr(api_task.actor, '_wrapped', api_task.actor)
+
         base_task_model = model_storage.task.model_cls
-        if isinstance(api_task.actor, models.Node):
+        if isinstance(actor, models.Node):
             context_cls = operation_context.NodeOperationContext
             create_task_model = base_task_model.for_node
-        elif isinstance(api_task.actor, models.Relationship):
+        elif isinstance(actor, models.Relationship):
             context_cls = operation_context.RelationshipOperationContext
             create_task_model = base_task_model.for_relationship
         else:
             raise RuntimeError('No operation context could be created for {actor.model_cls}'
-                               .format(actor=api_task.actor))
+                               .format(actor=actor))
 
         task_model = create_task_model(
             name=api_task.name,
-            actor=api_task.actor,
+            actor=actor,
             status=base_task_model.PENDING,
             max_attempts=api_task.max_attempts,
             retry_interval=api_task.retry_interval,
@@ -156,7 +158,7 @@ class OperationTask(BaseTask):
                                 resource_storage=self._workflow_context.resource,
                                 service_id=self._workflow_context._service_id,
                                 task_id=task_model.id,
-                                actor_id=api_task.actor.id,
+                                actor_id=actor.id,
                                 execution_id=self._workflow_context._execution_id,
                                 workdir=self._workflow_context._workdir)
         self._task_id = task_model.id

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/aria/storage/collection_instrumentation.py
----------------------------------------------------------------------
diff --git a/aria/storage/collection_instrumentation.py b/aria/storage/collection_instrumentation.py
index 27d8322..454f97a 100644
--- a/aria/storage/collection_instrumentation.py
+++ b/aria/storage/collection_instrumentation.py
@@ -198,23 +198,28 @@ class _InstrumentedList(_InstrumentedCollection, list):
         return list(self)
 
 
-class _InstrumentedModel(object):
+class _WrappedBase(object):
 
-    def __init__(self, original_model, mapi, instrumentation):
+    def __init__(self, wrapped, instrumentation):
+        self._wrapped = wrapped
+        self._instrumentation = instrumentation
+
+
+class _InstrumentedModel(_WrappedBase):
+
+    def __init__(self, mapi, *args, **kwargs):
         """
         Wraps the original model for instrumentation
-        :param original_model: the model to be instrumented
+        :param wrapped: the model to be instrumented
         :param mapi: the mapi for that model
         """
-        super(_InstrumentedModel, self).__init__()
-        self._original_model = original_model
+        super(_InstrumentedModel, self).__init__(*args, **kwargs)
         self._mapi = mapi
-        self._instrumentation = instrumentation
         self._apply_instrumentation()
 
     def __getattr__(self, item):
-        return_value = getattr(self._original_model, item)
-        if isinstance(return_value, self._original_model.__class__):
+        return_value = getattr(self._wrapped, item)
+        if isinstance(return_value, self._wrapped.__class__):
             return _create_instrumented_model(return_value, self._mapi, self._instrumentation)
         if isinstance(return_value, (list, dict)):
             return _create_wrapped_model(return_value, self._mapi, self._instrumentation)
@@ -224,7 +229,7 @@ class _InstrumentedModel(object):
         for field in self._instrumentation:
             field_name = field.key
             field_cls = field.mapper.class_
-            field = getattr(self._original_model, field_name)
+            field = getattr(self._wrapped, field_name)
 
             # Preserve the original value. e.g. original attributes would be located under
             # _attributes
@@ -241,20 +246,20 @@ class _InstrumentedModel(object):
                     "ARIA supports instrumentation for dict and list. Field {field} of the "
                     "class {model} is of {type} type.".format(
                         field=field,
-                        model=self._original_model,
+                        model=self._wrapped,
                         type=type(field)))
 
             instrumented_class = instrumentation_cls(seq=field,
-                                                     parent=self._original_model,
+                                                     parent=self._wrapped,
                                                      mapi=self._mapi,
                                                      field_name=field_name,
                                                      field_cls=field_cls)
             setattr(self, field_name, instrumented_class)
 
 
-class _WrappedModel(object):
+class _WrappedModel(_WrappedBase):
 
-    def __init__(self, wrapped, instrumentation, **kwargs):
+    def __init__(self, instrumentation_kwargs, *args, **kwargs):
         """
 
         :param instrumentation_kwargs: kwargs (e.g. the mapi) passed on to the instrumented collections
@@ -262,9 +267,8 @@ class _WrappedModel(object):
         :param wrapped: the currently wrapped instance
         :param kwargs: any kwargs to be passed to the instrumented class.
         """
-        self._kwargs = kwargs
-        self._instrumentation = instrumentation
-        self._wrapped = wrapped
+        super(_WrappedModel, self).__init__(*args, **kwargs)
+        self._kwargs = instrumentation_kwargs
 
     def _wrap(self, value):
         if value.__class__ in (class_.class_ for class_ in self._instrumentation):
@@ -286,16 +290,18 @@ class _WrappedModel(object):
         return self._wrap(self._wrapped[item])
 
 
-def _create_instrumented_model(original_model, mapi, instrumentation, **kwargs):
+def _create_instrumented_model(original_model, mapi, instrumentation):
     return type('Instrumented{0}'.format(original_model.__class__.__name__),
                 (_InstrumentedModel,),
-                {})(original_model, mapi, instrumentation, **kwargs)
+                {})(wrapped=original_model, instrumentation=instrumentation, mapi=mapi)
 
 
-def _create_wrapped_model(original_model, mapi, instrumentation, **kwargs):
+def _create_wrapped_model(original_model, mapi, instrumentation):
     return type('Wrapped{0}'.format(original_model.__class__.__name__),
                 (_WrappedModel, ),
-                {})(original_model, instrumentation, mapi=mapi, **kwargs)
+                {})(wrapped=original_model,
+                    instrumentation=instrumentation,
+                    instrumentation_kwargs=dict(mapi=mapi))
 
 
 def instrument(instrumentation, original_model, mapi):
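
The refactor above introduces _WrappedBase so that both proxy types keep the
underlying model as _wrapped and share the instrumentation list; this is the
same attribute the core task code now unwraps via
getattr(api_task.actor, '_wrapped', api_task.actor). A hedged sketch of the
module's entry point, assuming instrument() returns an instrumented proxy the
way the factory functions above do:

    from aria.modeling import models

    # node is a models.Node instance; model_storage.node is its model API (mapi).
    instrumented = instrument(
        instrumentation=[models.Node.attributes], original_model=node,
        mapi=model_storage.node)
    instrumented.attributes['key'] = 'value'  # write is routed through the mapi
    raw_node = instrumented._wrapped          # the unwrapped model underneath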

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/context/test_collection_instrumentation.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/context/test_collection_instrumentation.py b/tests/orchestrator/context/test_collection_instrumentation.py
deleted file mode 100644
index ae3e8ac..0000000
--- a/tests/orchestrator/context/test_collection_instrumentation.py
+++ /dev/null
@@ -1,325 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import pytest
-
-from aria.modeling import models
-from aria.storage import collection_instrumentation
-from aria.orchestrator.context import operation
-
-from tests import (
-    mock,
-    storage
-)
-
-
-class MockActor(object):
-    def __init__(self):
-        self.dict_ = {}
-        self.list_ = []
-
-
-class MockMAPI(object):
-
-    def __init__(self):
-        pass
-
-    def put(self, *args, **kwargs):
-        pass
-
-    def update(self, *args, **kwargs):
-        pass
-
-
-class CollectionInstrumentation(object):
-
-    @pytest.fixture
-    def actor(self):
-        return MockActor()
-
-    @pytest.fixture
-    def model(self):
-        return MockMAPI()
-
-    @pytest.fixture
-    def dict_(self, actor, model):
-        return collection_instrumentation._InstrumentedDict(model, actor, 'dict_', models.Attribute)
-
-    @pytest.fixture
-    def list_(self, actor, model):
-        return collection_instrumentation._InstrumentedList(model, actor, 'list_', models.Attribute)
-
-
-class TestDict(CollectionInstrumentation):
-
-    def test_keys(self, actor, dict_):
-        dict_.update(
-            {
-                'key1': models.Attribute.wrap('key1', 'value1'),
-                'key2': models.Attribute.wrap('key2', 'value2')
-            }
-        )
-        assert sorted(dict_.keys()) == sorted(['key1', 'key2']) == sorted(actor.dict_.keys())
-
-    def test_values(self, actor, dict_):
-        dict_.update({
-            'key1': models.Attribute.wrap('key1', 'value1'),
-            'key2': models.Attribute.wrap('key1', 'value2')
-        })
-        assert (sorted(dict_.values()) ==
-                sorted(['value1', 'value2']) ==
-                sorted(v.value for v in actor.dict_.values()))
-
-    def test_items(self, dict_):
-        dict_.update({
-            'key1': models.Attribute.wrap('key1', 'value1'),
-            'key2': models.Attribute.wrap('key1', 'value2')
-        })
-        assert sorted(dict_.items()) == sorted([('key1', 'value1'), ('key2', 'value2')])
-
-    def test_iter(self, actor, dict_):
-        dict_.update({
-            'key1': models.Attribute.wrap('key1', 'value1'),
-            'key2': models.Attribute.wrap('key1', 'value2')
-        })
-        assert sorted(list(dict_)) == sorted(['key1', 'key2']) == sorted(actor.dict_.keys())
-
-    def test_bool(self, dict_):
-        assert not dict_
-        dict_.update({
-            'key1': models.Attribute.wrap('key1', 'value1'),
-            'key2': models.Attribute.wrap('key1', 'value2')
-        })
-        assert dict_
-
-    def test_set_item(self, actor, dict_):
-        dict_['key1'] = models.Attribute.wrap('key1', 'value1')
-        assert dict_['key1'] == 'value1' == actor.dict_['key1'].value
-        assert isinstance(actor.dict_['key1'], models.Attribute)
-
-    def test_nested(self, actor, dict_):
-        dict_['key'] = {}
-        assert isinstance(actor.dict_['key'], models.Attribute)
-        assert dict_['key'] == actor.dict_['key'].value == {}
-
-        dict_['key']['inner_key'] = 'value'
-
-        assert len(dict_) == 1
-        assert 'inner_key' in dict_['key']
-        assert dict_['key']['inner_key'] == 'value'
-        assert dict_['key'].keys() == ['inner_key']
-        assert dict_['key'].values() == ['value']
-        assert dict_['key'].items() == [('inner_key', 'value')]
-        assert isinstance(actor.dict_['key'], models.Attribute)
-        assert isinstance(dict_['key'], collection_instrumentation._InstrumentedDict)
-
-        dict_['key'].update({'updated_key': 'updated_value'})
-        assert len(dict_) == 1
-        assert 'updated_key' in dict_['key']
-        assert dict_['key']['updated_key'] == 'updated_value'
-        assert sorted(dict_['key'].keys()) == sorted(['inner_key', 'updated_key'])
-        assert sorted(dict_['key'].values()) == sorted(['value', 'updated_value'])
-        assert sorted(dict_['key'].items()) == sorted([('inner_key', 'value'),
-                                                       ('updated_key', 'updated_value')])
-        assert isinstance(actor.dict_['key'], models.Attribute)
-        assert isinstance(dict_['key'], collection_instrumentation._InstrumentedDict)
-
-        dict_.update({'key': 'override_value'})
-        assert len(dict_) == 1
-        assert 'key' in dict_
-        assert dict_['key'] == 'override_value'
-        assert len(actor.dict_) == 1
-        assert isinstance(actor.dict_['key'], models.Attribute)
-        assert actor.dict_['key'].value == 'override_value'
-
-    def test_get_item(self, actor, dict_):
-        dict_['key1'] = models.Attribute.wrap('key1', 'value1')
-        assert isinstance(actor.dict_['key1'], models.Attribute)
-
-    def test_update(self, actor, dict_):
-        dict_['key1'] = 'value1'
-
-        new_dict = {'key2': 'value2'}
-        dict_.update(new_dict)
-        assert len(dict_) == 2
-        assert dict_['key2'] == 'value2'
-        assert isinstance(actor.dict_['key2'], models.Attribute)
-
-        new_dict = {}
-        new_dict.update(dict_)
-        assert new_dict['key1'] == dict_['key1']
-
-    def test_copy(self, dict_):
-        dict_['key1'] = 'value1'
-
-        new_dict = dict_.copy()
-        assert new_dict is not dict_
-        assert new_dict == dict_
-
-        dict_['key1'] = 'value2'
-        assert new_dict['key1'] == 'value1'
-        assert dict_['key1'] == 'value2'
-
-    def test_clear(self, dict_):
-        dict_['key1'] = 'value1'
-        dict_.clear()
-
-        assert len(dict_) == 0
-
-
-class TestList(CollectionInstrumentation):
-
-    def test_append(self, actor, list_):
-        list_.append(models.Attribute.wrap('name', 'value1'))
-        list_.append('value2')
-        assert len(actor.list_) == 2
-        assert len(list_) == 2
-        assert isinstance(actor.list_[0], models.Attribute)
-        assert list_[0] == 'value1'
-
-        assert isinstance(actor.list_[1], models.Attribute)
-        assert list_[1] == 'value2'
-
-        list_[0] = 'new_value1'
-        list_[1] = 'new_value2'
-        assert isinstance(actor.list_[1], models.Attribute)
-        assert isinstance(actor.list_[1], models.Attribute)
-        assert list_[0] == 'new_value1'
-        assert list_[1] == 'new_value2'
-
-    def test_iter(self, list_):
-        list_.append('value1')
-        list_.append('value2')
-        assert sorted(list_) == sorted(['value1', 'value2'])
-
-    def test_insert(self, actor, list_):
-        list_.append('value1')
-        list_.insert(0, 'value2')
-        list_.insert(2, 'value3')
-        list_.insert(10, 'value4')
-        assert sorted(list_) == sorted(['value1', 'value2', 'value3', 'value4'])
-        assert len(actor.list_) == 4
-
-    def test_set(self, list_):
-        list_.append('value1')
-        list_.append('value2')
-
-        list_[1] = 'value3'
-        assert len(list_) == 2
-        assert sorted(list_) == sorted(['value1', 'value3'])
-
-    def test_insert_into_nested(self, actor, list_):
-        list_.append([])
-
-        list_[0].append('inner_item')
-        assert isinstance(actor.list_[0], models.Attribute)
-        assert len(list_) == 1
-        assert list_[0][0] == 'inner_item'
-
-        list_[0].append('new_item')
-        assert isinstance(actor.list_[0], models.Attribute)
-        assert len(list_) == 1
-        assert list_[0][1] == 'new_item'
-
-        assert list_[0] == ['inner_item', 'new_item']
-        assert ['inner_item', 'new_item'] == list_[0]
-
-
-class TestDictList(CollectionInstrumentation):
-    def test_dict_in_list(self, actor, list_):
-        list_.append({})
-        assert len(list_) == 1
-        assert isinstance(actor.list_[0], models.Attribute)
-        assert actor.list_[0].value == {}
-
-        list_[0]['key'] = 'value'
-        assert list_[0]['key'] == 'value'
-        assert len(actor.list_) == 1
-        assert isinstance(actor.list_[0], models.Attribute)
-        assert actor.list_[0].value['key'] == 'value'
-
-    def test_list_in_dict(self, actor, dict_):
-        dict_['key'] = []
-        assert len(dict_) == 1
-        assert isinstance(actor.dict_['key'], models.Attribute)
-        assert actor.dict_['key'].value == []
-
-        dict_['key'].append('value')
-        assert dict_['key'][0] == 'value'
-        assert len(actor.dict_) == 1
-        assert isinstance(actor.dict_['key'], models.Attribute)
-        assert actor.dict_['key'].value[0] == 'value'
-
-
-class TestModelInstrumentation(object):
-
-    @pytest.fixture
-    def workflow_ctx(self, tmpdir):
-        context = mock.context.simple(str(tmpdir), inmemory=True)
-        yield context
-        storage.release_sqlite_storage(context.model)
-
-    def test_attributes_access(self, workflow_ctx):
-        node = workflow_ctx.model.node.list()[0]
-        task = models.Task(node=node)
-        workflow_ctx.model.task.put(task)
-
-        ctx = operation.NodeOperationContext(
-            task.id, node.id, name='', service_id=workflow_ctx.model.service.list()[0].id,
-            model_storage=workflow_ctx.model, resource_storage=workflow_ctx.resource,
-            execution_id=1)
-
-        def _run_assertions(is_under_ctx):
-            def ctx_assert(expr):
-                if is_under_ctx:
-                    assert expr
-                else:
-                    assert not expr
-
-            ctx_assert(isinstance(ctx.node.attributes,
-                                  collection_instrumentation._InstrumentedDict))
-            assert not isinstance(ctx.node.properties,
-                                  collection_instrumentation._InstrumentedCollection)
-
-            for rel in ctx.node.inbound_relationships:
-                ctx_assert(isinstance(rel, collection_instrumentation._WrappedModel))
-                ctx_assert(isinstance(rel.source_node.attributes,
-                                      collection_instrumentation._InstrumentedDict))
-                ctx_assert(isinstance(rel.target_node.attributes,
-                                      collection_instrumentation._InstrumentedDict))
-
-            for node in ctx.model.node:
-                ctx_assert(isinstance(node.attributes,
-                                      collection_instrumentation._InstrumentedDict))
-                assert not isinstance(node.properties,
-                                      collection_instrumentation._InstrumentedCollection)
-
-            for rel in ctx.model.relationship:
-                ctx_assert(isinstance(rel, collection_instrumentation._WrappedModel))
-
-                ctx_assert(isinstance(rel.source_node.attributes,
-                                      collection_instrumentation._InstrumentedDict))
-                ctx_assert(isinstance(rel.target_node.attributes,
-                                      collection_instrumentation._InstrumentedDict))
-
-                assert not isinstance(rel.source_node.properties,
-                                      collection_instrumentation._InstrumentedCollection)
-                assert not isinstance(rel.target_node.properties,
-                                      collection_instrumentation._InstrumentedCollection)
-
-        with ctx.model.instrument(models.Node.attributes):
-            _run_assertions(True)
-
-        _run_assertions(False)

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/context/test_context_instrumentation.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/context/test_context_instrumentation.py b/tests/orchestrator/context/test_context_instrumentation.py
new file mode 100644
index 0000000..6cc8096
--- /dev/null
+++ b/tests/orchestrator/context/test_context_instrumentation.py
@@ -0,0 +1,108 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+
+from aria.modeling import models
+from aria.storage import collection_instrumentation
+from aria.orchestrator.context import operation
+
+from tests import (
+    mock,
+    storage
+)
+
+
+class TestContextInstrumentation(object):
+
+    @pytest.fixture
+    def workflow_ctx(self, tmpdir):
+        context = mock.context.simple(str(tmpdir), inmemory=True)
+        yield context
+        storage.release_sqlite_storage(context.model)
+
+    def test_workflow_context_instrumentation(self, workflow_ctx):
+        with workflow_ctx.model.instrument(models.Node.attributes):
+            self._run_common_assertions(workflow_ctx, True)
+        self._run_common_assertions(workflow_ctx, False)
+
+    def test_operation_context_instrumentation(self, workflow_ctx):
+        node = workflow_ctx.model.node.list()[0]
+        task = models.Task(node=node)
+        workflow_ctx.model.task.put(task)
+
+        ctx = operation.NodeOperationContext(
+            task.id, node.id, name='', service_id=workflow_ctx.model.service.list()[0].id,
+            model_storage=workflow_ctx.model, resource_storage=workflow_ctx.resource,
+            execution_id=1)
+
+        with ctx.model.instrument(models.Node.attributes):
+            self._run_op_assertions(ctx, True)
+            self._run_common_assertions(ctx, True)
+
+        self._run_op_assertions(ctx, False)
+        self._run_common_assertions(ctx, False)
+
+    @staticmethod
+    def ctx_assert(expr, is_under_ctx):
+        if is_under_ctx:
+            assert expr
+        else:
+            assert not expr
+
+    def _run_op_assertions(self, ctx, is_under_ctx):
+        self.ctx_assert(isinstance(ctx.node.attributes,
+                                   collection_instrumentation._InstrumentedDict), is_under_ctx)
+        assert not isinstance(ctx.node.properties,
+                              collection_instrumentation._InstrumentedCollection)
+
+        for rel in ctx.node.inbound_relationships:
+            self.ctx_assert(
+                isinstance(rel, collection_instrumentation._WrappedModel), is_under_ctx)
+            self.ctx_assert(
+                isinstance(rel.source_node.attributes,
+                           collection_instrumentation._InstrumentedDict),
+                is_under_ctx)
+            self.ctx_assert(
+                isinstance(rel.target_node.attributes,
+                           collection_instrumentation._InstrumentedDict),
+                is_under_ctx)
+
+    def _run_common_assertions(self, ctx, is_under_ctx):
+
+        for node in ctx.model.node:
+            self.ctx_assert(
+                isinstance(node.attributes, collection_instrumentation._InstrumentedDict),
+                is_under_ctx)
+            assert not isinstance(node.properties,
+                                  collection_instrumentation._InstrumentedCollection)
+
+        for rel in ctx.model.relationship:
+            self.ctx_assert(
+                isinstance(rel, collection_instrumentation._WrappedModel), is_under_ctx)
+
+            self.ctx_assert(
+                isinstance(rel.source_node.attributes,
+                           collection_instrumentation._InstrumentedDict),
+                is_under_ctx)
+            self.ctx_assert(
+                isinstance(rel.target_node.attributes,
+                           collection_instrumentation._InstrumentedDict),
+                is_under_ctx)
+
+            assert not isinstance(rel.source_node.properties,
+                                  collection_instrumentation._InstrumentedCollection)
+            assert not isinstance(rel.target_node.properties,
+                                  collection_instrumentation._InstrumentedCollection)

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/context/test_serialize.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/context/test_serialize.py b/tests/orchestrator/context/test_serialize.py
index 4db7bf4..0919e81 100644
--- a/tests/orchestrator/context/test_serialize.py
+++ b/tests/orchestrator/context/test_serialize.py
@@ -33,16 +33,10 @@ def test_serialize_operation_context(context, executor, tmpdir):
     test_file.write(TEST_FILE_CONTENT)
     resource = context.resource
     resource.service_template.upload(TEST_FILE_ENTRY_ID, str(test_file))
-    graph = _mock_workflow(ctx=context)  # pylint: disable=no-value-for-parameter
-    eng = engine.Engine(executor=executor, workflow_context=context, tasks_graph=graph)
-    eng.execute()
-
 
-@workflow
-def _mock_workflow(ctx, graph):
-    node = ctx.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
+    node = context.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
     plugin = mock.models.create_plugin()
-    ctx.model.plugin.put(plugin)
+    context.model.plugin.put(plugin)
     interface = mock.models.create_interface(
         node.service,
         'test',
@@ -51,6 +45,16 @@ def _mock_workflow(ctx, graph):
                               plugin=plugin)
     )
     node.interfaces[interface.name] = interface
+    context.model.node.update(node)
+
+    graph = _mock_workflow(ctx=context)  # pylint: disable=no-value-for-parameter
+    eng = engine.Engine(executor=executor, workflow_context=context, tasks_graph=graph)
+    eng.execute()
+
+
+@workflow
+def _mock_workflow(ctx, graph):
+    node = ctx.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
     task = api.task.OperationTask(node, interface_name='test', operation_name='op')
     graph.add_tasks(task)
     return graph

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/context/test_workflow.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/context/test_workflow.py b/tests/orchestrator/context/test_workflow.py
index 3c35435..6d53c2a 100644
--- a/tests/orchestrator/context/test_workflow.py
+++ b/tests/orchestrator/context/test_workflow.py
@@ -17,11 +17,14 @@ from datetime import datetime
 
 import pytest
 
-from aria import application_model_storage
+from aria import application_model_storage, workflow
 from aria.orchestrator import context
 from aria.storage import sql_mapi
-from tests import storage as test_storage
-from tests.mock import models
+from aria.orchestrator.workflows.executor import thread, process
+
+from tests import storage as test_storage, ROOT_DIR
+from ... import mock
+from . import execute
 
 
 class TestWorkflowContext(object):
@@ -30,10 +33,10 @@ class TestWorkflowContext(object):
         ctx = self._create_ctx(storage)
         execution = storage.execution.get(ctx.execution.id)             # pylint: disable=no-member
         assert execution.service == storage.service.get_by_name(
-            models.SERVICE_NAME)
-        assert execution.workflow_name == models.WORKFLOW_NAME
+            mock.models.SERVICE_NAME)
+        assert execution.workflow_name == mock.models.WORKFLOW_NAME
         assert execution.service_template == storage.service_template.get_by_name(
-            models.SERVICE_TEMPLATE_NAME)
+            mock.models.SERVICE_TEMPLATE_NAME)
         assert execution.status == storage.execution.model_cls.PENDING
         assert execution.inputs == {}
         assert execution.created_at <= datetime.utcnow()
@@ -49,27 +52,75 @@ class TestWorkflowContext(object):
         :param storage:
         :return WorkflowContext:
         """
-        service = storage.service.get_by_name(models.SERVICE_NAME)
+        service = storage.service.get_by_name(mock.models.SERVICE_NAME)
         return context.workflow.WorkflowContext(
             name='simple_context',
             model_storage=storage,
             resource_storage=None,
             service_id=service,
             execution_id=storage.execution.list(filters=dict(service=service))[0].id,
-            workflow_name=models.WORKFLOW_NAME,
-            task_max_attempts=models.TASK_MAX_ATTEMPTS,
-            task_retry_interval=models.TASK_RETRY_INTERVAL
+            workflow_name=mock.models.WORKFLOW_NAME,
+            task_max_attempts=mock.models.TASK_MAX_ATTEMPTS,
+            task_retry_interval=mock.models.TASK_RETRY_INTERVAL
         )
 
+    @pytest.fixture
+    def storage(self):
+        workflow_storage = application_model_storage(
+            sql_mapi.SQLAlchemyModelAPI, initiator=test_storage.init_inmemory_model_storage)
+        workflow_storage.service_template.put(mock.models.create_service_template())
+        service_template = workflow_storage.service_template.get_by_name(
+            mock.models.SERVICE_TEMPLATE_NAME)
+        service = mock.models.create_service(service_template)
+        workflow_storage.service.put(service)
+        workflow_storage.execution.put(mock.models.create_execution(service))
+        yield workflow_storage
+        test_storage.release_sqlite_storage(workflow_storage)
+
+
+@pytest.fixture
+def ctx(tmpdir):
+    context = mock.context.simple(
+        str(tmpdir),
+        context_kwargs=dict(workdir=str(tmpdir.join('workdir')))
+    )
+    yield context
+    test_storage.release_sqlite_storage(context.model)
+
+
+@pytest.fixture(params=[
+    (thread.ThreadExecutor, {}),
+    (process.ProcessExecutor, {'python_path': [ROOT_DIR]}),
+])
+def executor(request):
+    executor_cls, executor_kwargs = request.param
+    result = executor_cls(**executor_kwargs)
+    try:
+        yield result
+    finally:
+        result.close()
+
+
+def test_attribute_consumption(ctx, executor):
+
+    node = ctx.model.node.get_by_name(mock.models.DEPENDENT_NODE_NAME)
+    node.attributes['key'] = ctx.model.attribute.model_cls.wrap('key', 'value')
+    node.attributes['key2'] = ctx.model.attribute.model_cls.wrap('key2', 'value_to_change')
+    ctx.model.node.update(node)
+
+    assert node.attributes['key'].value == 'value'
+    assert node.attributes['key2'].value == 'value_to_change'
+
+    @workflow
+    def basic_workflow(ctx, **_):
+        node = ctx.model.node.get_by_name(mock.models.DEPENDENT_NODE_NAME)
+        node.attributes['new_key'] = 'new_value'
+        node.attributes['key2'] = 'changed_value'
+
+    execute(workflow_func=basic_workflow, workflow_context=ctx, executor=executor)
+    node = ctx.model.node.get_by_name(mock.models.DEPENDENT_NODE_NAME)
 
-@pytest.fixture(scope='function')
-def storage():
-    workflow_storage = application_model_storage(
-        sql_mapi.SQLAlchemyModelAPI, initiator=test_storage.init_inmemory_model_storage)
-    workflow_storage.service_template.put(models.create_service_template())
-    service_template = workflow_storage.service_template.get_by_name(models.SERVICE_TEMPLATE_NAME)
-    service = models.create_service(service_template)
-    workflow_storage.service.put(service)
-    workflow_storage.execution.put(models.create_execution(service))
-    yield workflow_storage
-    test_storage.release_sqlite_storage(workflow_storage)
+    assert len(node.attributes) == 3
+    assert node.attributes['key'].value == 'value'
+    assert node.attributes['new_key'].value == 'new_value'
+    assert node.attributes['key2'].value == 'changed_value'

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/execution_plugin/test_local.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/execution_plugin/test_local.py b/tests/orchestrator/execution_plugin/test_local.py
index d792a57..f667460 100644
--- a/tests/orchestrator/execution_plugin/test_local.py
+++ b/tests/orchestrator/execution_plugin/test_local.py
@@ -477,20 +477,22 @@ if __name__ == '__main__':
             'input_as_env_var': env_var
         })
 
+        node = workflow_context.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
+        interface = mock.models.create_interface(
+            node.service,
+            'test',
+            'op',
+            operation_kwargs=dict(
+                function='{0}.{1}'.format(
+                    operations.__name__,
+                    operations.run_script_locally.__name__),
+                arguments=arguments)
+        )
+        node.interfaces[interface.name] = interface
+        workflow_context.model.node.update(node)
+
         @workflow
         def mock_workflow(ctx, graph):
-            node = ctx.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
-            interface = mock.models.create_interface(
-                node.service,
-                'test',
-                'op',
-                operation_kwargs=dict(
-                    function='{0}.{1}'.format(
-                        operations.__name__,
-                        operations.run_script_locally.__name__),
-                    arguments=arguments)
-            )
-            node.interfaces[interface.name] = interface
             graph.add_tasks(api.task.OperationTask(
                 node,
                 interface_name='test',

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/execution_plugin/test_ssh.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/execution_plugin/test_ssh.py b/tests/orchestrator/execution_plugin/test_ssh.py
index 8b326e7..8c4dd2d 100644
--- a/tests/orchestrator/execution_plugin/test_ssh.py
+++ b/tests/orchestrator/execution_plugin/test_ssh.py
@@ -214,33 +214,33 @@ class TestWithActualSSHServer(object):
         else:
             operation = operations.run_script_with_ssh
 
+        node = self._workflow_context.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
+        arguments = {
+            'script_path': script_path,
+            'fabric_env': _FABRIC_ENV,
+            'process': process,
+            'use_sudo': use_sudo,
+            'custom_env_var': custom_input,
+            'test_operation': '',
+        }
+        if hide_output:
+            arguments['hide_output'] = hide_output
+        if commands:
+            arguments['commands'] = commands
+        interface = mock.models.create_interface(
+            node.service,
+            'test',
+            'op',
+            operation_kwargs=dict(
+                function='{0}.{1}'.format(
+                    operations.__name__,
+                    operation.__name__),
+                arguments=arguments)
+        )
+        node.interfaces[interface.name] = interface
+
         @workflow
         def mock_workflow(ctx, graph):
-            node = ctx.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
-            arguments = {
-                'script_path': script_path,
-                'fabric_env': _FABRIC_ENV,
-                'process': process,
-                'use_sudo': use_sudo,
-                'custom_env_var': custom_input,
-                'test_operation': '',
-            }
-            if hide_output:
-                arguments['hide_output'] = hide_output
-            if commands:
-                arguments['commands'] = commands
-            interface = mock.models.create_interface(
-                node.service,
-                'test',
-                'op',
-                operation_kwargs=dict(
-                    function='{0}.{1}'.format(
-                        operations.__name__,
-                        operation.__name__),
-                    arguments=arguments)
-            )
-            node.interfaces[interface.name] = interface
-
             ops = []
             for test_operation in test_operations:
                 op_arguments = arguments.copy()

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/workflows/builtin/test_execute_operation.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/builtin/test_execute_operation.py b/tests/orchestrator/workflows/builtin/test_execute_operation.py
index 88818ca..8713e3c 100644
--- a/tests/orchestrator/workflows/builtin/test_execute_operation.py
+++ b/tests/orchestrator/workflows/builtin/test_execute_operation.py
@@ -56,12 +56,9 @@ def test_execute_operation(ctx):
     )
 
     assert len(execute_tasks) == 1
-    assert execute_tasks[0].name == task.OperationTask.NAME_FORMAT.format(
-        type='node',
-        name=node.name,
-        interface=interface_name,
-        operation=operation_name
-    )
+    assert getattr(execute_tasks[0].actor, '_wrapped', execute_tasks[0].actor) == node
+    assert execute_tasks[0].operation_name == operation_name
+    assert execute_tasks[0].interface_name == interface_name
 
 
 # TODO: add more scenarios

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/workflows/core/test_engine.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/core/test_engine.py b/tests/orchestrator/workflows/core/test_engine.py
index 6d2836c..0438544 100644
--- a/tests/orchestrator/workflows/core/test_engine.py
+++ b/tests/orchestrator/workflows/core/test_engine.py
@@ -55,12 +55,7 @@ class BaseTest(object):
                              tasks_graph=graph)
 
     @staticmethod
-    def _op(ctx,
-            func,
-            arguments=None,
-            max_attempts=None,
-            retry_interval=None,
-            ignore_failure=None):
+    def _create_interface(ctx, func, arguments=None):
         node = ctx.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
         interface_name = 'aria.interfaces.lifecycle'
         operation_kwargs = dict(function='{name}.{func.__name__}'.format(
@@ -72,6 +67,17 @@ class BaseTest(object):
         interface = mock.models.create_interface(node.service, interface_name, operation_name,
                                                  operation_kwargs=operation_kwargs)
         node.interfaces[interface.name] = interface
+        ctx.model.node.update(node)
+
+        return node, interface_name, operation_name
+
+    @staticmethod
+    def _op(node,
+            operation_name,
+            arguments=None,
+            max_attempts=None,
+            retry_interval=None,
+            ignore_failure=None):
 
         return api.task.OperationTask(
             node,
@@ -158,9 +164,11 @@ class TestEngine(BaseTest):
         assert execution.status == models.Execution.SUCCEEDED
 
     def test_single_task_successful_execution(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(workflow_context, mock_success_task)
+
         @workflow
         def mock_workflow(ctx, graph):
-            graph.add_tasks(self._op(ctx, func=mock_success_task))
+            graph.add_tasks(self._op(node, operation_name))
         self._execute(
             workflow_func=mock_workflow,
             workflow_context=workflow_context,
@@ -170,9 +178,11 @@ class TestEngine(BaseTest):
         assert global_test_holder.get('sent_task_signal_calls') == 1
 
     def test_single_task_failed_execution(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(workflow_context, mock_failed_task)
+
         @workflow
         def mock_workflow(ctx, graph):
-            graph.add_tasks(self._op(ctx, func=mock_failed_task))
+            graph.add_tasks(self._op(node, operation_name))
         with pytest.raises(exceptions.ExecutorException):
             self._execute(
                 workflow_func=mock_workflow,
@@ -187,10 +197,13 @@ class TestEngine(BaseTest):
         assert execution.status == models.Execution.FAILED
 
     def test_two_tasks_execution_order(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_ordered_task, {'counter': 1})
+
         @workflow
         def mock_workflow(ctx, graph):
-            op1 = self._op(ctx, func=mock_ordered_task, arguments={'counter': 1})
-            op2 = self._op(ctx, func=mock_ordered_task, arguments={'counter': 2})
+            op1 = self._op(node, operation_name, arguments={'counter': 1})
+            op2 = self._op(node, operation_name, arguments={'counter': 2})
             graph.sequence(op1, op2)
         self._execute(
             workflow_func=mock_workflow,
@@ -202,11 +215,14 @@ class TestEngine(BaseTest):
         assert global_test_holder.get('sent_task_signal_calls') == 2
 
     def test_stub_and_subworkflow_execution(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_ordered_task, {'counter': 1})
+
         @workflow
         def sub_workflow(ctx, graph):
-            op1 = self._op(ctx, func=mock_ordered_task, arguments={'counter': 1})
+            op1 = self._op(node, operation_name, arguments={'counter': 1})
             op2 = api.task.StubTask()
-            op3 = self._op(ctx, func=mock_ordered_task, arguments={'counter': 2})
+            op3 = self._op(node, operation_name, arguments={'counter': 2})
             graph.sequence(op1, op2, op3)
 
         @workflow
@@ -225,11 +241,13 @@ class TestCancel(BaseTest):
 
     def test_cancel_started_execution(self, workflow_context, executor):
         number_of_tasks = 100
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_sleep_task, {'seconds': 0.1})
 
         @workflow
         def mock_workflow(ctx, graph):
             operations = (
-                self._op(ctx, func=mock_sleep_task, arguments=dict(seconds=0.1))
+                self._op(node, operation_name, arguments=dict(seconds=0.1))
                 for _ in range(number_of_tasks)
             )
             return graph.sequence(*operations)
@@ -267,9 +285,12 @@ class TestCancel(BaseTest):
 class TestRetries(BaseTest):
 
     def test_two_max_attempts_and_success_on_retry(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_conditional_failure_task, {'failure_count': 1})
+
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_conditional_failure_task,
+            op = self._op(node, operation_name,
                           arguments={'failure_count': 1},
                           max_attempts=2)
             graph.add_tasks(op)
@@ -283,9 +304,12 @@ class TestRetries(BaseTest):
         assert global_test_holder.get('sent_task_signal_calls') == 2
 
     def test_two_max_attempts_and_failure_on_retry(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_conditional_failure_task, {'failure_count': 1})
+
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_conditional_failure_task,
+            op = self._op(node, operation_name,
                           arguments={'failure_count': 2},
                           max_attempts=2)
             graph.add_tasks(op)
@@ -300,9 +324,11 @@ class TestRetries(BaseTest):
         assert global_test_holder.get('sent_task_signal_calls') == 2
 
     def test_three_max_attempts_and_success_on_first_retry(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_conditional_failure_task, {'failure_count': 1})
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_conditional_failure_task,
+            op = self._op(node, operation_name,
                           arguments={'failure_count': 1},
                           max_attempts=3)
             graph.add_tasks(op)
@@ -316,9 +342,12 @@ class TestRetries(BaseTest):
         assert global_test_holder.get('sent_task_signal_calls') == 2
 
     def test_three_max_attempts_and_success_on_second_retry(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_conditional_failure_task, {'failure_count': 1})
+
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_conditional_failure_task,
+            op = self._op(node, operation_name,
                           arguments={'failure_count': 2},
                           max_attempts=3)
             graph.add_tasks(op)
@@ -332,9 +361,11 @@ class TestRetries(BaseTest):
         assert global_test_holder.get('sent_task_signal_calls') == 3
 
     def test_infinite_retries(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_conditional_failure_task, {'failure_count': 1})
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_conditional_failure_task,
+            op = self._op(node, operation_name,
                           arguments={'failure_count': 1},
                           max_attempts=-1)
             graph.add_tasks(op)
@@ -358,9 +389,11 @@ class TestRetries(BaseTest):
                                   executor=executor)
 
     def _test_retry_interval(self, retry_interval, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_conditional_failure_task, {'failure_count': 1})
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_conditional_failure_task,
+            op = self._op(node, operation_name,
                           arguments={'failure_count': 1},
                           max_attempts=2,
                           retry_interval=retry_interval)
@@ -378,9 +411,11 @@ class TestRetries(BaseTest):
         assert global_test_holder.get('sent_task_signal_calls') == 2
 
     def test_ignore_failure(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_conditional_failure_task, {'failure_count': 1})
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_conditional_failure_task,
+            op = self._op(node, operation_name,
                           ignore_failure=True,
                           arguments={'failure_count': 100},
                           max_attempts=100)
@@ -401,10 +436,12 @@ class TestTaskRetryAndAbort(BaseTest):
 
     def test_task_retry_default_interval(self, workflow_context, executor):
         default_retry_interval = 0.1
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_task_retry, {'message': self.message})
 
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_task_retry,
+            op = self._op(node, operation_name,
                           arguments={'message': self.message},
                           retry_interval=default_retry_interval,
                           max_attempts=2)
@@ -425,10 +462,13 @@ class TestTaskRetryAndAbort(BaseTest):
     def test_task_retry_custom_interval(self, workflow_context, executor):
         default_retry_interval = 100
         custom_retry_interval = 0.1
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_task_retry, {'message': self.message,
+                                                'retry_interval': custom_retry_interval})
 
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_task_retry,
+            op = self._op(node, operation_name,
                           arguments={'message': self.message,
                                      'retry_interval': custom_retry_interval},
                           retry_interval=default_retry_interval,
@@ -449,9 +489,11 @@ class TestTaskRetryAndAbort(BaseTest):
         assert global_test_holder.get('sent_task_signal_calls') == 2
 
     def test_task_abort(self, workflow_context, executor):
+        node, _, operation_name = self._create_interface(
+            workflow_context, mock_task_abort, {'message': self.message})
         @workflow
         def mock_workflow(ctx, graph):
-            op = self._op(ctx, func=mock_task_abort,
+            op = self._op(node, operation_name,
                           arguments={'message': self.message},
                           retry_interval=100,
                           max_attempts=100)

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/workflows/executor/test_process_executor_extension.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/executor/test_process_executor_extension.py b/tests/orchestrator/workflows/executor/test_process_executor_extension.py
index 7969457..5f0b75f 100644
--- a/tests/orchestrator/workflows/executor/test_process_executor_extension.py
+++ b/tests/orchestrator/workflows/executor/test_process_executor_extension.py
@@ -32,19 +32,23 @@ def test_decorate_extension(context, executor):
     def get_node(ctx):
         return ctx.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
 
+    node = get_node(context)
+    interface_name = 'test_interface'
+    operation_name = 'operation'
+    interface = mock.models.create_interface(
+        context.service,
+        interface_name,
+        operation_name,
+        operation_kwargs=dict(function='{0}.{1}'.format(__name__, _mock_operation.__name__),
+                              arguments=arguments)
+    )
+    node.interfaces[interface.name] = interface
+    context.model.node.update(node)
+
+
     @workflow
     def mock_workflow(ctx, graph):
         node = get_node(ctx)
-        interface_name = 'test_interface'
-        operation_name = 'operation'
-        interface = mock.models.create_interface(
-            ctx.service,
-            interface_name,
-            operation_name,
-            operation_kwargs=dict(function='{0}.{1}'.format(__name__, _mock_operation.__name__),
-                                  arguments=arguments)
-        )
-        node.interfaces[interface.name] = interface
         task = api.task.OperationTask(
             node,
             interface_name=interface_name,

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/orchestrator/workflows/executor/test_process_executor_tracked_changes.py
----------------------------------------------------------------------
diff --git a/tests/orchestrator/workflows/executor/test_process_executor_tracked_changes.py b/tests/orchestrator/workflows/executor/test_process_executor_tracked_changes.py
index 2d80a3b..7dbcc5a 100644
--- a/tests/orchestrator/workflows/executor/test_process_executor_tracked_changes.py
+++ b/tests/orchestrator/workflows/executor/test_process_executor_tracked_changes.py
@@ -83,20 +83,22 @@ def test_apply_tracked_changes_during_an_operation(context, executor):
 
 
 def _run_workflow(context, executor, op_func, arguments=None):
+    node = context.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
+    interface_name = 'test_interface'
+    operation_name = 'operation'
+    wf_arguments = arguments or {}
+    interface = mock.models.create_interface(
+        context.service,
+        interface_name,
+        operation_name,
+        operation_kwargs=dict(function=_operation_mapping(op_func),
+                              arguments=wf_arguments)
+    )
+    node.interfaces[interface.name] = interface
+    context.model.node.update(node)
+
     @workflow
     def mock_workflow(ctx, graph):
-        node = ctx.model.node.get_by_name(mock.models.DEPENDENCY_NODE_NAME)
-        interface_name = 'test_interface'
-        operation_name = 'operation'
-        wf_arguments = arguments or {}
-        interface = mock.models.create_interface(
-            ctx.service,
-            interface_name,
-            operation_name,
-            operation_kwargs=dict(function=_operation_mapping(op_func),
-                                  arguments=wf_arguments)
-        )
-        node.interfaces[interface.name] = interface
         task = api.task.OperationTask(
             node,
             interface_name=interface_name,

http://git-wip-us.apache.org/repos/asf/incubator-ariatosca/blob/2149a5ee/tests/storage/test_collection_instrumentation.py
----------------------------------------------------------------------
diff --git a/tests/storage/test_collection_instrumentation.py b/tests/storage/test_collection_instrumentation.py
new file mode 100644
index 0000000..e915421
--- /dev/null
+++ b/tests/storage/test_collection_instrumentation.py
@@ -0,0 +1,257 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+
+from aria.modeling import models
+from aria.storage import collection_instrumentation
+
+
+class MockActor(object):
+    def __init__(self):
+        self.dict_ = {}
+        self.list_ = []
+
+
+class MockMAPI(object):
+
+    def __init__(self):
+        pass
+
+    def put(self, *args, **kwargs):
+        pass
+
+    def update(self, *args, **kwargs):
+        pass
+
+
+class CollectionInstrumentation(object):
+
+    @pytest.fixture
+    def actor(self):
+        return MockActor()
+
+    @pytest.fixture
+    def model(self):
+        return MockMAPI()
+
+    @pytest.fixture
+    def dict_(self, actor, model):
+        return collection_instrumentation._InstrumentedDict(model, actor, 'dict_', models.Attribute)
+
+    @pytest.fixture
+    def list_(self, actor, model):
+        return collection_instrumentation._InstrumentedList(model, actor, 'list_', models.Attribute)
+
+
+class TestDict(CollectionInstrumentation):
+
+    def test_keys(self, actor, dict_):
+        dict_.update(
+            {
+                'key1': models.Attribute.wrap('key1', 'value1'),
+                'key2': models.Attribute.wrap('key2', 'value2')
+            }
+        )
+        assert sorted(dict_.keys()) == sorted(['key1', 'key2']) == sorted(actor.dict_.keys())
+
+    def test_values(self, actor, dict_):
+        dict_.update({
+            'key1': models.Attribute.wrap('key1', 'value1'),
+            'key2': models.Attribute.wrap('key2', 'value2')
+        })
+        assert (sorted(dict_.values()) ==
+                sorted(['value1', 'value2']) ==
+                sorted(v.value for v in actor.dict_.values()))
+
+    def test_items(self, dict_):
+        dict_.update({
+            'key1': models.Attribute.wrap('key1', 'value1'),
+            'key2': models.Attribute.wrap('key2', 'value2')
+        })
+        assert sorted(dict_.items()) == sorted([('key1', 'value1'), ('key2', 'value2')])
+
+    def test_iter(self, actor, dict_):
+        dict_.update({
+            'key1': models.Attribute.wrap('key1', 'value1'),
+            'key2': models.Attribute.wrap('key2', 'value2')
+        })
+        assert sorted(list(dict_)) == sorted(['key1', 'key2']) == sorted(actor.dict_.keys())
+
+    def test_bool(self, dict_):
+        assert not dict_
+        dict_.update({
+            'key1': models.Attribute.wrap('key1', 'value1'),
+            'key2': models.Attribute.wrap('key2', 'value2')
+        })
+        assert dict_
+
+    def test_set_item(self, actor, dict_):
+        dict_['key1'] = models.Attribute.wrap('key1', 'value1')
+        assert dict_['key1'] == 'value1' == actor.dict_['key1'].value
+        assert isinstance(actor.dict_['key1'], models.Attribute)
+
+    def test_nested(self, actor, dict_):
+        dict_['key'] = {}
+        assert isinstance(actor.dict_['key'], models.Attribute)
+        assert dict_['key'] == actor.dict_['key'].value == {}
+
+        dict_['key']['inner_key'] = 'value'
+
+        assert len(dict_) == 1
+        assert 'inner_key' in dict_['key']
+        assert dict_['key']['inner_key'] == 'value'
+        assert dict_['key'].keys() == ['inner_key']
+        assert dict_['key'].values() == ['value']
+        assert dict_['key'].items() == [('inner_key', 'value')]
+        assert isinstance(actor.dict_['key'], models.Attribute)
+        assert isinstance(dict_['key'], collection_instrumentation._InstrumentedDict)
+
+        dict_['key'].update({'updated_key': 'updated_value'})
+        assert len(dict_) == 1
+        assert 'updated_key' in dict_['key']
+        assert dict_['key']['updated_key'] == 'updated_value'
+        assert sorted(dict_['key'].keys()) == sorted(['inner_key', 'updated_key'])
+        assert sorted(dict_['key'].values()) == sorted(['value', 'updated_value'])
+        assert sorted(dict_['key'].items()) == sorted([('inner_key', 'value'),
+                                                       ('updated_key', 'updated_value')])
+        assert isinstance(actor.dict_['key'], models.Attribute)
+        assert isinstance(dict_['key'], collection_instrumentation._InstrumentedDict)
+
+        dict_.update({'key': 'override_value'})
+        assert len(dict_) == 1
+        assert 'key' in dict_
+        assert dict_['key'] == 'override_value'
+        assert len(actor.dict_) == 1
+        assert isinstance(actor.dict_['key'], models.Attribute)
+        assert actor.dict_['key'].value == 'override_value'
+
+    def test_get_item(self, actor, dict_):
+        dict_['key1'] = models.Attribute.wrap('key1', 'value1')
+        assert isinstance(actor.dict_['key1'], models.Attribute)
+
+    def test_update(self, actor, dict_):
+        dict_['key1'] = 'value1'
+
+        new_dict = {'key2': 'value2'}
+        dict_.update(new_dict)
+        assert len(dict_) == 2
+        assert dict_['key2'] == 'value2'
+        assert isinstance(actor.dict_['key2'], models.Attribute)
+
+        new_dict = {}
+        new_dict.update(dict_)
+        assert new_dict['key1'] == dict_['key1']
+
+    def test_copy(self, dict_):
+        dict_['key1'] = 'value1'
+
+        new_dict = dict_.copy()
+        assert new_dict is not dict_
+        assert new_dict == dict_
+
+        dict_['key1'] = 'value2'
+        assert new_dict['key1'] == 'value1'
+        assert dict_['key1'] == 'value2'
+
+    def test_clear(self, dict_):
+        dict_['key1'] = 'value1'
+        dict_.clear()
+
+        assert len(dict_) == 0
+
+
+class TestList(CollectionInstrumentation):
+
+    def test_append(self, actor, list_):
+        list_.append(models.Attribute.wrap('name', 'value1'))
+        list_.append('value2')
+        assert len(actor.list_) == 2
+        assert len(list_) == 2
+        assert isinstance(actor.list_[0], models.Attribute)
+        assert list_[0] == 'value1'
+
+        assert isinstance(actor.list_[1], models.Attribute)
+        assert list_[1] == 'value2'
+
+        list_[0] = 'new_value1'
+        list_[1] = 'new_value2'
+        assert isinstance(actor.list_[0], models.Attribute)
+        assert isinstance(actor.list_[1], models.Attribute)
+        assert list_[0] == 'new_value1'
+        assert list_[1] == 'new_value2'
+
+    def test_iter(self, list_):
+        list_.append('value1')
+        list_.append('value2')
+        assert sorted(list_) == sorted(['value1', 'value2'])
+
+    def test_insert(self, actor, list_):
+        list_.append('value1')
+        list_.insert(0, 'value2')
+        list_.insert(2, 'value3')
+        list_.insert(10, 'value4')
+        assert sorted(list_) == sorted(['value1', 'value2', 'value3', 'value4'])
+        assert len(actor.list_) == 4
+
+    def test_set(self, list_):
+        list_.append('value1')
+        list_.append('value2')
+
+        list_[1] = 'value3'
+        assert len(list_) == 2
+        assert sorted(list_) == sorted(['value1', 'value3'])
+
+    def test_insert_into_nested(self, actor, list_):
+        list_.append([])
+
+        list_[0].append('inner_item')
+        assert isinstance(actor.list_[0], models.Attribute)
+        assert len(list_) == 1
+        assert list_[0][0] == 'inner_item'
+
+        list_[0].append('new_item')
+        assert isinstance(actor.list_[0], models.Attribute)
+        assert len(list_) == 1
+        assert list_[0][1] == 'new_item'
+
+        assert list_[0] == ['inner_item', 'new_item']
+        assert ['inner_item', 'new_item'] == list_[0]
+
+
+class TestDictList(CollectionInstrumentation):
+    def test_dict_in_list(self, actor, list_):
+        list_.append({})
+        assert len(list_) == 1
+        assert isinstance(actor.list_[0], models.Attribute)
+        assert actor.list_[0].value == {}
+
+        list_[0]['key'] = 'value'
+        assert list_[0]['key'] == 'value'
+        assert len(actor.list_) == 1
+        assert isinstance(actor.list_[0], models.Attribute)
+        assert actor.list_[0].value['key'] == 'value'
+
+    def test_list_in_dict(self, actor, dict_):
+        dict_['key'] = []
+        assert len(dict_) == 1
+        assert isinstance(actor.dict_['key'], models.Attribute)
+        assert actor.dict_['key'].value == []
+
+        dict_['key'].append('value')
+        assert dict_['key'][0] == 'value'
+        assert len(actor.dict_) == 1
+        assert isinstance(actor.dict_['key'], models.Attribute)
+        assert actor.dict_['key'].value[0] == 'value'