Posted to dev@iota.apache.org by Tony Faustini <to...@litbit.com> on 2016/05/28 02:46:42 UTC

Followup to ApacheCon 2016

Hi all, it was fun seeing some of you for the first time at ApacheCon 2016. Following Anatole’s suggestion, let’s start a discussion of iota. To get things started I just wanted to direct your attention to the iota web page at
http://iota.incubator.apache.org/. From the block diagram we can see that we want iota to run at the device level, at a standalone level, and at the cluster level. A device might be a Raspberry Pi, a standalone might be someone’s laptop or a commodity server, and a cluster a Mesos cluster of many commodity servers.

Let’s begin the discussion by considering two major components 1) the Dataflow runtime and 2) the Microservices and API framework. Security is critical but let’s leave that for a future discussion.

What is the problem we are trying to solve with iota? Certainly one goal is the orchestration of data coming from small and large networks of sensors, often using many different protocols. What does this imply? We will need to interact with sensors and actuators that are IP based as well as ones that use legacy protocols such as BACnet and Modbus. We want to enable secure (a later discussion) orchestration of a wide variety of devices across a broad spectrum of protocols. iota is IP based, as are many modern protocols; protocols that are not IP based will require translators (in some instances) to bring them into an IP world. To summarize, we need to be able to ingest from and output to devices ranging from legacy systems to modern IP based systems.
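To make the translator idea concrete, here is a minimal sketch in plain Java. All names are hypothetical, and the frame layout shown is deliberately simplified (it is not real Modbus); the point is only the shape of the adaptation from a raw legacy frame to a normalized message an IP-based pipeline can ingest:

```java
import java.util.Map;

// Hypothetical sketch: a translator adapts a legacy, non-IP protocol
// reading into a normalized message that an IP-based system can ingest.
interface LegacyTranslator {
    Map<String, Object> toMessage(byte[] rawFrame);
}

// A simplified, made-up Modbus-style translator that reads a single
// 16-bit holding-register value out of a raw frame.
class SimpleModbusTranslator implements LegacyTranslator {
    @Override
    public Map<String, Object> toMessage(byte[] rawFrame) {
        // Bytes 0-1: register address, bytes 2-3: register value (big-endian).
        int register = ((rawFrame[0] & 0xFF) << 8) | (rawFrame[1] & 0xFF);
        int value    = ((rawFrame[2] & 0xFF) << 8) | (rawFrame[3] & 0xFF);
        return Map.of("protocol", "modbus", "register", register, "value", value);
    }
}
```

A real translator would of course also handle function codes, checksums, and framing errors; the sketch just shows where the protocol boundary sits.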

Once we have ingested data where is it stored and for how long? How can it be used to orchestrate a network of devices that can interact with humans?  In other words how do we do orchestration? 

Finally, how do we enable others to contribute microservices that can be used to build a vibrant orchestration system?

We have a solution that we have developed based on Akka Actors, called the ‘Fey’ engine, that I can describe. It is named after the person who invented the first slot machine (see https://en.wikipedia.org/wiki/Charles_Fey).
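As background for that discussion, the actor model underlying Akka can be illustrated with a toy, single-threaded mailbox. This is only a sketch of the general idea, not the Fey engine and not Akka’s API; Akka adds real concurrency, dispatchers, supervision, and distribution on top of this pattern:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Illustrative only: a toy "actor" that owns a mailbox and processes one
// message at a time. Actors never share mutable state; they only pass messages.
class ToyActor<M> {
    private final Queue<M> mailbox = new ArrayDeque<>();
    private final Consumer<M> behavior;

    ToyActor(Consumer<M> behavior) { this.behavior = behavior; }

    // Enqueue a message (in Akka this is ActorRef.tell, and it is asynchronous).
    void tell(M message) { mailbox.add(message); }

    // Drain the mailbox in order (Akka schedules this on a dispatcher thread).
    void drain() {
        M m;
        while ((m = mailbox.poll()) != null) behavior.accept(m);
    }
}
```

The single-message-at-a-time invariant is what makes actor state safe without locks, which is presumably part of the appeal for a dataflow engine.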

Before I proceed with a description of the ‘Fey’ engine I just wanted to open up a discussion on iota in general and to let those on the dev mail list introduce themselves.

Looking forward to hearing from you. Next: the Fey engine.

-Tony


Re: Followup to ApacheCon 2016

Posted by Anatole Tresch <an...@apache.org>.
I did have a quick look at the site. For me as a software engineer it looks
great, but it is more of a bird's-eye, high-level perspective, basically a
collection of more or less related boxes with unclear responsibilities...
sorry ;) !

So, as I suggested in Vancouver, we should IMO start with a more accurate
description of what we want to achieve; basically, we must find a common
terminology.

*So modeling the absolute minimal set of things would probably be a good
first step (KISS principle):*

   - *sensors*  -> could be classical IoT devices, basically IP based or
   something else; basically everything that can emit data.
   - *actuators*  -> could also be sensors listening for events, but they
   can additionally emit things.
   - *events* -> data flowing around in the system

For a minimalistic core system we may not need (much) more, do we?
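As a strawman for that minimal core, the three concepts above might look like this in code. Java is used here only for illustration, and every name is an assumption for discussion, not a proposed iota API:

```java
import java.util.function.Consumer;

// An event: data flowing around in the system.
interface Event {
    String source();
    Object payload();
}

// A sensor: everything that can emit data, i.e. publish events to listeners.
interface Sensor {
    void subscribe(Consumer<Event> listener);
}

// An actuator: also listens for events, and may emit events of its own.
interface Actuator extends Sensor {
    void onEvent(Event event);
}

// A trivial immutable event implementation for experimentation.
record SimpleEvent(String source, Object payload) implements Event {}
```

Even this tiny strawman forces the open questions: is an actuator really a kind of sensor, and is a payload typed or opaque?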

Basically more* advanced concepts* could be:

   - *observers*, *supervisors*
   - *dataflows*? Should we model them explicitly? As a key abstraction or
   as an extension? If modelled explicitly, we may also want to model the overall
   *deployment* of a solution?
   - *rules*? Are there different kind of rules?
   - *decisions*/*actions* ? Are decisions simply events?
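If decisions do turn out to be "simply events", one possible modeling (purely hypothetical, every name made up) is a rule as a function from an incoming reading to an optional decision event:

```java
import java.util.Optional;
import java.util.function.Function;

// Sketch only: a rule maps an incoming sensor reading to an optional
// decision, here represented as a simple command string standing in for
// a decision event.
interface Rule extends Function<Double, Optional<String>> {}

class OverheatRule implements Rule {
    private final double threshold;

    OverheatRule(double threshold) { this.threshold = threshold; }

    @Override
    public Optional<String> apply(Double temperature) {
        // Emit a decision only when the reading crosses the threshold.
        return temperature > threshold
                ? Optional.of("COOL_DOWN")
                : Optional.empty();
    }
}
```

Whether rules deserve their own abstraction or are just another event consumer is exactly the kind of question the use-case catalog should settle.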

To identify (or bulletproof) these things we must draw out the different types of
use cases and deployments we have in focus.
And of course, the bullets above only cover structural concepts, we must
also define the basic behavioral usage scenarios more explicitly.

So IMO, we should
1) define the scope, the "use cases"
2) derive requirements out of it
3) And then start thinking on how these things can be designed as high
level components. Then these components will have clear responsibilities
and can be correctly designed and implemented.

So I would suggest
1) Open the ASF repo (if not yet done) and start with some asciidoc,
where we collect the use cases and high level scenarios.
2) In parallel we may do the same with the requirements.
3) And then we can start talking about the basic abstractions in more
detail (with code APIs).
And with that we can step-by-step put together the pieces of our core APIs
in a minimal form and also build up a common terminology.

The question how these things are to be implemented is a secondary one,
including also the discussions of the low level runtime frameworks to be
used (Akka, or something else), first we must clearly define what are the
things we deal with.

Summarizing: IMO starting with the dataflow is not optimal. We should first
discuss the more basic concepts and then proceed to the communications and
orchestration part (which IMO must be split up into different areas of
concerns to be handled on top).

Any other thoughts?

- Anatole


-- 
*Anatole Tresch*
PPMC Member Apache Tamaya
JCP Star Spec Lead
*Switzerland, Europe Zurich, GMT+1*

*maketechsimple.wordpress.com*
*Twitter: @atsticks, @tamayaconf*
*http://tamaya.incubator.apache.org*
http://javamoney.org