Posted to user@karaf.apache.org by Brad Johnson <br...@redhat.com> on 2017/01/13 18:05:29 UTC

Opinionated...

Folks,

 

I wanted to make sure that my promoting CDI, Camel Java DSL, and static
profiles didn't obscure the point I was trying to make.  Whatever mechanics
we choose, I'd really like us to be unified behind a common paradigm so that
our documentation, exemplars, archetypes, blogs, libraries, and so on are
all organized the same way and use the same mechanics and layouts for projects.

 

We should promote an idiomatic way to develop software using Karaf Boot.
That's one problem I hear about from a lot of clients.  There are such
cross-currents of information about how to develop OSGi-based software that
it gets confusing.  Best or preferred practices are lost in the noise.  I
won't get into all that since I'm sure most of you have dealt with this
problem.  Not to pick on it, but a good example is that the Camel in Action
book recommends using POJOs instead of Processors/Exchanges, yet that advice
sits in a few pages somewhere near the back of the book. I don't know how
many examples on the web site actually use the Processor/Exchange, but it is
a lot. Then there are examples with Spring, Blueprint, Java DSL, Scala, etc.
There are annotations that work in one environment but not in all of them.

 

By selecting an idiomatic and "opinionated" way of creating Karaf Boot
microcontainers we could make sure that sort of confusion isn't carried
forward.  It would require a lot less documentation to cover the same ground
and would make editing and updating easier.  It would make creating sample
and example projects a lot easier. It would also simplify what Karaf Boot
appliances have to support and ensure there aren't concerns that work in one
environment and not in another, or that work differently in a different
environment.

 

I'm personally interested in Karaf Appliances with standard Maven
structures, standard bundle structures, and reference implementations that
have a good chunk of the basic functionality. I'd say we take a page from
the "convention over configuration" book or, at least, adopt a "conventional
configuration," and likely a bit of both. Because the appliances are focused
on microservices we should get out ahead of the Gartner hype cycle.  Right
now we are at the Peak of Inflated Expectations and in a couple of years
we'll be at the Trough of Disillusionment.  That disillusionment will come
for a number of reasons. Flying Spaghetti Monster topology will be one of
them but, more importantly for a Karaf Appliance, there is the consistent
problem of the "network fallacies".  Every Karaf Kontainer should have
standard OSGi service interfaces and basic implementations that address each
of the fallacies that apply to a uService.  The Kontainers should insist on
it and not make it optional. If users don't want that functionality they
would then need to disable it via configuration.  But the Kontainer will get
stuck in a grace period and then fail if an expected, standard service isn't
available. All of the standard OSGi service APIs would have basic
implementations to start with; more specific Kontainers would refine them.
And because they are standard services, new implementations can be developed
by the community or by the end developer.

 

As developers, we've all had to implement functionality and then come back
and deal with error handling, security, etc. I say we simply cut those
services into the Kontainer right from the get-go.  The Kontainer doesn't
run if it doesn't find the service.  That isn't to say these become a
fundamental part of Karaf, but rather a fundamental part of the Kontainer
service that runs in Karaf.

 

The standard bundles would only implement basic functionality and not do
anything sophisticated.  New bundles and libraries with more sophisticated
implementations could be added later. All of the bundles would likely have
disable flags for when a developer finds the particular concern irrelevant;
security, for example, might not be relevant. The following aren't meant to
be comprehensive, just to address key concerns. Other standards like a
LoggingService might be included by default as well.

 

The intent here isn't to define the exact mechanics but the standard OSGi
service interfaces that would be _required_ in any implementation of a
Kontainer. Even if the implementing bundle is simply a passthrough or can be
disabled, requiring it forces the developer to explicitly deal with the
problems or choose to ignore them altogether.

 

Because these service interfaces and the bundles that implement them are
standard, the set can be selected through the dependencies declared in the
Maven build, features, and/or profiles.

 

1.	The network is reliable.

A standard "Error Handler" OSGi service.  The default bundle would simply
capture errors/exceptions and log them.  Perhaps it would specify retries.
Drop in solutions might include errors going to dead letter queues and so
on. The OSGi service interface is required for Kontainer bootstrap so use
the default or use a standard one or create one of your own.  If they want
to change configuration of this bundle or put in a new one, they know
exactly what it is, where it exists, how it is specified to the build, and
what configuration file is associated with it. No rummaging around through
code.  When the inevitable error, exceptions and problems arise, the
developer isn't left wondering where and how they should add the
functionality to handle it.
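
As a rough sketch of what I mean (all names here are illustrative, not a
proposed API), the service and its default logging bundle could be as small
as:

import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// (Two types shown together for brevity; separate source files in practice.)
public interface ErrorHandlerService {
    /** Handle an error raised anywhere in the Kontainer. */
    void handle(Throwable error, Map<String, Object> context);
}

public class LoggingErrorHandler implements ErrorHandlerService {
    private static final Logger LOG =
        LoggerFactory.getLogger(LoggingErrorHandler.class);

    @Override
    public void handle(Throwable error, Map<String, Object> context) {
        // Default behavior: just log. A drop-in replacement might retry
        // or route the failed exchange to a dead letter queue instead.
        LOG.error("Kontainer error (context: {})", context, error);
    }
}

A dead-letter-queue or retry bundle would implement the same interface and
be swapped in through the Maven dependency or feature.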

 

A standard "Circuit Breaker" service API and basic implemented bundle should
be provided.  Perhaps the standard bundle would simply count errors over a
time frame and shut down if that limit is hit and allow those values to be
configured. Default would be a rather unsophisticated implementation but
provide the convention and automated wiring of a circuit breaker OSGi
service.  Other implementations might fire off emails to Sys Admins or be
combinations. And if it is really undesirable, set a disable flag.
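
The unsophisticated default really could be about this simple (a sketch,
assuming failures are reported to it by the error handling above):

import java.util.ArrayDeque;
import java.util.Deque;

public class CountingCircuitBreaker {
    private final int maxFailures;
    private final long windowMillis;
    private final Deque<Long> failureTimes = new ArrayDeque<>();
    private volatile boolean open;

    public CountingCircuitBreaker(int maxFailures, long windowMillis) {
        // Both values would come from the bundle's .cfg file.
        this.maxFailures = maxFailures;
        this.windowMillis = windowMillis;
    }

    public synchronized void recordFailure() {
        long now = System.currentTimeMillis();
        failureTimes.addLast(now);
        // Discard failures that have aged out of the rolling window.
        while (!failureTimes.isEmpty()
                && now - failureTimes.peekFirst() > windowMillis) {
            failureTimes.removeFirst();
        }
        if (failureTimes.size() >= maxFailures) {
            open = true; // tripped; listeners/notifiers take it from here
        }
    }

    public boolean isOpen() {
        return open;
    }
}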

 

2.	Latency is zero.

A standard OSGi Throttling service interface and bundle implementation would
be included.  If you want different behavior, change it.  If you want to
disable it, set the flag. However, there are bigger issues here that I'll
address a bit more down below.
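
Again only as a sketch, the basic bundle might do no more than cap in-flight
work:

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SimpleThrottler {
    private final Semaphore permits;

    public SimpleThrottler(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    /** Wait up to the timeout for a slot; false means shed the request. */
    public boolean tryAcquire(long timeout, TimeUnit unit)
            throws InterruptedException {
        return permits.tryAcquire(timeout, unit);
    }

    public void release() {
        permits.release();
    }
}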

 

3.	Bandwidth is infinite.

Throttling OSGi service again. Ditto to comment 2.

 

4.	The network is secure.

A standard OSGi service to plug in various authentication/authorization
mechanisms.  By default it might be a passthrough, but there could also be a
different implementation that uses a simple username/password. Obviously
LDAP, JAAS, and other bundles could be created and dropped into place.
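
Sketching it with illustrative names, the point of the passthrough default
is that opting out of security becomes an explicit, visible choice:

// (Separate source files in practice.)
public interface AuthenticationService {
    boolean authenticate(String principal, char[] credentials);
}

// Default bundle: allows everything until a real implementation replaces it.
public class PassthroughAuthenticationService implements AuthenticationService {
    @Override
    public boolean authenticate(String principal, char[] credentials) {
        return true; // explicit opt-out; a JAAS or LDAP bundle drops in over this
    }
}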

 

5.	Topology doesn't change.

Back to the Circuit Breaker, logging and perhaps notification mechanism.
Also the transport issue below where I'll mention some configuration.

 

6.	There is one administrator.

//No particular plugin for this but standardized configuration and expected
bundles help and this also relates to the transport discussion.

 

7.	Transport cost is zero.

//Probably not a concern here directly but will be a big issue of uServices.

 

8.	The network is homogeneous.

//I think this issue can be dealt with in our context with many of the
standard libraries but can be abstracted a bit more.

 

Obviously a big issue we'll see, and I've seen in the past, is chained
request/response calls: service 1 making a REST call to service 2 making a
REST call to service 3, etc.  And all of a sudden the latency is a killer.

 

ServiceMix/Karaf/Camel can already abstract away some of that via property
substitution. I'd suggest we take that one step further: put _all_
transport/protocol information in configuration and create a standardized
URI. As a senior developer over a group of developers, I don't want them
concerned with the fiddly bits of the transport in the code and routes, and
I certainly don't want to recompile just to make such changes.

 

Akka, for example, uses local URIs like akka://.  A similar Karaf/Camel URI
could be used and mapped via the configuration files.  So the developer
would always use karaf:// in their routes, and the configuration mapping
would supply the actual URI: karaf://myserviceName, say, mapped in a
transport.configuration.cfg file.

 

I believe that is important for a lot of reasons.  A mid-level or
junior-level developer shouldn't be involved in configuration like:

"ftp://foo@myserver? <ftp://foo@myserver/?> password=secret&amp;

           recursive=true&amp;

           ftpClient.dataTimeout=30000&amp;

           ftpClientConfig.serverLanguageCode=fr"

 

So the cfg file might look like this:

clientService="ftp://foo@myserver?password=secret&

           recursive=true&

           ftpClient.dataTimeout=30000&

           ftpClientConfig.serverLanguageCode=fr"

(At least properties get rid of the gawdawful escaped ampersands).

 

The code would then say "karaf://clientService".
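
Camel's property placeholders already get most of the way there; a karaf://
scheme would just formalize the convention. Roughly (the karaf:// endpoint
is hypothetical; the placeholder version works today):

import org.apache.camel.builder.RouteBuilder;

public class ClientServiceRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // {{clientService}} is resolved from the .cfg file at startup, so
        // admins can repoint the transport without a recompile.
        from("{{clientService}}")
            .to("log:received");

        // Hypothetically: from("karaf://clientService"), with the mapping
        // supplied by a standard transport.configuration.cfg.
    }
}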

 

One can do much of that via configuration right now, but I think it is
critical to move it completely to configuration so that admins know exactly
what to change and where to find it when topologies change. It also means
that when the backlash over microservice calling microservice calling
microservice being slow happens, that simple mapping would permit things
like moving to JMS asynchronous request/response (or other fast, async
mechanisms) that don't swamp the virtual machine's or Karaf instance's
resources. It would also allow for easy stubbing or mock testing of the
Kontainer as it will be deployed, without using Pax Exam or another
mechanism.

 

Creating standard OSGi service APIs in anticipation of these problems would
permit an evolutionary approach to them in the future and specific solutions
when a standard Kontainer is developed. Even standard error handler service
implementations can be created.

 

Once such a basic, standard Kontainer exists, then uKontainers that
implement basic functionality commonly used could be created.  There are JPA
examples already.  But the average developer is going to be given a task to
receive some canonical data model via a REST service and poke it into a
database.  That database model probably won't look like what they are
receiving.  So a uKontainer that has a REST front end they can modify, a
Dozer object mapping file in the middle with a transform, and a call to the
database will be used repeatedly.
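
The core of that uKontainer might be little more than this sketch, where
CanonicalOrder, OrderMapper, and OrderEntity stand in for whatever the
canonical model and database model actually are:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

public class CanonicalIngestRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("{{restEndpoint}}")                 // REST front end, from config
            .unmarshal().json(JsonLibrary.Jackson, CanonicalOrder.class)
            .bean(OrderMapper.class, "toEntity") // Dozer mapping in the middle
            .to("jpa:com.example.OrderEntity");  // poke it into the database
    }
}

The developer's job reduces to editing the Dozer mapping file and the
configuration, then testing.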

 

It may be that Oracle, MySQL, BerkeleyDB, and so on each end up with
different error handler plugin implementations, which are used with the same
REST, mapping, JPA container. Just change the Maven dependency or profile.

 

There are a large number of examples like that.  In the case of that
uKontainer there would likely be a JPAErrorService for catching common
errors, another for Dozer errors, and another for unmarshaling errors.  As a
developer looking to solve very specific problems, I just download the
uKontainer, do the Dozer mapping, change some configuration, and then test
it.

 

That also means that, much like Camel EIPs, open source developers can focus
on hardening these containers, fixing bugs, putting in performance
enhancements, and the like.  If a user finds a new error coming from JPA
that isn't being handled in a coherent fashion, then a new block or delegate
code is added and released.  Just as we'd do with a Camel endpoint or
component.

 

Having standard error handlers built into uKontainers would also help
produce coherent messages from the large, unwieldy, reflection-laden stack
traces we commonly see.  The error handler OSGi plugin for a given problem
would be highly focused on identifying and reporting problems with a
specific technology or set of technologies.

 

 

 

https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing


RE: Opinionated...

Posted by Brad Johnson <br...@redhat.com>.
Absolutely! I've been consulting for over 5 years and it's rare that I come into a client's site where they aren't all confused about Camel and especially how to use it with bundles. Once they "get" it from a bit of guidance, their reaction is positive.

Or I'll find places where one developer is using XML routes and another is using Java DSL.  Not that either is inherently bad, but it makes for confusion in an organization.

That's why I really like the idea of selecting an idiom - good, bad or indifferent - that promotes a way of coding and understanding that is consistent.

I'd certainly consider contributing examples of coding using a consistent mechanism.  When I say consistent, I'm not even talking at the Blueprint/DS level.  A high-level integration developer doesn't really care about issues like whether a service is coming up behind a proxy or via direct injection.  We might care about that, but if the abstraction is right then implementation details are hidden.

The CDI implementation of service export/reference uses proxies, I believe, but the annotation hides that away. If it were later determined that a DS implementation were preferred, the user wouldn't really know or care about that level of detail.

I keep mentioning CDI for a number of reasons.  First, it is a consistent mechanism for everything from injection of beans inside a bundle to export/import of services.  One isn't switching paradigms between internal wire-up and exporting services. Second, J2EE developers looking at the stack are going to find it quite natural and easy to understand.  And let's face it, they outnumber us.
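
Pax CDI style annotations illustrate the point (a sketch only; the exact
annotations depend on which CDI/OSGi binding is chosen):

import javax.inject.Inject;

import org.ops4j.pax.cdi.api.OsgiService;
import org.ops4j.pax.cdi.api.OsgiServiceProvider;

// (Separate source files in practice.)
public interface Greeter {
    String greet(String name);
}

// Publishing an OSGi service is one annotation...
@OsgiServiceProvider
public class DefaultGreeter implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// ...and consuming one looks like any other injection. Whether the
// container hands us a proxy or a direct reference stays hidden.
public class GreeterClient {
    @Inject
    @OsgiService
    private Greeter greeter;
}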


-----Original Message-----
From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net] 
Sent: Monday, January 16, 2017 4:25 AM
To: user@karaf.apache.org
Subject: Re: Opinionated...

Yeah, agree.

IMHO, the key thing is to avoid a bunch of disparate examples. Karaf should embed and provide ready-to-use examples, based on one framework.

End-users are pretty lost when you have a bunch of different ways to implement the same thing.

Regards
JB

On 01/16/2017 11:20 AM, Christian Schneider wrote:
> Remote Services can help a lot if you can represent the remote calls 
> as a java interface.
> This works well for a lot of transports like SOAP or the fastbin 
> transport from Redhat.
>
> Circuit Breaker could be nicely added to Remote Services in a 
> transparent way. Remote Services have the notion of intents, which 
> represent a name for a needed feature. So a service could define that 
> it needs a circuit-breaker. Alternatively the remote services provider 
> could define a central config where you could add this intent to all 
> remote services.
>
> Anyway I think remote services could be the standard way in Karaf boot 
> to expose and use services.
> There are some ready to use examples in aries-rsa and cxf-dosgi as 
> well as in ECF.
>
> There are some cases that at least currently do not work perfectly:
> - REST with links. JAX-RS as a pure transport works quite well with 
> CXF-DOSGi and I think also with the ECF CXF provider. The problem is 
> though that good REST style requires that you use http resource links 
> a lot. This is not easy to represent in a pure java interface. Another 
> thing is the notion of JAX-RS Applications. They provide a very nice 
> way to enhance a set of REST services with additional config but they 
> are not yet supported by CXF-DOSGi.
> Something to keep track of for this is the Aries JAX-RS-whiteboard project.
> It implements the upcoming standard for exposing REST services in OSGi.
> I hope to make CXF-DOSGi and the JAX-RS whiteboard work together in 
> the future.
> - One way messaging. I think the purest form of remote communication 
> are one way messages backed by JMS or Kafka or other messaging brokers.
> Unfortunately I think this is only partially supported in Remote 
> Services. I plan to work on a provider that allows one way message 
> based communication in a very simple way but only got some simple 
> prototypes till now.
>
> Christian
>
>> [1] https://www.osgi.org/developer/specifications/
>> [2] https://wiki.eclipse.org/Karaf_Remote_Management_with_Eclipse
>>
>> On 1/13/2017 11:19 AM, Brad Johnson wrote:
>>
>>     That is certainly the sort of library that could be used as a
>>     standard. Get an agreement on the standard OSGi service interface
>>     and then use it and others for that implementation.  Which brings
>>     up a good question and issue.  There would have to be some set of
>>     standardized messages and exception types.  The CircuitBreaker
>>     example throws a CircuitBreakingException (naturally enough).  If
>>     there’s an ErrorHandlerService it would have to know the standard
>>     set of exceptions that could be expected or, at least, a set of
>>     parent classes.  Since CircuitBreakingException is a relatively
>>     simple class it would be perfect for a default ErrorHandlerService
>>     to catch for that class of exceptions.
>>
>>
>>
>>     Obviously there will have to be some head scratching and chin
>>     rubbing about how the pieces fit together exactly.  The
>>     CircuitBreakerService (and the others too) could also be more like
>>     container classes that listen and pick up
>>     CircuitBreakerListenerService instances.  So one listener might
>>     just log the circuit breaker exception.  But you might instantiate
>>     an SMTPCircuitBreakerNotificationService that implements the
>>     CircuitBreakerListenerService and fires off an email to an admin
>>     email address if the breaker is tripped.
>>
>>
>>
>>     That CircuitBreakerService might also be picked up by the Kontainer
>>     instance which listens for on/off control events from the outside
>>     world.  Some thinking to do there but they are tractable problems
>>     with services and events.
>>
>>
>>
>>     The main services like CircuitBreakerService and ThrottlerService
>>     might register themselves as providers with the
>>     ErrorHandlerService which would catch the types of exceptions they
>>     throw.  It in turn could listen for custom
>>     ExceptionHandlerListener<T> that listen for and handle specific
>>     exception types. Still thinking and hand waving about this but I
>>     think a sane set of standard services, listeners and events could
>>     be created that would permit a user to create simple handlers to
>>     register.
>>
>>
>>
>>     There would also be the issue of how to automate
>>     injection of those into the Camel routes.  That doesn’t seem like
>>     it should be a daunting challenge but it would be important.  And
>>     I think very important that those get injected automatically even
>>     if the services only provide basic logging initially with no
>>     client custom code.
>>
>>
>>
>>     *From:*James Carman [mailto:james@carmanconsulting.com]
>>     *Sent:* Friday, January 13, 2017 12:12 PM
>>     *To:* bradjohn@redhat.com <ma...@redhat.com>
>>     *Subject:* Re: Opinionated...
>>
>>
>>
>>     Commons Lang3 has a pretty simple CircuitBreaker implementation
>>     that I used in Microbule:
>>
>>     https://github.com/Microbule/microbule/blob/master/decorator/circuitbreaker/src/main/java/org/microbule/decorator/circuitbreaker/CircuitBreakerFilter.java
>>
>>     On Fri, Jan 13, 2017 at 1:05 PM Brad Johnson <bradjohn@redhat.com
>>     <ma...@redhat.com>> wrote:
>>
>>         [snip: Brad's original message, quoted in full; see the start of this thread]
>
>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>

--
Jean-Baptiste Onofré
jbonofre@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com


Re: Opinionated...

Posted by Jean-Baptiste Onofré <jb...@nanthrax.net>.
Yeah, agree.

IMHO, the key thing is to avoid a bunch of disparate examples. Karaf should 
embed and provide ready-to-use examples, based on one framework.

End-users are pretty lost when you have a bunch of different ways to 
implement the same thing.

Regards
JB

On 01/16/2017 11:20 AM, Christian Schneider wrote:
> [snip: Christian's message and the rest of the quoted thread, duplicated above]

-- 
Jean-Baptiste Onofré
jbonofre@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com

Re: Opinionated...

Posted by Nick Baker <nb...@pentaho.com>.
Nothing much worth seeing from a Karaf OSGi perspective yet. We're working on some stuff for the next release that's not going to be open until then, but you can see the API now. We have one main Producer interface:


https://github.com/pentaho/pentaho-kettle/blob/ael/pdi-execution-engine/api/src/main/java/org/pentaho/di/engine/api/reporting/IProgressReporting.java


________________________________
From: Nick Baker
Sent: Monday, January 16, 2017 3:58:04 PM
To: user@karaf.apache.org
Subject: Re: Opinionated...

Nothing much worth seeing from a Karaf OSGI perspective yet. We're working on some stuff for the next release that's not going to be open until then, but you can see the API now:

________________________________
From: David Daniel <da...@gmail.com>
Sent: Monday, January 16, 2017 1:56:10 PM
To: user@karaf.apache.org
Subject: Re: Opinionated...

Nick, do you have a link to Pentaho where you are doing some of this?  I am guessing you are using Flow instead of the OSGi PushStreams API when you say that streaming was considered for the OSGi standards.

David Daniel

On Mon, Jan 16, 2017 at 1:36 PM, Nick Baker <nb...@pentaho.com>> wrote:
The event bus model has served us well for certain things, broadcasting application events consumed by unknown plugins for instance. It's certainly extensible and easy from the consumer and producer standpoint.

That said, we are basing much of our new work on Reactive Streams APIs. This provides backpressure for composed streams inside the application and ensures that no one is wasting time putting things on the bus which aren't actually listened to.

We're RxJava internally at the moment. Remote subscriptions is something we're just now dealing with which is prompting a look at Akka Streams.

I know Streaming is something that was considered for the OSGi standards, but with Java 9 looking to adopt Reactive Streams as the new "Flow" API, I would encourage us to look forward to that. In combination with remote services and a simple event bus like Guava which can be remoted, I think you have a pretty competent set of utilities to work with.

-Nick Baker
________________________________
From: Scott Lewis <sl...@composent.com>>
Sent: Monday, January 16, 2017 12:49:31 PM
To: user@karaf.apache.org<ma...@karaf.apache.org>
Subject: Re: Opinionated...

On 1/16/2017 2:20 AM, Christian Schneider wrote:
> <stuff deleted>

> - One way messaging. I think the purest form of remote communication
> are one way messages backed by JMS or Kafka or other messaging
> brokers. Unfortunately I think this is only partially supported in
> Remote Services.

and Nick Baker just wrote:

 >Where is Event Admin?

In terms of standardization, there was a DistributedEventing rfp 158
[1], but I don't know what's planned for that now by the EEG.  There was
also some work on push streams and perhaps that's somehow absorbed the
distributed eventing.   Someone on this list currently on the EEG can
probably speak to the state of standardization.

In terms of implementation, ECF has had a DistributedEventAdmin
implementation for a very long time [2].   The description at [2] is
based upon ActiveMQ, but like ECF's remote services implementation, a
provider approach allows the substitution of other pub/sub
providers...for example mqtt [3], and others (e.g. Camel...and plenty of
others).

Scott

[1] https://github.com/osgi/design

[2] https://wiki.eclipse.org/EIG:Distributed_EventAdmin_Service
[3] https://github.com/ECF/Mqtt-Provider




Re: Opinionated...

Posted by David Daniel <da...@gmail.com>.
Nick do you have a link to pentaho where you are doing some of this.  I am
guessing you are using flow instead of the OSGI pushstreams api when you
say that streaming was considered for the OSGI standards.

David Daniel

On Mon, Jan 16, 2017 at 1:36 PM, Nick Baker <nb...@pentaho.com> wrote:

> <stuff deleted>

Re: Opinionated...

Posted by Nick Baker <nb...@pentaho.com>.
The event bus model has served us well for certain things, broadcasting application events consumed by unknown plugins for instance. It's certainly extensible and easy from the consumer and producer standpoint.

That said, we are basing much of our new work on Reactive Streams APIs. This provides backpressure for composed streams inside the application and ensures that no one is wasting time putting things on the bus which aren't actually listened to.

We're RxJava internally at the moment. Remote subscriptions is something we're just now dealing with which is prompting a look at Akka Streams.

I know Streaming is something that was considered for the OSGi standards, but with Java 9 looking to adopt Reactive Streams as the new "Flow" API, I would encourage us to look forward to that. In combination with remote services and a simple event bus like Guava which can be remoted, I think you have a pretty competent set of utilities to work with.

-Nick Baker
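For readers who haven't touched it, a minimal RxJava 2 sketch of the backpressure behavior Nick describes; the millisecond interval and the drop strategy are arbitrary choices for illustration:

import java.util.concurrent.TimeUnit;

import io.reactivex.Flowable;
import io.reactivex.schedulers.Schedulers;

public class BackpressureSketch {
    public static void main(String[] args) throws InterruptedException {
        // Fast producer: one event per millisecond.
        Flowable.interval(1, TimeUnit.MILLISECONDS)
                // interval() cannot be slowed down by the consumer, so an
                // explicit overflow strategy is chosen instead of failing
                // with MissingBackpressureException.
                .onBackpressureDrop(n -> System.out.println("dropped " + n))
                .observeOn(Schedulers.computation())
                .subscribe(n -> {
                    Thread.sleep(10); // deliberately slow consumer
                    System.out.println("handled " + n);
                });
        Thread.sleep(1000); // let the pipeline run briefly
    }
}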
________________________________
From: Scott Lewis <sl...@composent.com>
Sent: Monday, January 16, 2017 12:49:31 PM
To: user@karaf.apache.org
Subject: Re: Opinionated...

<stuff deleted>



Re: Opinionated...

Posted by Scott Lewis <sl...@composent.com>.
On 1/16/2017 2:20 AM, Christian Schneider wrote:
> <stuff deleted>

> - One way messaging. I think the purest form of remote communication 
> are one way messages backed by JMS or Kafka or other messaging 
> brokers. Unfortunately I think this is only partially supported in 
> Remote Services.

and Nick Baker just wrote:

 >Where is Event Admin?

In terms of standardization, there was a DistributedEventing rfp 158 
[1], but I don't know what's planned for that now by the EEG.  There was 
also some work on push streams and perhaps that's somehow absorbed the 
distributed eventing.   Someone on this list currently on the EEG can 
probably speak to the state of standardization.

In terms of implementation, ECF has had a DistributedEventAdmin 
implementation for a very long time [2].   The description at [2] is 
based upon ActiveMQ, but like ECF's remote services implementation, a 
provider approach allows the substitution of other pub/sub 
providers...for example mqtt [3], and others (e.g. Camel...and plenty of 
others).

Scott

[1] https://github.com/osgi/design

[2] https://wiki.eclipse.org/EIG:Distributed_EventAdmin_Service
[3] https://github.com/ECF/Mqtt-Provider
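To ground the discussion, a minimal sketch of plain (local) Event Admin pub/sub, the API that implementations like ECF's DistributedEventAdmin extend across processes; the topic and property names here are invented for illustration:

import java.util.Collections;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

// Consumer side: whiteboard handler selected by its topic property.
@Component(property = EventConstants.EVENT_TOPIC + "=com/example/order/*")
public class OrderEventLogger implements EventHandler {
    @Override
    public void handleEvent(Event event) {
        System.out.println(event.getTopic() + " id=" + event.getProperty("orderId"));
    }
}

// Producer side: posts asynchronously without knowing who listens.
@Component(service = OrderPublisher.class)
class OrderPublisher {
    @Reference
    EventAdmin eventAdmin;

    void orderReceived(String orderId) {
        eventAdmin.postEvent(new Event("com/example/order/RECEIVED",
                Collections.singletonMap("orderId", orderId)));
    }
}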



RE: Opinionated...

Posted by Brad Johnson <br...@redhat.com>.
It sounds like the abstraction is well under way then. It certainly would make a great deal of sense to abstract away the details of connections.  We no longer code against raw sockets but use libraries.  It seems the frameworks could take that to the next level.

 

Brad

 

From: Christian Schneider [mailto:cschneider111@gmail.com] On Behalf Of Christian Schneider
Sent: Monday, January 16, 2017 4:21 AM
To: user@karaf.apache.org
Subject: Re: Opinionated...

 

Remote Services can help a lot if you can represent the remote calls as a java interface.
This works well for a lot of transports like SOAP or the fastbin transport from Redhat. 

Circuit Breaker could be nicely added to Remote Services in a transparent way. Remote Services have the notion of intents which represent a name for a needed feature. So a service could define that it needs a circuit-breaker. Alternatively the remote services provider could define a central config where you could add this intent to all remote services.

Anyway I think remote services could be the standard way in Karaf boot to expose and use services.
There are some ready to use examples in aries-rsa and cxf-dosgi as well as in ECF.

There are some cases that at least currently do not work perfectly:
- REST with links. JAX-RS as a pure transport works quite well with CXF-DOSGi and I think also with the ECF CXF provider. The problem is though that good REST style requires that you use http resource links a lot. This is not easy to represent in a pure java interface. Another thing is the notion of JAX-RS Applications. They provide a very nice way to enhance a set of REST services with additional config but they are not yet supported by CXF-DOSGi.
Something to keep track of for this is the Aries JAX-RS-whiteboard project. It implements the upcoming standard for exposing REST services in OSGi. I hope to make CXF-DOSGi and the JAX-RS whiteboard work together in the future.
- One way messaging. I think the purest form of remote communication is one way messages backed by JMS or Kafka or other messaging brokers. Unfortunately I think this is only partially supported in Remote Services. I plan to work on a provider that allows one way message based communication in a very simple way but have only got some simple prototypes till now.

Christian

[1] https://www.osgi.org/developer/specifications/
[2] https://wiki.eclipse.org/Karaf_Remote_Management_with_Eclipse
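To make the intent idea above concrete, a minimal DS sketch using the standard Remote Services export properties; the circuit-breaker intent name is Christian's hypothetical (a provider would have to implement it), and GreetingService is invented for illustration:

import org.osgi.service.component.annotations.Component;

// Invented service API for the example.
interface GreetingService {
    String greet(String name);
}

// Exported as a remote service; per the spec, it is only exported if a
// distribution provider can satisfy the requested intent.
@Component(
    service = GreetingService.class,
    property = {
        "service.exported.interfaces=*",
        "service.exported.intents=circuit-breaker"
    })
public class GreetingServiceImpl implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}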

On 1/13/2017 11:19 AM, Brad Johnson wrote:

That is certainly the sort of library that could be used as a standard. Get an agreement on the standard OSGi service interface and then use it and others for that implementation.  Which brings up a good question and issue.  There would have to be some set of standardized messages and exception types.  The CircuitBreaker example throws a CircuitBreakingException (naturally enough).  If there’s an ErrorHandlerService it would have to know the standard set of exceptions that could be expected or, at least, a set of parent classes.  Since CircuitBreakingException is a relatively simple class it would be perfect for a default ErrorHandlerService to catch that class of exceptions.  

 

Obviously there will have to be some head scratching and chin rubbing about how the pieces fit together exactly.  The CircuitBreakerService (and the others too) could also be more like container classes that listen for and pick up CircuitBreakerListenerService instances.  So one listener might just log the circuit breaker exception.  But you might instantiate an SMTPCircuitBreakerNotificationService that implements the CircuitBreakerListenerService and fires off an email to an admin email address if the breaker is tripped. 

 

That CircuitBreakerService might also be picked up by the Kontainer instance which listens for on/off control events from the outside world.  Some thinking to do there but they are tractable problems with services and events.

 

The main services like CircuitBreakerService and ThrottlerService might register themselves as providers with the ErrorHandlerService which would catch the types of exceptions they throw.  It in turn could listen for custom ExceptionHandlerListener<T> instances that listen for and handle specific exception types. Still thinking and hand waving about this but I think a sane set of standard services, listeners and events could be created that would permit a user to create simple handlers to register.

 

There would also be the issue of how to automate injection of those into the Camel routes.  That doesn’t seem like it should be a daunting challenge but it would be important.  And I think very important that those get injected automatically even if the services only provide basic logging initially with no client custom code.

 

From: James Carman [mailto:james@carmanconsulting.com] 
Sent: Friday, January 13, 2017 12:12 PM
To: bradjohn@redhat.com <ma...@redhat.com> 
Subject: Re: Opinionated...

 

Commons Lang3 has a pretty simple CircuitBreaker implementation that I used in Microbule:

https://github.com/Microbule/microbule/blob/master/decorator/circuitbreaker/src/main/java/org/microbule/decorator/circuitbreaker/CircuitBreakerFilter.java
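For a taste of that library, a minimal sketch using Lang3's EventCountCircuitBreaker (available since Lang 3.5); the thresholds and the doRemoteCall() stand-in are arbitrary:

import java.util.concurrent.TimeUnit;

import org.apache.commons.lang3.concurrent.EventCountCircuitBreaker;

public class BreakerSketch {
    // Opens once 5 errors are recorded within one second.
    private final EventCountCircuitBreaker breaker =
            new EventCountCircuitBreaker(5, 1, TimeUnit.SECONDS);

    public String call() {
        if (!breaker.checkState()) {
            return "breaker open, failing fast"; // skip the real call
        }
        try {
            return doRemoteCall();
        } catch (RuntimeException e) {
            breaker.incrementAndCheckState(); // count the failure
            throw e;
        }
    }

    private String doRemoteCall() { return "ok"; } // stand-in endpoint
}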

On Fri, Jan 13, 2017 at 1:05 PM Brad Johnson <bradjohn@redhat.com <ma...@redhat.com> > wrote:

Folks,

 

I wanted to make sure that my promoting CDI, Camel Java DSL, & static profiles didn’t obscure the point I was trying to make.  Whatever mechanics we choose I’d really like us to be unified behind a common paradigm so that our documentation, exemplars, archetypes, blogs, libraries, and so on are all organized the same and use the same mechanics and layouts for projects. 

 

We should promote an idiomatic way to develop software using Karaf Boot.  That’s one problem I hear from a lot of clients.  There are such cross-currents of information about how to develop OSGi-based software that it gets confusing.  Best or preferred practices are lost in the noise.  I won’t get into all that since I’m sure most of you have dealt with this problem. Not to pick on it but a good example is that the Camel in Action book recommends using Pojos instead of using Processors/Exchanges.  It is covered somewhere near the back of the book in a few pages. I don’t know how many examples on the web site actually use the Processor/Exchange but it is a lot. Then there are examples with Spring, Blueprint, Java DSL, Scala, etc.  There are annotations that work in one environment but not in all of them.

 

By selecting an idiomatic and “opinionated” way of creating Karaf Boot microcontainers we could make sure that sort of confusion isn’t continued forward.  It would require a lot less documentation to cover the same ground and make editing and updating easier.  It would make creating sample and example projects a lot easier. It would simplify what Karaf Boot appliances have to support and make sure there aren’t concerns that work in one environment and not in another or that might work differently in a different environment.

 

I’m personally interested in Karaf Appliances with standard Maven structures, standard bundle structures, and reference implementations that have a good chunk of the basic functionality. I’d say we take a page from the “convention over configuration” book or, at least, a “conventional configuration” and likely a bit of both. Because the appliances are focused on microservices we should get out ahead of the Gartner hype cycle.  Right now we are at the Peak of Inflated Expectations and in a couple of years we’ll be at the Trough of Disillusionment.  That disillusionment will come for a number of reasons. Flying Spaghetti Monster topology will be one of them but, more importantly for a Karaf Appliance, is the consistent problem of “network fallacies”.  Every Karaf Kontainer should have standard OSGi service interfaces and basic implementations that address each of the fallacies that apply to a uService.  The Kontainers should insist on it and not make it optional. If the user doesn’t want that functionality they would then need to disable it via configuration.  But the Kontainer will get stuck in a grace period and then fail if an expected, standard service isn’t available. All of the standard OSGi service APIs would have basic implementations to start, with more specific ones coming as Kontainers specialize.  But, because they are standard services, new ones can be developed by the community or by the end developer.

 

As developers, we’ve all had to implement functionality and then come back and deal with error handling, security, etc. I say we simply cut those services in to the Kontainer right from the get-go.  The Kontainer doesn’t run if it doesn’t find the service.  That isn’t to say these become a fundamental part of Karaf but a fundamental part of the Kontainer service that runs in Karaf.

 

The standard bundles would only implement basic functionality and not do anything sophisticated.  New bundles and libraries for more sophisticated implementations could be added later. All of the bundles would likely have disable flags if the developer found the particular concern irrelevant.  For example, security might not be relevant. The following aren’t meant to be comprehensive. Just addressing key concerns. Other standards like LoggingService might be included by default as well. 

 

The intent here isn’t to define the exact mechanics but the standard OSGi service interface that would be _required_ in any implementation of a Kontainer. Even if the implemented bundle is simply a passthrough or can be disabled, it forces the developer to explicitly deal with the problems or choose to ignore them altogether.  

 

Because these service interfaces and the bundles that implement them are standard, the set can be specified by the dependencies in the Maven build, features and/or profiles.

 

1.	The network is reliable.

A standard “Error Handler” OSGi service.  The default bundle would simply capture errors/exceptions and log them.  Perhaps it would specify retries. Drop-in solutions might include errors going to dead letter queues and so on. The OSGi service interface is required for Kontainer bootstrap so use the default, use a standard one, or create one of your own.  If they want to change configuration of this bundle or put in a new one, they know exactly what it is, where it exists, how it is specified to the build, and what configuration file is associated with it. No rummaging around through code.  When the inevitable errors, exceptions and problems arise, the developer isn’t left wondering where and how they should add the functionality to handle it. (A minimal sketch of this service interface follows the list below.)

 

A standard “Circuit Breaker” service API and basic implemented bundle should be provided.  Perhaps the standard bundle would simply count errors over a time frame and shut down if that limit is hit, and allow those values to be configured. The default would be a rather unsophisticated implementation but would provide the convention and automated wiring of a circuit breaker OSGi service.  Other implementations might fire off emails to Sys Admins or be combinations. And if it is really undesirable, set a disable flag.

 

2.	Latency is zero.

A standard OSGi Throttling service interface and bundle implementation would be included.  If you want different behavior, change it.  If you want to disable it, set the flag. However, there are bigger issues here that I’ll address a bit more down below.

 

3.	Bandwidth is infinite.

Throttling OSGi service again. Ditto to comment 2.

 

4.	The network is secure.

Standard OSGi service to plug in various authentication/authorization mechanisms.  By default it might be pass through but also have a different implementation that uses a simple username/password. Obviously LDAP, JAAS, and other bundles could be created and dropped into place. 

 

5.	Topology doesn't change.

Back to the Circuit Breaker, logging and perhaps notification mechanism.  Also the transport issue below where I’ll mention some configuration.

 

6.	There is one administrator.

//No particular plugin for this but standardized configuration and expected bundles help and this also relates to the transport discussion.

 

7.	Transport cost is zero.

//Probably not a concern here directly but will be a big issue of uServices.

 

8.	The network is homogeneous.

//I think this issue can be dealt with in our context with many of the standard libraries but can be abstracted a bit more.
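As promised under item 1, a minimal sketch of what such an error handler contract might look like; every name here is hypothetical, nothing is an existing Karaf API:

import org.osgi.service.component.annotations.Component;

// Hypothetical contract a Kontainer would require at bootstrap.
public interface ErrorHandlerService {
    /** True if this handler understands the failure. */
    boolean canHandle(Throwable error);

    /** React to the failure: log, retry, dead-letter and so on. */
    void handle(String endpoint, Throwable error);
}

// Default implementation: just log, so the Kontainer always boots.
@Component(service = ErrorHandlerService.class)
class LoggingErrorHandler implements ErrorHandlerService {
    @Override
    public boolean canHandle(Throwable error) {
        return true; // catch-all fallback
    }

    @Override
    public void handle(String endpoint, Throwable error) {
        System.err.println("error at " + endpoint + ": " + error.getMessage());
    }
}

A drop-in replacement would implement the same interface and be selected by the usual OSGi service ranking.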

 

Obviously a big issue we’ll see, and I’ve seen in the past, is chained request/response calls. Service 1 making a REST call to service 2 making a REST call to service 3…etc.  And all of a sudden the latency is a killer.

 

ServiceMix/Karaf/Camel can already abstract away some of that via property substitution. I’d suggest we take that one step further and put _all_ transport/protocol information in configuration and create a standardized URI. As a developer or a senior developer over a group of developers, I don’t want them to be concerned with the fiddly bits of the transport in the code and routes and I certainly don’t want to recompile just to make such changes.

 

Akka, for example, uses local URIs like akka://.  A similar Karaf/Camel URI could be used and mapped via the configuration files.  So the developer would always use karaf:// in their routes, and the configuration mapping would supply the URI specified, e.g. karaf://myserviceName.  The mapping itself might live in a transport.configuration.cfg file.

 

I believe that is important for a lot of reasons.  A mid-level or junior-level developer shouldn’t be involved in configuration like:

"ftp://foo@myserver?password=secret&amp;

           recursive=true&amp;

           ftpClient.dataTimeout=30000&amp;

           ftpClientConfig.serverLanguageCode=fr"

 

So the cfg file might look like this:

clientService="ftp://foo@myserver?password=secret <ftp://foo@myserver?password=secret&> &

           recursive=true&

           ftpClient.dataTimeout=30000&

           ftpClientConfig.serverLanguageCode=fr"

(At least properties get rid of the gawdaful escaped ampersands).

 

The code would then say “karaf://clientService”

 

One can do much of that via configuration right now but I think it is critical to move it completely to configuration so that admins know exactly what to change and where to find it when topologies change. It also means that when the backlash from microservice calling microservice calling microservice being slow happens, that simple mapping would permit things like going to JMS asynchronous request/response (or other fast, async mechanisms) that don’t swamp the virtual machine’s or Karaf instance resources. It would also allow for easy stubbing or mock testing of the Kontainer as it will be deployed without using PAX exam or other mechanism.
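A minimal Camel 2 sketch of that indirection, assuming a transport.properties file standing in for the .cfg (in Karaf the values would come from Config Admin); the karaf:// scheme itself doesn’t exist yet, so plain property placeholders carry the idea:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.properties.PropertiesComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class LogicalEndpointSketch {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        // transport.properties might hold, e.g.:
        //   clientService=ftp://foo@myserver?password=secret&recursive=true
        context.addComponent("properties",
                new PropertiesComponent("classpath:transport.properties"));
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // The route knows only the logical name; admins repoint
                // the properties file when the topology changes.
                from("{{clientService}}").to("log:received");
            }
        });
        context.start();
    }
}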

 

Creating standard OSGi service APIs in anticipation of these problems would permit an evolutionary approach to them in the future and specific solutions when a standard Kontainer is developed. Even standard error handler service implementations can be created.

 

Once such a basic, standard Kontainer exists, then uKontainers that implement basic functionality commonly used could be created.  There are JPA examples already.  But the average developer is going to be given a task to receive some canonical data model via a REST service and poke it into a database.  That database model probably won’t look like what they are receiving.  So a uKontainer that has a REST front end they can modify, a Dozer object mapping file in the middle with a transform, and a call to the database will be used repeatedly.

 

It may be that Oracle, MySQL, BerkeleyDB, and so on each end up with different error handler plugin implementations which are used with the same REST, mapping, JPA container. Just change the Maven dependency or profile.

 

There are a large number of examples like that.  In the case of that uKontainer there would likely be a JPAErrorService for catching common errors and another for Dozer errors and for unmarshaling errors.  As a developer looking to solve very specific problems, I just download the uKontainer and do the Dozer mapping, change some configuration and then test it.

 

That also means that, much like Camel EIPs, open source developers can focus on hardening these containers, fixing bugs, putting in performance enhancements and the like.  If a new error is coming from JPA that a user finds and isn’t being handled in a coherent fashion, then a new block or delegate code is added and released.  Just as we’d do with a Camel endpoint or component.

 

Having standard error handlers built into uKontainers would also help make coherent messages from the large and unwieldy stack traces full of reflection that we commonly see.  The error handler OSGi plugin for a given problem would be highly focused on identifying and reporting problems with a specific technology or set of technologies.
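Continuing the hypothetical ErrorHandlerService sketched after the fallacies list, a JPA-focused delegate might boil one of those traces down to a single coherent line; the class is illustrative, not an existing API:

import javax.persistence.PersistenceException;

// Technology-specific delegate: recognizes JPA failures anywhere in the
// cause chain and reports only the root cause.
class JpaErrorHandler implements ErrorHandlerService {
    @Override
    public boolean canHandle(Throwable error) {
        Throwable t = error;
        while (t != null) {
            if (t instanceof PersistenceException) {
                return true;
            }
            t = (t.getCause() == t) ? null : t.getCause(); // guard self-reference
        }
        return false;
    }

    @Override
    public void handle(String endpoint, Throwable error) {
        Throwable root = error;
        while (root.getCause() != null && root.getCause() != root) {
            root = root.getCause();
        }
        System.err.printf("JPA failure at %s: %s (%s)%n",
                endpoint, root.getMessage(), root.getClass().getSimpleName());
    }
}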

 

 

 

https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing

 

 

 

-- 
Christian Schneider
http://www.liquid-reality.de
 
Open Source Architect
http://www.talend.com

Re: Opinionated...

Posted by Christian Schneider <ch...@die-schneider.net>.
Remote Services can help a lot if you can represent the remote calls as 
a java interface.
This works well for a lot of transports like SOAP or the fastbin 
transport from Redhat.

Circuit Breaker could be nicely added to Remote Services in a 
transparent way. Remote Services have the notion of intents which 
represent a name for a needed feature. So a service could define that it 
needs a circuit-breaker. Alternatively the remote services provider 
could define a central config where you could add this intent to all 
remote services.

Anyway I think remote services could be the standard way in Karaf boot 
to expose and use services.
There are some ready to use examples in aries-rsa and cxf-dosgi as well 
as in ECF.

There are some cases that at least currently do not work perfectly:
- REST with links. JAX-RS as a pure transport works quite well with 
CXF-DOSGi and I think also with the ECF CXF provider. The problem is 
though that good REST style requires that you use http resource links a 
lot. This is not easy to represent in a pure java interface. Another 
thing is the notion of JAX-RS Applications. They provide a very nice way 
to enhance a set of REST services with additional config but they are not 
yet supported by CXF-DOSGi.
Something to keep track of for this is the Aries JAX-RS-whiteboard project. 
It implements the upcoming standard for exposing REST services in OSGi. 
I hope to make CXF-DOSGi and the JAX-RS whiteboard work together in the 
future.
- One way messaging. I think the purest form of remote communication is 
one way messages backed by JMS or Kafka or other messaging brokers. 
Unfortunately I think this is only partially supported in Remote 
Services. I plan to work on a provider that allows one way message based 
communication in a very simple way but have only got some simple 
prototypes till now.

Christian

> [1] https://www.osgi.org/developer/specifications/
> [2] https://wiki.eclipse.org/Karaf_Remote_Management_with_Eclipse
>
> On 1/13/2017 11:19 AM, Brad Johnson wrote:
>
> <stuff deleted>


-- 
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com


Re: Opinionated...

Posted by Scott Lewis <sl...@composent.com>.
On 1/14/2017 5:19 AM, Brad Johnson wrote:
>
> Scott,
>
> It’s funny that you mention OSGi Remote Services as that was sort of 
> in the back of my mind.   I think I recall Christian said he was 
> working on a Remote Services implementation as well. But I don’t know 
> enough about it yet to include it in the discussion.  I suspect what 
> I’m going to do is set up a GitHub project with a basic project and 
> then a set of appliances for a variety of enterprise integration 
> purposes.  That baseline has to be in place before lower levels can be 
> addressed.
>
> But I do think that a communications abstraction is going to be 
> necessary.  A next generation of communications pipes should abstract 
> protocols away from the OSGi programmer. Or as the great and powerful 
> Oz put it “never mind the little man behind the curtain”.
>

I don't think Remote Services would be accurately characterized as 
attempting to make networking transparent.   The spec does go to some 
trouble to provide standardized meta-data about a service to allow 
service implementers to characterize the service for consumers...i.e. in 
terms of its network characteristics (e.g. whether it's a remote 
service or local service to begin with).

In any event, although I would argue that rpc is not the only 
communications abstraction, it is a useful and common one...and 
re-building a service n times with different transport implementations 
doesn't seem to me like a good way to go either.

WRT Camel, it could (and probably should) be used to implement a remote 
services distribution provider.   It would be a simple matter to do so.

Scott


RE: Opinionated...

Posted by Brad Johnson <br...@redhat.com>.
Some of the mechanics of it, such as the observable/subscriber model.  But I do not know enough about it to comment in full.

 

I’d want to make sure though that under the covers whatever the mechanism is, it is capable of talking with “legacy” systems like REST, SOAP, JMS, etc.  Just a standardized way to abstract away the details of the transports in the Camel from/to calls themselves.  

 

If you think about Camel itself and direct-vm/SEDA, one really doesn’t have to care or know much about how those are done in memory.  JMS request/response queues are another example. One thinks about them almost as if they are a single request/response call while under the covers something different is happening.

 

What if something like that existed that had an interface or mechanism like the one RxJava seems to promote, but under the covers permitted configuration of the connections to/from other services. From a Camel route perspective, that would simply be a request/response call or an async call.

 

While a REST call by its very nature and transport is request/response, that doesn’t mean that I necessarily care about the response when making the call (other than failures which might be normalized).

 

The configuration would still have to happen at some level, obviously, but not at the level of Camel route definition. 

 

from(“camel-async:myReceivingEndpoint”)

from(“camel-sync:myOtherReceivingEndpoint”)

//same for “to” endpoints.

 

Obviously the various Camel “pipes” would need to be responsible for handling the mechanics of converting whatever that underlying mechanism is using. Some of that could be accomplished via current Camel routes inside those pipes. 

 

But it would push the configuration down one level and out of the routes themselves.  It would make switching from one type of transport to another a configuration change.  One example of the utility is in switching from test mode where async/sync might simply be direct-vm/SEDA mechanisms.  But when deployed they’d be the actual underlying REST, SOAP or JMS or ?? pipes. 

 

Right now this is hand waving on my part, and until I actually have (or get) some time to sit down and prototype it, it’s difficult to say how practical or easy that would be to implement.

 

Some would have to be able to understand misconfiguration.  What do you do when a JMS request/response is mapped to an async call?  That wouldn’t make sense.  One could potentially misconfigure that, so does the “pipe” just throw the returned payload away?  But that problem doesn’t get easier just by making all the connection details and flags explicit in the Camel route itself. If anything, those configuration details on from/to endpoints obscure the nature of what’s happening more than they help.  One can still have a from(“seda:xxx”).to(“someRequestResponse”).  That obviously doesn’t make sense. It might if there was some bean in between handling the returned data.

 

But if all I see in Camel routes is async or sync calls I can easily scan it to determine if something doesn’t look right and if the route isn’t behaving as I expect I know to look for the culprit in the pipe configuration itself.
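A rough sketch of that scanning-friendly style with today's Camel, using property placeholders in place of the imagined camel-async:/camel-sync: schemes; the endpoint names and cfg keys are invented:

import org.apache.camel.builder.RouteBuilder;

public class PipeStyleRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // test cfg:  orders.in=seda:orders        orders.out=mock:audit
        // prod cfg:  orders.in=jms:queue:orders   orders.out=http4://audit/events
        from("{{orders.in}}")
                .log("order ${body}")
                .to("{{orders.out}}");
    }
}

Swapping the cfg file flips the route between in-memory testing and the real transports without recompiling.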

 

 

 

 

From: Pratt, Jason [mailto:Jason.Pratt@windriver.com] 
Sent: Tuesday, January 17, 2017 12:58 PM
To: user@karaf.apache.org
Subject: RE: Opinionated...

 

Brad are you looking at doing some sort of RxJava here?

 

From: Brad Johnson [mailto:bradjohn@redhat.com] 
Sent: Saturday, January 14, 2017 5:20 AM
To: user@karaf.apache.org <ma...@karaf.apache.org> 
Subject: RE: Opinionated...

 

Scott,

 

It’s funny that you mention OSGi Remote Services as that was sort of in the back of my mind.   I think I recall Christian said he was working on a Remote Services implementation as well. But I don’t know enough about it yet to include it in the discussion.  I suspect what I’m going to do is set up a GitHub project with a basic project and then a set of appliances for a variety of enterprise integration purposes.  That baseline has to be in place before lower levels can be addressed.

 

But I do think that a communications abstraction is going to be necessary.  A next generation of communications pipes should abstract protocols away from the OSGi programmer. Or as the great and powerful Oz put it “never mind the little man behind the curtain”.

 

When writing code in a Camel route in an OSGi bundle the programmer should either be thinking about using a named request/response pipe or an event pipe – IPs, ports, transports, and so on should happen via straight configuration files, parsed, configured and then registered as services to be picked up in bundles.  There isn’t anything that stands in the way of doing that now other than a little elbow grease.

 

So I’ll definitely give the Remote Services a deeper look.

 

 

From: Scott Lewis [mailto:slewis@composent.com] 
Sent: Friday, January 13, 2017 7:16 PM
To: user@karaf.apache.org <ma...@karaf.apache.org> 
Subject: Re: Opinionated...

 

Hi Brad,

You might be interested in the OSGi Remote Services specification...which mentions the distributed computing fallacies in the introduction.   It's chapter 100 in the enterprise spec [1].

A big part of Remote Services is the ability to use OSGi service dynamics to 'deal-with' distributed systems issues like partial failure (i.e. network is reliable).   For example, one way to represent the failure of a remote service would be to make the local service proxy go away.  Note that with OSGi service dynamics and (e.g.) DS, the consequences of such a thing on dependent services can be easily handled without introducing special mechanism.

IMO another advantage of Remote Services is that the OSGi service contract/impl separation also decouples the service from the distribution system.   This allows the service designer to create remote services (API and impl) that are independent of the distribution system's serialization format (e.g json, xml, obj serialization, etc) and comm approach/protocol (e.g. http/rest, pub/sub messaging, mqtt, tcp, etc).   As an example of this, I've created three Karaf-hosted remote services that allow remote monitoring and management of Karaf bundles, services, and install/uninstall of Karaf features...and these services can be accessed from remote Eclipse via an mqtt broker, or via server-based tcp, or via other distribution systems without changing the service APIs or implementation.

Scott

[1] https://www.osgi.org/developer/specifications/
[2] https://wiki.eclipse.org/Karaf_Remote_Management_with_Eclipse
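A small DS sketch of the "proxy goes away" handling Scott describes; GreetingService is an invented placeholder for any remote service interface:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

interface GreetingService {
    String greet(String name);
}

@Component(service = GreetingClient.class)
public class GreetingClient {
    // The imported proxy may appear and vanish with the network.
    private volatile GreetingService greeting;

    @Reference(cardinality = ReferenceCardinality.OPTIONAL,
               policy = ReferencePolicy.DYNAMIC)
    void bindGreeting(GreetingService svc) { this.greeting = svc; }

    void unbindGreeting(GreetingService svc) { this.greeting = null; }

    public String greet(String name) {
        GreetingService svc = greeting;
        return svc != null ? svc.greet(name)
                           : "remote service unavailable, degrading gracefully";
    }
}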

On 1/13/2017 11:19 AM, Brad Johnson wrote:

That is certainly the sort of library that could be used as a standard. Get an agreement on the standard OSGi service interface and then use it and others for that implementation.  Which brings up a good question and issue.  There would have to be some set of standardized messages and exception types.  The CiruitBreaker example throws a CircuitBreakingException (naturally enough).  If there’s an ErrorHandlerService it would have to know the standard set of exceptions that could be expected or, at least, a set of parent classes.  Since CircuitBreakingException is a relatively simple class it would be perfect for a default ErrorHandlerService to catch for that class of exceptions.  

 

Obviously there will have to be some head scratching and chin rubbing about how the pieces fit together exactly.  The CircuitBreakerService (and the others too) could also be more like container classes that listen and pick up CircuitBreakerListenerService instances.  So one listener might just log the circuit breaker exception.  But you might instantiate an SMTPCircuitBreakerNotifcationService that implements the CircuitBreakerListenerService and fires off an email to an admin email address if the breaker is tripped. 

 

That CircuitBreakerService might also be picked by the Kontainer instance which listens for on/off control events from the outside world.  Some thinking to do there but they are tractable problems with services and events.

 

The main services like CircuitBreakerService and ThrottlerService might register themselves as providers with the ErrorHandlerService, which would catch the types of exceptions they throw.  It in turn could listen for custom ExceptionHandlerListener<T> services that handle specific exception types. Still thinking and hand waving about this, but I think a sane set of standard services, listeners and events could be created that would permit a user to create simple handlers to register.
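
The typed handlers could look like this (hypothetical):

    /** Hypothetical typed listener the ErrorHandlerService could
        dispatch to, registered per exception type. */
    public interface ExceptionHandlerListener<T extends Throwable> {
        Class<T> exceptionType();
        void onException(T exception);
    }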

 

There would also be the issue of how to automate injection of those into the Camel routes.  That doesn’t seem like it should be a daunting challenge but it would be important.  And I think it very important that those get injected automatically even if the services only provide basic logging initially with no custom client code.

 

From: James Carman [mailto:james@carmanconsulting.com] 
Sent: Friday, January 13, 2017 12:12 PM
To: bradjohn@redhat.com
Subject: Re: Opinionated...

 

Commons Lang3 has a pretty simple CircuitBreaker implementation that I used in Microbule:

https://github.com/Microbule/microbule/blob/master/decorator/circuitbreaker/src/main/java/org/microbule/decorator/circuitbreaker/CircuitBreakerFilter.java
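
For reference, a minimal usage sketch of that Lang3 breaker family (EventCountCircuitBreaker, added in Lang 3.5; everything apart from the Lang3 API is illustrative):

    import java.util.concurrent.TimeUnit;
    import org.apache.commons.lang3.concurrent.CircuitBreakingException;
    import org.apache.commons.lang3.concurrent.EventCountCircuitBreaker;

    public class GuardedCall {
        // Opens once 5 failures are recorded within a one-minute window.
        private final EventCountCircuitBreaker breaker =
                new EventCountCircuitBreaker(5, 1, TimeUnit.MINUTES);

        public String call() {
            if (!breaker.checkState()) {
                // Breaker is open: fail fast instead of calling out.
                throw new CircuitBreakingException("circuit open");
            }
            try {
                return doRemoteCall();
            } catch (RuntimeException e) {
                breaker.incrementAndCheckState();  // record the failure
                throw e;
            }
        }

        private String doRemoteCall() { return "ok"; }  // illustrative
    }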

On Fri, Jan 13, 2017 at 1:05 PM Brad Johnson <bradjohn@redhat.com> wrote:

Folks,

 

I wanted to make sure that my promoting CDI, Camel Java DSL, & static profiles didn’t obscure the point I was trying to make.  Whatever mechanics we choose I’d really like us to be unified behind a common paradigm so that our documentation, exemplars, archetypes, blogs, libraries, and so on are all organized the same and use the same mechanics and layouts for projects. 

 

We should promote an idiomatic way to develop software using Karaf Boot.  That’s one problem I hear from a lot of clients.  There are such cross-currents of information about how to develop OSGi-based software that it gets confusing.  Best or preferred practices are lost in the noise.  I won’t get into all that since I’m sure most of you have dealt with this problem. Not to pick on it but a good example is that the Camel in Action book recommends using Pojos instead of using Processors/Exchanges.  It is somewhere near the back of the book in a few pages. I don’t know how many examples on the web site actually use the Processor/Exchange but it is a lot. Then there are examples with Spring, Blueprint, Java DSL, Scala, etc.  There are annotations that only work in one environment but not in all of them.

 

By selecting an idiomatic and “opinionated” way of creating Karaf Boot microcontainers we could make sure that sort of confusion isn’t continued forward.  It would require a lot less documentation to cover the same ground and make editing and updating easier.  It would make creating sample and example projects a lot easier. It would simplify what Karaf Boot appliances have to support and make sure there aren’t concerns that work in one environment and not in another or that might work differently in a different environment.

 

I’m personally interested in Karaf Appliances with standard Maven structures, standard bundle structures, and reference implementations that have a good chunk of the basic functionality. I’d say we take a page from the “convention over configuration” book or, at least, a “conventional configuration” and likely a bit of both. Because the appliances are focused on microservices we should get out ahead of the Gartner hype cycle.  Right now we are at the Peak of Inflated Expectations and in a couple of years we’ll be at the Trough of Disillusionment.  That disillusionment will come for a number of reasons. Flying Spaghetti Monster topology will be one of them but, more important for a Karaf Appliance, is the consistent problem of “network fallacies”.  Every Karaf Kontainer should have standard OSGi service interfaces and basic implementations that address each of the fallacies that apply to a uService.  The Kontainers should insist on it and not make it optional. If the user doesn’t want that functionality they would then need to disable it via configuration.  But the Kontainer will get stuck in a grace period and then fail if an expected, standard service isn’t available. All of the standard OSGi service APIs would have basic implementations to start, with more specialized ones arriving as more specific Kontainers are built.  But, because they are standard services, new ones can be developed by the community or by the end developer.

 

As developers, we’ve all had to implement functionality and then come back and deal with error handling, security, etc. I say we simply cut those services into the Kontainer right from the get-go.  The Kontainer doesn’t run if it doesn’t find the service.  That isn’t to say these become a fundamental part of Karaf but a fundamental part of the Kontainer service that runs in Karaf.

 

The standard bundles would only implement basic functionality and not do anything sophisticated.  New bundles and libraries for more sophisticated implementations could be added later. All of the bundles would likely have disable flags if the developer found the particular concern irrelevant.  For example, security might not be relevant. The following aren’t meant to be comprehensive. Just addressing key concerns. Other standards like LoggingService might be included by default as well. 

 

The intent here isn’t to define the exact mechanics but the standard OSGi service interface that would be _required_ in any implementation of a Kontainer.  Even if the implementing bundle is simply a passthrough or can be disabled, it forces the developer to explicitly deal with the problems or to consciously choose to ignore them.

 

Because these service interfaces and the bundles that implement them are standard, the set can be selected through dependencies in the Maven build, features, and/or profiles.

 

1.	The network is reliable.

A standard “Error Handler” OSGi service.  The default bundle would simply capture errors/exceptions and log them.  Perhaps it would specify retries. Drop-in solutions might include errors going to dead letter queues and so on. The OSGi service interface is required for Kontainer bootstrap, so use the default, use a standard one, or create one of your own.  If developers want to change the configuration of this bundle or put in a new one, they know exactly what it is, where it exists, how it is specified to the build, and what configuration file is associated with it. No rummaging around through code.  When the inevitable errors, exceptions and problems arise, the developer isn’t left wondering where and how they should add the functionality to handle them.
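
The default bundle could be as plain as this sketch, implementing the hypothetical ErrorHandlerService from earlier and wired with DS:

    import org.osgi.service.component.annotations.Component;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    @Component(service = ErrorHandlerService.class)
    public class LoggingErrorHandler implements ErrorHandlerService {

        private static final Logger LOG =
                LoggerFactory.getLogger(LoggingErrorHandler.class);

        @Override
        public boolean handles(Throwable error) {
            return true;  // the default handler takes everything
        }

        @Override
        public void handle(String source, Throwable error) {
            LOG.error("[{}] {}", source, error.toString(), error);
        }
    }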

 

A standard “Circuit Breaker” service API and a basic implemented bundle should be provided.  Perhaps the standard bundle would simply count errors over a time frame and shut down if that limit is hit, with both values configurable. The default would be a rather unsophisticated implementation but would provide the convention and automated wiring of a circuit breaker OSGi service.  Other implementations might fire off emails to sysadmins, or combinations thereof. And if it is really undesirable, set a disable flag.
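
The standard contract might be no more than this (hypothetical):

    /** Hypothetical standard circuit-breaker service for a Kontainer. */
    public interface CircuitBreakerService {

        /** @return false while the named breaker is open. */
        boolean allowCall(String breakerName);

        /** Record a failure; may trip the breaker per configured limits. */
        void recordFailure(String breakerName, Throwable cause);
    }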

 

2.	Latency is zero.

A standard OSGi Throttling service interface and bundle implementation would be included.  If you want different behavior, change it.  If you want to disable it, set the flag. However, there are bigger issues here that I’ll address a bit more down below.
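
Camel’s Throttler EIP already provides the building block; a Kontainer could wire it in by default with the limits taken from configuration (a sketch, property names illustrative):

    import org.apache.camel.builder.RouteBuilder;

    public class ThrottledRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("{{clientService}}")
                .throttle(100)             // at most 100 exchanges...
                .timePeriodMillis(1000)    // ...per second, values from cfg
                .to("{{downstream}}");
        }
    }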

 

3.	Bandwidth is infinite.

Throttling OSGi service again. Ditto to comment 2.

 

4.	The network is secure.

Standard OSGi service to plug in various authentication/authorization mechanisms.  By default it might be a pass-through, but it could also have a different implementation that uses a simple username/password. Obviously LDAP, JAAS, and other bundles could be created and dropped into place.
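
A sketch of the contract (hypothetical):

    /** Hypothetical pluggable authentication contract; the default
        bundle might be a pass-through, with username/password, LDAP,
        or JAAS implementations dropped in instead. */
    public interface AuthenticationService {
        boolean authenticate(String principal, char[] credentials);
    }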

 

5.	Topology doesn't change.

Back to the Circuit Breaker, logging and perhaps notification mechanism.  Also the transport issue below where I’ll mention some configuration.

 

6.	There is one administrator.

//No particular plugin for this but standardized configuration and expected bundles help and this also relates to the transport discussion.

 

7.	Transport cost is zero.

//Probably not a concern here directly but will be a big issue for uServices.

 

8.	The network is homogeneous.

//I think this issue can be dealt with in our context with many of the standard libraries but can be abstracted a bit more.

 

Obviously a big issue we’ll see, and I’ve seen in the past, is chained request/response calls. Service 1 making a REST call to service 2 making a REST call to service 3…etc.  And all of a sudden the latency is a killer.

 

ServiceMix/Karaf/Camel can already abstract away some of that via property substitution. I’d suggest we take that one step further: put _all_ transport/protocol information in configuration and create a standardized URI. As a developer, or a senior developer overseeing a group of developers, I don’t want anyone worrying about the fiddly bits of the transport in the code and routes, and I certainly don’t want to recompile just to make such changes.

 

Akka, for example, uses local URIs like akka://.  A similar Karaf/Camel URI could be used and mapped via the configuration files.  So the developer would always use karaf:// in their routes, e.g. karaf://myserviceName, and the configuration mapping would supply the actual URI.  The mapping might live in a transport.configuration.cfg file.

 

I believe that is important for a lot of reasons.  A mid-level or junior-level developer shouldn’t be involved in configuration like:

" <ftp://foo@myserver/?> ftp://foo@myserver?password=secret&amp;

           recursive=true&amp;

           ftpClient.dataTimeout=30000&amp;

           ftpClientConfig.serverLanguageCode=fr"

 

So the cfg file might look like this:

clientService="ftp://foo@myserver?password=secret <ftp://foo@myserver?password=secret&> &

           recursive=true&

           ftpClient.dataTimeout=30000&

           ftpClientConfig.serverLanguageCode=fr"

(At least properties get rid of the god-awful escaped ampersands.)

 

The code would then say “karaf://clientService”
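
To be clear, no karaf: Camel component exists today; that is the proposal. Routes would read like:

    import org.apache.camel.builder.RouteBuilder;

    public class LogicalNameRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Both names resolve against transport.configuration.cfg at
            // startup (hypothetical component and service names).
            from("karaf://clientService")
                .to("karaf://auditService");
        }
    }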

 

One can do much of that via configuration right now, but I think it is critical to move it completely to configuration so that admins know exactly what to change and where to find it when topologies change. It also means that when the backlash comes from microservice calling microservice calling microservice being slow, that simple mapping would permit things like going to JMS asynchronous request/response (or other fast, async mechanisms) that don’t swamp the virtual machine’s or Karaf instance’s resources. It would also allow for easy stubbing or mock testing of the Kontainer as it will be deployed, without using Pax Exam or another mechanism.
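
Retargeting or stubbing then becomes a one-line change in the cfg file (illustrative values):

    # production
    clientService = "jms:queue:client.in"

    # local test run: swap in an in-JVM endpoint, no code change
    # clientService = "direct:clientServiceStub"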

 

Creating standard OSGi service APIs in anticipation of these problems would permit an evolutionary approach to them in the future and specific solutions when a standard Kontainer is developed. Even standard error handler service implementations can be created.

 

Once such a basic, standard Kontainer exists, then uKontainers that implement basic functionality commonly used could be created.  There are JPA examples already.  But the average developer is going to be given a task to receive some canonical data model via a REST service and poke it into a database.  That database model probably won’t look like what they are receiving.  So a uKontainer that has a REST front end they can modify, a Dozer object mapping file in the middle with a transform, and a call to the database will be used repeatedly.
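
The mapping step in such a uKontainer might be a thin wrapper over Dozer’s Java API (CanonicalOrder and OrderEntity are illustrative types; the field-level rules live in the Dozer mapping XML):

    import org.dozer.DozerBeanMapper;
    import org.dozer.Mapper;

    public class OrderMapping {

        private final Mapper mapper = new DozerBeanMapper();

        public OrderEntity toEntity(CanonicalOrder incoming) {
            return mapper.map(incoming, OrderEntity.class);
        }
    }

    class CanonicalOrder { public String id; }  // illustrative
    class OrderEntity   { public String id; }   // illustrative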

 

It may be that Oracle, MySQL, BerkeleyDB, and so on each end up with different error handler plugin implementations which are used with the same REST, mapping, JPA container. Just change the Maven dependency or profile.

 

There are a large number of examples like that.  In the case of that uKontainer there would likely be a JPAErrorService for catching common errors and others for Dozer errors and for unmarshaling errors.  As a developer looking to solve very specific problems, I just download the uKontainer, do the Dozer mapping, change some configuration and then test it.

 

That also means, that much like Camel EIPs, open source developers can focus on hardening these containers, fixing bugs, putting in performance enhancements and the like.  If a new error is coming from JPA that a user finds and isn’t being handled in a coherent fashion, then a new block or delegate code is added and released.  Just as we’d do with a Camel endpoint or component.

 

Having standard error handlers built into uKontainers would also help turn the large, unwieldy, reflection-filled stack traces we commonly see into coherent messages.  The error handler OSGi plugin for a given problem would be highly focused on identifying and reporting problems with a specific technology or set of technologies.

 

 

 

https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing

 


RE: Opinionated...

Posted by "Pratt, Jason" <Ja...@windriver.com>.
Brad, are you looking at doing some sort of RxJava here?



Re: Opinionated...

Posted by Scott Lewis <sl...@composent.com>.
Hi Brad,

You might be interested in the OSGi Remote Services 
specification...which mentions the distributed computing fallacies in 
the introduction.   It's chapter 100 in the enterprise spec [1].

A big part of Remote Services is the ability to use OSGi service
dynamics to deal with distributed systems issues like partial failure
(i.e., the fallacy that the network is reliable).   For example, one
way to represent the failure of a remote service would be to make the
local service proxy go away.  Note that with OSGi service dynamics and
(e.g.) DS, the consequences of such a thing for dependent services can
be handled without introducing a special mechanism.

IMO another advantage of Remote Services is that the OSGi service 
contract/impl separation also decouples the service from the 
distribution system.   This allows the service designer to create remote 
services (API and impl) that are independent of the distribution 
system's serialization format (e.g., json, xml, obj serialization, etc.) 
and comm approach/protocol (e.g. http/rest, pub/sub messaging, mqtt, 
tcp, etc).   As an example of this, I've created three Karaf-hosted 
remote services that allow remote monitoring and management of Karaf 
bundles, services, and install/uninstall of Karaf features...and these 
services can be accessed from remote Eclipse via an mqtt broker, or via 
server-based tcp, or via other distribution systems without changing the 
service APIs or implementation.

Scott

[1] https://www.osgi.org/developer/specifications/
[2] https://wiki.eclipse.org/Karaf_Remote_Management_with_Eclipse

On 1/13/2017 11:19 AM, Brad Johnson wrote:
>
> That is certainly the sort of library that could be used as a 
> standard. Get an agreement on the standard OSGi service interface and 
> then use it and others for that implementation.  Which brings up a 
> good question and issue. There would have to be some set of 
> standardized messages and exception types.  The CiruitBreaker example 
> throws a CircuitBreakingException (naturally enough).  If there\u2019s an 
> ErrorHandlerService it would have to know the standard set of 
> exceptions that could be expected or, at least, a set of parent 
> classes.  Since CircuitBreakingException is a relatively simple class 
> it would be perfect for a default ErrorHandlerService to catch for 
> that class of exceptions.
>
> Obviously there will have to be some head scratching and chin rubbing 
> about how the pieces fit together exactly.  The CircuitBreakerService 
> (and the others too) could also be more like container classes that 
> listen and pick up CircuitBreakerListenerService instances.  So one 
> listener might just log the circuit breaker exception.  But you might 
> instantiate an SMTPCircuitBreakerNotifcationService that implements 
> the CircuitBreakerListenerService and fires off an email to an admin 
> email address if the breaker is tripped.
>
> That CircuitBreakerService might also be picked by the Kontainer 
> instance which listens for on/off control events from the outside 
> world.  Some thinking to do there but they are tractable problems with 
> services and events.
>
> The main services like CircuitBreakerService and ThrottlerService 
> might register themselves as providers with the ErrorHandlerService 
> which would catch the types of exceptions they throw.  It in turn 
> could listen for custom ExceptionHandlerListener<T> that listen for 
> and handle specific exception types. Still thinking and hand waving 
> about this but I think a sane set of standard services, listeners and 
> events could be created that would permit a user to create simple 
> handlers to register.
>
> There would also be the issue of the issue of how to automate 
> injection of those into the Camel routes.  That doesn\u2019t seem like it 
> should be a daunting challenge but it would be important.  And I think 
> very important that those get injected automatically even if the 
> services only provide basic logging initially with no client custom code.
>
> *From:*James Carman [mailto:james@carmanconsulting.com]
> *Sent:* Friday, January 13, 2017 12:12 PM
> *To:* bradjohn@redhat.com
> *Subject:* Re: Opinionated...
>
> Commons Lang3 has a pretty simple CircuitBreaker implementation that I 
> used in Microbule:
>
> https://github.com/Microbule/microbule/blob/master/decorator/circuitbreaker/src/main/java/org/microbule/decorator/circuitbreaker/CircuitBreakerFilter.java
>
> On Fri, Jan 13, 2017 at 1:05 PM Brad Johnson <bradjohn@redhat.com 
> <ma...@redhat.com>> wrote:
>
>     Folks,
>
>     I wanted to make sure that my promoting CDI, Camel Java DSL, &
>     static profiles didn\u2019t obscure the point I was trying to make. 
>     Whatever mechanics we choose I\u2019d really like us to be unified
>     behind a common paradigm so that our documentation, exemplars,
>     archetypes, blogs, libraries, and so on are all organized the same
>     and use the same mechanics and layouts for projects.
>
>     We should promote an idiomatic way to develop software using Karaf
>     Boot.  That\u2019s one problem I hear from a lot of clients.  There are
>     such cross-currents of information about how to develop OSGi-based
>     software that it gets confusing.  Best or preferred practices are
>     lost in the noise.  I won\u2019t get into all that since I\u2019m sure most
>     of you have dealt this problem. Not to pick on it but a good
>     example is that the Camel in Action book recommends using Pojos
>     instead of using Processors/Exchanges.  It is on somewhere near
>     the back of the book in a few pages. I don\u2019t know how many
>     examples on the web site actually use the Processor/Exchange but
>     it is a lot. Then there are examples with Spring, Blueprint, Java
>     DSL, Scala, etc.  There are annotations that only work in one
>     environment but not in all of them.
>
>     By selecting an idiomatic and \u201copinionated\u201d way of creating Karaf
>     Boot microcontainers we could make sure that sort of confusion
>     isn\u2019t continued forward.  It would require a lot less
>     documentation to cover the same ground and make editing and
>     updating easier.  It would make creating sample and example
>     projects a lot easier. It would simplify what Karaf Boot
>     appliances have to support and make sure there aren\u2019t concerns
>     that work in one environment and not in another or that might work
>     differently in a different environment.
>
>     I\u2019m personally interested in Karaf Appliances with standard Maven
>     structures, standard  bundle structures, and reference
>     implementations that have a good chunk of the basic functionality.
>     I\u2019d say we take a page from the \u201cconvention over configuration\u201d
>     book or, at least, a \u201cconventional configuration\u201d and likely a bit
>     of both. Because the appliances are focused on microservices we
>     should get out ahead of the Gartner hype cycle.  Right now we are
>     at the Peak of Inflated Expectations and in a couple of years
>     we\u2019ll be at the Trough of Disillusionment. That disillusionment
>     will come for a number of reasons. Flying Spaghetti Monster
>     topology will be one of them but, more importantly for a Karaf
>     Appliance, is the consistent problem of \u201cnetwork fallacies\u201d. 
>     Every Karaf Kontainer should have standard OSGi service interfaces
>     and basic implementations that address each of the fallacies that
>     apply to a uService.  The Kontainers should insist on it and not
>     make it optional. If the user doesn\u2019t want that functionality they
>     would then need to disable via configuration.  But the Kontainer
>     will get stuck in a grace period and then fail if an expected,
>     standard service isn\u2019t available. All of the standard OSGi service
>     APIs would have basic implementations to start but as more
>     specific Kontainers.  But, because they are standard services new
>     ones can be developed by the community or by the end developer.
>
>     As developers, we’ve all had to implement functionality and then
>     come back and deal with error handling, security, etc. I say we
>     simply cut those services into the Kontainer right from the
>     get-go.  The Kontainer doesn’t run if it doesn’t find the
>     service.  That isn’t to say these become a fundamental part of
>     Karaf, but a fundamental part of the Kontainer service that runs
>     in Karaf.
>
>     The standard bundles would only implement basic functionality and
>     not do anything sophisticated. New bundles and libraries with more
>     sophisticated implementations could be added later. All of the
>     bundles would likely have disable flags for cases where the
>     developer finds a particular concern irrelevant; security, for
>     example, might not be relevant. The following list isn’t meant to
>     be comprehensive, just to address the key concerns. Other
>     standards like a LoggingService might be included by default as
>     well.
>
>     The intent here isn’t to define the exact mechanics but the
>     standard OSGi service interfaces that would be _required_ in any
>     implementation of a Kontainer. Even if the implementing bundle is
>     simply a passthrough or can be disabled, requiring the interface
>     forces the developer to explicitly deal with the problems or
>     consciously choose to ignore them.
>
>     Because these service interfaces and the bundles that implement
>     them are standard, the set can be selected through the
>     dependencies declared in the Maven build, features, and/or
>     profiles.
>
>      1. The network is reliable.
>
>     A standard “Error Handler” OSGi service.  The default bundle would
>     simply capture errors/exceptions and log them.  Perhaps it would
>     also support retries. Drop-in solutions might route errors to
>     dead letter queues and so on. The OSGi service interface is
>     required for Kontainer bootstrap, so use the default, use a
>     standard one, or create your own.  If developers want to change
>     the configuration of this bundle or put in a new one, they know
>     exactly what it is, where it exists, how it is specified to the
>     build, and what configuration file is associated with it. No
>     rummaging around through code.  When the inevitable errors,
>     exceptions, and problems arise, the developer isn’t left wondering
>     where and how to add the functionality to handle them.
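>
>     As a rough sketch (the package and method names here are
>     hypothetical, not a settled API), the required interface might be
>     as small as:
>
>         package org.apache.karaf.kontainer.error;  // hypothetical package
>
>         /**
>          * Hypothetical required Kontainer service.  Every Kontainer
>          * resolves one implementation at bootstrap: the default
>          * logging bundle, or a drop-in replacement (dead letter
>          * queue, retries, etc.).
>          */
>         public interface ErrorHandlerService {
>
>             /** Handle an exception raised by the named service or route. */
>             void handle(String sourceService, Throwable error);
>
>             /** True if this handler wants exceptions of the given type. */
>             boolean accepts(Class<? extends Throwable> type);
>         }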
>
>     A standard “Circuit Breaker” service API and a basic implementing
>     bundle should be provided.  Perhaps the standard bundle would
>     simply count errors over a time frame, shut down if a limit is
>     hit, and allow those values to be configured. The default would be
>     a rather unsophisticated implementation but would provide the
>     convention and automated wiring of a circuit breaker OSGi
>     service.  Other implementations might fire off emails to sysadmins
>     or be combinations. And if it is really undesirable, set the
>     disable flag.
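>
>     A minimal sketch of that default “count errors over a time frame”
>     behavior (names hypothetical, thresholds coming from
>     configuration):
>
>         package org.apache.karaf.kontainer.breaker;  // hypothetical
>
>         /** Naive breaker: trips after maxErrors within a rolling window. */
>         public class CountingCircuitBreaker {
>             private final int maxErrors;
>             private final long windowMillis;
>             private int errors;
>             private long windowStart = System.currentTimeMillis();
>             private volatile boolean open;
>
>             public CountingCircuitBreaker(int maxErrors, long windowMillis) {
>                 this.maxErrors = maxErrors;
>                 this.windowMillis = windowMillis;
>             }
>
>             /** Record a failure; trips once maxErrors occur in one window. */
>             public synchronized void recordError() {
>                 long now = System.currentTimeMillis();
>                 if (now - windowStart > windowMillis) { // stale window, restart
>                     windowStart = now;
>                     errors = 0;
>                 }
>                 if (++errors >= maxErrors) {
>                     open = true;                        // trip the breaker
>                 }
>             }
>
>             public boolean isOpen() { return open; }
>
>             public synchronized void reset() { open = false; errors = 0; }
>         }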
>
>      2. Latency is zero.
>
>     A standard OSGi Throttling service interface and bundle
>     implementation would be included.  If you want different behavior,
>     change it.  If you want to disable it, set the flag. However,
>     there are bigger issues here that I’ll address further below.
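>
>     Again only as a sketch with hypothetical names, the service could
>     start as little more than:
>
>         package org.apache.karaf.kontainer.throttle;  // hypothetical
>
>         /** Hypothetical required service: gates work under load. */
>         public interface ThrottlingService {
>             /** Block (or reject) until the caller may proceed. */
>             void acquire(String sourceService) throws InterruptedException;
>         }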
>
>      3. Bandwidth is infinite.
>
>     Throttling OSGi service again. Ditto to comment 2.
>
>      4. The network is secure.
>
>     A standard OSGi service to plug in various
>     authentication/authorization mechanisms.  By default it might be a
>     passthrough, but a different implementation could use a simple
>     username/password. Obviously LDAP, JAAS, and other bundles could
>     be created and dropped into place.
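>
>     For example (hypothetical interface; the default bundle would be
>     the passthrough):
>
>         package org.apache.karaf.kontainer.security;  // hypothetical
>
>         /** Hypothetical pluggable auth service; default permits everything. */
>         public interface AuthService {
>             boolean authenticate(String principal, char[] credentials);
>             boolean authorize(String principal, String operation);
>         }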
>
>      5. Topology doesn't change.
>
>     Back to the Circuit Breaker, logging, and perhaps a notification
>     mechanism.  See also the transport issue below, where I’ll mention
>     some configuration.
>
>      6. There is one administrator.
>
>     //No particular plugin for this, but standardized configuration
>     and expected bundles help, and this also relates to the transport
>     discussion.
>
>      7. Transport cost is zero.
>
>     //Probably not a concern here directly, but it will be a big issue
>     for uServices.
>
>      8. The network is homogeneous.
>
>     //I think this issue can be dealt with in our context by many of
>     the standard libraries, but it can be abstracted a bit more.
>
>     Obviously a big issue we’ll see, and one I’ve seen in the past, is
>     chained request/response calls: service 1 making a REST call to
>     service 2, which makes a REST call to service 3, and so on.  All
>     of a sudden the latency is a killer.
>
>     ServiceMix/Karaf/Camel can already abstract away some of that via
>     property substitution. I’d suggest we take that one step further
>     and put _all_ transport/protocol information in configuration,
>     with a standardized URI. As a developer, or a senior developer
>     over a group of developers, I don’t want them concerned with the
>     fiddly bits of the transport in the code and routes, and I
>     certainly don’t want to recompile just to make such changes.
>
>     Akka, for example, uses local URIs like akka://.  A similar
>     Karaf/Camel URI could be used and mapped via the configuration
>     files.  The developer would always use karaf:// in their routes,
>     e.g. karaf://myServiceName, and the configuration mapping, perhaps
>     in a transport.configuration.cfg file, would supply the real URI.
>
>     I believe that is important for a lot of reasons.  A mid-level or
>     junior-level developer shouldn’t be involved in configuration like:
>
>     "ftp://foo@myserver?password=secret&amp;
>
>                recursive=true&amp;
>
>                ftpClient.dataTimeout=30000&amp;
>
>                ftpClientConfig.serverLanguageCode=fr"
>
>     So the cfg file might look like this:
>
>     clientService="ftp://foo@myserver?password=secret&
>
>     recursive=true&
>
>     ftpClient.dataTimeout=30000&
>
>     ftpClientConfig.serverLanguageCode=fr"
>
>     (At least properties get rid of the gawdaful escaped ampersands).
>
>     The code would then say “karaf://clientService”.
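>
>     In Camel Java DSL terms the route references only the logical
>     name.  The karaf: component here is hypothetical; the mapping to
>     the real ftp: URI would come from the cfg file above:
>
>         import org.apache.camel.builder.RouteBuilder;
>
>         public class ClientServiceRoute extends RouteBuilder {
>             @Override
>             public void configure() {
>                 // Stable logical name; admins repoint it in
>                 // configuration, with no recompile.
>                 from("karaf://clientService")
>                     .to("log:client-files");
>             }
>         }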
>
>     One can do much of that via configuration right now, but I think
>     it is critical to move it completely to configuration so that
>     admins know exactly what to change and where to find it when
>     topologies change. It also means that when the backlash arrives
>     from microservice calling microservice calling microservice being
>     slow, that simple mapping would permit things like moving to JMS
>     asynchronous request/response (or other fast, async mechanisms)
>     that don’t swamp the virtual machine’s or Karaf instance’s
>     resources. It would also allow easy stubbing or mock testing of
>     the Kontainer as it will be deployed, without using PAX Exam or
>     another mechanism.
>
>     Creating standard OSGi service APIs in anticipation of these
>     problems would permit an evolutionary approach to them in the
>     future and specific solutions when a standard Kontainer is
>     developed. Even standard error handler service implementations can
>     be created.
>
>     Once such a basic, standard Kontainer exists, uKontainers that
>     implement commonly used functionality could be created.  There are
>     JPA examples already.  But the average developer is going to be
>     given a task to receive some canonical data model via a REST
>     service and poke it into a database.  That database model probably
>     won’t look like what they are receiving.  So a uKontainer that has
>     a REST front end they can modify, a Dozer object-mapping file in
>     the middle with a transform, and a call to the database will be
>     used repeatedly.
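>
>     A sketch of the route such a uKontainer might ship with.  The
>     dozer: and jpa: components are the existing Camel ones; the
>     mapping file and com.example class names are placeholders the
>     developer would edit:
>
>         import org.apache.camel.builder.RouteBuilder;
>
>         public class RestToJpaRoute extends RouteBuilder {
>             @Override
>             public void configure() {
>                 // REST front end the developer can modify.
>                 rest("/orders").post().to("direct:persist");
>
>                 from("direct:persist")
>                     // Dozer transform: canonical payload -> entity model.
>                     .to("dozer:mapOrder?mappingFile=order-mapping.xml"
>                         + "&targetModel=com.example.OrderEntity")
>                     // Poke it into the database.
>                     .to("jpa:com.example.OrderEntity");
>             }
>         }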
>
>     It may be that Oracle, MySQL, BerkeleyDB, and so on each end up
>     with different error handler plugin implementations used with the
>     same REST, mapping, and JPA container. Just change the Maven
>     dependency or profile.
>
>     There are a large number of examples like that.  In the case of
>     that uKontainer there would likely be a JPAErrorService for
>     catching common errors, and others for Dozer errors and
>     unmarshaling errors.  As a developer looking to solve a very
>     specific problem, I just download the uKontainer, do the Dozer
>     mapping, change some configuration, and then test it.
>
>     That also means that, much like Camel EIPs, open source developers
>     can focus on hardening these containers, fixing bugs, putting in
>     performance enhancements, and the like.  If a user finds a new
>     error coming from JPA that isn’t being handled in a coherent
>     fashion, then a new block or delegate code is added and released,
>     just as we’d do with a Camel endpoint or component.
>
>     Having standard error handlers built into uKontainers would also
>     help produce coherent messages from the large, unwieldy,
>     reflection-filled stack traces we commonly see.  The error handler
>     OSGi plugin for a given problem would be highly focused on
>     identifying and reporting problems with a specific technology or
>     set of technologies.
>
>     https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing
>


RE: Opinionated...

Posted by Brad Johnson <br...@redhat.com>.
That is certainly the sort of library that could be used as a standard. Get agreement on the standard OSGi service interface and then use it and others for the implementation.  Which brings up a good question and issue: there would have to be some set of standardized messages and exception types.  The CircuitBreaker example throws a CircuitBreakingException (naturally enough).  If there’s an ErrorHandlerService, it would have to know the standard set of exceptions to expect or, at least, a set of parent classes.  Since CircuitBreakingException is a relatively simple class, it is a perfect example of the kind of exception a default ErrorHandlerService should catch.
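
For concreteness, the Commons Lang3 breaker James points to below is org.apache.commons.lang3.concurrent.EventCountCircuitBreaker, and a bare-bones use (separate from Microbule’s JAX-RS filter) looks roughly like this:

    import java.util.concurrent.TimeUnit;

    import org.apache.commons.lang3.concurrent.CircuitBreakingException;
    import org.apache.commons.lang3.concurrent.EventCountCircuitBreaker;

    public class Lang3BreakerExample {

        // Open the breaker after 5 recorded errors within 10 seconds.
        private static final EventCountCircuitBreaker BREAKER =
                new EventCountCircuitBreaker(5, 10, TimeUnit.SECONDS);

        public static void callService() {
            if (!BREAKER.checkState()) {
                // Breaker is open: fail fast instead of calling out.
                throw new CircuitBreakingException("service unavailable");
            }
            try {
                callRemoteService();
            } catch (Exception e) {
                BREAKER.incrementAndCheckState();  // record the failure
            }
        }

        private static void callRemoteService() { /* stub */ }
    }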

 

Obviously there will have to be some head scratching and chin rubbing about how the pieces fit together exactly.  The CircuitBreakerService (and the others too) could also act more like container classes that listen for and pick up CircuitBreakerListenerService instances.  So one listener might just log the circuit breaker exception, while an SMTPCircuitBreakerNotificationService implementing CircuitBreakerListenerService might fire off an email to an admin address if the breaker is tripped.
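
A rough sketch of that listener arrangement (all of these names are still hand-waving, not a settled API):

    // Hypothetical whiteboard-style listener the service would pick up.
    public interface CircuitBreakerListenerService {
        void breakerOpened(String serviceName);
        void breakerClosed(String serviceName);
    }

    // One registered listener might log; this one sends mail instead.
    public class SmtpCircuitBreakerNotificationService
            implements CircuitBreakerListenerService {

        public void breakerOpened(String serviceName) {
            // fire off an email to the configured admin address (omitted)
        }

        public void breakerClosed(String serviceName) {
            // notify recovery (omitted)
        }
    }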

 

That CircuitBreakerService might also be picked up by the Kontainer instance, which listens for on/off control events from the outside world.  Some thinking to do there, but they are tractable problems with services and events.

 

The main services like CircuitBreakerService and ThrottlerService might register themselves as providers with the ErrorHandlerService, which would catch the types of exceptions they throw.  It in turn could listen for custom ExceptionHandlerListener<T> instances that handle specific exception types. Still thinking and hand waving about this, but I think a sane set of standard services, listeners, and events could be created that would permit a user to create and register simple handlers.
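
Something along these lines, again purely illustrative:

    // Hypothetical typed handler the ErrorHandlerService dispatches to.
    public interface ExceptionHandlerListener<T extends Throwable> {

        /** The exception type this listener wants to handle. */
        Class<T> exceptionType();

        /** Called when a matching exception reaches the error handler. */
        void onException(String sourceService, T error);
    }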

 

There is also the issue of how to automate injection of those into the Camel routes.  That doesn’t seem like it should be a daunting challenge, but it would be important.  And I think it is very important that they get injected automatically, even if the services only provide basic logging initially with no custom client code.
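
One plausible wiring (illustrative only) is a shared RouteBuilder base class that installs the hypothetical ErrorHandlerService sketched earlier, so every route gets the behavior with no client code:

    import org.apache.camel.Exchange;
    import org.apache.camel.builder.RouteBuilder;

    // Hypothetical base class each Kontainer route would extend.
    public abstract class KontainerRouteBuilder extends RouteBuilder {

        private final ErrorHandlerService errorHandler;  // injected OSGi service

        protected KontainerRouteBuilder(ErrorHandlerService errorHandler) {
            this.errorHandler = errorHandler;
        }

        @Override
        public void configure() throws Exception {
            // Route all uncaught exceptions to the standard service.
            onException(Throwable.class)
                .handled(true)
                .process(exchange -> errorHandler.handle(
                        exchange.getFromRouteId(),
                        exchange.getProperty(Exchange.EXCEPTION_CAUGHT,
                                             Throwable.class)));
            configureRoutes();
        }

        /** Subclasses add their actual routes here. */
        protected abstract void configureRoutes() throws Exception;
    }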

 

From: James Carman [mailto:james@carmanconsulting.com] 
Sent: Friday, January 13, 2017 12:12 PM
To: bradjohn@redhat.com
Subject: Re: Opinionated...

 

Commons Lang3 has a pretty simple CircuitBreaker implementation that I used in Microbule:

https://github.com/Microbule/microbule/blob/master/decorator/circuitbreaker/src/main/java/org/microbule/decorator/circuitbreaker/CircuitBreakerFilter.java
