Posted to user@karaf.apache.org by Mike Hummel <mh...@mhus.de> on 2020/04/30 21:20:10 UTC

Karaf as micro service container

Hello,

Referring to Dimitry's presentation, I understand that it's useful to group a set of services in the same OSGi container. This reduces complexity and the work involved in updates, CI and more.

Following are some ideas to make Karaf more Kubernetes-ready ...

- That's already possible without any changes; it's purely a design decision.

- What is currently missing is a useful Docker image that supports configuration via environment variables, changing the user id, and sidecars like Filebeat or Loki Promtail. I have already created a Docker image with features like this and can contribute it if wanted.

- Support for tracing collectors like Jaeger would also help to integrate Karaf into a Kubernetes environment.

- I have already created a health and readiness check servlet using the Felix health framework and watching log messages (a minimal sketch of the servlet idea follows this list). Maybe this could become part of Karaf core.

- Maybe it's possible to combine Karaf with existing mesh tools to quickly create Kubernetes configurations with sidecars etc.

- A common topic for clustering is a shared cache and locking. It's possible with Hazelcast, but I was not able to use Hazelcast's caching service - lots of class loader issues. I was able to use Ehcache (no locking) and Redis for these features. It's a shame that the Java Caching API in particular is not usable in OSGi with Hazelcast. It's also not easy with Ehcache, but I found a workaround.

- Cluster-wide cron jobs are also needed - jobs that should be executed only once in the cluster. Maybe it's possible to generate a Kubernetes CronJob config instead of using a scheduler (maybe by using annotations).
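To make the health/readiness point concrete, here is a minimal sketch of the servlet idea - plain javax.servlet only, not the Felix health framework wiring, and the class and field names are just placeholders:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Kubernetes points its readinessProbe at this servlet's path and gets 200 or 503.
    public class ReadinessServlet extends HttpServlet {

        private volatile boolean ready; // flipped once the required services and log checks pass

        public void setReady(boolean ready) {
            this.ready = ready;
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            if (ready) {
                resp.setStatus(HttpServletResponse.SC_OK);
                resp.getWriter().println("READY");
            } else {
                resp.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE, "not ready");
            }
        }
    }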

My background:

I'm working for a telecommunication company on process automation. Four years ago we decided to use Karaf and JMS to create a decentralised environment. The control center uses Vaadin as its UI framework. Currently I'm planning to migrate the environment into a Kubernetes cloud.

I found Dimitry's presentation very helpful. I will try not to split every service into a separate container. In this scenario it's also possible to share database entities within the same JVM - this even boosts performance - instead of going through REST all the time.

Support for API gateways is also interesting. For me a big problem is service discovery: it makes no sense to configure each service manually in the API gateway. A more comfortable way would be some kind of discovery, where Karaf automatically configures/updates the gateway based on the provided JAX-RS resources.

cu

Mike









Re: Karaf as micro service container

Posted by Mike Hummel <mh...@mhus.de>.

> On 1. May 2020, at 06:51, Jean-Baptiste Onofre <jb...@nanthrax.net> wrote:
> 
> 
> Thanks for sharing Mike and I like your ideas. About API Gateway, we started a Karaf Vineyard PoC with discovery. I know that some companies are working on a Gateway with discovery and pattern as well (Yupiik is working on Galaxy gateway for instance).
> I would be more than happy to chat with you to move forward on your points and improve Karaf !

Sorry, I did not respond to this.

A talk would be great. I'm available from Wednesday next week. How can I reach you? Slack is not possible for me; I don't have an @apache.org mail account to join with.




Re: Karaf as micro service container

Posted by Mike Hummel <mh...@mhus.de>.
Hi ...

> On 1. May 2020, at 06:51, Jean-Baptiste Onofre <jb...@nanthrax.net> wrote:
> 
> [...]
>> 
>> - What is currently missing is a useful Docker image that supports configuration via environment variables, changing the user id, and sidecars like Filebeat or Loki Promtail. I have already created a Docker image with features like this and can contribute it if wanted.
> 
> It's already possible: you can populate and overwrite any config (originally located in etc) with env variables. The format is pid:property=value, for instance -Dorg.apache.karaf.log:foo=bar. I worked on this feature; the PR will be opened soon.
> 

But that only works with Java system properties (System.getProperty(...)), not with environment variables (System.getenv()), right? (A small sketch of what I have in mind follows below.)

The default container behaviour is to pass configuration as environment variables or, even more conveniently, as included files (the WordPress image is a good example), e.g. loading all files from a special folder.

Changing the user id at start time is also important. macOS and Linux have different default user ids (501 vs. 1000), so without it it's not easy to mount and write external resources.
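To illustrate the environment-variable side, here is a minimal sketch of the lookup order I have in mind - environment first, then the pid:property system property JB mentioned; the environment variable naming convention is just an assumption:

    // Sketch only: resolve a config value from the environment first, then from a
    // "-Dpid:property=value" system property, then fall back to a default.
    class ConfigLookup {
        static String resolve(String pid, String property, String fallback) {
            // assumed convention: org.apache.karaf.log / foo -> ORG_APACHE_KARAF_LOG_FOO
            String envKey = (pid + "_" + property).replace('.', '_').toUpperCase();
            String value = System.getenv(envKey);
            if (value == null) {
                value = System.getProperty(pid + ":" + property); // e.g. -Dorg.apache.karaf.log:foo=bar
            }
            return value != null ? value : fallback;
        }
    }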


>> 
>> - Support for tracing collectors like Jaeger would also help to integrate Karaf into a Kubernetes environment.
> 
> Not sure, but you can already use zipkin, dropwizard, etc, with Karaf Decanter.
> 

I tried Decanter but didn't see the benefit if I already use ELK or Loki to collect the log files directly from karaf/data/log. But maybe I just missed it :(


>> 
>> - I have already created a health and readiness check servlet using the Felix health framework and watching log messages. Maybe this could become part of Karaf core.
> 
> Most probably part of Decanter IMHO.
> 
>> 
>> - Maybe it's possible to combine Karaf with existing mesh tools to quickly create Kubernetes configurations with sidecars etc.
> 
> It sounds like a good idea, +1.
> 
I can give you more information about the possibilities in a few weeks - I still need to investigate this topic.

>> 
>> - A common topic for clustering is a shared cache and locking. It's possible with Hazelcast, but I was not able to use Hazelcast's caching service - lots of class loader issues. I was able to use Ehcache (no locking) and Redis for these features. It's a shame that the Java Caching API in particular is not usable in OSGi with Hazelcast. It's also not easy with Ehcache, but I found a workaround.
> 
> Hazelcast works fine (we are using it in Cellar). Apache Ignite also provides Karaf support. I think it makes sense to add an example/documentation/blog about that. +1
> 
1) I tried out Cellar using the tutorial, but it was not working properly - I tried config and bundle synchronisation in Docker containers. The nodes were connected and could see each other, but they were not syncing.

2) I used the Hazelcast delivered with Cellar, but it was not possible to use its caching mechanisms (see my mail to Scott).

>> 
>> - Cluster-wide cron jobs are also needed - jobs that should be executed only once in the cluster. Maybe it's possible to generate a Kubernetes CronJob config instead of using a scheduler (maybe by using annotations).
>> 
> 
> Karaf includes a scheduler for cron and triggers (it's powered by Quartz under the hood). It exposes a service, so we can add a new implementation of the Karaf scheduler for k8s CronJobs.
> 
It's a little bit complicated: some tasks need to run in every instance separately (e.g. cleanup of internal resources), while others are allowed to run only once in the cluster. Solutions could be:

1) A separate system with a pluggable control implementation for single-instance execution (connected to the local scheduler) and cluster execution (connected to k8s).

2) A separate annotation hint on the existing scheduled service, with the same behaviour as described in (1).

I prefer (2) to keep the services clean and separate (see the sketch below).
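To sketch what I mean by (2) - everything here is hypothetical, none of these names exist in Karaf today:

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    // Hypothetical hint for the scheduler: run on every instance or only once per cluster.
    enum Scope { EVERY_INSTANCE, ONCE_PER_CLUSTER }

    @Retention(RetentionPolicy.RUNTIME)
    @interface ClusterSchedule {
        String cron();
        Scope scope() default Scope.EVERY_INSTANCE;
    }

    // The job stays a plain service; a cluster-aware scheduler (local Quartz, Cellar,
    // or a generated k8s CronJob) decides where and how often it actually runs.
    @ClusterSchedule(cron = "0 0 3 * * ?", scope = Scope.ONCE_PER_CLUSTER)
    class NightlyCleanupJob implements Runnable {
        @Override
        public void run() {
            // cleanup work, executed exactly once in the cluster per trigger
        }
    }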

>> My background:
>> 
>> I'm working for a telecommunication company on process automation. Four years ago we decided to use Karaf and JMS to create a decentralised environment. The control center uses Vaadin as its UI framework. Currently I'm planning to migrate the environment into a Kubernetes cloud.
>> 
>> I found Dimitry's presentation very helpful. I will try not to split every service into a separate container. In this scenario it's also possible to share database entities within the same JVM - this even boosts performance - instead of going through REST all the time.
>> 
>> Support for API gateways is also interesting. For me a big problem is service discovery: it makes no sense to configure each service manually in the API gateway. A more comfortable way would be some kind of discovery, where Karaf automatically configures/updates the gateway based on the provided JAX-RS resources.
> 
> 
> Thanks for sharing Mike and I like your ideas. About API Gateway, we started a Karaf Vineyard PoC with discovery. I know that some companies are working on a Gateway with discovery and pattern as well (Yupiik is working on Galaxy gateway for instance).
> I would be more than happy to chat with you to move forward on your points and improve Karaf !
> 
Currently I'm thinking about a project that collects the registered JAX-RS resources and configures TYK automatically, all with a flexible API (a sketch of the idea follows below). This solution would use the existing resources, but it would be a separate project outside of Karaf. I will discuss it with my team next week and can give you an update.
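A rough sketch of the idea - the JAX-RS whiteboard property used in the filter and the gateway call are assumptions, and TYK's real admin API is not shown:

    import javax.ws.rs.Path;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.InvalidSyntaxException;
    import org.osgi.framework.ServiceReference;
    import org.osgi.util.tracker.ServiceTracker;

    // Track services registered as JAX-RS whiteboard resources and report their
    // root paths, so a gateway (e.g. TYK) could be configured from them.
    class GatewaySync extends ServiceTracker<Object, Object> {

        GatewaySync(BundleContext ctx) throws InvalidSyntaxException {
            super(ctx, ctx.createFilter("(osgi.jaxrs.resource=true)"), null);
        }

        @Override
        public Object addingService(ServiceReference<Object> ref) {
            Object resource = context.getService(ref);
            Path root = resource.getClass().getAnnotation(Path.class);
            if (root != null) {
                // placeholder for the real gateway admin call
                System.out.println("register route in gateway: " + root.value());
            }
            return resource;
        }
    }

Opening the tracker on bundle activation (and removing routes again in removedService) would keep the gateway in sync with what is actually deployed.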


> By the way, I liked the tag line used by Toni: Karaf is a modulith runtime (meaning the next iteration after monolith and microservices) ;)

Yeah - the concept of individual services in the same modular engine - the practical way rather than the evangelical one.

Regards,
Mike

> Thanks again !
> Regards
> JB
> 
>> 
>> cu
>> 
>> Mike
>> 
> 


Re: Karaf as micro service container

Posted by Jean-Baptiste Onofre <jb...@nanthrax.net>.
Hi Mike,

See my answer inline

> Le 30 avr. 2020 à 23:20, Mike Hummel <mh...@mhus.de> a écrit :
> 
> Hello,
> 
> Referring to Dimitry's presentation, I understand that it's useful to group a set of services in the same OSGi container. This reduces complexity and the work involved in updates, CI and more.
> 
> Following are some ideas to make Karaf more Kubernetes-ready ...
> 
> - That's already possible without any changes; it's purely a design decision.

Yes, Karaf is already ready for that.

> 
> - What is currently missing is a useful Docker image that supports configuration via environment variables, changing the user id, and sidecars like Filebeat or Loki Promtail. I have already created a Docker image with features like this and can contribute it if wanted.

It's already possible: you can populate and overwrite any config (originally located in etc) with env variables. The format is pid:property=value, for instance -Dorg.apache.karaf.log:foo=bar. I worked on this feature; the PR will be opened soon.
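To make the override mechanism concrete: this is not the code from the PR, just a sketch of the general shape of applying one pid:property=value pair through ConfigurationAdmin (names are illustrative):

    import java.util.Dictionary;
    import java.util.Hashtable;
    import org.osgi.service.cm.Configuration;
    import org.osgi.service.cm.ConfigurationAdmin;

    // Apply a single "pid:property=value" override to the corresponding configuration.
    class ConfigOverride {
        void apply(ConfigurationAdmin admin, String pid, String property, String value) throws Exception {
            Configuration cfg = admin.getConfiguration(pid, null); // null = unbound location
            Dictionary<String, Object> props = cfg.getProperties();
            if (props == null) {
                props = new Hashtable<>();
            }
            props.put(property, value);
            cfg.update(props);
        }
    }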

> 
> - Support for tracing collectors like Jaeger would also help to integrate Karaf into a Kubernetes environment.

Not sure, but you can already use zipkin, dropwizard, etc, with Karaf Decanter.

> 
> - I have already created a health and readiness check servlet using the Felix health framework and watching log messages. Maybe this could become part of Karaf core.

Most probably part of Decanter IMHO.

> 
> - Maybe it's possible to combine Karaf with existing mesh tools to quickly create Kubernetes configurations with sidecars etc.

It sounds like a good idea, +1.

> 
> - A common topic for clustering is a shared cache and locking. It's possible with Hazelcast, but I was not able to use Hazelcast's caching service - lots of class loader issues. I was able to use Ehcache (no locking) and Redis for these features. It's a shame that the Java Caching API in particular is not usable in OSGi with Hazelcast. It's also not easy with Ehcache, but I found a workaround.

Hazelcast works fine (we are using it in Cellar). Apache Ignite also provides Karaf support. I think it makes sense to add an example/documentation/blog about that. +1

> 
> - Cluster-wide cron jobs are also needed - jobs that should be executed only once in the cluster. Maybe it's possible to generate a Kubernetes CronJob config instead of using a scheduler (maybe by using annotations).
> 

Karaf includes a scheduler for cron and triggers (it's powered by Quartz under the hood). It exposes a service, so we can add a new implementation of the Karaf scheduler for k8s CronJobs.

> My background:
> 
> I'm working for a telecommunication company on process automation. Four years ago we decided to use Karaf and JMS to create a decentralised environment. The control center uses Vaadin as its UI framework. Currently I'm planning to migrate the environment into a Kubernetes cloud.
> 
> I found Dimitry's presentation very helpful. I will try not to split every service into a separate container. In this scenario it's also possible to share database entities within the same JVM - this even boosts performance - instead of going through REST all the time.
> 
> Support for API gateways is also interesting. For me a big problem is service discovery: it makes no sense to configure each service manually in the API gateway. A more comfortable way would be some kind of discovery, where Karaf automatically configures/updates the gateway based on the provided JAX-RS resources.


Thanks for sharing Mike and I like your ideas. About API Gateway, we started a Karaf Vineyard PoC with discovery. I know that some companies are working on a Gateway with discovery and pattern as well (Yupiik is working on Galaxy gateway for instance).
I would be more than happy to chat with you to move forward on your points and improve Karaf !

Thanks again !
Regards
JB

> 
> cu
> 
> Mike
> 


Re: Karaf as micro service container

Posted by Steinar Bang <sb...@dod.no>.
>>>>> Mike Hummel <mh...@mhus.de>:

> Hi,
> thank you for this line

> ${env:JDBC_DRIVER_FEATURE:-postgresql} \

> So it is possible to use env directly.

Yep. And "-postgresql" means that it defaults to the string "postgresql"
when the environment variable isn't set.

Note in the other file that I dropped defaults for JDBC username and
password, to make it possible to have empty username and password
(eg. h2, that I tested with, has "sa" as the default username with no
password).
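Expressed as code, the substitution rule is roughly this (a sketch of the semantics only, not Karaf's implementation):

    // "${env:NAME:-default}" -> value of NAME if it is set, otherwise the literal default;
    // "${env:NAME}" without a default resolves to empty when NAME is unset, which is
    // what allows the empty h2 username/password mentioned above.
    class EnvSubstitution {
        static String substitute(String name, String defaultValue) {
            String value = System.getenv(name);
            if (value == null) {
                return defaultValue != null ? defaultValue : "";
            }
            return value;
        }
    }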

See this for more info:
 https://svn.apache.org/repos/asf/karaf/site/production/manual/latest/configuration.html#_files


Re: Karaf as micro service container

Posted by Mike Hummel <mh...@mhus.de>.
Hi,


thank you for this line

${env:JDBC_DRIVER_FEATURE:-postgresql} \


So it is possible to use env directly.

Mike

> On 1. May 2020, at 11:51, Steinar Bang <sb...@dod.no> wrote:
> 
>>>>>> Mike Hummel <mh...@mhus.de>:
> 
>> - What is currently missing is a useful Docker image that supports configuration via environment variables, changing the user id, and sidecars like Filebeat or Loki Promtail. I have already created a Docker image with features like this and can contribute it if wanted.
> 
> FWIW configuring a docker image using environment variables wasn't hard
> to do with the existing 4.2.8 official karaf image:
> https://github.com/steinarb/sonar-collector#jdbc-config-that-can-be-set-with-environment-variables
> 
> I did it by using config files expanding environment variables, and let
> the Dockerfile put them in the karaf etc directory:
> https://gist.github.com/steinarb/1e4bddfb54a9da8c3517fd28befe8a0a
> https://gist.github.com/steinarb/efa89d3b2311bc69298c32a315e005aa#file-org-apache-karaf-features-cfg-L33
> https://gist.github.com/steinarb/9bcc2906e34c06d2c9231b1bf2e8bd61
> 


Re: Karaf as micro service container

Posted by Steinar Bang <sb...@dod.no>.
>>>>> Mike Hummel <mh...@mhus.de>:

> - What is currently missing is a useful Docker image that supports configuration via environment variables, changing the user id, and sidecars like Filebeat or Loki Promtail. I have already created a Docker image with features like this and can contribute it if wanted.

FWIW configuring a docker image using environment variables wasn't hard
to do with the existing 4.2.8 official karaf image:
 https://github.com/steinarb/sonar-collector#jdbc-config-that-can-be-set-with-environment-variables

I did it by using config files expanding environment variables, and let
the Dockerfile put them in the karaf etc directory:
 https://gist.github.com/steinarb/1e4bddfb54a9da8c3517fd28befe8a0a
 https://gist.github.com/steinarb/efa89d3b2311bc69298c32a315e005aa#file-org-apache-karaf-features-cfg-L33
 https://gist.github.com/steinarb/9bcc2906e34c06d2c9231b1bf2e8bd61


Re: Karaf as micro service container

Posted by Maurice Betzel <m....@gaston-schul.com>.
Hi Mike,

Maybe some input from my experience is useful.
Within our Karaf cluster, OSGi services are grouped by functionality: one
Karaf node for MariaDB facades, one for Oracle facades, two for integration
projects, one facade for ERP interaction, etc.
The Karaf itself is a homebrew dynamic distribution covering the basics every
node in the cluster needs, like DOSGi (Cellar).
The nodes with the specific functionality build on a node-specific basic
multi-module Maven project, inheriting from the homebrew Karaf POM. This
basic project brings the BOM for that node's functionality, like Aries JPA,
Hibernate, the MariaDB driver and Pax JDBC for the MariaDB node. Every
MariaDB database then gets its own multi-module Maven facade project
building on that basic project for the node.
This makes it easily extendable and upgradable per project / facade /
database / node, adding more and more methods to the facade as we go along.
Think semantic versioning.
Because our backend nodes do not have WAN access, every multi-module Maven
project gets packaged in KAR files containing all the JAR files needed for
installation. The one disadvantage is that the Karaf Maven plugin packaging
the KAR file cannot extract specific dependency parts from other feature XML
files, like the camel-ftp feature, so this is a manual copy-paste action.
Karaf Cellar builds the cluster at runtime and, besides cluster provisioning,
exposes specifically marked services within the cluster as if they were local
to every node (DOSGi); even the Event Admin can send events into the cluster.
Cron jobs are mostly used within our integration projects on the integration
nodes, and that is where we use Apache Camel a lot, having all the EIP
functionality you could wish for and a powerful DSL.
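As a flavour of what such a scheduled integration route looks like (the endpoints, table and cron expression here are made up):

    import org.apache.camel.builder.RouteBuilder;

    // A cron-triggered Camel route on an integration node: fire at 02:00, read the
    // pending rows and hand each one over to the ERP queue.
    class NightlyTransferRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("quartz://integration/nightlyTransfer?cron=0+0+2+*+*+?")
                .to("sql:select * from outbox where sent = false")
                .split(body())
                .to("jms:queue:erp.outbound");
        }
    }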

Hope this inside view helps; improvements and discussions of my architecture
are always welcome.




Re: Karaf as micro service container

Posted by Scott Lewis <sl...@composent.com>.
<stuff deleted>
>>>
>>> - A common topic for clustering is a shared cache and locking. It's
>>> possible with Hazelcast, but I was not able to use Hazelcast's caching
>>> service - lots of class loader issues. I was able to use Ehcache (no
>>> locking) and Redis for these features. It's a shame that the Java
>>> Caching API in particular is not usable in OSGi with Hazelcast. It's
>>> also not easy with Ehcache, but I found a workaround.
>>
>> [Scott] It seems possible to me that remote services could be useful 
>> for creating such a 'java caching api', and ECF has a distribution 
>> provider based upon hazelcast [1]...allowing a nice separation 
>> between 'caching api' (set of interacting osgi services), 
>> implementation (OSGi service impls) and distribution (Hazelcast 
>> and/or other distribution providers).
>>
> [Mike] Hazelcast implements JSR 107, but it's not possible to use it in
> OSGi. I tried it here
> https://github.com/mhus/mhus-osgi-dev/blob/master/dev-cache/src/main/java/de/mhus/osgi/dev/cache/CmdDevHazelcast.java but
> it's not working because of class loading issues. Using Ehcache was
> also not easy; I had to copy some classes because they are not
> public - an example is in the same repo. But it works with that hack.
> (I used the Hazelcast that comes with Cellar.)
>
>
[Scott] I'm not familiar with JSR 107 so can't comment on it.

But I do know that the RS/RSA specifications deal with all
classloading and versioning issues in the spec itself. For implementations of
these specs, like ECF remote services (with the Hazelcast distribution
provider [1]), this means that all spec-compliant implementations have
well-specified classloading and versioning behavior:

i.e.

Remote Services: 
https://osgi.org/specification/osgi.cmpn/7.0.0/service.remoteservices.html

Remote Services Admin (where much of the classloading+versioning 
behavior is defined): 
https://docs.osgi.org/specification/osgi.cmpn/7.0.0/service.remoteserviceadmin.html

IMHO having specified behavior for classloading and versioning is one of 
the major advantages of remote services/rsa.
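As a concrete illustration (the cache interface and names are invented here): exporting an OSGi service as a remote service only needs the standard Remote Services properties on registration; the RSA implementation and the distribution provider (e.g. the ECF Hazelcast provider [1]) take care of the rest.

    import java.util.Dictionary;
    import java.util.Hashtable;
    import org.osgi.framework.BundleContext;

    // Hypothetical cluster cache contract, exported as a remote service.
    interface ClusterCache {
        void put(String key, String value);
        String get(String key);
    }

    class ClusterCacheRegistrar {
        void register(BundleContext ctx, ClusterCache impl) {
            Dictionary<String, Object> props = new Hashtable<>();
            props.put("service.exported.interfaces", "*"); // standard Remote Services property
            ctx.registerService(ClusterCache.class, impl, props);
        }
    }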

Scott

[1] https://github.com/ECF/HazelcastProvider


Re: Karaf as micro service container

Posted by Mike Hummel <mh...@mhus.de>.

> On 1. May 2020, at 01:43, Scott Lewis <sl...@composent.com> wrote:
> 
> Howdy,
> 
> Apologies if I'm misinterpreting Mike's comments below as I was not able to attend this presentation (but will watch it when available).
> 
> On 4/30/2020 2:20 PM, Mike Hummel wrote:
>> Hello,
>> 
>> <stuff deleted>
>> 
>> - A common topic for clustering is a shared cache and locking. It's possible with Hazelcast, but I was not able to use Hazelcast's caching service - lots of class loader issues. I was able to use Ehcache (no locking) and Redis for these features. It's a shame that the Java Caching API in particular is not usable in OSGi with Hazelcast. It's also not easy with Ehcache, but I found a workaround.
> 
> [Scott] It seems possible to me that remote services could be useful for creating such a 'java caching api', and ECF has a distribution provider based upon hazelcast [1]...allowing a nice separation between 'caching api' (set of interacting osgi services), implementation (OSGi service impls) and distribution (Hazelcast and/or other distribution providers).
> 
[Mike] Hazelcast implements JSR 107, but it's not possible to use it in OSGi. I tried it here https://github.com/mhus/mhus-osgi-dev/blob/master/dev-cache/src/main/java/de/mhus/osgi/dev/cache/CmdDevHazelcast.java but it's not working because of class loading issues. Using Ehcache was also not easy; I had to copy some classes because they are not public - an example is in the same repo. But it works with that hack. (I used the Hazelcast that comes with Cellar.)
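For reference, the JSR 107 calls in question are roughly the following; the provider lookup in Caching.getCachingProvider() goes through the thread context class loader / ServiceLoader, which is where it typically breaks inside a bundle:

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;

    // Plain JCache usage: look up the provider (Hazelcast here), get a CacheManager
    // and create a cache.
    class JCacheExample {
        Cache<String, String> createCache() {
            CacheManager manager = Caching.getCachingProvider().getCacheManager();
            return manager.createCache("demo", new MutableConfiguration<String, String>());
        }
    }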



>> <stuff deleted>
>> 
>> I found Dimitry's presentation very helpful. I will try not to split every service into a separate container. In this scenario it's also possible to share database entities within the same JVM - this even boosts performance - instead of going through REST all the time.
>> 
>> Support for API gateways is also interesting. For me a big problem is service discovery: it makes no sense to configure each service manually in the API gateway. A more comfortable way would be some kind of discovery, where Karaf automatically configures/updates the gateway based on the provided JAX-RS resources.
> 
> WRT service discovery...remote services specifies service discovery using an arbitrary comm protocol, and ECF has a discovery provider based upon etcd [1], which is also used in Kubernetes I believe.   With this etcd provider [2], I expect that creating a Kubernetes remote service publish/discovery would be straightforward.
> 

Thanks for the https://github.com/ECF hint. I need to explore it next week. It seems to have solutions for some common problems.

Regards,
Mike

> 

> Scott
> 
> [1] https://github.com/ECF/HazelcastProvider
> 
> [2] https://github.com/ECF/etcd-provider
> 
> 


Re: Karaf as micro service container

Posted by Scott Lewis <sl...@composent.com>.
Howdy,

Apologies if I'm misinterpreting Mike's comments below as I was not able 
to attend this presentation (but will watch it when available).

On 4/30/2020 2:20 PM, Mike Hummel wrote:
> Hello,
>
> <stuff deleted>
>
> - A common topic for clustering is a shared cache and locking. It's possible with Hazelcast, but I was not able to use Hazelcast's caching service - lots of class loader issues. I was able to use Ehcache (no locking) and Redis for these features. It's a shame that the Java Caching API in particular is not usable in OSGi with Hazelcast. It's also not easy with Ehcache, but I found a workaround.

[Scott] It seems possible to me that remote services could be useful for 
creating such a 'java caching api', and ECF has a distribution provider 
based upon hazelcast [1]...allowing a nice separation between 'caching 
api' (set of interacting osgi services), implementation (OSGi service 
impls) and distribution (Hazelcast and/or other distribution providers).

> <stuff deleted>
>
> I found Dimitry's presentation very helpful. I will try not to split every service into a separate container. In this scenario it's also possible to share database entities within the same JVM - this even boosts performance - instead of going through REST all the time.
>
> Support for API gateways is also interesting. For me a big problem is service discovery: it makes no sense to configure each service manually in the API gateway. A more comfortable way would be some kind of discovery, where Karaf automatically configures/updates the gateway based on the provided JAX-RS resources.

WRT service discovery...remote services specifies service discovery 
using an arbitrary comm protocol, and ECF has a discovery provider based 
upon etcd [1], which is also used in Kubernetes I believe.   With this 
etcd provider [2], I expect that creating a Kubernetes remote service 
publish/discovery would be straightforward.

Scott

[1] https://github.com/ECF/HazelcastProvider

[2] https://github.com/ECF/etcd-provider



Re: Karaf as micro service container

Posted by Jean-Baptiste Onofre <jb...@nanthrax.net>.
By the way, I liked the tag line used by Toni: Karaf is a modulith runtime (meaning the next iteration after monolith and microservices) ;)

Regards
JB

> Le 30 avr. 2020 à 23:20, Mike Hummel <mh...@mhus.de> a écrit :
> 
> Hello,
> 
> Referring to Dimitry's presentation, I understand that it's useful to group a set of services in the same OSGi container. This reduces complexity and the work involved in updates, CI and more.
> 
> Following are some ideas to make Karaf more Kubernetes-ready ...
> 
> - That's already possible without any changes; it's purely a design decision.
> 
> - What is currently missing is a useful Docker image that supports configuration via environment variables, changing the user id, and sidecars like Filebeat or Loki Promtail. I have already created a Docker image with features like this and can contribute it if wanted.
> 
> - Support for tracing collectors like Jaeger would also help to integrate Karaf into a Kubernetes environment.
> 
> - I have already created a health and readiness check servlet using the Felix health framework and watching log messages. Maybe this could become part of Karaf core.
> 
> - Maybe it's possible to combine Karaf with existing mesh tools to quickly create Kubernetes configurations with sidecars etc.
> 
> - A common topic for clustering is a shared cache and locking. It's possible with Hazelcast, but I was not able to use Hazelcast's caching service - lots of class loader issues. I was able to use Ehcache (no locking) and Redis for these features. It's a shame that the Java Caching API in particular is not usable in OSGi with Hazelcast. It's also not easy with Ehcache, but I found a workaround.
> 
> - Cluster-wide cron jobs are also needed - jobs that should be executed only once in the cluster. Maybe it's possible to generate a Kubernetes CronJob config instead of using a scheduler (maybe by using annotations).
> 
> My background:
> 
> I'm working for a telecommunication company on process automation. Four years ago we decided to use Karaf and JMS to create a decentralised environment. The control center uses Vaadin as its UI framework. Currently I'm planning to migrate the environment into a Kubernetes cloud.
> 
> I found Dimitry's presentation very helpful. I will try not to split every service into a separate container. In this scenario it's also possible to share database entities within the same JVM - this even boosts performance - instead of going through REST all the time.
> 
> Support for API gateways is also interesting. For me a big problem is service discovery: it makes no sense to configure each service manually in the API gateway. A more comfortable way would be some kind of discovery, where Karaf automatically configures/updates the gateway based on the provided JAX-RS resources.
> 
> cu
> 
> Mike
> 