Posted to users@kafka.apache.org by Mich Talebzadeh <mi...@gmail.com> on 2018/07/08 08:59:21 UTC

Real time streaming as a microservice

Hi,

I have created the Kafka messaging architecture as a microservice that
feeds both Spark Streaming and Flink. Spark Streaming uses micro-batches
(meaning "collect, then process the data"), while Flink uses an event-driven
architecture (a stateful application that reacts to incoming events by
triggering computations, etc.).

According to Wikipedia, a microservice architecture is a technique that
structures an application as a collection of loosely coupled services. In a
microservices architecture, services are fine-grained and the protocols are
lightweight.

OK, for streaming data I have to, among other things, create and configure
a topic (or topics), design a robust ZooKeeper ensemble and create Kafka
brokers with scalability and resiliency. Then I can offer the streaming as
a microservice to subscribers, among them Spark and Flink. I can upgrade
this microservice component in isolation without impacting either Spark or
Flink.

The problem I face here is the dependency of Flink and other consumers on
jar files specific to the version of Kafka deployed. For example,
kafka_2.12-1.1.0 is built on Scala 2.12 and Kafka version 1.1.0. To make
this work in a Flink 1.5 application, I need to use the correct
dependencies in the sbt build. For example:
libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-0.11" % "1.5.0"
libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-base" % "1.5.0"
libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.5.0"
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.11.0.0"
libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" % "1.5.0"
libraryDependencies += "org.apache.kafka" %% "kafka" % "0.11.0.0"

and the Scala code needs to change:

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
…
    val stream = env
                 .addSource(new FlinkKafkaConsumer011[String]("md", new SimpleStringSchema(), properties))
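
For completeness, a minimal end-to-end sketch of such a Flink 1.5 job (the
broker address and group id are illustrative assumptions; the "md" topic is
as above):

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011

object MdStreamJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Broker address and consumer group are assumptions; adjust to the deployment
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "rhes75:9092")
    properties.setProperty("group.id", "md-consumer")

    // Consume the "md" topic with the Kafka 0.11 connector
    val stream = env
      .addSource(new FlinkKafkaConsumer011[String]("md", new SimpleStringSchema(), properties))

    stream.print()
    env.execute("md stream consumer")
  }
}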

So, in summary, some changes need to be made to the Flink application to
interact with the new version of Kafka (the 0.11 client libraries do still
talk to a 1.1.0 broker, since Kafka brokers remain compatible with older
clients, but the build dependencies must stay mutually consistent). And,
more importantly, can one use an abstract notion of a microservice here?

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.

Re: Real time streaming as a microservice

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi Deepak,

I will put it there once all the bits and pieces come together. At the
moment I am drawing the diagrams. I will let you know.

Definitely everyone's contribution is welcome.

Regards,

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 15 Jul 2018 at 09:16, Deepak Sharma <de...@gmail.com> wrote:

> Is it on github Mich ?
> I would love to use the flink and spark edition and add some use cases
> from my side.
>
> Thanks
> Deepak
>
> On Sun, Jul 15, 2018, 13:38 Mich Talebzadeh <mi...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> I have now managed to deploy both ZooKeeper and Kafka as microservices
>> using docker images.
>>
>> The idea came to me as I wanted to create lightweight processes for both
>> ZooKeeper and Kafka to be used as services for Flink and Spark
>> simultaneously.
>>
>> In this design both Flink and Spark rely on streaming market data
>> messages published through Kafka. My current design is simple: one Docker
>> container for ZooKeeper and another for Kafka.
>>
>> [root@rhes75 ~]# docker ps -a
>> CONTAINER ID        IMAGE               COMMAND
>> CREATED             STATUS
>> PORTS                                            NAMES
>> 05cf097ac139        ches/kafka          "/start.sh"              9 hours
>> ago         Up 9 hours              *0.0.0.0:7203->7203/tcp,
>> 0.0.0.0:9092->9092/tcp*   kafka
>> b173e455cc80        jplock/zookeeper    "/opt/zookeeper/bin/…"   10 hours
>> ago        Up 10 hours (healthy)   *2888/tcp, 0.0.0.0:2181->2181/tcp,
>> 3888/tcp*       zookeeper
>>
>> Note that the Docker ports are exposed to the physical host that runs
>> the containers.
>>
>> A test topic is simply created as follows:
>> ${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper rhes75:2181
>> --replication-factor 1 --partitions 1 --topic test
>>
>> Note that rhes75 is the host that houses the containers, and port 2181 is
>> the ZooKeeper client port used by the ZooKeeper container and mapped to the host.
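>>
>> To verify the topic end to end, one can produce and consume a test message
>> against the mapped broker port (broker host and port as above):
>>
>> echo "hello" | ${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list rhes75:9092 --topic test
>> ${KAFKA_HOME}/bin/kafka-console-consumer.sh --bootstrap-server rhes75:9092 --topic test --from-beginning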
>>
>> The Spark streaming job uses the speed layer of a Lambda architecture to
>> write selected market data to an HBase table (HBase requires connectivity
>> to a ZooKeeper ensemble). For HBase I specified a ZooKeeper instance
>> running on another host, and HBase works fine.
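>>
>> As an illustration (a sketch only; the quorum host name is an assumption),
>> the HBase client inside the Spark job is simply pointed at that external
>> ZooKeeper:
>>
>> // org.apache.hadoop.hbase.HBaseConfiguration
>> val hbaseConf = HBaseConfiguration.create()
>> hbaseConf.set("hbase.zookeeper.quorum", "rhes564")  // ZooKeeper on another host
>> hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")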
>>
>> Anyway I will provide further info and diagrams.
>>
>> Cheers,
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Sun, 15 Jul 2018 at 08:40, Mich Talebzadeh <mi...@gmail.com>
>> wrote:
>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>> Thanks got it sorted.
>>>
>>> Regards,
>>>
>>>
>>> On Tue, 10 Jul 2018 at 09:24, Mich Talebzadeh <mi...@gmail.com>
>>> wrote:
>>>
>>>> Thanks Rahul.
>>>>
>>>> This is the outcome of
>>>>
>>>> [root@rhes75 ~]# iptables -t nat -L -n
>>>> Chain PREROUTING (policy ACCEPT)
>>>> target     prot opt source               destination
>>>> DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE
>>>> match dst-type LOCAL
>>>> Chain INPUT (policy ACCEPT)
>>>> target     prot opt source               destination
>>>> Chain OUTPUT (policy ACCEPT)
>>>> target     prot opt source               destination
>>>> DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE
>>>> match dst-type LOCAL
>>>> Chain POSTROUTING (policy ACCEPT)
>>>> target     prot opt source               destination
>>>> MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
>>>> MASQUERADE  all  --  172.18.0.0/16        0.0.0.0/0
>>>> RETURN     all  --  192.168.122.0/24     224.0.0.0/24
>>>> RETURN     all  --  192.168.122.0/24     255.255.255.255
>>>> MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24     masq
>>>> ports: 1024-65535
>>>> MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24     masq
>>>> ports: 1024-65535
>>>> MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24
>>>> Chain DOCKER (2 references)
>>>> target     prot opt source               destination
>>>> RETURN     all  --  0.0.0.0/0            0.0.0.0/0
>>>> RETURN     all  --  0.0.0.0/0            0.0.0.0/0
>>>>
>>>> So basically I need to connect to the container from another host, as
>>>> the link points out.
>>>>
>>>> My docker is already running.
>>>>
>>>> [root@rhes75 ~]# docker ps -a
>>>> CONTAINER ID        IMAGE               COMMAND
>>>> CREATED             STATUS              PORTS               NAMES
>>>> 8dd84a174834        ubuntu              "bash"              19 hours
>>>> ago        Up 11 hours                             dockerZooKeeperKafka
>>>>
>>>> What would be an option to add a fixed port mapping to the running container?
>>>>
>>>> Regards,
>>>>
>>>> Dr Mich Talebzadeh
>>>>
>>>>
>>>>
>>>> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>>>>
>>>>
>>>>
>>>> http://talebzadehmich.wordpress.com
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, 10 Jul 2018 at 08:35, Rahul Singh <ra...@gmail.com>
>>>> wrote:
>>>>
>>>>> Seems like you need to expose your port via docker run or
>>>>> docker-compose .
>>>>>
>>>>>
>>>>> https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/
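>>>>>
>>>>> For example (a minimal sketch only; the image names are those used
>>>>> elsewhere in this thread, and the advertised-host variable and link
>>>>> are assumptions about the ches/kafka image):
>>>>>
>>>>> docker run -d --name zookeeper -p 2181:2181 jplock/zookeeper
>>>>> docker run -d --name kafka -p 9092:9092 \
>>>>>   -e KAFKA_ADVERTISED_HOST_NAME=rhes75 --link zookeeper:zookeeper ches/kafka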
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Rahul Singh
>>>>> rahul.singh@anant.us
>>>>>
>>>>> Anant Corporation
>>>>> On Jul 9, 2018, 2:21 PM -0500, Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com>, wrote:
>>>>> > Hi,
>>>>> >
>>>>> > I have now successfully created Docker containers on RHEL 7.5 as follows:
>>>>> >
>>>>> > [root@rhes75 ~]# docker ps -a
>>>>> > CONTAINER ID IMAGE COMMAND
>>>>> > CREATED STATUS PORTS NAMES
>>>>> > 816f07de15b1 zookeeper "/docker-entrypoint.…" 2 hours
>>>>> > ago Up 2 hours 2181/tcp, 2888/tcp, 3888/tcp
>>>>> > dockerZooKeeper
>>>>> > 8dd84a174834 ubuntu "bash" 6 hours
>>>>> > ago Up 6 hours
>>>>> > dockerZooKeeperKafka
>>>>> >
>>>>> > The first container is a ready-made ZooKeeper image that exposes the
>>>>> > ZooKeeper client port etc.
>>>>> >
>>>>> > The second container is an Ubuntu shell on which I installed both
>>>>> > ZooKeeper and Kafka. They are both running in the container
>>>>> > dockerZooKeeperKafka
>>>>> >
>>>>> >
>>>>> > hduser@8dd84a174834: /home/hduser/dba/bin> jps
>>>>> > 5715 Kafka
>>>>> > 5647 QuorumPeerMain
>>>>> >
>>>>> > hduser@8dd84a174834: /home/hduser/dba/bin> netstat -plten
>>>>> > (Not all processes could be identified, non-owned process info
>>>>> > will not be shown, you would have to be root to see it all.)
>>>>> > Active Internet connections (only servers)
>>>>> > Proto Recv-Q Send-Q Local Address Foreign Address
>>>>> > State User Inode PID/Program name
>>>>> > tcp 0 0 0.0.0.0:9999 0.0.0.0:*
>>>>> > LISTEN 1005 2865148 5715/java
>>>>> > tcp 0 0 0.0.0.0:35312 0.0.0.0:*
>>>>> > LISTEN 1005 2865147 5715/java
>>>>> > tcp 0 0 0.0.0.0:34193 0.0.0.0:*
>>>>> > LISTEN 1005 2865151 5715/java
>>>>> > tcp 0 0 0.0.0.0:22 0.0.0.0:*
>>>>> > LISTEN 0 2757032 -
>>>>> > tcp 0 0 0.0.0.0:40803 0.0.0.0:*
>>>>> > LISTEN 1005 2852821 5647/java
>>>>> >
>>>>> >
>>>>> > *tcp 0 0 0.0.0.0:9092 0.0.0.0:* LISTEN 1005 2873507 5715/java
>>>>> > tcp 0 0 0.0.0.0:2181 0.0.0.0:* LISTEN 1005 2852829 5647/java*
>>>>> > tcp6 0 0 :::22 :::* LISTEN 0 2757034 -
>>>>> >
>>>>> > I have a gateway node that is connected to the host running the
>>>>> > container. From within the container I can ssh to the gateway host *as
>>>>> > both the gateway host and the host running the container are on the same VLAN.*
>>>>> >
>>>>> >
>>>>> > However, I cannot connect from the gateway to the container. The
>>>>> > container has this IP address:
>>>>> >
>>>>> > root@8dd84a174834:~# ifconfig -a
>>>>> > eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
>>>>> > *inet 172.17.0.2 netmask 255.255.0.0 broadcast 172.17.255.255*
>>>>> > ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
>>>>> > RX packets 173015 bytes 3263068025 (3.2 GB)
>>>>> > RX errors 0 dropped 0 overruns 0 frame 0
>>>>> > TX packets 189400 bytes 13557709 (13.5 MB)
>>>>> > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
>>>>> >
>>>>> > lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
>>>>> > inet 127.0.0.1 netmask 255.0.0.0
>>>>> > loop txqueuelen 1000 (Local Loopback)
>>>>> > RX packets 8450 bytes 534805 (534.8 KB)
>>>>> > RX errors 0 dropped 0 overruns 0 frame 0
>>>>> > TX packets 8450 bytes 534805 (534.8 KB)
>>>>> > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
>>>>> >
>>>>> >
>>>>> >
>>>>> > The interesting thing is that in order to publish streaming test data
>>>>> > I need to be able to do something like below:
>>>>> >
>>>>> >
>>>>> > cat ${PRICES} | ${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list \
>>>>> > rhes75:9092,rhes564:9092,rhes75:9093,rhes564:9093,rhes75:9094,rhes564:9094 \
>>>>> > --topic md
>>>>> >
>>>>> >
>>>>> > That Kafka broker list, --broker-list
>>>>> > rhes75:9092,rhes564:9092,rhes75:9093,rhes564:9093,rhes75:9094,rhes564:9094,
>>>>> > needs to be replaced by <container hostname>:9092!
>>>>> >
>>>>> >
>>>>> > So at this juncture I am wondering what type of network needs to be
>>>>> > created, as the container is running within another host.
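>>>>> >
>>>>> > One standard way to handle this (a sketch; the property names are stock
>>>>> > Kafka broker settings, the host name is an assumption) is to have the
>>>>> > broker advertise the host-reachable address in server.properties:
>>>>> >
>>>>> > listeners=PLAINTEXT://0.0.0.0:9092
>>>>> > advertised.listeners=PLAINTEXT://rhes75:9092
>>>>> >
>>>>> > so that clients outside the container connect back through the mapped port.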
>>>>> >
>>>>> >
>>>>> > Thanks
>>>>> >
>>>>> >
>>>>> > Dr Mich Talebzadeh
>>>>> >
>>>>> >
>>>>> >
>>>>> > LinkedIn *
>>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>> > <
>>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>> >*
>>>>> >
>>>>> >
>>>>> >
>>>>> > http://talebzadehmich.wordpress.com
>>>>> >
>>>>> >
>>>>> > *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>> for any
>>>>> > loss, damage or destruction of data or any other property which may
>>>>> arise
>>>>> > from relying on this email's technical content is explicitly
>>>>> disclaimed.
>>>>> > The author will in no case be liable for any monetary damages
>>>>> arising from
>>>>> > such loss, damage or destruction.
>>>>> >
>>>>> >
>>>>> >
>>>>> >
>>>>> > On Sun, 8 Jul 2018 at 20:00, Martin Gainty <mg...@hotmail.com>
>>>>> wrote:
>>>>> >
>>>>> > >
>>>>> > >
>>>>> > > ________________________________
>>>>> > > From: Mich Talebzadeh <mi...@gmail.com>
>>>>> > > Sent: Sunday, July 8, 2018 1:01 PM
>>>>> > > To: users@kafka.apache.org
>>>>> > > Subject: Re: Real time streaming as a microservice
>>>>> > >
>>>>> > > Thanks Martin.
>>>>> > >
>>>>> > > From an implementation point of view, do we need to introduce Docker
>>>>> > > for each microservice? In other words, does it have to be artefact -->
>>>>> > > container --> Docker for this to be a true microservice, with all these
>>>>> > > microservices communicating through a service registry?
>>>>> > > MG>for deployment, deploying through a Docker container would be the
>>>>> > > easiest means to test
>>>>> > > MG>but first we would need to concentrate
>>>>> > > MG>on your developing a micro-service first
>>>>> > > MG>your development of a service registry
>>>>> > > MG>your development of a micro-services container which can look up
>>>>> > > necessary endpoints
>>>>> > > MG>since you pre-ordained Docker to be your deploy container, I would
>>>>> > > suggest implementing OpenShift
>>>>> > > https://www.openshift.org/
>>>>> > > OpenShift Origin - Open Source Container Application Platform<
>>>>> > > https://www.openshift.org/>
>>>>> > > www.openshift.org
>>>>> > > The next generation open source app hosting platform by Red Hat
>>>>> > >
>>>>> > >
>>>>> > >
>>>>> > >
>>>>> > > Also, if we wanted to move from a classic monolithic design with Streaming
>>>>> > > Ingestion (ZooKeeper, Kafka) --> Processing engine (Spark Streaming, Flink)
>>>>> > > --> Real time dashboard (anything built on something like D3) to
>>>>> > > microservices, what would that entail?
>>>>> > > MG>the simpler the function the better ...something like
>>>>> > > MG>simple input...user enters 'foo'
>>>>> > > MG>simple processing....process does spark stream to determine
>>>>> what result
>>>>> > > responds to 'foo'
>>>>> > > MG>simple output...output will be text 'bar' formatting to be
>>>>> decided
>>>>> > > (text/html/pdf?)
>>>>> > >
>>>>> > > One option would be to have three
>>>>> > > principal microservices (each with sub-services) providing three
>>>>> > > components?
>>>>> > > MG>concentrate on the simplest function which would
>>>>> be_______________?
>>>>> > > MG>shoehorn simple function into a viable microservice
>>>>> > > MG>the following inventory microservice from the Red Hat example shows
>>>>> > > how your ______? service
>>>>> > > MG>can be incorporated into an OpenShift container
>>>>> > > MG>and be readily deployable in a Docker container
>>>>> > > MG>
>>>>> > >
>>>>> https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/
>>>>> > > [
>>>>> > >
>>>>> https://developers.redhat.com/blog/wp-content/uploads/2017/05/img_5912da9d19c3c.png
>>>>> > > ]<
>>>>> > >
>>>>> https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/
>>>>> > > >
>>>>> > >
>>>>> > > OpenShift and DevOps: The CoolStore Microservices Example<
>>>>> > >
>>>>> https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/
>>>>> > > >
>>>>> > > developers.redhat.com
>>>>> > > Today I want to talk about the demo we presented @ OpenShift
>>>>> Container
>>>>> > > Platform Roadshow in Milan & Rome last week. The demo was based on
>>>>> JBoss
>>>>> > > team’s great work available on this repo: In the next few
>>>>> paragraphs, I’ll
>>>>> > > describe in deep detail the microservices CoolStore example and
>>>>> how we used
>>>>> > > ...
>>>>> > >
>>>>> > >
>>>>> > > MG>the first step would involve knowing which simple function you
>>>>> need to
>>>>> > > deploy as microservice ?
>>>>> > >
>>>>> > > Regards,
>>>>> > >
>>>>> > > Dr Mich Talebzadeh
>>>>> > >
>>>>> > >
>>>>> > >
>>>>> > > LinkedIn *
>>>>> > >
>>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>> > > <
>>>>> > >
>>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>> > > > *
>>>>> > >
>>>>> > >
>>>>> > >
>>>>> > > http://talebzadehmich.wordpress.com
>>>>> > >
>>>>> > >
>>>>> > > *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>>> for any
>>>>> > > loss, damage or destruction of data or any other property which
>>>>> may arise
>>>>> > > from relying on this email's technical content is explicitly
>>>>> disclaimed.
>>>>> > > The author will in no case be liable for any monetary damages
>>>>> arising from
>>>>> > > such loss, damage or destruction.
>>>>> > >
>>>>> > >
>>>>> > >
>>>>> > >
>>>>> > > On Sun, 8 Jul 2018 at 13:58, Martin Gainty <mg...@hotmail.com>
>>>>> wrote:
>>>>> > >
>>>>> > > >
>>>>> > > >
>>>>> > > > initial work under using Zookeeper as a Microservices container
>>>>> is here
>>>>> > > >
>>>>> > > >
>>>>> > >
>>>>> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
>>>>> > > >
>>>>> > > > ZooKeeper for Microservice Registration and Discovery ...<
>>>>> > > >
>>>>> > >
>>>>> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
>>>>> > > > >
>>>>> > > > planet.jboss.org
>>>>> > > > In a microservice world, multiple services are typically
>>>>> distributed in a
>>>>> > > > PaaS environment. Immutable infrastructure, such as those
>>>>> provided by
>>>>> > > > containers or immutable VM images. Services may scale up and
>>>>> down based
>>>>> > > > upon certain pre-defined metrics. Exact address of the service
>>>>> may not be
>>>>> > > > known ...
>>>>> > > >
>>>>> > > > once your Zookeeper Microservices container is operational
>>>>> > > >
>>>>> > > > you would need to 'tweak' kafka to extend and implement
>>>>> > > classes/interfaces
>>>>> > > > to become
>>>>> > > > a true microservices component..this may help
>>>>> > > >
>>>>> > > >
>>>>> > > >
>>>>> > >
>>>>> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
>>>>> > > > [
>>>>> > >
>>>>> http://blog.arungupta.me/wp-content/uploads/2015/06/javaee-monolithic.png
>>>>> > > > ]<
>>>>> > > >
>>>>> > >
>>>>> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
>>>>> > > > >
>>>>> > > >
>>>>> > > > Monolithic to Microservices Refactoring for Java EE ...<
>>>>> > > >
>>>>> > >
>>>>> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
>>>>> > > > >
>>>>> > > > blog.arungupta.me
>>>>> > > > Have you ever wondered what does it take to refactor an existing
>>>>> Java EE
>>>>> > > > monolithic application to a microservices-based one? This blog
>>>>> explains
>>>>> > > how
>>>>> > > > a trivial shopping cart example was converted to
>>>>> microservices-based
>>>>> > > > application, and what are some of the concerns around it.
>>>>> > > >
>>>>> > > >
>>>>> > > >
>>>>> > > > let me know if i can help out
>>>>> > > > Martin
>>>>> > > >
>>>>> > > >
>>>>> > > > ________________________________
>>>>> > > > From: Jörn Franke <jo...@gmail.com>
>>>>> > > > Sent: Sunday, July 8, 2018 6:18 AM
>>>>> > > > To: users@kafka.apache.org
>>>>> > > > Cc: user@flink.apache.org
>>>>> > > > Subject: Re: Real time streaming as a microservice
>>>>> > > >
>>>>> > > > Yes, or Kafka will need it ...
>>>>> > > > As soon as you orchestrate different microservices this will happen.
>>>>> > > >
>>>>> > > >
>>>>> > > >
>>>>> > > > > On 8. Jul 2018, at 11:33, Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com>
>>>>> > > > wrote:
>>>>> > > > >
>>>>> > > > > Thanks Jorn.
>>>>> > > > >
>>>>> > > > > So I gather, as you correctly suggested, microservices do provide value in
>>>>> > > > > terms of modularisation. However, there will always "inevitably" be
>>>>> > > > > scenarios where the receiving artefact, say Flink, needs communication
>>>>> > > > > protocol changes?
>>>>> > > > >
>>>>> > > > > thanks
>>>>> > > > >
>>>>> > > > > Dr Mich Talebzadeh
>>>>> > > > >
>>>>> > > > >
>>>>> > > > >
>>>>> > > > > LinkedIn *
>>>>> > > >
>>>>> > >
>>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>> > > > > <
>>>>> > > >
>>>>> > >
>>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>> > > > > *
>>>>> > > > >
>>>>> > > > >
>>>>> > > > >
>>>>> > > > > http://talebzadehmich.wordpress.com
>>>>> > > > >
>>>>> > > > >
>>>>> > > > > *Disclaimer:* Use it at your own risk. Any and all
>>>>> responsibility for
>>>>> > > any
>>>>> > > > > loss, damage or destruction of data or any other property
>>>>> which may
>>>>> > > arise
>>>>> > > > > from relying on this email's technical content is explicitly
>>>>> > > disclaimed.
>>>>> > > > > The author will in no case be liable for any monetary damages
>>>>> arising
>>>>> > > > from
>>>>> > > > > such loss, damage or destruction.
>>>>> > > > >
>>>>> > > > >
>>>>> > > > >
>>>>> > > > >
>>>>> > > > > > On Sun, 8 Jul 2018 at 10:25, Jörn Franke <
>>>>> jornfranke@gmail.com>
>>>>> > > wrote:
>>>>> > > > > >
>>>>> > > > > > That they are loosely coupled does not mean they are independent. For
>>>>> > > > > > instance, you would not be able to replace Kafka with zeromq in your
>>>>> > > > > > scenario. Unfortunately also Kafka sometimes needs to introduce breaking
>>>>> > > > > > changes and the dependent application needs to upgrade.
>>>>> > > > > > You will not be able to avoid these scenarios in the future (this is only
>>>>> > > > > > possible if microservices don’t communicate with each other or if they
>>>>> > > > > > would never need to change their communication protocol - pretty
>>>>> > > > > > impossible). However, there are of course ways to reduce it, e.g. Kafka
>>>>> > > > > > could reduce the number of breaking changes, or you can develop a very
>>>>> > > > > > lightweight microservice that is very easy to change and that only deals
>>>>> > > > > > with the broker integration and your application etc.
>>>>> > > > > >
>>>>> > > > > > > On 8. Jul 2018, at 10:59, Mich Talebzadeh <
>>>>> mich.talebzadeh@gmail.com
>>>>> > > >
>>>>> > > > > > wrote:
>>>>> > > > > > >
>>>>> > > > > > > Hi,
>>>>> > > > > > >
>>>>> > > > > > > I have created the Kafka messaging architecture as a
>>>>> microservice
>>>>> > > that
>>>>> > > > > > > feeds both Spark streaming and Flink. Spark streaming uses
>>>>> > > > micro-batches
>>>>> > > > > > > meaning "collect and process data" and flink as an event
>>>>> driven
>>>>> > > > > > > architecture (a stateful application that reacts to
>>>>> incoming events
>>>>> > > by
>>>>> > > > > > > triggering computations etc.
>>>>> > > > > > >
>>>>> > > > > > > According to Wikipedia, A Microservice is a technique that
>>>>> > > structures
>>>>> > > > an
>>>>> > > > > > > application as a collection of loosely coupled services.
>>>>> In a
>>>>> > > > > > microservices
>>>>> > > > > > > architecture, services are fine-grained and the protocols
>>>>> are
>>>>> > > > > > lightweight.
>>>>> > > > > > >
>>>>> > > > > > > Ok for streaming data among other things I have to create
>>>>> and
>>>>> > > configure
>>>>> > > > > > > topic (or topics), design a robust zookeeper ensemble and
>>>>> create
>>>>> > > Kafka
>>>>> > > > > > > brokers with scalability and resiliency. Then I can offer
>>>>> the
>>>>> > > streaming
>>>>> > > > > > as
>>>>> > > > > > > a microservice to subscribers among them Spark and Flink.
>>>>> I can
>>>>> > > upgrade
>>>>> > > > > > > this microservice component in isolation without impacting
>>>>> either
>>>>> > > Spark
>>>>> > > > > > or
>>>>> > > > > > > Flink.
>>>>> > > > > > >
>>>>> > > > > > > The problem I face here is the dependency on Flink etc on
>>>>> the jar
>>>>> > > files
>>>>> > > > > > > specific for the version of Kafka deployed. For example
>>>>> > > > kafka_2.12-1.1.0
>>>>> > > > > > is
>>>>> > > > > > > built on Scala 2.12 and Kafka version 1.1.0. To make this
>>>>> work in
>>>>> > > Flink
>>>>> > > > > > 1.5
>>>>> > > > > > > application, I need to use the correct dependency in sbt
>>>>> build. For
>>>>> > > > > > > example:
>>>>> > > > > > > libraryDependencies += "org.apache.flink" %%
>>>>> > > > > > "flink-connector-kafka-0.11" %
>>>>> > > > > > > "1.5.0"
>>>>> > > > > > > libraryDependencies += "org.apache.flink" %%
>>>>> > > > > > "flink-connector-kafka-base" %
>>>>> > > > > > > "1.5.0"
>>>>> > > > > > > libraryDependencies += "org.apache.flink" %% "flink-scala"
>>>>> % "1.5.0"
>>>>> > > > > > > libraryDependencies += "org.apache.kafka" %
>>>>> "kafka-clients" %
>>>>> > > > "0.11.0.0"
>>>>> > > > > > > libraryDependencies += "org.apache.flink" %%
>>>>> "flink-streaming-scala"
>>>>> > > %
>>>>> > > > > > > "1.5.0"
>>>>> > > > > > > libraryDependencies += "org.apache.kafka" %% "kafka" %
>>>>> "0.11.0.0"
>>>>> > > > > > >
>>>>> > > > > > > and the Scala code needs to change:
>>>>> > > > > > >
>>>>> > > > > > > import
>>>>> > > > org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
>>>>> > > > > > > …
>>>>> > > > > > > val stream = env
>>>>> > > > > > > .addSource(new FlinkKafkaConsumer011[String]("md", new
>>>>> > > > > > > SimpleStringSchema(), properties))
>>>>> > > > > > >
>>>>> > > > > > > So in summary some changes need to be made to Flink to be
>>>>> able to
>>>>> > > > > > interact
>>>>> > > > > > > with the new version of Kafka. And more importantly if one
>>>>> can use an
>>>>> > > > > > > abstract notion of microservice here?
>>>>> > > > > > >
>>>>> > > > > > > Dr Mich Talebzadeh
>>>>> > > > > > >
>>>>> > > > > > >
>>>>> > > > > > >
>>>>> > > > > > > LinkedIn *
>>>>> > > > > >
>>>>> > > >
>>>>> > >
>>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>> > > > > > > <
>>>>> > > > > >
>>>>> > > >
>>>>> > >
>>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>> > > > > > > *
>>>>> > > > > > >
>>>>> > > > > > >
>>>>> > > > > > >
>>>>> > > > > > > http://talebzadehmich.wordpress.com
>>>>> > > > > > >
>>>>> > > > > > >
>>>>> > > > > > > *Disclaimer:* Use it at your own risk. Any and all
>>>>> responsibility for
>>>>> > > > any
>>>>> > > > > > > loss, damage or destruction of data or any other property
>>>>> which may
>>>>> > > > arise
>>>>> > > > > > > from relying on this email's technical content is
>>>>> explicitly
>>>>> > > > disclaimed.
>>>>> > > > > > > The author will in no case be liable for any monetary
>>>>> damages arising
>>>>> > > > > > from
>>>>> > > > > > > such loss, damage or destruction.
>>>>> > > > > >
>>>>> > > >
>>>>> > >
>>>>>
>>>>

Re: Real time streaming as a microservice

Posted by Deepak Sharma <de...@gmail.com>.
Is it on GitHub, Mich?
I would love to use the Flink and Spark editions and add some use cases from
my side.

Thanks
Deepak

On Sun, Jul 15, 2018, 13:38 Mich Talebzadeh <mi...@gmail.com>
wrote:

> Hi all,
>
> I have now managed to deploy both ZooKeeper and Kafka as microservices
> using docker images.
>
> The idea came to me as I wanted to create lightweight processes for both
> ZooKeeper and Kafka to be used as services for Flink and Spark
> simultaneously.
>
> In this design both Flink and Spark rely on streaming market data messages
> published through Kafka. My current design is simple: one docker for
> Zookeeper and another for Kafka.
>
> [root@rhes75 ~]# docker ps -a
> CONTAINER ID        IMAGE               COMMAND
> CREATED             STATUS
> PORTS                                            NAMES
> 05cf097ac139        ches/kafka          "/start.sh"              9 hours
> ago         Up 9 hours              *0.0.0.0:7203->7203/tcp,
> 0.0.0.0:9092->9092/tcp*   kafka
> b173e455cc80        jplock/zookeeper    "/opt/zookeeper/bin/…"   10 hours
> ago        Up 10 hours (healthy)   *2888/tcp, 0.0.0.0:2181->2181/tcp,
> 3888/tcp*       zookeeper
>
> Note that the docker ports are exposed to the physical host that is running
> the containers.
>
> A test topic is simply created as follows:
> ${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper rhes75:2181
> --replication-factor 1 --partitions 1 --topic test
>
> Note that rhes75 is the host that houses the dockers, and port 2181 is the
> zookeeper port used by the zookeeper docker, mapped to the host.
>
> The Spark streaming job uses the speed layer of the Lambda architecture to
> write selected market data to an HBase table (HBase requires connectivity
> to a ZooKeeper ensemble). For HBase I specified a ZooKeeper instance running
> on another host, and HBase works fine.
>
> Anyway I will provide further info and diagrams.
>
> Cheers,
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Sun, 15 Jul 2018 at 08:40, Mich Talebzadeh <mi...@gmail.com>
> wrote:
>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>> Thanks, got it sorted.
>>
>> Regards,
>>
>>
>> On Tue, 10 Jul 2018 at 09:24, Mich Talebzadeh <mi...@gmail.com>
>> wrote:
>>
>>> Thanks Rahul.
>>>
>>> This is the outcome of:
>>>
>>> [root@rhes75 ~]# iptables -t nat -L -n
>>> Chain PREROUTING (policy ACCEPT)
>>> target     prot opt source               destination
>>> DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE
>>> match dst-type LOCAL
>>> Chain INPUT (policy ACCEPT)
>>> target     prot opt source               destination
>>> Chain OUTPUT (policy ACCEPT)
>>> target     prot opt source               destination
>>> DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE
>>> match dst-type LOCAL
>>> Chain POSTROUTING (policy ACCEPT)
>>> target     prot opt source               destination
>>> MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
>>> MASQUERADE  all  --  172.18.0.0/16        0.0.0.0/0
>>> RETURN     all  --  192.168.122.0/24     224.0.0.0/24
>>> RETURN     all  --  192.168.122.0/24     255.255.255.255
>>> MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24     masq
>>> ports: 1024-65535
>>> MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24     masq
>>> ports: 1024-65535
>>> MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24
>>> Chain DOCKER (2 references)
>>> target     prot opt source               destination
>>> RETURN     all  --  0.0.0.0/0            0.0.0.0/0
>>> RETURN     all  --  0.0.0.0/0            0.0.0.0/0
>>>
>>> So basically I need to connect to the container from another host, as the
>>> link points out.
>>>
>>> My docker is already running.
>>>
>>> [root@rhes75 ~]# docker ps -a
>>> CONTAINER ID        IMAGE               COMMAND
>>> CREATED             STATUS              PORTS               NAMES
>>> 8dd84a174834        ubuntu              "bash"              19 hours
>>> ago        Up 11 hours                             dockerZooKeeperKafka
>>>
>>> What would be an option to add a fixed port to the running container?
>>>
>>> Regards,
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Tue, 10 Jul 2018 at 08:35, Rahul Singh <ra...@gmail.com>
>>> wrote:
>>>
>>>> Seems like you need to expose your port via docker run or
>>>> docker-compose.
>>>>
>>>>
>>>> https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/
>>>>
>>>>
>>>>
>>>> --
>>>> Rahul Singh
>>>> rahul.singh@anant.us
>>>>
>>>> Anant Corporation
>>>> On Jul 9, 2018, 2:21 PM -0500, Mich Talebzadeh <
>>>> mich.talebzadeh@gmail.com>, wrote:
>>>> > Hi,
>>>> >
>>>> > I have now successfully created a docker for RHEL75 as follows:
>>>> >
>>>> > [root@rhes75 ~]# docker ps -a
>>>> > CONTAINER ID IMAGE COMMAND
>>>> > CREATED STATUS PORTS NAMES
>>>> > 816f07de15b1 zookeeper "/docker-entrypoint.…" 2 hours
>>>> > ago Up 2 hours 2181/tcp, 2888/tcp, 3888/tcp
>>>> > dockerZooKeeper
>>>> > 8dd84a174834 ubuntu "bash" 6 hours
>>>> > ago Up 6 hours
>>>> > dockerZooKeeperKafka
>>>> >
>>>> > The first container is ready-made for ZooKeeper and exposes the
>>>> zookeeper
>>>> > client port etc.
>>>> >
>>>> > The second container is an ubuntu shell on which I installed both
>>>> zookeeper
>>>> > and Kafka. They are both running in container
>>>> dockerZooKeeperKafka
>>>> >
>>>> >
>>>> > hduser@8dd84a174834: /home/hduser/dba/bin> jps
>>>> > 5715 Kafka
>>>> > 5647 QuorumPeerMain
>>>> >
>>>> > hduser@8dd84a174834: /home/hduser/dba/bin> netstat -plten
>>>> > (Not all processes could be identified, non-owned process info
>>>> > will not be shown, you would have to be root to see it all.)
>>>> > Active Internet connections (only servers)
>>>> > Proto Recv-Q Send-Q Local Address Foreign Address
>>>> > State User Inode PID/Program name
>>>> > tcp 0 0 0.0.0.0:9999 0.0.0.0:*
>>>> > LISTEN 1005 2865148 5715/java
>>>> > tcp 0 0 0.0.0.0:35312 0.0.0.0:*
>>>> > LISTEN 1005 2865147 5715/java
>>>> > tcp 0 0 0.0.0.0:34193 0.0.0.0:*
>>>> > LISTEN 1005 2865151 5715/java
>>>> > tcp 0 0 0.0.0.0:22 0.0.0.0:*
>>>> > LISTEN 0 2757032 -
>>>> > tcp 0 0 0.0.0.0:40803 0.0.0.0:*
>>>> > LISTEN 1005 2852821 5647/java
>>>> >
>>>> >
>>>> > *tcp 0 0 0.0.0.0:9092 <http://0.0.0.0:9092>
>>>> > 0.0.0.0:* LISTEN 1005 2873507
>>>> > 5715/javatcp 0 0 0.0.0.0:2181 <http://0.0.0.0:2181>
>>>> > 0.0.0.0:* LISTEN 1005 2852829 5647/java*tcp6
>>>> > 0 0 :::22 :::* LISTEN
>>>> > 0 2757034 -
>>>> >
>>>> > I have a gateway node that is connected to the host running the
>>>> container.
>>>> > From within the container I can ssh to the gateway host *as both the
>>>> > gateway host and host running the container are on the same VLAN.*
>>>> >
>>>> >
>>>> > However, I cannot connect from the gateway to the container. The
>>>> container has
>>>> > this IP address
>>>> >
>>>> > root@8dd84a174834:~# ifconfig -a
>>>> > eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
>>>> > *inet 172.17.0.2 netmask 255.255.0.0 broadcast 172.17.255.255*
>>>> > ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
>>>> > RX packets 173015 bytes 3263068025 (3.2 GB)
>>>> > RX errors 0 dropped 0 overruns 0 frame 0
>>>> > TX packets 189400 bytes 13557709 (13.5 MB)
>>>> > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
>>>> >
>>>> > lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
>>>> > inet 127.0.0.1 netmask 255.0.0.0
>>>> > loop txqueuelen 1000 (Local Loopback)
>>>> > RX packets 8450 bytes 534805 (534.8 KB)
>>>> > RX errors 0 dropped 0 overruns 0 frame 0
>>>> > TX packets 8450 bytes 534805 (534.8 KB)
>>>> > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
>>>> >
>>>> >
>>>> >
>>>> > The interesting thing is that in order to publish streaming test data
>>>> I
>>>> > need to be able to do something like below
>>>> >
>>>> >
>>>> > cat ${PRICES} | ${KAFKA_HOME}/bin/kafka-console-producer.sh
>>>> --broker-list
>>>> >
>>>> rhes75:9092,rhes564:9092,rhes75:9093,rhes564:9093,rhes75:9094,rhes564:9094
>>>> > --topic md
>>>> >
>>>> >
>>>> > That Kafka broker list --broker-list
>>>> >
>>>> rhes75:9092,rhes564:9092,rhes75:9093,rhes564:9093,rhes75:9094,rhes564:9094
>>>> > needs to be replaced by <container hostname>:9092!
>>>> >
>>>> >
>>>> > So at this juncture I am wondering what type of network needs to be
>>>> created
>>>> > as the container is running within another host.
>>>> >
>>>> >
>>>> > Thanks
>>>> >
>>>> >
>>>> > Dr Mich Talebzadeh
>>>> >
>>>> >
>>>> >
>>>> > LinkedIn *
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> > <
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> >*
>>>> >
>>>> >
>>>> >
>>>> > http://talebzadehmich.wordpress.com
>>>> >
>>>> >
>>>> > *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any
>>>> > loss, damage or destruction of data or any other property which may
>>>> arise
>>>> > from relying on this email's technical content is explicitly
>>>> disclaimed.
>>>> > The author will in no case be liable for any monetary damages arising
>>>> from
>>>> > such loss, damage or destruction.
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On Sun, 8 Jul 2018 at 20:00, Martin Gainty <mg...@hotmail.com>
>>>> wrote:
>>>> >
>>>> > >
>>>> > >
>>>> > > ________________________________
>>>> > > From: Mich Talebzadeh <mi...@gmail.com>
>>>> > > Sent: Sunday, July 8, 2018 1:01 PM
>>>> > > To: users@kafka.apache.org
>>>> > > Subject: Re: Real time streaming as a microservice
>>>> > >
>>>> > > Thanks Martin.
>>>> > >
>>>> > > From an implementation point of view do we need to introduce docker
>>>> for
>>>> > > each microservice? In other words does it have to be artefact -->
>>>> container
>>>> > > --> docker for this to be a true microservice and all these
>>>> microservices
>>>> > > communicate through Service Registry.
>>>> > > MG>for deployment deploying thru docker container would be the
>>>> easiest
>>>> > > means to test
>>>> > > MG>but first we would need to concentrate
>>>> > > MG>on your developing a micro-service first
>>>> > > MG>your development of a service registry
>>>> > > MG>your development of a micro-services container which can lookup
>>>> > > necessary endpoints
>>>> > > MG>since you pre-ordained Docker to be your deploy container I
>>>> would
>>>> > > suggest implementing OpenShift
>>>> > > https://www.openshift.org/
>>>> > > OpenShift Origin - Open Source Container Application Platform<
>>>> > > https://www.openshift.org/>
>>>> > > www.openshift.org
>>>> > > The next generation open source app hosting platform by Red Hat
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> > > Also if we wanted to move from a monolithic classic design with
>>>> Streaming
>>>> > > Ingestion (ZooKeeper, Kafka) --> Processing engine (Spark
>>>> Streaming, Flink)
>>>> > > --> Real time dashboard (anything built on something like D3) to
>>>> > > microservices how would that entail.
>>>> > > MG>the simpler the function the better ...something like
>>>> > > MG>simple input...user enters 'foo'
>>>> > > MG>simple processing....process does spark stream to determine what
>>>> result
>>>> > > responds to 'foo'
>>>> > > MG>simple output...output will be text 'bar' formatting to be
>>>> decided
>>>> > > (text/html/pdf?)
>>>> > >
>>>> > > One option would be to have three
>>>> > > principal microservices (each with sub-services) providing three
>>>> > > components?
>>>> > > MG>concentrate on the simplest function which would
>>>> be_______________?
>>>> > > MG>shoehorn simple function into a viable microservice
>>>> > > MG>the following inventory microservice from redhat example shows
>>>> how your
>>>> > > ______? service
>>>> > > MG>can be incorporated into a openshift container
>>>> > > MG>and be readily deployable in docker container
>>>> > > MG>
>>>> > >
>>>> https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/
>>>> > > [
>>>> > >
>>>> https://developers.redhat.com/blog/wp-content/uploads/2017/05/img_5912da9d19c3c.png
>>>> > > ]<
>>>> > >
>>>> https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/
>>>> > > >
>>>> > >
>>>> > > OpenShift and DevOps: The CoolStore Microservices Example<
>>>> > >
>>>> https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/
>>>> > > >
>>>> > > developers.redhat.com
>>>> > > Today I want to talk about the demo we presented @ OpenShift
>>>> Container
>>>> > > Platform Roadshow in Milan & Rome last week. The demo was based on
>>>> JBoss
>>>> > > team’s great work available on this repo: In the next few
>>>> paragraphs, I’ll
>>>> > > describe in deep detail the microservices CoolStore example and how
>>>> we used
>>>> > > ...
>>>> > >
>>>> > >
>>>> > > MG>the first step would involve knowing which simple function you
>>>> need to
>>>> > > deploy as microservice ?
>>>> > >
>>>> > > Regards,
>>>> > >
>>>> > > Dr Mich Talebzadeh
>>>> > >
>>>> > >
>>>> > >
>>>> > > LinkedIn *
>>>> > >
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> > > <
>>>> > >
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> > > > *
>>>> > >
>>>> > >
>>>> > >
>>>> > > http://talebzadehmich.wordpress.com
>>>> > >
>>>> > >
>>>> > > *Disclaimer:* Use it at your own risk. Any and all responsibility
>>>> for any
>>>> > > loss, damage or destruction of data or any other property which may
>>>> arise
>>>> > > from relying on this email's technical content is explicitly
>>>> disclaimed.
>>>> > > The author will in no case be liable for any monetary damages
>>>> arising from
>>>> > > such loss, damage or destruction.
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> > > On Sun, 8 Jul 2018 at 13:58, Martin Gainty <mg...@hotmail.com>
>>>> wrote:
>>>> > >
>>>> > > >
>>>> > > >
>>>> > > > initial work under using Zookeeper as a Microservices container
>>>> is here
>>>> > > >
>>>> > > >
>>>> > >
>>>> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
>>>> > > >
>>>> > > > ZooKeeper for Microservice Registration and Discovery ...<
>>>> > > >
>>>> > >
>>>> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
>>>> > > > >
>>>> > > > planet.jboss.org
>>>> > > > In a microservice world, multiple services are typically
>>>> distributed in a
>>>> > > > PaaS environment. Immutable infrastructure, such as those
>>>> provided by
>>>> > > > containers or immutable VM images. Services may scale up and down
>>>> based
>>>> > > > upon certain pre-defined metrics. Exact address of the service
>>>> may not be
>>>> > > > known ...
>>>> > > >
>>>> > > > once your Zookeeper Microservices container is operational
>>>> > > >
>>>> > > > you would need to 'tweak' kafka to extend and implement
>>>> > > classes/interfaces
>>>> > > > to become
>>>> > > > a true microservices component..this may help
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > >
>>>> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
>>>> > > > [
>>>> > >
>>>> http://blog.arungupta.me/wp-content/uploads/2015/06/javaee-monolithic.png
>>>> > > > ]<
>>>> > > >
>>>> > >
>>>> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
>>>> > > > >
>>>> > > >
>>>> > > > Monolithic to Microservices Refactoring for Java EE ...<
>>>> > > >
>>>> > >
>>>> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
>>>> > > > >
>>>> > > > blog.arungupta.me
>>>> > > > Have you ever wondered what does it take to refactor an existing
>>>> Java EE
>>>> > > > monolithic application to a microservices-based one? This blog
>>>> explains
>>>> > > how
>>>> > > > a trivial shopping cart example was converted to
>>>> microservices-based
>>>> > > > application, and what are some of the concerns around it.
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > > let me know if i can help out
>>>> > > > Martin
>>>> > > >
>>>> > > >
>>>> > > > ________________________________
>>>> > > > From: Jörn Franke <jo...@gmail.com>
>>>> > > > Sent: Sunday, July 8, 2018 6:18 AM
>>>> > > > To: users@kafka.apache.org
>>>> > > > Cc: user@flink.apache.org
>>>> > > > Subject: Re: Real time streaming as a microservice
>>>> > > >
>>>> > > > Yes or Kafka will need it ...
>>>> > > > As soon as you orchestrate different microservices this will
>>>> happen.
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > > > On 8. Jul 2018, at 11:33, Mich Talebzadeh <
>>>> mich.talebzadeh@gmail.com>
>>>> > > > wrote:
>>>> > > > >
>>>> > > > > Thanks Jorn.
>>>> > > > >
>>>> > > > > So I gather as you correctly suggested, microservices do
>>>> provide value
>>>> > > in
>>>> > > > > terms of modularisation. However, there will always
>>>> "inevitably" be
>>>> > > > > scenarios where the receiving artefact, say Flink, needs
>>>> communication
>>>> > > > > protocol changes?
>>>> > > > >
>>>> > > > > thanks
>>>> > > > >
>>>> > > > > Dr Mich Talebzadeh
>>>> > > > >
>>>> > > > >
>>>> > > > >
>>>> > > > > LinkedIn *
>>>> > > >
>>>> > >
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> > > > > <
>>>> > > >
>>>> > >
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> > > > > *
>>>> > > > >
>>>> > > > >
>>>> > > > >
>>>> > > > > http://talebzadehmich.wordpress.com
>>>> > > > >
>>>> > > > >
>>>> > > > > *Disclaimer:* Use it at your own risk. Any and all
>>>> responsibility for
>>>> > > any
>>>> > > > > loss, damage or destruction of data or any other property which
>>>> may
>>>> > > arise
>>>> > > > > from relying on this email's technical content is explicitly
>>>> > > disclaimed.
>>>> > > > > The author will in no case be liable for any monetary damages
>>>> arising
>>>> > > > from
>>>> > > > > such loss, damage or destruction.
>>>> > > > >
>>>> > > > >
>>>> > > > >
>>>> > > > >
>>>> > > > > > On Sun, 8 Jul 2018 at 10:25, Jörn Franke <
>>>> jornfranke@gmail.com>
>>>> > > wrote:
>>>> > > > > >
>>>> > > > > > That they are loosely coupled does not mean they are
>>>> independent. For
>>>> > > > > > instance, you would not be able to replace Kafka with zeromq
>>>> in your
>>>> > > > > > scenario. Unfortunately also Kafka sometimes needs to
>>>> introduce
>>>> > > breaking
>>>> > > > > > changes and the dependent application needs to upgrade.
>>>> > > > > > You will not be able to avoid these scenarios in the future
>>>> (this is
>>>> > > > only
>>>> > > > > > possible if micro services don’t communicate with each other
>>>> or if
>>>> > > they
>>>> > > > > > would never need to change their communication protocol -
>>>> pretty
>>>> > > > impossible
>>>> > > > > > ). However there are ways of course to reduce it, eg kafka
>>>> could
>>>> > > reduce
>>>> > > > the
>>>> > > > > > number of breaking changes or you can develop a very
>>>> lightweight
>>>> > > > > > microservice that is very easy to change and that only deals
>>>> with the
>>>> > > > > > broker integration and your application etc.
>>>> > > > > >
>>>> > > > > > > On 8. Jul 2018, at 10:59, Mich Talebzadeh <
>>>> mich.talebzadeh@gmail.com
>>>> > > >
>>>> > > > > > wrote:
>>>> > > > > > >
>>>> > > > > > > Hi,
>>>> > > > > > >
>>>> > > > > > > I have created the Kafka messaging architecture as a
>>>> microservice
>>>> > > that
>>>> > > > > > > feeds both Spark streaming and Flink. Spark streaming uses
>>>> > > > micro-batches
>>>> > > > > > > meaning "collect and process data" and flink as an event
>>>> driven
>>>> > > > > > > architecture (a stateful application that reacts to
>>>> incoming events
>>>> > > by
>>>> > > > > > > triggering computations etc.
>>>> > > > > > >
>>>> > > > > > > According to Wikipedia, A Microservice is a technique that
>>>> > > structures
>>>> > > > an
>>>> > > > > > > application as a collection of loosely coupled services. In
>>>> a
>>>> > > > > > microservices
>>>> > > > > > > architecture, services are fine-grained and the protocols
>>>> are
>>>> > > > > > lightweight.
>>>> > > > > > >
>>>> > > > > > > Ok for streaming data among other things I have to create
>>>> and
>>>> > > configure
>>>> > > > > > > topic (or topics), design a robust zookeeper ensemble and
>>>> create
>>>> > > Kafka
>>>> > > > > > > brokers with scalability and resiliency. Then I can offer
>>>> the
>>>> > > streaming
>>>> > > > > > as
>>>> > > > > > > a microservice to subscribers among them Spark and Flink. I
>>>> can
>>>> > > upgrade
>>>> > > > > > > this microservice component in isolation without impacting
>>>> either
>>>> > > Spark
>>>> > > > > > or
>>>> > > > > > > Flink.
>>>> > > > > > >
>>>> > > > > > > The problem I face here is the dependency on Flink etc on
>>>> the jar
>>>> > > files
>>>> > > > > > > specific for the version of Kafka deployed. For example
>>>> > > > kafka_2.12-1.1.0
>>>> > > > > > is
>>>> > > > > > > built on Scala 2.12 and Kafka version 1.1.0. To make this
>>>> work in
>>>> > > Flink
>>>> > > > > > 1.5
>>>> > > > > > > application, I need to use the correct dependency in sbt
>>>> build. For
>>>> > > > > > > example:
>>>> > > > > > > libraryDependencies += "org.apache.flink" %%
>>>> > > > > > "flink-connector-kafka-0.11" %
>>>> > > > > > > "1.5.0"
>>>> > > > > > > libraryDependencies += "org.apache.flink" %%
>>>> > > > > > "flink-connector-kafka-base" %
>>>> > > > > > > "1.5.0"
>>>> > > > > > > libraryDependencies += "org.apache.flink" %% "flink-scala"
>>>> % "1.5.0"
>>>> > > > > > > libraryDependencies += "org.apache.kafka" % "kafka-clients"
>>>> %
>>>> > > > "0.11.0.0"
>>>> > > > > > > libraryDependencies += "org.apache.flink" %%
>>>> "flink-streaming-scala"
>>>> > > %
>>>> > > > > > > "1.5.0"
>>>> > > > > > > libraryDependencies += "org.apache.kafka" %% "kafka" %
>>>> "0.11.0.0"
>>>> > > > > > >
>>>> > > > > > > and the Scala code needs to change:
>>>> > > > > > >
>>>> > > > > > > import
>>>> > > > org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
>>>> > > > > > > …
>>>> > > > > > > val stream = env
>>>> > > > > > > .addSource(new FlinkKafkaConsumer011[String]("md", new
>>>> > > > > > > SimpleStringSchema(), properties))
>>>> > > > > > >
>>>> > > > > > > So in summary some changes need to be made to Flink to be
>>>> able to
>>>> > > > > > interact
>>>> > > > > > > with the new version of Kafka. And more importantly if one
>>>> can use an
>>>> > > > > > > abstract notion of microservice here?
>>>> > > > > > >
>>>> > > > > > > Dr Mich Talebzadeh
>>>> > > > > > >
>>>> > > > > > >
>>>> > > > > > >
>>>> > > > > > > LinkedIn *
>>>> > > > > >
>>>> > > >
>>>> > >
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> > > > > > > <
>>>> > > > > >
>>>> > > >
>>>> > >
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> > > > > > > *
>>>> > > > > > >
>>>> > > > > > >
>>>> > > > > > >
>>>> > > > > > > http://talebzadehmich.wordpress.com
>>>> > > > > > >
>>>> > > > > > >
>>>> > > > > > > *Disclaimer:* Use it at your own risk. Any and all
>>>> responsibility for
>>>> > > > any
>>>> > > > > > > loss, damage or destruction of data or any other property
>>>> which may
>>>> > > > arise
>>>> > > > > > > from relying on this email's technical content is explicitly
>>>> > > > disclaimed.
>>>> > > > > > > The author will in no case be liable for any monetary
>>>> damages arising
>>>> > > > > > from
>>>> > > > > > > such loss, damage or destruction.
>>>> > > > > >
>>>> > > >
>>>> > >
>>>>
>>>

Re: Real time streaming as a microservice

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi all,

I have now managed to deploy both ZooKeeper and Kafka as microservices
using docker images.

The idea came to me as I wanted to create lightweight processes for both
ZooKeeper and Kafka to be used as services for Flink and Spark
simultaneously.

In this design both Flink and Spark rely on streaming market data messages
published through Kafka. My current design is simple: one docker for
Zookeeper and another for Kafka.

[root@rhes75 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND
CREATED             STATUS
PORTS                                            NAMES
05cf097ac139        ches/kafka          "/start.sh"              9 hours
ago         Up 9 hours              *0.0.0.0:7203->7203/tcp,
0.0.0.0:9092->9092/tcp*   kafka
b173e455cc80        jplock/zookeeper    "/opt/zookeeper/bin/…"   10 hours
ago        Up 10 hours (healthy)   *2888/tcp, 0.0.0.0:2181->2181/tcp,
3888/tcp*       zookeeper

Note that the docker ports are exposed to the physical host that is running
the containers.
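
For reference, containers with those port mappings could be started along
these lines. This is only a sketch: the exact environment variables and
linking options depend on the images used, so check each image's
documentation.

docker run -d --name zookeeper -p 2181:2181 jplock/zookeeper
docker run -d --name kafka -p 9092:9092 -p 7203:7203 \
  --link zookeeper:zookeeper \
  -e KAFKA_ADVERTISED_HOST_NAME=rhes75 \
  ches/kafka

The -p host:container mappings are what make the broker and zookeeper
reachable from outside the docker host, and the advertised host name should
resolve to the physical host; otherwise external clients will be handed the
container's internal address.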

A test topic is simply created as follows:
${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper rhes75:2181
--replication-factor 1 --partitions 1 --topic test
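
A quick end-to-end check of the mapped broker port is to push a message
through the console producer and read it back with the console consumer
(assuming the standard console scripts shipped with the same Kafka
distribution):

echo "hello" | ${KAFKA_HOME}/bin/kafka-console-producer.sh \
  --broker-list rhes75:9092 --topic test
${KAFKA_HOME}/bin/kafka-console-consumer.sh \
  --bootstrap-server rhes75:9092 --topic test --from-beginning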

Note that rhes75 is the host that houses the dockers, and port 2181 is the
zookeeper port used by the zookeeper docker, mapped to the host.

The Spark streaming job uses the speed layer of the Lambda architecture to
write selected market data to an HBase table (HBase requires connectivity to
a ZooKeeper ensemble). For HBase I specified a ZooKeeper instance running on
another host, and HBase works fine.
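
On the client side that typically comes down to a couple of configuration
properties. A minimal Scala sketch, where "rhes564" stands in as a
placeholder for the separate ZooKeeper host (not necessarily the actual
host used here):

import org.apache.hadoop.hbase.HBaseConfiguration

val hbaseConf = HBaseConfiguration.create()
// point the HBase client at the ZooKeeper ensemble on the other host
// ("rhes564" is a placeholder for that host)
hbaseConf.set("hbase.zookeeper.quorum", "rhes564")
hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")

This keeps the HBase client decoupled from the dockerised ZooKeeper that
serves Kafka.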

Anyway I will provide further info and diagrams.

Cheers,


Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 15 Jul 2018 at 08:40, Mich Talebzadeh <mi...@gmail.com>
wrote:

>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>> Thanks, got it sorted.
>
> Regards,
>
>
> On Tue, 10 Jul 2018 at 09:24, Mich Talebzadeh <mi...@gmail.com>
> wrote:
>
>> Thanks Rahul.
>>
>> This is the outcome of
>>
>> [root@rhes75 ~]# iptables -t nat -L -n
>> Chain PREROUTING (policy ACCEPT)
>> target     prot opt source               destination
>> DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE
>> match dst-type LOCAL
>> Chain INPUT (policy ACCEPT)
>> target     prot opt source               destination
>> Chain OUTPUT (policy ACCEPT)
>> target     prot opt source               destination
>> DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE
>> match dst-type LOCAL
>> Chain POSTROUTING (policy ACCEPT)
>> target     prot opt source               destination
>> MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
>> MASQUERADE  all  --  172.18.0.0/16        0.0.0.0/0
>> RETURN     all  --  192.168.122.0/24     224.0.0.0/24
>> RETURN     all  --  192.168.122.0/24     255.255.255.255
>> MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24     masq
>> ports: 1024-65535
>> MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24     masq
>> ports: 1024-65535
>> MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24
>> Chain DOCKER (2 references)
>> target     prot opt source               destination
>> RETURN     all  --  0.0.0.0/0            0.0.0.0/0
>> RETURN     all  --  0.0.0.0/0            0.0.0.0/0
>>
>> So basically I need to connect to the container from another host, as the link
>> points out.
>>
>> My docker is already running.
>>
>> [root@rhes75 ~]# docker ps -a
>> CONTAINER ID        IMAGE               COMMAND
>> CREATED             STATUS              PORTS               NAMES
>> 8dd84a174834        ubuntu              "bash"              19 hours
>> ago        Up 11 hours                             dockerZooKeeperKafka
>>
>> What would be an option to add a fixed port to the running container?
>>
>> Regards,
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Tue, 10 Jul 2018 at 08:35, Rahul Singh <ra...@gmail.com>
>> wrote:
>>
>>> Seems like you need to expose your port via docker run or docker-compose.
>>>
>>>
>>> https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/
>>>
>>>
>>>
>>> --
>>> Rahul Singh
>>> rahul.singh@anant.us
>>>
>>> Anant Corporation
>>> On Jul 9, 2018, 2:21 PM -0500, Mich Talebzadeh <
>>> mich.talebzadeh@gmail.com>, wrote:
>>> > Hi,
>>> >
>>> > I have now successfully created a docker for RHEL75 as follows:
>>> >
>>> > [root@rhes75 ~]# docker ps -a
>>> > CONTAINER ID IMAGE COMMAND
>>> > CREATED STATUS PORTS NAMES
>>> > 816f07de15b1 zookeeper "/docker-entrypoint.…" 2 hours
>>> > ago Up 2 hours 2181/tcp, 2888/tcp, 3888/tcp
>>> > dockerZooKeeper
>>> > 8dd84a174834 ubuntu "bash" 6 hours
>>> > ago Up 6 hours
>>> > dockerZooKeeperKafka
>>> >
>>> > The first container is ready-made for ZooKeeper and exposes the
>>> zookeeper
>>> > client port etc.
>>> >
>>> > The second container is an ubuntu shell on which I installed both
>>> zookeeper
>>> > and Kafka. They are both running in container
>>> dockerZooKeeperKafka
>>> >
>>> >
>>> > hduser@8dd84a174834: /home/hduser/dba/bin> jps
>>> > 5715 Kafka
>>> > 5647 QuorumPeerMain
>>> >
>>> > hduser@8dd84a174834: /home/hduser/dba/bin> netstat -plten
>>> > (Not all processes could be identified, non-owned process info
>>> > will not be shown, you would have to be root to see it all.)
>>> > Active Internet connections (only servers)
>>> > Proto  Recv-Q  Send-Q  Local Address    Foreign Address   State    User   Inode     PID/Program name
>>> > tcp    0       0       0.0.0.0:9999     0.0.0.0:*         LISTEN   1005   2865148   5715/java
>>> > tcp    0       0       0.0.0.0:35312    0.0.0.0:*         LISTEN   1005   2865147   5715/java
>>> > tcp    0       0       0.0.0.0:34193    0.0.0.0:*         LISTEN   1005   2865151   5715/java
>>> > tcp    0       0       0.0.0.0:22       0.0.0.0:*         LISTEN   0      2757032   -
>>> > tcp    0       0       0.0.0.0:40803    0.0.0.0:*         LISTEN   1005   2852821   5647/java
>>> > tcp    0       0       0.0.0.0:9092     0.0.0.0:*         LISTEN   1005   2873507   5715/java
>>> > tcp    0       0       0.0.0.0:2181     0.0.0.0:*         LISTEN   1005   2852829   5647/java
>>> > tcp6   0       0       :::22            :::*              LISTEN   0      2757034   -
>>> >
>>> > I have a gateway node that is connected to the host running the
>>> container.
>>> > From within the container I can ssh to the gateway host *as both the
>>> > gateway host and the host running the container are on the same VLAN.*
>>> >
>>> >
>>> > However, I cannot connect from the gateway to the container. The
>>> > container has this IP address:
>>> >
>>> > root@8dd84a174834:~# ifconfig -a
>>> > eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
>>> > *inet 172.17.0.2 netmask 255.255.0.0 broadcast 172.17.255.255*
>>> > ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
>>> > RX packets 173015 bytes 3263068025 (3.2 GB)
>>> > RX errors 0 dropped 0 overruns 0 frame 0
>>> > TX packets 189400 bytes 13557709 (13.5 MB)
>>> > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
>>> >
>>> > lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
>>> > inet 127.0.0.1 netmask 255.0.0.0
>>> > loop txqueuelen 1000 (Local Loopback)
>>> > RX packets 8450 bytes 534805 (534.8 KB)
>>> > RX errors 0 dropped 0 overruns 0 frame 0
>>> > TX packets 8450 bytes 534805 (534.8 KB)
>>> > TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
>>> >
>>> >
>>> >
>>> > The interesting thing is that in order to publish streaming test data I
>>> > need to be able to do something like the following:
>>> >
>>> >
>>> > cat ${PRICES} | ${KAFKA_HOME}/bin/kafka-console-producer.sh
>>> --broker-list
>>> >
>>> rhes75:9092,rhes564:9092,rhes75:9093,rhes564:9093,rhes75:9094,rhes564:9094
>>> > --topic md
>>> >
>>> >
>>> > That Kafka broker list --broker-list
>>> >
>>> rhes75:9092,rhes564:9092,rhes75:9093,rhes564:9093,rhes75:9094,rhes564:9094
>>> > needs to be replaced by <container hostname>:9092!
>>> >
>>> >
>>> > So at this juncture I am wondering what type of network needs to be
>>> created
>>> > as the container is running within another host.
>>> >
>>> >
>>> > Thanks
>>> >
>>> >
>>> > Dr Mich Talebzadeh
>>> >
>>> >
>>> >
>>> > LinkedIn *
>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> > <
>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> >*
>>> >
>>> >
>>> >
>>> > http://talebzadehmich.wordpress.com
>>> >
>>> >
>>> > *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any
>>> > loss, damage or destruction of data or any other property which may
>>> arise
>>> > from relying on this email's technical content is explicitly
>>> disclaimed.
>>> > The author will in no case be liable for any monetary damages arising
>>> from
>>> > such loss, damage or destruction.
>>> >
>>> >
>>> >
>>> >
>>> > On Sun, 8 Jul 2018 at 20:00, Martin Gainty <mg...@hotmail.com>
>>> wrote:
>>> >
>>> > >
>>> > >
>>> > > ________________________________
>>> > > From: Mich Talebzadeh <mi...@gmail.com>
>>> > > Sent: Sunday, July 8, 2018 1:01 PM
>>> > > To: users@kafka.apache.org
>>> > > Subject: Re: Real time streaming as a microservice
>>> > >
>>> > > Thanks Martin.
>>> > >
>>> > > From an implementation point of view, do we need to introduce docker
>>> > > for each microservice? In other words, does it have to be artefact -->
>>> > > container --> docker for this to be a true microservice, with all these
>>> > > microservices communicating through a Service Registry?
>>> > > MG>for deployment, deploying thru a docker container would be the
>>> > > easiest means to test
>>> > > MG>but first we would need to concentrate
>>> > > MG>on your developing a micro-service first
>>> > > MG>your development of a service registry
>>> > > MG>your development of a micro-services container which can lookup
>>> > > necessary endpoints
>>> > > MG>since you pre-ordained Docker to be your deploy container I would
>>> > > suggest implementing OpenShift
>>> > > https://www.openshift.org/
>>> > >
>>> > >
>>> > >
>>> > >
>>> > > Also if we wanted to move from a monolithic classic design with
>>> Streaming
>>> > > Ingestion (ZooKeeper, Kafka) --> Processing engine (Spark Streaming,
>>> Flink)
>>> > > --> Real time dashboard (anything built on something like D3) to
>>> > > microservices, what would that entail?
>>> > > MG>the simpler the function the better ...something like
>>> > > MG>simple input...user enters 'foo'
>>> > > MG>simple processing....process does spark stream to determine what
>>> result
>>> > > responds to 'foo'
>>> > > MG>simple output...output will be text 'bar' formatting to be decided
>>> > > (text/html/pdf?)
>>> > >
>>> > > One option would be to have three
>>> > > principal microservices (each with sub-services) providing three
>>> > > components?
>>> > > MG>concentrate on the simplest function which would
>>> be_______________?
>>> > > MG>shoehorn simple function into a viable microservice
>>> > > MG>the following inventory microservice from redhat example shows
>>> how your
>>> > > ______? service
>>> > > MG>can be incorporated into an OpenShift container
>>> > > MG>and be readily deployable in docker container
>>> > > MG>
>>> > >
>>> https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/
>>> > >
>>> > > OpenShift and DevOps: The CoolStore Microservices Example
>>> > > Today I want to talk about the demo we presented @ OpenShift
>>> Container
>>> > > Platform Roadshow in Milan & Rome last week. The demo was based on
>>> JBoss
>>> > > team’s great work available on this repo: In the next few
>>> paragraphs, I’ll
>>> > > describe in deep detail the microservices CoolStore example and how
>>> we used
>>> > > ...
>>> > >
>>> > >
>>> > > MG>the first step would involve knowing which simple function you
>>> > > need to deploy as a microservice?
>>> > >
>>> > > Regards,
>>> > >
>>> > > Dr Mich Talebzadeh
>>> > >
>>> > >
>>> > >
>>> > > LinkedIn *
>>> > >
>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> > > <
>>> > >
>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> > > > *
>>> > >
>>> > >
>>> > >
>>> > > http://talebzadehmich.wordpress.com
>>> > >
>>> > >
>>> > > *Disclaimer:* Use it at your own risk. Any and all responsibility
>>> for any
>>> > > loss, damage or destruction of data or any other property which may
>>> arise
>>> > > from relying on this email's technical content is explicitly
>>> disclaimed.
>>> > > The author will in no case be liable for any monetary damages
>>> arising from
>>> > > such loss, damage or destruction.
>>> > >
>>> > >
>>> > >
>>> > >
>>> > > On Sun, 8 Jul 2018 at 13:58, Martin Gainty <mg...@hotmail.com>
>>> wrote:
>>> > >
>>> > > >
>>> > > >
>>> > > > initial work on using Zookeeper as a Microservices container is
>>> here
>>> > > >
>>> > > >
>>> > >
>>> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
>>> > > >
>>> > > > ZooKeeper for Microservice Registration and Discovery
>>> > > > In a microservice world, multiple services are typically
>>> distributed in a
>>> > > > PaaS environment. Immutable infrastructure, such as those provided
>>> by
>>> > > > containers or immutable VM images. Services may scale up and down
>>> based
>>> > > > upon certain pre-defined metrics. Exact address of the service may
>>> not be
>>> > > > known ...
>>> > > >
>>> > > > once your Zookeeper Microservices container is operational
>>> > > >
>>> > > > you would need to 'tweak' kafka to extend and implement
>>> > > classes/interfaces
>>> > > > to become
>>> > > > a true microservices component... this may help
>>> > > >
>>> > > >
>>> > > >
>>> > >
>>> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
>>> > > >
>>> > > > Monolithic to Microservices Refactoring for Java EE
>>> > > > Have you ever wondered what does it take to refactor an existing
>>> Java EE
>>> > > > monolithic application to a microservices-based one? This blog
>>> explains
>>> > > how
>>> > > > a trivial shopping cart example was converted to
>>> microservices-based
>>> > > > application, and what are some of the concerns around it.
>>> > > >
>>> > > >
>>> > > >
>>> > > > let me know if i can help out
>>> > > > Martin
>>> > > >
>>> > > >
>>> > > > ________________________________
>>> > > > From: Jörn Franke <jo...@gmail.com>
>>> > > > Sent: Sunday, July 8, 2018 6:18 AM
>>> > > > To: users@kafka.apache.org
>>> > > > Cc: user@flink.apache.org
>>> > > > Subject: Re: Real time streaming as a microservice
>>> > > >
>>> > > > Yes or Kafka will need it ...
>>> > > > As soon as you orchestrate different microservices this will
>>> happen.
>>> > > >
>>> > > >
>>> > > >
>>> > > > > On 8. Jul 2018, at 11:33, Mich Talebzadeh <
>>> mich.talebzadeh@gmail.com>
>>> > > > wrote:
>>> > > > >
>>> > > > > Thanks Jorn.
>>> > > > >
>>> > > > > So I gather as you correctly suggested, microservices do provide
>>> value
>>> > > in
>>> > > > > terms of modularisation. However, there will always "inevitably"
>>> be
>>> > > > > scenarios where the receiving artefact, say Flink, needs
>>> communication
>>> > > > > protocol changes?
>>> > > > >
>>> > > > > thanks
>>> > > > >
>>> > > > > Dr Mich Talebzadeh
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > LinkedIn *
>>> > > >
>>> > >
>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> > > > > <
>>> > > >
>>> > >
>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> > > > > *
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > http://talebzadehmich.wordpress.com
>>> > > > >
>>> > > > >
>>> > > > > *Disclaimer:* Use it at your own risk. Any and all
>>> responsibility for
>>> > > any
>>> > > > > loss, damage or destruction of data or any other property which
>>> may
>>> > > arise
>>> > > > > from relying on this email's technical content is explicitly
>>> > > disclaimed.
>>> > > > > The author will in no case be liable for any monetary damages
>>> arising
>>> > > > from
>>> > > > > such loss, damage or destruction.
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > > On Sun, 8 Jul 2018 at 10:25, Jörn Franke <jornfranke@gmail.com
>>> >
>>> > > wrote:
>>> > > > > >
>>> > > > > > That they are loosely coupled does not mean they are
>>> independent. For
>>> > > > > > instance, you would not be able to replace Kafka with zeromq
>>> in your
>>> > > > > > scenario. Unfortunately, Kafka also sometimes needs to introduce
>>> > > > > > breaking changes, and the dependent application needs to upgrade.
>>> > > > > > You will not be able to avoid these scenarios in the future
>>> (this is
>>> > > > only
>>> > > > > > possible if micro services don’t communicate with each other
>>> or if
>>> > > they
>>> > > > > > would never need to change their communication protocol -
>>> pretty
>>> > > > impossible
>>> > > > > > ). However there are ways of course to reduce it, eg kafka
>>> could
>>> > > reduce
>>> > > > the
>>> > > > > > number of breaking changes or you can develop a very
>>> lightweight
>>> > > > > > microservice that is very easy to change and that only deals
>>> with the
>>> > > > > > broker integration and your application etc.
>>> > > > > >
>>> > > > > > > On 8. Jul 2018, at 10:59, Mich Talebzadeh <
>>> mich.talebzadeh@gmail.com
>>> > > >
>>> > > > > > wrote:
>>> > > > > > >
>>> > > > > > > [snip - original message quoted in full]
>>
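
For reference on the fixed-port question above: Docker cannot add a port
mapping to a container that is already running. The usual options are to
recreate the container with -p, or to snapshot it first with docker commit
and re-run the snapshot with the ports published. A minimal sketch, assuming
the container name above (the snapshot image name zk-kafka-snapshot is
illustrative):

# snapshot the running container, then re-create it with published ports
docker commit dockerZooKeeperKafka zk-kafka-snapshot
docker stop dockerZooKeeperKafka
docker rm dockerZooKeeperKafka
docker run -d --name dockerZooKeeperKafka \
  -p 2181:2181 -p 9092:9092 zk-kafka-snapshot

This also bears on the broker-list question: once 9092 is published on the
host, producers can use <host>:9092 instead of the container hostname.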

Re: Real time streaming as a microservice

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi all,

I have now managed to deploy both ZooKeeper and Kafka as microservices
using docker images.

The idea came to me as I wanted to create lightweight processes for both
ZooKeeper and Kafka to be used as services for Flink and Spark
simultaneously.

In this design both Flink and Spark rely on streaming market data messages
published through Kafka. My current design is simple: one docker for
ZooKeeper and another for Kafka:

[root@rhes75 ~]# docker ps -a
CONTAINER ID   IMAGE              COMMAND                  CREATED        STATUS                  PORTS                                            NAMES
05cf097ac139   ches/kafka         "/start.sh"              9 hours ago    Up 9 hours              0.0.0.0:7203->7203/tcp, 0.0.0.0:9092->9092/tcp   kafka
b173e455cc80   jplock/zookeeper   "/opt/zookeeper/bin/…"   10 hours ago   Up 10 hours (healthy)   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp       zookeeper

Note that the docker ports are exposed on the physical host that runs the
containers.
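
Containers with these mappings can be started along the following lines. This
is only a sketch: the --link wiring and the KAFKA_ADVERTISED_HOST_NAME
variable are assumptions based on the two images' documentation, so verify
them against the image docs before use.

docker run -d --name zookeeper -p 2181:2181 jplock/zookeeper

# the advertised host name must resolve from the clients' side, otherwise
# external producers/consumers get redirected to the container address
docker run -d --name kafka -p 9092:9092 -p 7203:7203 \
  --link zookeeper:zookeeper \
  -e KAFKA_ADVERTISED_HOST_NAME=rhes75 \
  ches/kafka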

A test topic is simply created as follows:
${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper rhes75:2181
--replication-factor 1 --partitions 1 --topic test

Note that rhes75 is the host that houses the dockers, and port 2181 is the
ZooKeeper client port used by the ZooKeeper docker and mapped to the host.
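
Since port 9092 is mapped the same way, the end-to-end path can be verified
from outside the container with the stock console clients, e.g.:

${KAFKA_HOME}/bin/kafka-topics.sh --describe --zookeeper rhes75:2181 --topic test
${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list rhes75:9092 --topic test
${KAFKA_HOME}/bin/kafka-console-consumer.sh --bootstrap-server rhes75:9092 \
  --topic test --from-beginning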

Spark streaming implements the speed layer of the Lambda architecture,
writing selected market data to an HBase table (HBase requires connectivity
to a ZooKeeper ensemble). For HBase I specified a ZooKeeper instance running
on another host, and HBase works fine.
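
The ensemble HBase talks to is whatever hbase-site.xml names, which is how it
can point at a different ZooKeeper than the one the Kafka docker uses. A
minimal fragment, with the host name purely illustrative:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>rhes564</value>
</property>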

Anyway I will provide further info and diagrams.

Cheers,


Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 15 Jul 2018 at 08:40, Mich Talebzadeh <mi...@gmail.com>
wrote:

> [snip - earlier messages in the thread quoted in full]

Re: Real time streaming as a microservice

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks got it sorted.

Regards,

Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.


On Tue, 10 Jul 2018 at 09:24, Mich Talebzadeh <mi...@gmail.com>
wrote:

> [snip - earlier messages in the thread quoted in full]
>> > > > > > > "1.5.0"
>> > > > > > > libraryDependencies += "org.apache.flink" %%
>> > > > > > "flink-connector-kafka-base" %
>> > > > > > > "1.5.0"
>> > > > > > > libraryDependencies += "org.apache.flink" %% "flink-scala" %
>> "1.5.0"
>> > > > > > > libraryDependencies += "org.apache.kafka" % "kafka-clients" %
>> > > > "0.11.0.0"
>> > > > > > > libraryDependencies += "org.apache.flink" %%
>> "flink-streaming-scala"
>> > > %
>> > > > > > > "1.5.0"
>> > > > > > > libraryDependencies += "org.apache.kafka" %% "kafka" %
>> "0.11.0.0"
>> > > > > > >
>> > > > > > > and the Scala code needs to change:
>> > > > > > >
>> > > > > > > import
>> > > > org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
>> > > > > > > …
>> > > > > > > val stream = env
>> > > > > > > .addSource(new FlinkKafkaConsumer011[String]("md", new
>> > > > > > > SimpleStringSchema(), properties))
>> > > > > > >
>> > > > > > > So in summary some changes need to be made to Flink to be
>> able to
>> > > > > > interact
>> > > > > > > with the new version of Kafka. And more importantly if one
>> can use an
>> > > > > > > abstract notion of microservice here?
>> > > > > > >
>> > > > > > > Dr Mich Talebzadeh
>> > > > > > >
>> > > > > > >
>> > > > > > >
>> > > > > > > LinkedIn *
>> > > > > >
>> > > >
>> > >
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> > > > > > > <
>> > > > > >
>> > > >
>> > >
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> > > > > > > *
>> > > > > > >
>> > > > > > >
>> > > > > > >
>> > > > > > > http://talebzadehmich.wordpress.com
>> > > > > > >
>> > > > > > >
>> > > > > > > *Disclaimer:* Use it at your own risk. Any and all
>> responsibility for
>> > > > any
>> > > > > > > loss, damage or destruction of data or any other property
>> which may
>> > > > arise
>> > > > > > > from relying on this email's technical content is explicitly
>> > > > disclaimed.
>> > > > > > > The author will in no case be liable for any monetary damages
>> arising
>> > > > > > from
>> > > > > > > such loss, damage or destruction.
>> > > > > >
>> > > >
>> > >
>>
>

Re: Real time streaming as a microservice

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks Rahul.

This is the output of:

[root@rhes75 ~]# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE
match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE
match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
MASQUERADE  all  --  172.18.0.0/16        0.0.0.0/0
RETURN     all  --  192.168.122.0/24     224.0.0.0/24
RETURN     all  --  192.168.122.0/24     255.255.255.255
MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24     masq ports:
1024-65535
MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24     masq ports:
1024-65535
MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24
Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

So basically I need to connect to the container from another host, as the
link points out.

My Docker container is already running.

[root@rhes75 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND
CREATED             STATUS              PORTS               NAMES
8dd84a174834        ubuntu              "bash"              19 hours
ago        Up 11 hours                             dockerZooKeeperKafka

What would be an option for adding a fixed port to the running container?
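
As far as I know Docker cannot publish a new port on a container that is
already running. One workaround, as a sketch only (the image name
kafka-zk-snapshot is illustrative, and Kafka and ZooKeeper would need
restarting inside the new container), is to snapshot the container and
re-run it with the ports published:

docker commit dockerZooKeeperKafka kafka-zk-snapshot
docker stop dockerZooKeeperKafka
docker run -it --name dockerZooKeeperKafka2 -p 2181:2181 -p 9092:9092 kafka-zk-snapshot bash

Alternatively an iptables DNAT rule on the host could forward a host port
to the container's 172.17.0.2 address, but re-creating the container with
-p is the cleaner route.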

Regards,

Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Tue, 10 Jul 2018 at 08:35, Rahul Singh <ra...@gmail.com>
wrote:

> Seems like you need to expose your port via docker run or docker-compose .
>
>
> https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/
>
>
>
> --
> Rahul Singh
> rahul.singh@anant.us
>
> Anant Corporation

Re: Real time streaming as a microservice

Posted by Rahul Singh <ra...@gmail.com>.
Seems like you need to expose your port via docker run or docker-compose.

https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/
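
For example, a minimal sketch, assuming the broker listens on 9092 and
ZooKeeper on 2181 inside the container:

docker run -it --name dockerZooKeeperKafka -p 2181:2181 -p 9092:9092 ubuntu bash

Here -p <host port>:<container port> publishes each container port on the
host, so the host's address can stand in for the container's.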



--
Rahul Singh
rahul.singh@anant.us

Anant Corporation

Re: Real time streaming as a microservice

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi,

I have now successfully created Docker containers on RHEL75 as follows:

[root@rhes75 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND
CREATED             STATUS              PORTS                          NAMES
816f07de15b1        zookeeper           "/docker-entrypoint.…"   2 hours
ago         Up 2 hours          2181/tcp, 2888/tcp, 3888/tcp
dockerZooKeeper
8dd84a174834        ubuntu              "bash"                   6 hours
ago         Up 6 hours
dockerZooKeeperKafka

The first container is a ready-made ZooKeeper image that exposes the
ZooKeeper client port etc.

The second container is an Ubuntu shell on which I installed both ZooKeeper
and Kafka. Both are running in the container dockerZooKeeperKafka


hduser@8dd84a174834: /home/hduser/dba/bin> jps
5715 Kafka
5647 QuorumPeerMain

hduser@8dd84a174834: /home/hduser/dba/bin> netstat -plten
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address
State       User       Inode      PID/Program name
tcp        0      0 0.0.0.0:9999            0.0.0.0:*
LISTEN      1005       2865148    5715/java
tcp        0      0 0.0.0.0:35312           0.0.0.0:*
LISTEN      1005       2865147    5715/java
tcp        0      0 0.0.0.0:34193           0.0.0.0:*
LISTEN      1005       2865151    5715/java
tcp        0      0 0.0.0.0:22              0.0.0.0:*
LISTEN      0          2757032    -
tcp        0      0 0.0.0.0:40803           0.0.0.0:*
LISTEN      1005       2852821    5647/java


tcp        0      0 0.0.0.0:9092            0.0.0.0:*
LISTEN      1005       2873507    5715/java
tcp        0      0 0.0.0.0:2181            0.0.0.0:*
LISTEN      1005       2852829    5647/java
tcp6       0      0 :::22                   :::*
LISTEN      0          2757034    -

I have a gateway node that is connected to the host running the container.
From within the container I can ssh to the gateway host, *as both the
gateway host and the host running the container are on the same VLAN.*


However, I cannot connect from the gateway to the container. The container
has this IP address:

root@8dd84a174834:~# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 173015  bytes 3263068025 (3.2 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 189400  bytes 13557709 (13.5 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 8450  bytes 534805 (534.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8450  bytes 534805 (534.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



The interesting thing is that in order to publish streaming test data I
need to be able to do something like the below:


cat ${PRICES} | ${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list
rhes75:9092,rhes564:9092,rhes75:9093,rhes564:9093,rhes75:9094,rhes564:9094
--topic md


That Kafka broker list --broker-list
rhes75:9092,rhes564:9092,rhes75:9093,rhes564:9093,rhes75:9094,rhes564:9094
needs to be replaced by <container hostname>:9092!
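
As a sketch of the intended end state (not the current setup): if the
container's 9092 were published on the host with -p 9092:9092, the
producer on the gateway could keep targeting the host name, e.g.

cat ${PRICES} | ${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list rhes75:9092 --topic md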


So at this juncture I am wondering what type of network needs to be created
as the container is running within another host.
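
One approach, sketched under the assumption of a single broker in the
container and Kafka 1.1.0's listener settings, is to publish the broker
port on the host and advertise the host's name to clients:

docker run -it --name dockerZooKeeperKafka -p 2181:2181 -p 9092:9092 ubuntu bash

and then in the broker's config/server.properties inside the container:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://rhes75:9092

With that, clients on the gateway connect to rhes75:9092 and the broker
hands back rhes75:9092 in its metadata rather than the unreachable
172.17.0.2. For container-to-container traffic a user-defined bridge
(docker network create) is the usual pattern.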


Thanks


Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.





Re: Real time streaming as a microservice

Posted by Martin Gainty <mg...@hotmail.com>.

________________________________
From: Mich Talebzadeh <mi...@gmail.com>
Sent: Sunday, July 8, 2018 1:01 PM
To: users@kafka.apache.org
Subject: Re: Real time streaming as a microservice

Thanks Martin.

From an implementation point of view, do we need to introduce Docker for
each microservice? In other words, does it have to be artefact --> container
--> Docker for this to be a true microservice, with all these microservices
communicating through a service registry?
MG>for deployment, deploying through a docker container would be the easiest means to test
MG>but first we would need to concentrate on:
MG>your developing a micro-service first
MG>your development of a service registry
MG>your development of a micro-services container which can look up the necessary endpoints
MG>since you preordained Docker to be your deploy container, I would suggest implementing OpenShift
https://www.openshift.org/




Also, if we wanted to move from a monolithic classic design with Streaming
Ingestion (ZooKeeper, Kafka) --> Processing engine (Spark Streaming, Flink)
--> Real time dashboard (anything built on something like D3) to
microservices, what would that entail?
MG>the simpler the function the better ...something like
MG>simple input...user enters 'foo'
MG>simple processing...a spark streaming job determines what result responds to 'foo'
MG>simple output...output will be the text 'bar', formatting to be decided (text/html/pdf?)
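
For illustration, a minimal sketch of such a 'foo'/'bar' service against the plain
kafka-clients 0.11 API. The broker endpoint, group id, topic names ("requests",
"replies") and the trivial processing rule are all assumptions, not part of the
original discussion:

import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object EchoService extends App {
  // hypothetical consumer settings: endpoint, group id and deserializers
  val consumerProps = new Properties()
  consumerProps.put("bootstrap.servers", "localhost:9092")
  consumerProps.put("group.id", "echo-service")
  consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

  // hypothetical producer settings
  val producerProps = new Properties()
  producerProps.put("bootstrap.servers", "localhost:9092")
  producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  val consumer = new KafkaConsumer[String, String](consumerProps)
  val producer = new KafkaProducer[String, String](producerProps)
  consumer.subscribe(Collections.singletonList("requests"))

  while (true) {
    val records = consumer.poll(1000) // poll(long) is the 0.11 client signature
    records.forEach { record =>
      // the "processing": answer 'foo' with 'bar', echo anything else back
      val reply = if (record.value() == "foo") "bar" else record.value()
      producer.send(new ProducerRecord[String, String]("replies", reply))
    }
  }
}

Everything broker-version-specific lives inside this one small process, which is
what makes it cheap to upgrade or redeploy in isolation.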

One option would be to have three
principal microservices (each with sub-services), one for each component?
MG>concentrate on the simplest function, which would be _______________?
MG>shoehorn that simple function into a viable microservice
MG>the following inventory microservice from the redhat example shows how your ______? service
MG>can be incorporated into an openshift container
MG>and be readily deployable in a docker container
MG>https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/


MG>the first step would involve knowing which simple function you need to deploy as a microservice?

Regards,

Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 8 Jul 2018 at 13:58, Martin Gainty <mg...@hotmail.com> wrote:

>
>
> initial work under using Zookeeper as a Microservices container is here
>
> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
>
> ZooKeeper for Microservice Registration and Discovery ...<
> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
> >
> planet.jboss.org
> In a microservice world, multiple services are typically distributed in a
> PaaS environment. Immutable infrastructure, such as those provided by
> containers or immutable VM images. Services may scale up and down based
> upon certain pre-defined metrics. Exact address of the service may not be
> known ...
>
> once your Zookeeper Microservices container is operational
>
> you would need to 'tweak' kafka to extend and implement classes/interfaces
> to become
> a true microservices component..this may help
>
>
> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
> [http://blog.arungupta.me/wp-content/uploads/2015/06/javaee-monolithic.png
> ]<
> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
> >
>
> Monolithic to Microservices Refactoring for Java EE ...<
> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
> >
> blog.arungupta.me
> Have you ever wondered what does it take to refactor an existing Java EE
> monolithic application to a microservices-based one? This blog explains how
> a trivial shopping cart example was converted to microservices-based
> application, and what are some of the concerns around it.
>
>
>
> let me know if i can help out
> Martin
>
>
> ________________________________
> From: Jörn Franke <jo...@gmail.com>
> Sent: Sunday, July 8, 2018 6:18 AM
> To: users@kafka.apache.org
> Cc: user@flink.apache.org
> Subject: Re: Real time streaming as a microservice
>
> Yes or Kafka will need it ...
> As soon as your orchestrate different microservices this will happen.
>
>
>
> > On 8. Jul 2018, at 11:33, Mich Talebzadeh <mi...@gmail.com>
> wrote:
> >
> > Thanks Jorn.
> >
> > So I gather as you correctly suggested, microservices do provide value in
> > terms of modularisation. However, there will always "inevitably" be
> > scenarios where the receiving artefact say Flink needs communication
> > protocol changes?
> >
> > thanks
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn *
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > <
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >*
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
> >
> >
> >
> >
> >> On Sun, 8 Jul 2018 at 10:25, Jörn Franke <jo...@gmail.com> wrote:
> >>
> >> That they are loosely coupled does not mean they are independent. For
> >> instance, you would not be able to replace Kafka with zeromq in your
> >> scenario. Unfortunately also Kafka sometimes needs to introduce breaking
> >> changes and the dependent application needs to upgrade.
> >> You will not be able to avoid these scenarios in the future (this is
> only
> >> possible if micro services don’t communicate with each other or if they
> >> would never need to change their communication protocol - pretty
> impossible
> >> ). However there are ways of course to reduce it, eg kafka could reduce
> the
> >> number of breaking changes or you can develop a very lightweight
> >> microservice that is very easy to change and that only deals with the
> >> broker integration and your application etc.
> >>
> >>> On 8. Jul 2018, at 10:59, Mich Talebzadeh <mi...@gmail.com>
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I have created the Kafka messaging architecture as a microservice that
> >>> feeds both Spark streaming and Flink. Spark streaming uses
> micro-batches
> >>> meaning "collect and process data" and flink as an event driven
> >>> architecture (a stateful application that reacts to incoming events by
> >>> triggering computations etc.
> >>>
> >>> According to Wikipedia, A Microservice is a  technique that structures
> an
> >>> application as a collection of loosely coupled services. In a
> >> microservices
> >>> architecture, services are fine-grained and the protocols are
> >> lightweight.
> >>>
> >>> Ok for streaming data among other things I have to create and configure
> >>> topic (or topics), design a robust zookeeper ensemble and create Kafka
> >>> brokers with scalability and resiliency. Then I can offer the streaming
> >> as
> >>> a microservice to subscribers among them Spark and Flink. I can upgrade
> >>> this microservice component in isolation without impacting either Spark
> >> or
> >>> Flink.
> >>>
> >>> The problem I face here is the dependency on Flink etc on the jar files
> >>> specific for the version of Kafka deployed. For example
> kafka_2.12-1.1.0
> >> is
> >>> built on Scala 2.12 and Kafka version 1.1.0. To make this work in Flink
> >> 1.5
> >>> application, I need  to use the correct dependency in sbt build. For
> >>> example:
> >>> libraryDependencies += "org.apache.flink" %%
> >> "flink-connector-kafka-0.11" %
> >>> "1.5.0"
> >>> libraryDependencies += "org.apache.flink" %%
> >> "flink-connector-kafka-base" %
> >>> "1.5.0"
> >>> libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.5.0"
> >>> libraryDependencies += "org.apache.kafka" % "kafka-clients" %
> "0.11.0.0"
> >>> libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" %
> >>> "1.5.0"
> >>> libraryDependencies += "org.apache.kafka" %% "kafka" % "0.11.0.0"
> >>>
> >>> and the Scala code needs to change:
> >>>
> >>> import
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
> >>> …
> >>>   val stream = env
> >>>                .addSource(new FlinkKafkaConsumer011[String]("md", new
> >>> SimpleStringSchema(), properties))
> >>>
> >>> So in summary some changes need to be made to Flink to be able to
> >> interact
> >>> with the new version of Kafka. And more importantly if one can use an
> >>> abstract notion of microservice here?
> >>>
> >>> Dr Mich Talebzadeh
> >>>
> >>>
> >>>
> >>> LinkedIn *
> >>
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >>> <
> >>
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >>> *
> >>>
> >>>
> >>>
> >>> http://talebzadehmich.wordpress.com
> >>>
> >>>
> >>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
> any
> >>> loss, damage or destruction of data or any other property which may
> arise
> >>> from relying on this email's technical content is explicitly
> disclaimed.
> >>> The author will in no case be liable for any monetary damages arising
> >> from
> >>> such loss, damage or destruction.
> >>
>

Re: Real time streaming as a microservice

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks Martin.

From an implementation point of view, do we need to introduce Docker for
each microservice? In other words, does it have to be artefact --> container
--> Docker for this to be a true microservice, with all these microservices
communicating through a service registry?

Also, if we wanted to move from a monolithic classic design with Streaming
Ingestion (ZooKeeper, Kafka) --> Processing engine (Spark Streaming, Flink)
--> Real time dashboard (anything built on something like D3) to
microservices, what would that entail? One option would be to have three
principal microservices (each with sub-services), one for each component?
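
To make that decomposition concrete, a hedged sketch of the middle service only: a
Flink 1.5 job that consumes from a hypothetical "ingest" topic and publishes results
to a "dashboard" topic for the UI service to pick up (topic names, endpoint and the
map step are all illustrative):

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer011, FlinkKafkaProducer011}

object ProcessingEngine {
  def main(args: Array[String]): Unit = {
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "localhost:9092") // assumed endpoint
    properties.setProperty("group.id", "processing-engine")

    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // upstream microservice boundary: the ingestion topic
    val events = env.addSource(
      new FlinkKafkaConsumer011[String]("ingest", new SimpleStringSchema(), properties))

    // stand-in for the real computation
    val results = events.map(e => s"processed: $e")

    // downstream microservice boundary: the dashboard topic
    results.addSink(new FlinkKafkaProducer011[String](
      "localhost:9092", "dashboard", new SimpleStringSchema()))

    env.execute("processing-engine")
  }
}

The three services then share only topic names and message formats, not code, which
is the property that lets each be upgraded in isolation.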

Regards,

Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 8 Jul 2018 at 13:58, Martin Gainty <mg...@hotmail.com> wrote:

>
>
> initial work under using Zookeeper as a Microservices container is here
>
> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
>
> ZooKeeper for Microservice Registration and Discovery ...<
> http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery
> >
> planet.jboss.org
> In a microservice world, multiple services are typically distributed in a
> PaaS environment. Immutable infrastructure, such as those provided by
> containers or immutable VM images. Services may scale up and down based
> upon certain pre-defined metrics. Exact address of the service may not be
> known ...
>
> once your Zookeeper Microservices container is operational
>
> you would need to 'tweak' kafka to extend and implement classes/interfaces
> to become
> a true microservices component..this may help
>
>
> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
> [http://blog.arungupta.me/wp-content/uploads/2015/06/javaee-monolithic.png
> ]<
> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
> >
>
> Monolithic to Microservices Refactoring for Java EE ...<
> http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
> >
> blog.arungupta.me
> Have you ever wondered what does it take to refactor an existing Java EE
> monolithic application to a microservices-based one? This blog explains how
> a trivial shopping cart example was converted to microservices-based
> application, and what are some of the concerns around it.
>
>
>
> let me know if i can help out
> Martin
>
>
> ________________________________
> From: Jörn Franke <jo...@gmail.com>
> Sent: Sunday, July 8, 2018 6:18 AM
> To: users@kafka.apache.org
> Cc: user@flink.apache.org
> Subject: Re: Real time streaming as a microservice
>
> Yes or Kafka will need it ...
> As soon as your orchestrate different microservices this will happen.
>
>
>
> > On 8. Jul 2018, at 11:33, Mich Talebzadeh <mi...@gmail.com>
> wrote:
> >
> > Thanks Jorn.
> >
> > So I gather as you correctly suggested, microservices do provide value in
> > terms of modularisation. However, there will always "inevitably" be
> > scenarios where the receiving artefact say Flink needs communication
> > protocol changes?
> >
> > thanks
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn *
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > <
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >*
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
> >
> >
> >
> >
> >> On Sun, 8 Jul 2018 at 10:25, Jörn Franke <jo...@gmail.com> wrote:
> >>
> >> That they are loosely coupled does not mean they are independent. For
> >> instance, you would not be able to replace Kafka with zeromq in your
> >> scenario. Unfortunately also Kafka sometimes needs to introduce breaking
> >> changes and the dependent application needs to upgrade.
> >> You will not be able to avoid these scenarios in the future (this is
> only
> >> possible if micro services don’t communicate with each other or if they
> >> would never need to change their communication protocol - pretty
> impossible
> >> ). However there are ways of course to reduce it, eg kafka could reduce
> the
> >> number of breaking changes or you can develop a very lightweight
> >> microservice that is very easy to change and that only deals with the
> >> broker integration and your application etc.
> >>
> >>> On 8. Jul 2018, at 10:59, Mich Talebzadeh <mi...@gmail.com>
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I have created the Kafka messaging architecture as a microservice that
> >>> feeds both Spark streaming and Flink. Spark streaming uses
> micro-batches
> >>> meaning "collect and process data" and flink as an event driven
> >>> architecture (a stateful application that reacts to incoming events by
> >>> triggering computations etc.
> >>>
> >>> According to Wikipedia, A Microservice is a  technique that structures
> an
> >>> application as a collection of loosely coupled services. In a
> >> microservices
> >>> architecture, services are fine-grained and the protocols are
> >> lightweight.
> >>>
> >>> Ok for streaming data among other things I have to create and configure
> >>> topic (or topics), design a robust zookeeper ensemble and create Kafka
> >>> brokers with scalability and resiliency. Then I can offer the streaming
> >> as
> >>> a microservice to subscribers among them Spark and Flink. I can upgrade
> >>> this microservice component in isolation without impacting either Spark
> >> or
> >>> Flink.
> >>>
> >>> The problem I face here is the dependency on Flink etc on the jar files
> >>> specific for the version of Kafka deployed. For example
> kafka_2.12-1.1.0
> >> is
> >>> built on Scala 2.12 and Kafka version 1.1.0. To make this work in Flink
> >> 1.5
> >>> application, I need  to use the correct dependency in sbt build. For
> >>> example:
> >>> libraryDependencies += "org.apache.flink" %%
> >> "flink-connector-kafka-0.11" %
> >>> "1.5.0"
> >>> libraryDependencies += "org.apache.flink" %%
> >> "flink-connector-kafka-base" %
> >>> "1.5.0"
> >>> libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.5.0"
> >>> libraryDependencies += "org.apache.kafka" % "kafka-clients" %
> "0.11.0.0"
> >>> libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" %
> >>> "1.5.0"
> >>> libraryDependencies += "org.apache.kafka" %% "kafka" % "0.11.0.0"
> >>>
> >>> and the Scala code needs to change:
> >>>
> >>> import
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
> >>> …
> >>>   val stream = env
> >>>                .addSource(new FlinkKafkaConsumer011[String]("md", new
> >>> SimpleStringSchema(), properties))
> >>>
> >>> So in summary some changes need to be made to Flink to be able to
> >> interact
> >>> with the new version of Kafka. And more importantly if one can use an
> >>> abstract notion of microservice here?
> >>>
> >>> Dr Mich Talebzadeh
> >>>
> >>>
> >>>
> >>> LinkedIn *
> >>
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >>> <
> >>
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >>> *
> >>>
> >>>
> >>>
> >>> http://talebzadehmich.wordpress.com
> >>>
> >>>
> >>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
> any
> >>> loss, damage or destruction of data or any other property which may
> arise
> >>> from relying on this email's technical content is explicitly
> disclaimed.
> >>> The author will in no case be liable for any monetary damages arising
> >> from
> >>> such loss, damage or destruction.
> >>
>

Re: Real time streaming as a microservice

Posted by Martin Gainty <mg...@hotmail.com>.

Initial work on using ZooKeeper as a microservices container is here:
http://planet.jboss.org/post/zookeeper_for_microservice_registration_and_discovery


once your Zookeeper microservices container is operational,
you would need to 'tweak' kafka to extend and implement classes/interfaces so that it
becomes a true microservices component...this may help:

http://blog.arungupta.me/monolithic-microservices-refactoring-javaee-applications/
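
To make the registration/discovery idea above concrete, a minimal sketch using
Apache Curator against a ZooKeeper ensemble. The ensemble address, znode layout and
endpoint payload are all assumptions for illustration:

import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry
import org.apache.zookeeper.CreateMode
import scala.collection.JavaConverters._

object RegistryDemo extends App {
  // hypothetical ZooKeeper ensemble
  val client = CuratorFrameworkFactory.newClient(
    "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3))
  client.start()

  // register this instance: an ephemeral znode disappears if the service dies
  client.create()
    .creatingParentsIfNeeded()
    .withMode(CreateMode.EPHEMERAL)
    .forPath("/services/streaming/instance-1", "broker1:9092".getBytes("UTF-8"))

  // discovery: list the live instances and read their advertised endpoints
  client.getChildren.forPath("/services/streaming").asScala.foreach { name =>
    val endpoint = new String(client.getData.forPath(s"/services/streaming/$name"), "UTF-8")
    println(s"$name -> $endpoint")
  }
}

Because the znode is ephemeral, a subscriber re-listing /services/streaming always
sees only the instances that are actually alive.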



let me know if i can help out
Martin


________________________________
From: Jörn Franke <jo...@gmail.com>
Sent: Sunday, July 8, 2018 6:18 AM
To: users@kafka.apache.org
Cc: user@flink.apache.org
Subject: Re: Real time streaming as a microservice

Yes or Kafka will need it ...
As soon as your orchestrate different microservices this will happen.



> On 8. Jul 2018, at 11:33, Mich Talebzadeh <mi...@gmail.com> wrote:
>
> Thanks Jorn.
>
> So I gather as you correctly suggested, microservices do provide value in
> terms of modularisation. However, there will always "inevitably" be
> scenarios where the receiving artefact say Flink needs communication
> protocol changes?
>
> thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
>> On Sun, 8 Jul 2018 at 10:25, Jörn Franke <jo...@gmail.com> wrote:
>>
>> That they are loosely coupled does not mean they are independent. For
>> instance, you would not be able to replace Kafka with zeromq in your
>> scenario. Unfortunately also Kafka sometimes needs to introduce breaking
>> changes and the dependent application needs to upgrade.
>> You will not be able to avoid these scenarios in the future (this is only
>> possible if micro services don’t communicate with each other or if they
>> would never need to change their communication protocol - pretty impossible
>> ). However there are ways of course to reduce it, eg kafka could reduce the
>> number of breaking changes or you can develop a very lightweight
>> microservice that is very easy to change and that only deals with the
>> broker integration and your application etc.
>>
>>> On 8. Jul 2018, at 10:59, Mich Talebzadeh <mi...@gmail.com>
>> wrote:
>>>
>>> Hi,
>>>
>>> I have created the Kafka messaging architecture as a microservice that
>>> feeds both Spark streaming and Flink. Spark streaming uses micro-batches
>>> meaning "collect and process data" and flink as an event driven
>>> architecture (a stateful application that reacts to incoming events by
>>> triggering computations etc.
>>>
>>> According to Wikipedia, A Microservice is a  technique that structures an
>>> application as a collection of loosely coupled services. In a
>> microservices
>>> architecture, services are fine-grained and the protocols are
>> lightweight.
>>>
>>> Ok for streaming data among other things I have to create and configure
>>> topic (or topics), design a robust zookeeper ensemble and create Kafka
>>> brokers with scalability and resiliency. Then I can offer the streaming
>> as
>>> a microservice to subscribers among them Spark and Flink. I can upgrade
>>> this microservice component in isolation without impacting either Spark
>> or
>>> Flink.
>>>
>>> The problem I face here is the dependency on Flink etc on the jar files
>>> specific for the version of Kafka deployed. For example kafka_2.12-1.1.0
>> is
>>> built on Scala 2.12 and Kafka version 1.1.0. To make this work in Flink
>> 1.5
>>> application, I need  to use the correct dependency in sbt build. For
>>> example:
>>> libraryDependencies += "org.apache.flink" %%
>> "flink-connector-kafka-0.11" %
>>> "1.5.0"
>>> libraryDependencies += "org.apache.flink" %%
>> "flink-connector-kafka-base" %
>>> "1.5.0"
>>> libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.5.0"
>>> libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.11.0.0"
>>> libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" %
>>> "1.5.0"
>>> libraryDependencies += "org.apache.kafka" %% "kafka" % "0.11.0.0"
>>>
>>> and the Scala code needs to change:
>>>
>>> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
>>> …
>>>   val stream = env
>>>                .addSource(new FlinkKafkaConsumer011[String]("md", new
>>> SimpleStringSchema(), properties))
>>>
>>> So in summary some changes need to be made to Flink to be able to
>> interact
>>> with the new version of Kafka. And more importantly if one can use an
>>> abstract notion of microservice here?
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn *
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> <
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> *
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
>>> loss, damage or destruction of data or any other property which may arise
>>> from relying on this email's technical content is explicitly disclaimed.
>>> The author will in no case be liable for any monetary damages arising
>> from
>>> such loss, damage or destruction.
>>

Re: Real time streaming as a microservice

Posted by Jörn Franke <jo...@gmail.com>.
Yes, or Kafka will need it ...
As soon as you orchestrate different microservices this will happen.



> On 8. Jul 2018, at 11:33, Mich Talebzadeh <mi...@gmail.com> wrote:
> 
> Thanks Jorn.
> 
> So I gather as you correctly suggested, microservices do provide value in
> terms of modularisation. However, there will always "inevitably" be
> scenarios where the receiving artefact say Flink needs communication
> protocol changes?
> 
> thanks
> 
> Dr Mich Talebzadeh
> 
> 
> 
> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
> 
> 
> 
> http://talebzadehmich.wordpress.com
> 
> 
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
> 
> 
> 
> 
>> On Sun, 8 Jul 2018 at 10:25, Jörn Franke <jo...@gmail.com> wrote:
>> 
>> That they are loosely coupled does not mean they are independent. For
>> instance, you would not be able to replace Kafka with zeromq in your
>> scenario. Unfortunately also Kafka sometimes needs to introduce breaking
>> changes and the dependent application needs to upgrade.
>> You will not be able to avoid these scenarios in the future (this is only
>> possible if micro services don’t communicate with each other or if they
>> would never need to change their communication protocol - pretty impossible
>> ). However there are ways of course to reduce it, eg kafka could reduce the
>> number of breaking changes or you can develop a very lightweight
>> microservice that is very easy to change and that only deals with the
>> broker integration and your application etc.
>> 
>>> On 8. Jul 2018, at 10:59, Mich Talebzadeh <mi...@gmail.com>
>> wrote:
>>> 
>>> Hi,
>>> 
>>> I have created the Kafka messaging architecture as a microservice that
>>> feeds both Spark streaming and Flink. Spark streaming uses micro-batches
>>> meaning "collect and process data" and flink as an event driven
>>> architecture (a stateful application that reacts to incoming events by
>>> triggering computations etc.
>>> 
>>> According to Wikipedia, A Microservice is a  technique that structures an
>>> application as a collection of loosely coupled services. In a
>> microservices
>>> architecture, services are fine-grained and the protocols are
>> lightweight.
>>> 
>>> Ok for streaming data among other things I have to create and configure
>>> topic (or topics), design a robust zookeeper ensemble and create Kafka
>>> brokers with scalability and resiliency. Then I can offer the streaming
>> as
>>> a microservice to subscribers among them Spark and Flink. I can upgrade
>>> this microservice component in isolation without impacting either Spark
>> or
>>> Flink.
>>> 
>>> The problem I face here is the dependency on Flink etc on the jar files
>>> specific for the version of Kafka deployed. For example kafka_2.12-1.1.0
>> is
>>> built on Scala 2.12 and Kafka version 1.1.0. To make this work in Flink
>> 1.5
>>> application, I need  to use the correct dependency in sbt build. For
>>> example:
>>> libraryDependencies += "org.apache.flink" %%
>> "flink-connector-kafka-0.11" %
>>> "1.5.0"
>>> libraryDependencies += "org.apache.flink" %%
>> "flink-connector-kafka-base" %
>>> "1.5.0"
>>> libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.5.0"
>>> libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.11.0.0"
>>> libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" %
>>> "1.5.0"
>>> libraryDependencies += "org.apache.kafka" %% "kafka" % "0.11.0.0"
>>> 
>>> and the Scala code needs to change:
>>> 
>>> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
>>> …
>>>   val stream = env
>>>                .addSource(new FlinkKafkaConsumer011[String]("md", new
>>> SimpleStringSchema(), properties))
>>> 
>>> So in summary some changes need to be made to Flink to be able to
>> interact
>>> with the new version of Kafka. And more importantly if one can use an
>>> abstract notion of microservice here?
>>> 
>>> Dr Mich Talebzadeh
>>> 
>>> 
>>> 
>>> LinkedIn *
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> <
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>> *
>>> 
>>> 
>>> 
>>> http://talebzadehmich.wordpress.com
>>> 
>>> 
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
>>> loss, damage or destruction of data or any other property which may arise
>>> from relying on this email's technical content is explicitly disclaimed.
>>> The author will in no case be liable for any monetary damages arising
>> from
>>> such loss, damage or destruction.
>> 

Re: Real time streaming as a microservice

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks Jorn.

So I gather, as you correctly suggested, microservices do provide value in
terms of modularisation. However, there will inevitably be scenarios where
the receiving artefact, say Flink, needs communication protocol changes?
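
One way to contain such changes is to pin every broker-related version in a single
place in the sbt build, so that an upgrade touches one line each. A hedged sketch
(the version values are only illustrative):

// build.sbt - single point of change for broker-related versions (illustrative)
val flinkVersion          = "1.5.0"
val kafkaConnectorVersion = "0.11"    // the Kafka protocol the Flink connector speaks
val kafkaClientsVersion   = "0.11.0.0"

libraryDependencies ++= Seq(
  "org.apache.flink" %% s"flink-connector-kafka-$kafkaConnectorVersion" % flinkVersion,
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion,
  "org.apache.kafka" %  "kafka-clients" % kafkaClientsVersion
)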

thanks

Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 8 Jul 2018 at 10:25, Jörn Franke <jo...@gmail.com> wrote:

> That they are loosely coupled does not mean they are independent. For
> instance, you would not be able to replace Kafka with zeromq in your
> scenario. Unfortunately also Kafka sometimes needs to introduce breaking
> changes and the dependent application needs to upgrade.
> You will not be able to avoid these scenarios in the future (this is only
> possible if micro services don’t communicate with each other or if they
> would never need to change their communication protocol - pretty impossible
> ). However there are ways of course to reduce it, eg kafka could reduce the
> number of breaking changes or you can develop a very lightweight
> microservice that is very easy to change and that only deals with the
> broker integration and your application etc.
>
> > On 8. Jul 2018, at 10:59, Mich Talebzadeh <mi...@gmail.com>
> wrote:
> >
> > Hi,
> >
> > I have created the Kafka messaging architecture as a microservice that
> > feeds both Spark streaming and Flink. Spark streaming uses micro-batches
> > meaning "collect and process data" and flink as an event driven
> > architecture (a stateful application that reacts to incoming events by
> > triggering computations etc.
> >
> > According to Wikipedia, A Microservice is a  technique that structures an
> > application as a collection of loosely coupled services. In a
> microservices
> > architecture, services are fine-grained and the protocols are
> lightweight.
> >
> > Ok for streaming data among other things I have to create and configure
> > topic (or topics), design a robust zookeeper ensemble and create Kafka
> > brokers with scalability and resiliency. Then I can offer the streaming
> as
> > a microservice to subscribers among them Spark and Flink. I can upgrade
> > this microservice component in isolation without impacting either Spark
> or
> > Flink.
> >
> > The problem I face here is the dependency on Flink etc on the jar files
> > specific for the version of Kafka deployed. For example kafka_2.12-1.1.0
> is
> > built on Scala 2.12 and Kafka version 1.1.0. To make this work in Flink
> 1.5
> > application, I need  to use the correct dependency in sbt build. For
> > example:
> > libraryDependencies += "org.apache.flink" %%
> "flink-connector-kafka-0.11" %
> > "1.5.0"
> > libraryDependencies += "org.apache.flink" %%
> "flink-connector-kafka-base" %
> > "1.5.0"
> > libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.5.0"
> > libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.11.0.0"
> > libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" %
> > "1.5.0"
> > libraryDependencies += "org.apache.kafka" %% "kafka" % "0.11.0.0"
> >
> > and the Scala code needs to change:
> >
> > import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
> > …
> >    val stream = env
> >                 .addSource(new FlinkKafkaConsumer011[String]("md", new
> > SimpleStringSchema(), properties))
> >
> > So in summary some changes need to be made to Flink to be able to
> interact
> > with the new version of Kafka. And more importantly if one can use an
> > abstract notion of microservice here?
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn *
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > <
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> >*
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
>

Re: Real time streaming as a microservice

Posted by Jörn Franke <jo...@gmail.com>.
That they are loosely coupled does not mean they are independent. For instance, you would not be able to replace Kafka with ZeroMQ in your scenario. Unfortunately, Kafka also sometimes needs to introduce breaking changes, and the dependent application then needs to upgrade.
You will not be able to avoid these scenarios in the future (this would only be possible if microservices didn't communicate with each other, or never needed to change their communication protocol - pretty impossible). However, there are of course ways to reduce the impact: e.g. Kafka could reduce the number of breaking changes, or you can develop a very lightweight microservice that is very easy to change and deals only with the integration between the broker and your application.
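
As a sketch of that last suggestion, the Kafka-specific pieces of a Flink 1.5 job can
be hidden behind a small trait so that a connector upgrade touches exactly one class.
The trait and class names here are made up for illustration:

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011

// the rest of the job depends only on this trait, never on a connector version
trait StreamSource {
  def source: SourceFunction[String]
}

// the single class to touch when the broker or its protocol version changes
class Kafka011Source(topic: String, properties: Properties) extends StreamSource {
  override def source: SourceFunction[String] =
    new FlinkKafkaConsumer011[String](topic, new SimpleStringSchema(), properties)
}

The job would then call env.addSource(new Kafka011Source("md", properties).source)
and never mention the connector version directly.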

> On 8. Jul 2018, at 10:59, Mich Talebzadeh <mi...@gmail.com> wrote:
> 
> Hi,
> 
> I have created the Kafka messaging architecture as a microservice that
> feeds both Spark streaming and Flink. Spark streaming uses micro-batches
> meaning "collect and process data" and flink as an event driven
> architecture (a stateful application that reacts to incoming events by
> triggering computations etc.
> 
> According to Wikipedia, A Microservice is a  technique that structures an
> application as a collection of loosely coupled services. In a microservices
> architecture, services are fine-grained and the protocols are lightweight.
> 
> Ok for streaming data among other things I have to create and configure
> topic (or topics), design a robust zookeeper ensemble and create Kafka
> brokers with scalability and resiliency. Then I can offer the streaming as
> a microservice to subscribers among them Spark and Flink. I can upgrade
> this microservice component in isolation without impacting either Spark or
> Flink.
> 
> The problem I face here is the dependency on Flink etc on the jar files
> specific for the version of Kafka deployed. For example kafka_2.12-1.1.0 is
> built on Scala 2.12 and Kafka version 1.1.0. To make this work in Flink 1.5
> application, I need  to use the correct dependency in sbt build. For
> example:
> libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-0.11" %
> "1.5.0"
> libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-base" %
> "1.5.0"
> libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.5.0"
> libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.11.0.0"
> libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" %
> "1.5.0"
> libraryDependencies += "org.apache.kafka" %% "kafka" % "0.11.0.0"
> 
> and the Scala code needs to change:
> 
> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
> …
>    val stream = env
>                 .addSource(new FlinkKafkaConsumer011[String]("md", new
> SimpleStringSchema(), properties))
> 
> So in summary some changes need to be made to Flink to be able to interact
> with the new version of Kafka. And more importantly if one can use an
> abstract notion of microservice here?
> 
> Dr Mich Talebzadeh
> 
> 
> 
> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
> 
> 
> 
> http://talebzadehmich.wordpress.com
> 
> 
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
