Posted to user@storm.apache.org by Javi Roman <jr...@gmail.com> on 2014/05/20 22:09:18 UTC

storm to HDFS and lambda architecture

Hi!

I've been thinking about Nathan Marz's lambda architecture with the
following components:

1. Kafka as message bus, the entry point of raw data.
2. Camus to dump data into HDFS (the batch layer).
3. And Storm to dump data into HBase (the speed layer).
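
To make the speed-layer path concrete, here is a rough sketch of what I
have in mind. It is only an illustration: it assumes Storm 0.9.x with the
storm-kafka module, and the ZooKeeper host, the topic name and the
EventParserBolt/HBaseWriterBolt bolts are placeholders for whatever the
real-time processing and HBase persistence would actually be.

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

public class SpeedLayerTopology {
    public static void main(String[] args) throws Exception {
        // Kafka is the entry point for raw data (component 1 above).
        SpoutConfig spoutConfig = new SpoutConfig(
                new ZkHosts("zookeeper1:2181"), // ZooKeeper quorum used by Kafka (placeholder)
                "raw-events",                   // Kafka topic (placeholder)
                "/kafka-spout",                 // ZK root where the spout stores its offsets
                "speed-layer");                 // spout/consumer id

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 2);

        // EventParserBolt and HBaseWriterBolt are placeholders for the actual
        // real-time processing and the bolt that writes results into HBase.
        builder.setBolt("parse", new EventParserBolt(), 4)
               .shuffleGrouping("kafka-spout");
        builder.setBolt("hbase-writer", new HBaseWriterBolt(), 2)
               .shuffleGrouping("parse");

        StormSubmitter.submitTopology("speed-layer", new Config(),
                builder.createTopology());
    }
}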

I guess this is the "classical" architecture (in theory). However, thinking
about the Storm-to-HDFS connector from P. Taylor: is dumping processed data
from Storm into HDFS a good idea, taking the lambda architecture into
account? Do you think it could slow down the speed layer? Or is the
Storm-to-HDFS connector meant for other use cases?
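
For reference, attaching the connector would look roughly like this, if I'm
reading the storm-hdfs README correctly. The HDFS URL, output path and the
rotation/sync settings are just example values on my side:

import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;

public class HdfsSink {
    public static HdfsBolt build() {
        // Flush to HDFS every 1000 tuples so the data becomes visible to readers.
        SyncPolicy syncPolicy = new CountSyncPolicy(1000);

        // Rotate output files once they reach 64 MB.
        FileRotationPolicy rotationPolicy =
                new FileSizeRotationPolicy(64.0f, Units.MB);

        // Pipe-delimited records under /lambda/master (example path).
        FileNameFormat fileNameFormat =
                new DefaultFileNameFormat().withPath("/lambda/master/");
        RecordFormat recordFormat =
                new DelimitedRecordFormat().withFieldDelimiter("|");

        return new HdfsBolt()
                .withFsUrl("hdfs://namenode:8020") // example NameNode URL
                .withFileNameFormat(fileNameFormat)
                .withRecordFormat(recordFormat)
                .withRotationPolicy(rotationPolicy)
                .withSyncPolicy(syncPolicy);
    }
}

The idea would be to hang it off the topology above as a side branch, e.g.
builder.setBolt("hdfs-writer", HdfsSink.build(), 2).shuffleGrouping("parse"),
so the HBase path keeps running on its own. My doubt is whether doing that
inside the speed layer is a good idea at all.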

Many thanks

--
Javi Roman

Re: storm to HDFS and lambda architecture

Posted by Javi Roman <jr...@gmail.com>.
Answering my own question after a deeper search on Google: the point is
that Storm-on-YARN is designed for use in the batch layer (on top of an
HDFS cluster), so the Storm-to-HDFS connector makes sense in that batch
layer. On the other hand, for Storm in the speed layer, using the HDFS
connector makes no sense.

I guess this architecture is similar to the Spark/Spark Streaming concept.

Does anybody have any comments on these thoughts?

--
Javi Roman


