Posted to users@nifi.apache.org by Joe Witt <jo...@gmail.com> on 2017/05/15 16:15:27 UTC

Re: Storage buffer separation in a multi-tenant cluster.

Kris,

Now that five months have passed on this thread, I suppose it is time
to reply :-)  Sorry about the delay.

To the core of the question: Apache NiFi today does not provide any
built-in or integrated quota management for the various tenants of a
given system.  Multi-tenancy today is largely oriented around
isolation from an authorization/entitlements perspective.  Thus far
we've not built in anything to restrict a given portion of the flow in
terms of CPU, memory, disk, or network usage, though it does seem
we're trending toward such controls being useful.  At the very least
we can better describe how to accomplish that effectively today and
close gaps where necessary.
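
For context, the disk controls NiFi does have today are instance-wide
rather than per-tenant, set through the repository properties in
nifi.properties.  A sketch (the property names are real; the paths and
values below are purely illustrative):

```
# nifi.properties (excerpt) -- these limits apply to the whole
# instance, not to an individual tenant or process group
nifi.content.repository.directory.default=./content_repository

# Stop retaining archived content once the content repository's
# disk partition is 50% used
nifi.content.repository.archive.max.usage.percentage=50%

# Age out archived content (used for replay/click-to-content)
# after 12 hours regardless of disk usage
nifi.content.repository.archive.max.retention.period=12 hours
```

So a single tenant filling the shared content repository is bounded
only by these global settings and per-connection backpressure
thresholds, not by any per-tenant quota.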

When we're talking about buffer/storage separation, is the concern
about isolating usage for the live flow, usage for retained objects
(for replay, click-to-content), or is it perhaps a security isolation
consideration?  In your idea, would the isolation be made available at
the process group level?

Certainly something we can have a good discussion about to determine
what makes sense to introduce.

Thanks
Joe

On Thu, Jan 19, 2017 at 1:33 PM, Kristopher Kane <kk...@gmail.com> wrote:
> I'm thinking that NiFi uses in the wild are project-oriented with regard to
> resources and not presented as an enterprise platform where the multi-tenant
> risks are of concern.
>
> On Thu, Jan 12, 2017 at 3:28 PM, Kristopher Kane <kk...@gmail.com>
> wrote:
>>
>> I work with a medium-sized Storm cluster that is used by many tenants.  As
>> the admins of the Storm cluster, we must be mindful of network and CPU I/O
>> and adjust manually based on usage.  Many of these Storm uses would be a
>> better fit with NiFi's inbuilt capabilities and ease of use, whilst leaving
>> the high-throughput work in Storm.  Storm works really well out of the box
>> with many (dozens of) separate users across hundreds of topologies.  We
>> simply add more nodes and don't have to worry much about load or users
>> walking over each other, since failure replay is always from Kafka.
>>
>> What isn't obvious to me is how local buffer storage is handled in a
>> multi-tenant NiFi cluster, and I am wondering whether others have patterns
>> out there to prevent one NiFi user from eating up the available disk and
>> thus downing other users' workflows.
>>
>> My initial thought is a management layer outside of NiFi that enforces
>> Linux filesystem quotas by user account.  Does NiFi have anything built in
>> for this type of preventive measure?
>>
>> Thanks,
>>
>> Kris
>
>
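
[Editor's note: the OS-level quota approach Kris floats above could be
sketched roughly as below, assuming NiFi's repositories live under
/data/nifi on an XFS mount with project quotas enabled (mounted with
-o prjquota); the project ID, paths, and limits are all illustrative,
and this is an external workaround rather than anything NiFi provides.]

```shell
# Sketch: cap the disk a NiFi instance's repositories can consume
# using XFS project quotas.  Requires root and an XFS filesystem
# mounted with the prjquota option; all values are illustrative.

# Define quota project 42, named "nifi", rooted at the repository path
echo "42:/data/nifi" >> /etc/projects
echo "nifi:42" >> /etc/projid

# Tag the directory tree as belonging to the project
xfs_quota -x -c 'project -s nifi' /data

# Set a hard block limit of 500 GiB for the project
xfs_quota -x -c 'limit -p bhard=500g nifi' /data

# Inspect current usage against the limit
xfs_quota -x -c 'report -p' /data
```

Note this still caps a whole NiFi instance (or, with one project per
instance directory, one instance per tenant), not an individual user
within a shared instance, which is the gap discussed in the thread.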