Posted to dev@hbase.apache.org by "vincent eiger (JIRA)" <ji...@apache.org> on 2016/12/19 15:16:59 UTC

[jira] [Created] (HBASE-17340) Workload management with HBase

vincent eiger created HBASE-17340:
-------------------------------------

             Summary: Workload management with HBase
                 Key: HBASE-17340
                 URL: https://issues.apache.org/jira/browse/HBASE-17340
             Project: HBase
          Issue Type: Brainstorming
          Components: Client
            Reporter: vincent eiger


Hi guys, I am a project manager at a bank where we are working on a POC. We are testing the first stage of our data lake with:
- a first layer as a landing area: files are stored in HDFS without transformation (currently 6 files);
- a second layer in HBase, where a single Spark job loads the data into one HBase table.
We know that with the Capacity Scheduler in YARN we can guarantee resource consumption (memory, CPU) for Spark jobs.
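
For the batch side, here is a minimal Java sketch of what we have in mind, assuming Spark on YARN and a hypothetical queue named "batch" (the queue and its capacity cap would be defined in capacity-scheduler.xml; all sizing values below are illustrative, not a recommendation):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class BatchLoadJob {
    public static void main(String[] args) {
        // "batch" is a hypothetical YARN queue; it would be capped in
        // capacity-scheduler.xml so the bulk load cannot starve the
        // capacity reserved for the HBase-facing application.
        SparkConf conf = new SparkConf()
                .setAppName("hbase-bulk-load")
                .set("spark.yarn.queue", "batch")
                // executor sizing is illustrative only
                .set("spark.executor.memory", "4g")
                .set("spark.executor.instances", "10");

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... read the landing-area files from HDFS and write them to the HBase table ...
        sc.stop();
    }
}
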
My question is:
- how can I guarantee resources for my HBase client? I mean one application that queries my table with the HBase native client while, at the same time, other applications run batch Spark jobs. I want to ensure at all times at least 30% of capacity for my application (the one querying my HBase table), so as to guarantee, for instance, 50 concurrent requests with less than 200 ms latency per user (a client-side sketch follows below).
Is Slider the right answer?
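
To make the latency budget concrete, here is a minimal sketch of the read path with the HBase 1.x Java client; the table name, row key, and timeout values are placeholders, and I understand that client-side timeouts only bound how long each request waits, they do not by themselves reserve server-side capacity:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class LatencyBoundedReader {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Fail fast instead of queueing behind slow RegionServers:
        // cap a single RPC and the overall operation at the 200 ms budget.
        conf.setInt("hbase.rpc.timeout", 200);
        conf.setInt("hbase.client.operation.timeout", 200);
        conf.setInt("hbase.client.retries.number", 1);

        // "my_table" and the row key are placeholders for the POC table.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("my_table"))) {
            Result result = table.get(new Get(Bytes.toBytes("some-row-key")));
            System.out.println("cells returned: " + result.size());
        }
    }
}
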
Thanks a lot for your answers,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)