Posted to issues@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2015/11/05 07:43:28 UTC

[jira] [Comment Edited] (HBASE-12790) Support fairness across parallelized scans

    [ https://issues.apache.org/jira/browse/HBASE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991231#comment-14991231 ] 

stack edited comment on HBASE-12790 at 11/5/15 6:42 AM:
--------------------------------------------------------

bq. We will extend the groupid concept to all the client requests. That includes scan, gets, MutateRequest, MultiRequest, Bulkloadrequest etc.

Sorry. Wasn't party to the conversation, but this seems at first blush (until I hear more) like the wrong direction completely. Rather than making Scan "staccato", a notion that I think you could argue should be the default behavior when scanning (it's already sort of 'staccato' given it's going to be preading from HDFS), instead the codebase is to be littered with this arbitrary 'groupid' doohickey that, truth be told, is a phoenix thing (yeah, others could use it, but it's so exotic only phoenix will be able to make sense of it).

bq. This will allow every Put, Delete, Increment, Append, Get and Scan to have a grouping id. 

Stinks!! (Not directed at you [~ram_krish], but at whoever came up w/ this notion.)

Why do we need a groupid at all? Scan already has an identifier kept in lease accounting, etc. If you want grouping, arbitrate by Connection. If Connection is too coarse for a client, have the client create a new Connection per its notion of 'group', whatever that is.
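The Connection-per-group alternative could look roughly like the sketch below. This is a hedged illustration, not anything from the patch: the `GroupedConnections` class and its `opener` supplier are invented names, and the supplier stands in for the real `org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(conf)` call so the sketch stays self-contained.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch of the Connection-per-group idea from the comment above:
// instead of tagging every request with a groupid, the client keeps one shared
// connection per logical "group" and issues that group's scans through it.
// The Supplier stands in for ConnectionFactory.createConnection(conf) so the
// example compiles without hbase-client on the classpath.
public class GroupedConnections<C> {
    private final Map<String, C> byGroup = new ConcurrentHashMap<>();
    private final Supplier<C> opener;

    public GroupedConnections(Supplier<C> opener) {
        this.opener = opener;
    }

    /** One shared connection per group name, created lazily on first use. */
    public C forGroup(String group) {
        return byGroup.computeIfAbsent(group, g -> opener.get());
    }
}
```

A client would then fan each logical scan's parallel chunk scans out over the single connection returned by `forGroup(...)`, giving the server a per-group handle without any new request attribute.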

A roundrobin scheduler that lives in phoenix only, and that requires a rolling restart and dedication of the cluster to phoenix-only workloads, is one way to solve this, yeah, but it seems like you have a generic problem, and a generic solution is not that far away as I see it, making use of attributes already available to you. The generic approach could be less work and more generally beneficial.



> Support fairness across parallelized scans
> ------------------------------------------
>
>                 Key: HBASE-12790
>                 URL: https://issues.apache.org/jira/browse/HBASE-12790
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: James Taylor
>            Assignee: ramkrishna.s.vasudevan
>              Labels: Phoenix
>         Attachments: AbstractRoundRobinQueue.java, HBASE-12790.patch, HBASE-12790_1.patch, HBASE-12790_5.patch, HBASE-12790_callwrapper.patch, HBASE-12790_trunk_1.patch, PHOENIX_4.5.3-HBase-0.98-2317-SNAPSHOT.zip
>
>
> Some HBase clients parallelize the execution of a scan to reduce latency in getting back results. This can lead to starvation with a loaded cluster and interleaved scans, since the RPC queue is ordered and processed on a FIFO basis. For example, suppose there are two clients, A and B, that submit largish scans at the same time. Say each scan is broken down by the client into 100 scans (equal-depth chunks along the row key), and the 100 scans of client A are queued first, followed immediately by the 100 scans of client B. In this case, client B will be starved of any results until the scans for client A complete.
> One solution to this is to use the attached AbstractRoundRobinQueue instead of the standard FIFO queue. The queue to be used could be (maybe it already is) made configurable via a new config parameter. Using this queue would require the client to use the same identifier for all of the 100 parallel scans that represent a single logical scan from the client's point of view. With this information, the round robin queue would pick off tasks in a round robin fashion (instead of a strictly FIFO manner) to prevent starvation across interleaved parallelized scans.
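The round-robin dequeue the description outlines can be sketched as follows. This is a hedged, minimal illustration of the technique, not the attached AbstractRoundRobinQueue.java: the class and method names are invented, and real RPC queues would also need bounding, blocking, and priority handling.

```java
import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch: each task carries a caller-supplied id shared by all
// chunks of one logical scan; poll() rotates across ids so one caller's
// backlog of 100 queued chunk scans cannot starve another caller's.
public class RoundRobinSketch<T> {
    private final Map<String, Queue<T>> perGroup = new LinkedHashMap<>();
    private final Queue<String> rotation = new ArrayDeque<>();

    public synchronized void offer(String groupId, T task) {
        Queue<T> q = perGroup.get(groupId);
        if (q == null) {
            q = new ArrayDeque<>();
            perGroup.put(groupId, q);
            rotation.add(groupId); // a new group joins the end of the rotation
        }
        q.add(task);
    }

    /** Returns the next task, cycling group by group; null when empty. */
    public synchronized T poll() {
        String groupId = rotation.poll();
        if (groupId == null) {
            return null;
        }
        Queue<T> q = perGroup.get(groupId);
        T task = q.poll();
        if (q.isEmpty()) {
            perGroup.remove(groupId); // drained: drop group from the rotation
        } else {
            rotation.add(groupId);    // otherwise requeue at the back
        }
        return task;
    }
}
```

With two logical scans A and B each enqueuing chunks, the dequeue order interleaves them (A1, B1, A2, B2, ...) instead of draining all of A first as a FIFO queue would.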



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)