Posted to issues@hbase.apache.org by "Enis Soztutar (JIRA)" <ji...@apache.org> on 2017/01/27 01:26:24 UTC

[jira] [Comment Edited] (HBASE-14850) C++ client implementation

    [ https://issues.apache.org/jira/browse/HBASE-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15840799#comment-15840799 ] 

Enis Soztutar edited comment on HBASE-14850 at 1/27/17 1:25 AM:
----------------------------------------------------------------

I was reviewing HBASE-17465 and was going to leave some notes there, but let me put them here instead.

For those of you not following closely, the patch at HBASE-17465 extends the earlier work on the RpcClient / RpcChannel, using the underlying wangle libraries and the existing RPC client code. The current RpcClient code can locate a region via meta and do a single RPC to the located regionserver; however, it still lacks the retry, exception-handling, and timeout logic.
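
To make the missing piece concrete, here is a rough, hypothetical sketch (not the actual patch) of what a retrying caller adds on top of the existing single-attempt path, using folly Futures. {{SendGetRpc}} and {{GetWithRetries}} are made-up names; the real caller would also do backoff, operation-level deadlines, and exception classification.

{code:cpp}
// Hypothetical sketch only -- SendGetRpc / GetWithRetries are illustrative
// names, not the classes in the patch.
#include <chrono>
#include <iostream>
#include <stdexcept>
#include <string>

#include <folly/futures/Future.h>
#include <folly/init/Init.h>

// Stand-in for the existing single-attempt path (locate the region in meta,
// then issue one RPC to that regionserver). It always fails here so that the
// retry path below gets exercised.
folly::Future<std::string> SendGetRpc(const std::string& row) {
  return folly::makeFuture<std::string>(
      std::runtime_error("regionserver unreachable for row " + row));
}

// Rough shape of what a retrying caller adds on top: a per-attempt timeout
// plus bounded retries. (2017-era folly API; newer folly spells the error
// hook thenError instead of onError.)
folly::Future<std::string> GetWithRetries(const std::string& row,
                                          int retries_left) {
  return SendGetRpc(row)
      .within(std::chrono::seconds(2))  // fail this attempt if it hangs
      .onError([row, retries_left](const std::exception& e)
                   -> folly::Future<std::string> {
        if (retries_left <= 0) {
          return folly::makeFuture<std::string>(std::runtime_error(
              std::string("retries exhausted: ") + e.what()));
        }
        // A real caller would also back off between attempts.
        return GetWithRetries(row, retries_left - 1);
      });
}

int main(int argc, char** argv) {
  folly::init(&argc, &argv);  // sets up the singletons that within() uses
  auto result = GetWithRetries("row-1", /*retries_left=*/3)
                    .onError([](const std::exception& e) {
                      return std::string("request failed: ") + e.what();
                    });
  std::cout << result.get() << std::endl;
  return 0;
}
{code}

The point is only that the per-attempt timeout ({{within}}) and the bounded retry sit on top of, not inside, the existing RpcClient path.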

The approach we are taking in building the hbase-native client is to follow the async Java client closely in terms of implementation, both to reduce development time and to make the client more maintainable. In terms of the C++ client architecture, we are taking a layered approach where each C++ layer corresponds roughly to the following Java layer (see the small sketch after the table):
|| Layer || Java Async || C++ ||
| low level async socket   | netty  | wangle |
| thread pools, futures, buffers, etc | netty thread pools, futures, and bufs, plus Java 8 futures | folly Futures, IOBuf, wangle thread pools | 
| tcp connection management/pooling | AsyncRpcClient | connection-pool.cc, rpc-client.cc | 
| Rpc request / response | (netty-based) AsyncRpcChannel, AsyncServerResponseHandler | (wangle-based) pipeline.cc, client-handler.cc | 
| Rpc interface | PB-generated service stubs, HBaseRpcController | PB-generated stubs, rpc-controller.cc (and wangle-based request.cc, service.cc) | 
| Request / response conversion (Get -> GetRequest) | RequestConverter | request-converter.cc, response-converter.cc |
| Rpc retry, timeout, exception handling | RawAsyncTableImpl, AsyncRpcRetryingCaller, XXRequestCaller | async-rpc-retrying-caller.cc, async-rpc-retrying-caller-factory | 
| meta lookup | ZKAsyncRegistry, curator | location-cache.cc, zk C client|
| meta cache | MetaCache | location-cache.cc |
| Async Client interface (exposed) | AsyncConnection, AsyncTable  | <none for now, we are not exposing this yet> | 
| Sync client implementation over async interfaces | <non-existent yet; plans under way for a TableImpl on top of RawAsyncTable> | table.cc | 
| Sync Client Interface (exposed) | ConnectionFactory, Connection, Table, Configuration, etc | client.h, table.h, configuration.h  | 
| Operations API | Get, Put, Scan, Result, Cell | Get, Put, Scan, Cell| 
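
To make the top rows of the table concrete, here is a tiny, self-contained folly example (not from the patch) of the pieces that replace netty on the C++ side: {{folly::IOBuf}} in the role netty's ByteBuf plays, and folly Futures in the role of netty / Java 8 futures; wangle supplies the thread pools the callbacks would normally run on.

{code:cpp}
// Toy example of the building blocks in the first two rows above; it is not
// code from the patch.
#include <iostream>
#include <memory>
#include <string>

#include <folly/futures/Future.h>
#include <folly/io/IOBuf.h>

int main() {
  // folly::IOBuf plays roughly the role netty's ByteBuf plays on the Java side.
  std::string payload = "serialized-get-request";
  std::unique_ptr<folly::IOBuf> buf = folly::IOBuf::copyBuffer(payload);

  // folly Futures chain async steps much like netty / Java 8 futures do
  // (2017-era API; newer folly spells this thenValue).
  folly::makeFuture(std::move(buf))
      .then([](std::unique_ptr<folly::IOBuf> b) {
        return b->computeChainDataLength();
      })
      .then([](std::size_t n) {
        std::cout << "payload bytes: " << n << std::endl;
      })
      .get();  // everything above ran inline; real client code stays async
  return 0;
}
{code}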

So, in a sense, we are not reinventing the wheel, but using wangle / folly instead of netty on the C++ side, and building the client to be similar to the {{TableImpl -> AsyncTable -> RawAsyncTable -> AsyncConnection -> AsyncRpcClient -> AsyncRpcChannel -> Netty}} workflow on the Java side. Anyway, please feel free to take a look and review if you are interested. 
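
For the "sync client implementation over async interfaces" row, the idea is simply that the synchronous {{Table}} blocks on the future produced by the async path. A hypothetical sketch ({{AsyncGet}} / {{SyncGet}} are illustrative names, not the actual table.cc code):

{code:cpp}
// Hypothetical sketch of the sync-over-async idea; AsyncGet / SyncGet are
// illustrative names, not the actual table.cc code.
#include <chrono>
#include <iostream>
#include <string>

#include <folly/futures/Future.h>

// Stand-in for whatever the async layers below ultimately return.
folly::Future<std::string> AsyncGet(const std::string& row) {
  return folly::makeFuture("value-for-" + row);
}

// The synchronous facade just blocks on the async result, with a deadline:
// Future::get(Duration) waits for the value or throws FutureTimeout.
std::string SyncGet(const std::string& row, std::chrono::milliseconds timeout) {
  return AsyncGet(row).get(timeout);
}

int main() {
  std::cout << SyncGet("row-1", std::chrono::seconds(1)) << std::endl;
  return 0;
}
{code}

That keeps the retry / timeout logic in one (async) place, with the sync API as a thin blocking facade, which matches the direction the Java side is heading with TableImpl on top of RawAsyncTable.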



> C++ client implementation
> -------------------------
>
>                 Key: HBASE-14850
>                 URL: https://issues.apache.org/jira/browse/HBASE-14850
>             Project: HBase
>          Issue Type: Task
>            Reporter: Elliott Clark
>
> It's happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)