Posted to issues@hive.apache.org by "bounkong khamphousone (JIRA)" <ji...@apache.org> on 2018/06/06 08:04:00 UTC

[jira] [Commented] (HIVE-12222) Define port range in property for RPCServer

    [ https://issues.apache.org/jira/browse/HIVE-12222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502954#comment-16502954 ] 

bounkong khamphousone commented on HIVE-12222:
----------------------------------------------

Issue has been created in https://issues.apache.org/jira/browse/HIVE-19814. Sorry for not being reactive; I didn't receive any notifications. Hope this will be fixed in 3.0.0 at least :).

> Define port range in property for RPCServer
> -------------------------------------------
>
>                 Key: HIVE-12222
>                 URL: https://issues.apache.org/jira/browse/HIVE-12222
>             Project: Hive
>          Issue Type: Improvement
>          Components: CLI, Spark
>    Affects Versions: 1.2.1
>         Environment: Apache Hadoop 2.7.0
> Apache Hive 1.2.1
> Apache Spark 1.5.1
>            Reporter: Andrew Lee
>            Assignee: Aihua Xu
>            Priority: Major
>              Labels: TODOC2.2
>             Fix For: 2.3.0
>
>         Attachments: HIVE-12222.1.patch, HIVE-12222.2.patch, HIVE-12222.3.patch
>
>
> Creating this JIRA after discussing with Xuefu on the dev mailing list. I would need some help to review and update the fields in this JIRA ticket, thanks.
> I noticed that in
> ./spark-client/src/main/java/org/apache/hive/spark/client/rpc/RpcServer.java
> the port number is set to 0, which means a random (ephemeral) port is assigned every time the RPC Server is created to talk to Spark in the same session.
> Because of this, it is hard to configure a firewall between the HiveCLI RPC Server and Spark, since the port numbers are unpredictable. In other words, users need to open the entire port range
> from Data Node => HiveCLI (edge node). A small standalone sketch after the snippet below illustrates the ephemeral-port behavior.
> {code}
>  this.channel = new ServerBootstrap()
>       .group(group)
>       .channel(NioServerSocketChannel.class)
>       .childHandler(new ChannelInitializer<SocketChannel>() {
>           @Override
>           public void initChannel(SocketChannel ch) throws Exception {
>             SaslServerHandler saslHandler = new SaslServerHandler(config);
>             final Rpc newRpc = Rpc.createServer(saslHandler, config, ch, group);
>             saslHandler.rpc = newRpc;
>             Runnable cancelTask = new Runnable() {
>                 @Override
>                 public void run() {
>                   LOG.warn("Timed out waiting for hello from client.");
>                   newRpc.close();
>                 }
>             };
>             saslHandler.cancelTask = group.schedule(cancelTask,
>                 RpcServer.this.config.getServerConnectTimeoutMs(),
>                 TimeUnit.MILLISECONDS);
>           }
>       })
> {code}
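> For illustration, binding to port 0 in plain Java shows the same ephemeral-port behavior (a minimal standalone sketch, not Hive code; the class name is made up for the example):
> {code}
> import java.net.ServerSocket;
>
> public class EphemeralPortDemo {
>   public static void main(String[] args) throws Exception {
>     // Binding to port 0 asks the OS for any free ephemeral port,
>     // so every run prints a different, unpredictable port number.
>     try (ServerSocket socket = new ServerSocket(0)) {
>       System.out.println("Bound to port " + socket.getLocalPort());
>     }
>   }
> }
> {code}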
> Two main reasons:
> - Most users (in what I see and encounter) use HiveCLI as a command-line tool, and in order to use it, they need to log in to the edge node (via SSH). Now, here comes the interesting part.
> This may or may not be universal, but it is what I observe from time to time: users abuse the resources on that edge node (increasing HADOOP_HEAPSIZE, dumping output to local disk, running huge Python workflows, etc.), which can cause the HS2 process to run into OOME, choke and die, and hit various other resource issues, including login problems.
> - Analysts connect to Hive via HS2 + ODBC, so HS2 needs to be highly available. It therefore makes sense to run it on a gateway or service node, separated from the HiveCLI.
> The logs end up in a different location, and monitoring and auditing are easier when HS2 runs under a daemon user account, so we don't want users to run HiveCLI where HS2 is running.
> It's better to isolate the resources this way to avoid memory, file handle, and disk space issues.
> From a security standpoint:
> - Since users can log in to the edge node (via SSH), security on the edge node needs to be fortified and enhanced. That is where the firewall rules and auditing come in.
> - Regulation/compliance auditing is another requirement to monitor all traffic; specifying and locking down the ports makes this easier, since we can focus
> on a known range to monitor and audit. A hedged configuration sketch of the port-range property follows below.
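> For reference, the fix shipped in 2.3.0 makes the RPC server port(s) configurable. A hedged hive-site.xml sketch, assuming the property name hive.spark.client.rpc.server.port from the patch and an example range of 49152-49200:
> {code}
> <!-- Property name assumed from the HIVE-12222 patch; the range is only an example. -->
> <property>
>   <name>hive.spark.client.rpc.server.port</name>
>   <value>49152-49200</value>
> </property>
> {code}
> With such a range in place, firewall rules only need to allow that range between the Data Nodes and the edge node, instead of the whole ephemeral port range.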



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)