Posted to dev@hive.apache.org by Sahil Takiar <ta...@gmail.com> on 2018/05/01 14:19:48 UTC

Re: Review Request 66290: HIVE-14388 : Add number of rows inserted message after insert command in Beeline

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66290/#review202193
-----------------------------------------------------------


Fix it, then Ship it!




One minor comment, otherwise LGTM.


jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java
Lines 713 (patched)
<https://reviews.apache.org/r/66290/#comment283931>

    Should we modify Beeline so that if it gets a value of `-1` it says something like "Unknown number of rows affected", or just doesn't print anything at all? Right now it would print "No rows affected..." if an overflow happens, which would be odd.
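    For illustration only, a rough sketch of the kind of handling I mean; the class and method names below are made up and are not the actual Beeline/HiveStatement code:

        // Sketch only: names are hypothetical, not the real Beeline code.
        public class RowCountMessage {
          public static String format(long numRows, double seconds) {
            if (numRows < 0) {
              // -1 is the "count unknown" sentinel (e.g. long overflow upstream),
              // so avoid printing the misleading "No rows affected" text.
              return String.format("Unknown number of rows affected (%.3f seconds)", seconds);
            }
            if (numRows == 0) {
              return String.format("No rows affected (%.3f seconds)", seconds);
            }
            return String.format("%d row%s affected (%.3f seconds)",
                numRows, numRows == 1 ? "" : "s", seconds);
          }

          public static void main(String[] args) {
            System.out.println(format(-1, 26.068));  // Unknown number of rows affected (26.068 seconds)
            System.out.println(format(42, 26.068));  // 42 rows affected (26.068 seconds)
          }
        }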


- Sahil Takiar


On April 27, 2018, 11:05 p.m., Bharathkrishna Guruvayoor Murali wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66290/
> -----------------------------------------------------------
> 
> (Updated April 27, 2018, 11:05 p.m.)
> 
> 
> Review request for hive, Sahil Takiar and Vihang Karajgaonkar.
> 
> 
> Bugs: HIVE-14388
>     https://issues.apache.org/jira/browse/HIVE-14388
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> Currently, when you run an insert command in Beeline, it returns a message saying "No rows affected ..."
> A better and more intuitive message would be "xxx rows inserted (26.068 seconds)".
> 
> Added the numRows parameter as part of QueryState.
> Also added numRows to the response so that it can be displayed in Beeline.
> 
> The count is obtained in FileSinkOperator and set in statsMap, but only when the operator writes table-specific rows for the particular operation. This way we count only the rows inserted into the table and avoid counting non-table-specific file-sink operations that happen during query execution.
> 
> 
> Diffs
> -----
> 
>   jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java 06542cee02e5dc4696f2621bb45cc4f24c67dfda 
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 41ad002abf3d2a6969ef0d1d48f7db22e096bb47 
>   ql/src/java/org/apache/hadoop/hive/ql/MapRedStats.java cf9c2273159c0d779ea90ad029613678fb0967a6 
>   ql/src/java/org/apache/hadoop/hive/ql/QueryState.java 706c9ffa48b9c3b4a6fdaae78bab1d39c3d0efda 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java c084fa054cb771bfdb033d244935713e3c7eb874 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java fcdc9967f12a454a9d3f31031e2261f264479118 
>   service-rpc/if/TCLIService.thrift 30f8af7f3e6e0598b410498782900ac27971aef0 
>   service-rpc/src/gen/thrift/gen-cpp/TCLIService_types.h 4321ad6d3c966d30f7a69552f91804cf2f1ba6c4 
>   service-rpc/src/gen/thrift/gen-cpp/TCLIService_types.cpp b2b62c71492b844f4439367364c5c81aa62f3908 
>   service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TGetOperationStatusResp.java 15e8220eb3eb12b72c7b64029410dced33bc0d72 
>   service-rpc/src/gen/thrift/gen-php/Types.php abb7c1ff3a2c8b72dc97689758266b675880e32b 
>   service-rpc/src/gen/thrift/gen-py/TCLIService/ttypes.py 0f8fd0745be0f4ed9e96b7bbe0f092d03649bcdf 
>   service-rpc/src/gen/thrift/gen-rb/t_c_l_i_service_types.rb 60183dae9e9927bd09a9676e49eeb4aea2401737 
>   service/src/java/org/apache/hive/service/cli/CLIService.java c9914ba9bf8653cbcbca7d6612e98a64058c0fcc 
>   service/src/java/org/apache/hive/service/cli/OperationStatus.java 52cc3ae4f26b990b3e4edb52d9de85b3cc25f269 
>   service/src/java/org/apache/hive/service/cli/operation/Operation.java 3706c72abc77ac8bd77947cc1c5d084ddf965e9f 
>   service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java c64c99120ad21ee98af81ec6659a2722e3e1d1c7 
> 
> 
> Diff: https://reviews.apache.org/r/66290/diff/5/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Bharathkrishna Guruvayoor Murali
> 
>
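
For anyone skimming the thread, below is a rough, self-contained sketch of the flow described in the quoted request: count rows in the table-directed sink, carry the total through the query state, and surface it to the client. The class and field names here are illustrative only and do not match the actual patch.

import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: these stand-in classes mirror the described flow;
// they are not the real FileSinkOperator/QueryState/HiveStatement code.
public class RowCountFlowSketch {

  // Stand-in for a table-directed file sink that counts the rows it writes
  // and publishes the total into its stats, the way the description says
  // FileSinkOperator populates statsMap.
  static class TableSink {
    private final AtomicLong rowsWritten = new AtomicLong();

    void process(Object row) {
      // ... write the row to the table's files ...
      rowsWritten.incrementAndGet();
    }

    long publishedRowCount() {
      // Only table-directed sinks report a count, so temporary/intermediate
      // file sinks created during query execution do not inflate it.
      return rowsWritten.get();
    }
  }

  // Stand-in for the query-level state that carries the aggregated count
  // into the operation status returned to the client.
  static class QueryStateSketch {
    private long numModifiedRows = -1;  // -1 means "unknown"

    void setNumModifiedRows(long n) { numModifiedRows = n; }
    long getNumModifiedRows() { return numModifiedRows; }
  }

  public static void main(String[] args) {
    TableSink sink = new TableSink();
    for (int i = 0; i < 3; i++) {
      sink.process("row-" + i);
    }

    QueryStateSketch state = new QueryStateSketch();
    state.setNumModifiedRows(sink.publishedRowCount());

    // The client side (Beeline via HiveStatement) would read the count from
    // the operation status response and print something like:
    System.out.printf("%d rows inserted (%.3f seconds)%n",
        state.getNumModifiedRows(), 26.068);
  }
}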