Posted to issues@hbase.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2018/01/19 03:45:00 UTC

[jira] [Commented] (HBASE-19818) Scan time limit not work if the filter always filter row key

    [ https://issues.apache.org/jira/browse/HBASE-19818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331701#comment-16331701 ] 

Hadoop QA commented on HBASE-19818:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m  4s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 24s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 58s{color} | {color:green} hbase-server: The patch generated 0 new + 253 unchanged - 16 fixed = 253 total (was 269) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m  4s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 18m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 44s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 26s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19818 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906736/HBASE-19818.master.002.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 932ca7d9cd70 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh |
| git revision | master / 8b520ce50d |
| maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | https://builds.apache.org/job/PreCommit-HBASE-Build/11123/artifact/patchprocess/patch-unit-hbase-server.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/11123/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11123/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Scan time limit not work if the filter always filter row key
> ------------------------------------------------------------
>
>                 Key: HBASE-19818
>                 URL: https://issues.apache.org/jira/browse/HBASE-19818
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 2.0.0-beta-2
>            Reporter: Guanghao Zhang
>            Assignee: Guanghao Zhang
>            Priority: Major
>         Attachments: HBASE-19818.master.002.patch
>
>
> [https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java]
> nextInternal() method.
> {code:java}
> // Check if rowkey filter wants to exclude this row. If so, loop to next.
> // Technically, if we hit limits before on this row, we don't need this call.
> if (filterRowKey(current)) {
>   incrementCountOfRowsFilteredMetric(scannerContext);
>   // early check, see HBASE-16296
>   if (isFilterDoneInternal()) {
>     return scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>   }
>   // Typically the count of rows scanned is incremented inside #populateResult. However,
>   // here we are filtering a row based purely on its row key, preventing us from calling
>   // #populateResult. Thus, perform the necessary increment here to rows scanned metric
>   incrementCountOfRowsScannedMetric(scannerContext);
>   boolean moreRows = nextRow(scannerContext, current);
>   if (!moreRows) {
>     return scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>   }
>   results.clear();
>   continue;
> }
>
> // Ok, we are good, let's try to get some results from the main heap.
> populateResult(results, this.storeHeap, scannerContext, current);
> if (scannerContext.checkAnyLimitReached(LimitScope.BETWEEN_CELLS)) {
>   if (hasFilterRow) {
>     throw new IncompatibleFilterException(
>         "Filter whose hasFilterRow() returns true is incompatible with scans that must "
>             + " stop mid-row because of a limit. ScannerContext:" + scannerContext);
>   }
>   return true;
> }
> {code}
> If filterRowKey always returns true, the loop never reaches checkAnyLimitReached. For the batch and size limits it is fine to skip the check, since nothing has been read. But for the time limit it is not right: if the filter always filters the row key, we can be stuck in this loop for a long time.
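> One possible direction, sketched here only for illustration (not necessarily the committed patch), is to also check the time limit inside the filterRowKey branch before looping to the next row. The sketch assumes ScannerContext exposes checkTimeLimit(LimitScope) and NextState.TIME_LIMIT_REACHED, as on current master:
> {code:java}
> if (filterRowKey(current)) {
>   incrementCountOfRowsFilteredMetric(scannerContext);
>   if (isFilterDoneInternal()) {
>     return scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>   }
>   incrementCountOfRowsScannedMetric(scannerContext);
>   boolean moreRows = nextRow(scannerContext, current);
>   if (!moreRows) {
>     return scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>   }
>   results.clear();
>   // Sketch: bail out with a time-limit state once the per-row time limit is exceeded,
>   // instead of spinning through filtered rows until the client RPC times out.
>   if (scannerContext.checkTimeLimit(LimitScope.BETWEEN_ROWS)) {
>     return scannerContext.setScannerState(NextState.TIME_LIMIT_REACHED).hasMoreValues();
>   }
>   continue;
> }
> {code}
> This way a scan that filters every row key still returns a heartbeat/partial result to the client once the time limit is hit, the same behavior the existing checkAnyLimitReached call provides on the normal path.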



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)