Posted to issues@phoenix.apache.org by GitBox <gi...@apache.org> on 2020/11/19 06:12:07 UTC

[GitHub] [phoenix] yanxinyi opened a new pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

yanxinyi opened a new pull request #975:
URL: https://github.com/apache/phoenix/pull/975


   …EW_TTL has expired


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #975:
URL: https://github.com/apache/phoenix/pull/975#issuecomment-733568019


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  13m 26s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   3m 55s |  phoenix-core in 4.x has 950 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   4m  3s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   7m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 11s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m  1s |  phoenix-core: The patch generated 526 new + 1110 unchanged - 5 fixed = 1636 total (was 1115)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  the patch passed  |
   | -1 :x: |  spotbugs  |   4m 10s |  phoenix-core generated 7 new + 950 unchanged - 0 fixed = 957 total (was 950)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 215m 47s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate ASF License warnings.  |
   |  |   | 255m 58s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Boxed value is unboxed and then immediately reboxed in org.apache.phoenix.mapreduce.PhoenixTTLDeleteJobMapper.deleteExpiredRows(PhoenixConnection, ViewInfoTracker, Configuration, Mapper$Context)  At PhoenixTTLDeleteJobMapper.java:[line 146] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.PhoenixTTLTool.parseArgs(String[])  At PhoenixTTLTool.java:[line 115] |
   |  |  Possible null pointer dereference of cmdLine in org.apache.phoenix.mapreduce.PhoenixTTLTool.parseOptions(String[]) on exception path  Dereferenced at PhoenixTTLTool.java:[line 184] |
   |  |  Integral value cast to double and then passed to Math.ceil in org.apache.phoenix.mapreduce.util.DefaultMultiViewSplitStrategy.generateSplits(List, Configuration)  At DefaultMultiViewSplitStrategy.java:[line 39] |
   |  |  Exception is caught when Exception is not thrown in org.apache.phoenix.mapreduce.util.DefaultPhoenixMultiViewListProvider.getPhoenixMultiViewList(Configuration)  At DefaultPhoenixMultiViewListProvider.java:[line 141] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 451] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 481] |
   | Failed junit tests | TEST-[GroupByIT_0] |
   |   | TEST-[IntArithmeticIT_0] |
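
   For readers of the FindBugs rows above: the "Boxing/unboxing to parse a primitive" warnings are usually cleared by parsing straight into the primitive instead of going through Integer.valueOf. A minimal sketch under that assumption (the config key name is hypothetical, not taken from the patch):

       // Integer.parseInt returns an int directly, so no Integer box is created and unboxed
       public static int getMultiViewSplitSize(Configuration conf) {
           String value = conf.get("phoenix.mapreduce.multi.view.split.size"); // hypothetical key
           return value == null ? PhoenixTTLTool.DEFAULT_MAPPER_SPLIT_SIZE : Integer.parseInt(value);
       }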
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/3/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/975 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile |
   | uname | Linux e553143007c3 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / c3818ee |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/3/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/3/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/3/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/3/testReport/ |
   | Max. process+thread count | 5977 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/3/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] stoty commented on pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #975:
URL: https://github.com/apache/phoenix/pull/975#issuecomment-738472989


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 4 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  11m 14s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   3m  4s |  phoenix-core in 4.x has 950 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   3m 11s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 10s |  phoenix-core: The patch generated 532 new + 1111 unchanged - 4 fixed = 1643 total (was 1115)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 12s |  phoenix-core generated 5 new + 950 unchanged - 0 fixed = 955 total (was 950)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 134m 34s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  The patch does not generate ASF License warnings.  |
   |  |   | 167m 25s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Possible null pointer dereference of cmdLine in org.apache.phoenix.mapreduce.PhoenixTTLTool.parseOptions(String[]) on exception path  Dereferenced at PhoenixTTLTool.java:[line 185] |
   |  |  Integral value cast to double and then passed to Math.ceil in org.apache.phoenix.mapreduce.util.DefaultMultiViewSplitStrategy.getNumberOfMappers(int, int)  At DefaultMultiViewSplitStrategy.java:[line 58] |
   |  |  org.apache.phoenix.mapreduce.util.DefaultPhoenixMultiViewListProvider.getTenantOrViewMultiViewList(Configuration) may fail to clean up java.sql.ResultSet  Obligation to clean up resource created at DefaultPhoenixMultiViewListProvider.java:[line 110] is not discharged |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 451] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 481] |
   | Failed junit tests | phoenix.end2end.DeleteIT |
   |   | phoenix.end2end.DropIndexedColsIT |
   |   | phoenix.end2end.AlterMultiTenantTableWithViewsIT |
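
   On the recurring "possible null pointer dereference of cmdLine" row above: in parseOptions(String[]) as quoted later in this thread, the ParseException handler calls printHelpAndExit (which ends in System.exit), but the analyzer cannot see that, so cmdLine still looks nullable afterwards. One common remedy, sketched here rather than taken from the patch:

       CommandLine cmdLine;
       try {
           cmdLine = parser.parse(options, args);
       } catch (ParseException e) {
           printHelpAndExit("Error parsing command line options: " + e.getMessage(), options);
           // printHelpAndExit never returns, but throwing makes that explicit to the analyzer
           throw new IllegalStateException(e);
       }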
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/6/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/975 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile |
   | uname | Linux 7ca47fdc3495 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 5e70f76 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/6/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/6/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/6/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/6/testReport/ |
   | Max. process+thread count | 6693 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/6/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] jpisaac commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
jpisaac commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r530563942



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixMapReduceUtil.java
##########
@@ -125,6 +127,20 @@ public static void setInput(final Job job, final Class<? extends DBWritable> inp
         PhoenixConfigurationUtil.setSelectColumnNames(configuration, fieldNames);
     }
 
+    /**
+     *
+     * @param job MR job instance
+     * @param tool ViewTtlTool for Phoenix TTL deletion MR job

Review comment:
       nit: ViewTTL in comments and method names

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixTTLTool.java
##########
@@ -0,0 +1,319 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.Properties;
+
+public class PhoenixTTLTool extends Configured implements Tool {
+    private static final Logger LOGGER = LoggerFactory.getLogger(PhoenixTTLTool.class);
+
+    public static enum MR_COUNTER_METRICS {
+        VIEW_FAILED,
+        VIEW_SUCCEED,
+        VIEW_INDEX_FAILED,
+        VIEW_INDEX_SUCCEED
+    }
+
+    public static final String DELETE_ALL_VIEWS = "DELETE_ALL_VIEWS";
+    public static final int DEFAULT_MAPPER_SPLIT_SIZE = 10;
+    public static final int DEFAULT_QUERY_BATCH_SIZE = 100;
+
+    private static final Option DELETE_ALL_VIEWS_OPTION = new Option("a", "all", false,
+            "Delete all views from all tables.");
+    private static final Option VIEW_NAME_OPTION = new Option("v", "view", true,
+            "Delete Phoenix View Name");
+    private static final Option TENANT_ID_OPTION = new Option("i", "id", true,
+            "Delete an view based on the tenant id.");
+    private static final Option JOB_PRIORITY_OPTION = new Option("p", "job-priority", true,
+            "Define job priority from 0(highest) to 4");
+    private static final Option SPLIT_SIZE_OPTION = new Option("s", "split-size-per-mapper", true,
+            "Define split size for each mapper.");
+    private static final Option BATCH_SIZE_OPTION = new Option("b", "batch-size-for-query-more", true,
+            "Define batch size for fetching views metadata from syscat.");
+    private static final Option RUN_FOREGROUND_OPTION = new Option("runfg",
+            "run-foreground", false, "If specified, runs ViewTTLTool " +
+            "in the foreground. Default - runs the job in the background.");
+
+    private static final Option HELP_OPTION = new Option("h", "help", false, "Help");
+
+    private Configuration configuration;
+    private Connection connection;
+    private String viewName;
+    private String tenantId;
+    private String jobName;
+    private boolean isDeletingAllViews;
+    private JobPriority jobPriority;
+    private boolean isForeground;
+    private int splitSize;
+    private int batchSize;
+    private Job job;
+
+    public void parseArgs(String[] args) {
+        CommandLine cmdLine;
+        try {
+            cmdLine = parseOptions(args);
+        } catch (IllegalStateException e) {
+            printHelpAndExit(e.getMessage(), getOptions());
+            throw e;
+        }
+
+        if (getConf() == null) {
+            setConf(HBaseConfiguration.create());
+        }
+
+        if (cmdLine.hasOption(DELETE_ALL_VIEWS_OPTION.getOpt())) {
+            this.isDeletingAllViews = true;
+        } else if (cmdLine.hasOption(VIEW_NAME_OPTION.getOpt())) {
+            viewName = cmdLine.getOptionValue(VIEW_NAME_OPTION.getOpt());
+            this.isDeletingAllViews = false;
+        }
+
+        if (cmdLine.hasOption(TENANT_ID_OPTION.getOpt())) {
+            tenantId = cmdLine.getOptionValue((TENANT_ID_OPTION.getOpt()));
+        }
+
+        if (cmdLine.hasOption(SPLIT_SIZE_OPTION.getOpt())) {
+            splitSize = Integer.valueOf(cmdLine.getOptionValue(SPLIT_SIZE_OPTION.getOpt()));
+        } else {
+            splitSize = DEFAULT_MAPPER_SPLIT_SIZE;
+        }
+
+        if (cmdLine.hasOption(BATCH_SIZE_OPTION.getOpt())) {
+            batchSize = Integer.valueOf(cmdLine.getOptionValue(BATCH_SIZE_OPTION.getOpt()));
+        } else {
+            batchSize = DEFAULT_QUERY_BATCH_SIZE;
+        }
+
+        isForeground = cmdLine.hasOption(RUN_FOREGROUND_OPTION.getOpt());
+    }
+
+    public String getJobPriority() {
+        return this.jobPriority.toString();
+    }
+
+    private JobPriority getJobPriority(CommandLine cmdLine) {
+        String jobPriorityOption = cmdLine.getOptionValue(JOB_PRIORITY_OPTION.getOpt());
+        if (jobPriorityOption == null) {
+            return JobPriority.NORMAL;
+        }
+
+        switch (jobPriorityOption) {
+            case "0" : return JobPriority.VERY_HIGH;
+            case "1" : return JobPriority.HIGH;
+            case "2" : return JobPriority.NORMAL;
+            case "3" : return JobPriority.LOW;
+            case "4" : return JobPriority.VERY_LOW;
+            default:
+                return JobPriority.NORMAL;
+        }
+    }
+
+    public Job getJob() {
+        return this.job;
+    }
+
+    public boolean isDeletingAllViews() {
+        return this.isDeletingAllViews;
+    }
+
+    public String getTenantId() {
+        return this.tenantId;
+    }
+
+    public String getViewName() {
+        return this.viewName;
+    }
+
+    public int getSplitSize() {
+        return this.splitSize;
+    }
+
+    public int getBatchSize() {
+        return this.batchSize;
+    }
+
+    public CommandLine parseOptions(String[] args) {
+        final Options options = getOptions();
+        CommandLineParser parser = new PosixParser();
+        CommandLine cmdLine = null;
+        try {
+            cmdLine = parser.parse(options, args);
+        } catch (ParseException e) {
+            printHelpAndExit("Error parsing command line options: " + e.getMessage(), options);
+        }
+
+        if (!cmdLine.hasOption(DELETE_ALL_VIEWS_OPTION.getOpt()) &&
+                !cmdLine.hasOption(VIEW_NAME_OPTION.getOpt()) &&
+                !cmdLine.hasOption(TENANT_ID_OPTION.getOpt())) {
+            throw new IllegalStateException("No deletion job is specified, " +
+                    "please indicate deletion job for ALL/TABLE/VIEW/TENANT level");
+        }
+
+        if (cmdLine.hasOption(HELP_OPTION.getOpt())) {
+            printHelpAndExit(options, 0);
+        }
+
+        this.jobPriority = getJobPriority(cmdLine);
+
+        return cmdLine;
+    }
+
+    private Options getOptions() {
+        final Options options = new Options();
+        options.addOption(DELETE_ALL_VIEWS_OPTION);
+        options.addOption(VIEW_NAME_OPTION);
+        options.addOption(TENANT_ID_OPTION);
+        options.addOption(HELP_OPTION);
+        options.addOption(JOB_PRIORITY_OPTION);
+        options.addOption(RUN_FOREGROUND_OPTION);
+        options.addOption(SPLIT_SIZE_OPTION);
+        options.addOption(BATCH_SIZE_OPTION);
+
+        return options;
+    }
+
+    private void printHelpAndExit(String errorMessage, Options options) {
+        System.err.println(errorMessage);
+        LOGGER.error(errorMessage);
+        printHelpAndExit(options, 1);
+    }
+
+    private void printHelpAndExit(Options options, int exitCode) {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp("help", options);
+        System.exit(exitCode);
+    }
+
+    public void setJobName(String jobName) {
+        this.jobName = jobName;
+    }
+
+    public String getJobName() {
+        if (this.jobName == null) {
+            String jobName;
+            if (this.isDeletingAllViews) {
+                jobName = DELETE_ALL_VIEWS;
+            } else if (this.getViewName() != null) {
+                jobName = this.getViewName();
+            } else  {
+                jobName = this.tenantId;
+            }
+            this.jobName =  "ViewTTLTool-" + jobName + "-";

Review comment:
       nit: "PhoenixTTLTool" here and few more messages below

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/PhoenixTTLToolIT.java
##########
@@ -0,0 +1,730 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.RowFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter;
+import org.apache.hadoop.hbase.filter.RegexStringComparator;
+import org.apache.phoenix.mapreduce.PhoenixTTLTool;
+import org.apache.phoenix.mapreduce.util.PhoenixMultiInputUtil;
+import org.apache.phoenix.query.HBaseFactoryProvider;
+import org.junit.Test;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+public class PhoenixTTLToolIT extends ParallelStatsDisabledIT {
+
+    private final long PHOENIX_TTL_EXPIRE_IN_A_MILLISECOND = 1;
+    private final long PHOENIX_TTL_EXPIRE_IN_A_DAY = 1000 * 60 * 60 * 24;
+
+    private final String VIEW_PREFIX1 = "V01";
+    private final String VIEW_PREFIX2 = "V02";
+    private final String UPSERT_TO_GLOBAL_VIEW_QUERY = "UPSERT INTO %s (PK1,A,B,C,D) VALUES(1,1,1,1,1)";
+    private final String UPSERT_TO_LEAF_VIEW_QUERY = "UPSERT INTO %s (PK1,A,B,C,D,E,F) VALUES(1,1,1,1,1,1,1)";
+    private final String VIEW_DDL_WITH_ID_PREFIX_AND_TTL = "CREATE VIEW %s (" +
+            "PK1 BIGINT PRIMARY KEY,A BIGINT, B BIGINT, C BIGINT, D BIGINT)" +
+            " AS SELECT * FROM %s WHERE ID = '%s' PHOENIX_TTL = %d";
+    private final String VIEW_INDEX_DDL = "CREATE INDEX %s ON %s(%s)";
+    private final String TENANT_VIEW_DDL = "CREATE VIEW %s (E BIGINT, F BIGINT) AS SELECT * FROM %s";
+
+    private void verifyNumberOfRowsFromHBaseLevel(String tableName, String regex, int expectedRows)
+            throws Exception {
+        try (Table table  = HBaseFactoryProvider.getHConnectionFactory().createConnection(config).getTable(tableName)) {
+            Filter filter = new RowFilter(CompareFilter.CompareOp.EQUAL, new RegexStringComparator(regex));
+            Scan scan = new Scan();
+            scan.setFilter(filter);
+            assertEquals(expectedRows, getRowCount(table,scan));
+        }
+    }
+
+    private void verifyNumberOfRows(String tableName, String tenantId, int expectedRows,
+                                    Connection conn) throws Exception {
+        String query = "SELECT COUNT(*) FROM " + tableName;
+        if (tenantId != null) {
+            query = query + " WHERE TENANT_ID = '" + tenantId + "'";
+        }
+        try (Statement stm = conn.createStatement()) {
+
+            ResultSet rs = stm.executeQuery(query);
+            assertTrue(rs.next());
+            assertEquals(expectedRows, rs.getInt(1));
+        }
+    }
+
+    private long getRowCount(Table table, Scan scan) throws Exception {
+        ResultScanner scanner = table.getScanner(scan);
+        int count = 0;
+        for (Result dummy : scanner) {
+            count++;
+        }
+        scanner.close();
+        return count;
+    }
+
+    private void createMultiTenantTable(Connection conn, String tableName) throws Exception {
+        String ddl = "CREATE TABLE " + tableName +
+                " (TENANT_ID CHAR(10) NOT NULL, ID CHAR(10) NOT NULL, NUM BIGINT CONSTRAINT " +
+                "PK PRIMARY KEY (TENANT_ID,ID)) MULTI_TENANT=true, COLUMN_ENCODED_BYTES = 0";
+
+        try (Statement stmt = conn.createStatement()) {
+            stmt.execute(ddl);
+        }
+    }
+
+    /*
+                    BaseMultiTenantTable
+                  GlobalView1 with TTL(1 ms)
+                Index1                 Index2
+
+        Create 2 tenant views and upsert data.
+        After running the MR job, all of that data should be deleted.
+     */
+    @Test
+    public void testTenantViewOnGlobalViewWithMoreThanOneIndex() throws Exception {
+        String schema = generateUniqueName();
+        String baseTableFullName = schema + "." + generateUniqueName();
+        String indexTable1 = generateUniqueName() + "_IDX";
+        String indexTable2 = generateUniqueName() + "_IDX";
+        String globalViewName = schema + "." + generateUniqueName();
+        String tenant1 = generateUniqueName();
+        String tenant2 = generateUniqueName();
+        String tenantView1 = schema + "." + generateUniqueName();
+        String tenantView2 = schema + "." + generateUniqueName();
+        String indexTable = "_IDX_" + baseTableFullName;
+
+        try (Connection globalConn = DriverManager.getConnection(getUrl());
+             Connection tenant1Connection = PhoenixMultiInputUtil.buildTenantConnection(getUrl(), tenant1);
+             Connection tenant2Connection = PhoenixMultiInputUtil.buildTenantConnection(getUrl(), tenant2)) {
+
+            createMultiTenantTable(globalConn, baseTableFullName);
+            globalConn.createStatement().execute(String.format(VIEW_DDL_WITH_ID_PREFIX_AND_TTL,
+                    globalViewName, baseTableFullName, VIEW_PREFIX1, PHOENIX_TTL_EXPIRE_IN_A_MILLISECOND));
+
+            globalConn.createStatement().execute(String.format(VIEW_INDEX_DDL, indexTable1, globalViewName, "A,B"));
+            globalConn.createStatement().execute(String.format(VIEW_INDEX_DDL, indexTable2, globalViewName, "C,D"));
+
+            tenant1Connection.createStatement().execute(String.format(TENANT_VIEW_DDL,tenantView1, globalViewName));
+            tenant2Connection.createStatement().execute(String.format(TENANT_VIEW_DDL,tenantView2, globalViewName));
+
+            tenant1Connection.createStatement().execute(String.format(UPSERT_TO_LEAF_VIEW_QUERY, tenantView1));
+            tenant1Connection.commit();
+            verifyNumberOfRows(baseTableFullName, tenant1, 1, globalConn);
+            tenant2Connection.createStatement().execute(String.format(UPSERT_TO_LEAF_VIEW_QUERY, tenantView2));
+            tenant2Connection.commit();
+            verifyNumberOfRows(baseTableFullName, tenant2, 1, globalConn);
+
+            verifyNumberOfRowsFromHBaseLevel(indexTable, ".*" + tenant1 + ".*", 2);

Review comment:
       nit: add comments on why 2 rows are expected
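
   A plausible reading of the expected count, inferred from the test setup above rather than stated by the author: the two view indexes (Index1 on A,B and Index2 on C,D) share the _IDX_ physical table, so one upsert through a tenant view lands one row in each index:

       // hypothetical clarifying comment for the assertion above:
       // 1 upserted row x 2 view indexes sharing the _IDX_ table = 2 index rows per tenant
       verifyNumberOfRowsFromHBaseLevel(indexTable, ".*" + tenant1 + ".*", 2);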

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultMultiViewSplitStrategy.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.PhoenixMultiViewInputSplit;
+
+import java.util.List;
+
+import static org.apache.phoenix.mapreduce.ViewTTLTool.DEFAULT_MAPPER_SPLIT_SIZE;
+
+public class DefaultMultiViewSplitStrategy implements MultiViewSplitStrategy {
+
+    public List<InputSplit> generateSplits(List<ViewInfoWritable> views, Configuration configuration) {
+        int numViewsInSplit = PhoenixConfigurationUtil.getMultiViewSplitSize(configuration);
+
+        if (numViewsInSplit < 1) {

Review comment:
       Sorry, my bad!!
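
   As an aside, generateSplits is also what the FindBugs rows elsewhere in this thread flag for "integral value cast to double and then passed to Math.ceil". A minimal sketch of the usual remedy, offered as one option rather than the patch's actual code:

       // integer ceiling division: ceil(viewCount / numViewsInSplit) without the double round-trip
       static int getNumberOfMappers(int viewCount, int numViewsInSplit) {
           return (viewCount + numViewsInSplit - 1) / numViewsInSplit;
       }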







[GitHub] [phoenix] yanxinyi commented on pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on pull request #975:
URL: https://github.com/apache/phoenix/pull/975#issuecomment-734123456


   > Mostly LGTM, except for some nits.
   > Also, can you add some unit tests for most of the framework classes?
   
   Added more tests





[GitHub] [phoenix] stoty commented on pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #975:
URL: https://github.com/apache/phoenix/pull/975#issuecomment-734213846


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  13m  9s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m 12s |  phoenix-core in 4.x has 950 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   4m 20s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   7m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 20s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 12s |  phoenix-core: The patch generated 541 new + 1101 unchanged - 14 fixed = 1642 total (was 1115)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  the patch passed  |
   | -1 :x: |  spotbugs  |   4m 18s |  phoenix-core generated 5 new + 950 unchanged - 0 fixed = 955 total (was 950)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 146m 32s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  The patch does not generate ASF License warnings.  |
   |  |   | 187m 15s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Possible null pointer dereference of cmdLine in org.apache.phoenix.mapreduce.PhoenixTTLTool.parseOptions(String[]) on exception path  Dereferenced at PhoenixTTLTool.java:[line 185] |
   |  |  Integral value cast to double and then passed to Math.ceil in org.apache.phoenix.mapreduce.util.DefaultMultiViewSplitStrategy.getNumberOfMappers(int, int)  At DefaultMultiViewSplitStrategy.java:[line 58] |
   |  |  org.apache.phoenix.mapreduce.util.DefaultPhoenixMultiViewListProvider.getTenantOrViewMultiViewList(Configuration) may fail to clean up java.sql.ResultSet  Obligation to clean up resource created at DefaultPhoenixMultiViewListProvider.java:[line 110] is not discharged |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 451] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 481] |
   | Failed junit tests | phoenix.end2end.join.SortMergeJoinLocalIndexIT |
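
   On the "may fail to clean up java.sql.ResultSet" row above: the provider code quoted later in this thread opens the ResultSet outside any try-with-resources. A minimal sketch of the conventional fix, assuming no behavioral change is intended:

       try (ResultSet viewRs = stmt.executeQuery(query)) {
           while (viewRs.next()) {
               // process each view row as before
           }
       } // the ResultSet is closed even on exceptions, discharging the FindBugs obligation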
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/5/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/975 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile |
   | uname | Linux a4554af1ffeb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 7ac4dff |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/5/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/5/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/5/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/5/testReport/ |
   | Max. process+thread count | 6726 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/5/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] jpisaac commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
jpisaac commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r530521324



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/ViewInfoWritable.java
##########
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+public interface ViewInfoWritable extends Writable {
+    public enum ViewInfoJobState {
+        RUNNING(1),
+        SUCCEEDED(2),
+        FAILED(3),
+        PREP(4),

Review comment:
       Do you want to rename PREP to INITIALIZED?
   Also, it may be a good idea to keep the order (enum values) matching the state progression: PREP (INITIALIZED) -> RUNNING -> SUCCEEDED ...
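
   A sketch of that suggestion (hypothetical ordering and naming; the final enum is the author's call):

       public enum ViewInfoJobState {
           INITIALIZED(1),  // formerly PREP(4)
           RUNNING(2),
           SUCCEEDED(3),
           FAILED(4)
       }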







[GitHub] [phoenix] yanxinyi commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r528032496



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultPhoenixMultiViewListProvider.java
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.nio.charset.StandardCharsets;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class DefaultPhoenixMultiViewListProvider implements PhoenixMultiViewListProvider {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(DefaultPhoenixMultiViewListProvider.class);
+
+    public List<ViewInfoWritable> getPhoenixMultiViewList(Configuration configuration) {
+        List<ViewInfoWritable> viewInfoWritables = new ArrayList<>();
+
+        String query = PhoenixMultiInputUtil.getFetchViewQuery(configuration);
+        boolean isQueryMore = configuration.get(
+                PhoenixConfigurationUtil.MAPREDUCE_PHOENIX_TTL_DELETE_JOB_ALL_VIEWS) != null;
+        int limit = PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(configuration);
+        try (PhoenixConnection connection = (PhoenixConnection)
+                ConnectionUtil.getInputConnection(configuration)){
+            try (Statement stmt = connection.createStatement()) {
+                do {
+                    ResultSet viewRs = stmt.executeQuery(query);
+                    String schema = null;
+                    String tableName = null;
+                    String tenantId = null;
+                    String fullTableName = null;
+
+                    while (viewRs.next()) {
+                        schema = viewRs.getString(2);
+                        tableName = viewRs.getString(3);
+                        tenantId = viewRs.getString(1);
+                        fullTableName = tableName;
+                        Long viewTtlValue = viewRs.getLong(4);
+
+                        if (schema != null && schema.length() > 0) {
+                            fullTableName = SchemaUtil.getTableName(schema, tableName);
+                        }
+
+                        boolean skip = false;
+                        PTable pTable = null;
+                        try {
+                            pTable = PhoenixRuntime.getTable(connection, tenantId, fullTableName);
+                            // we currently only support up to three levels

Review comment:
       Actually, we do support more than three levels; let me remove this comment.
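
   For readers following along, an illustration of a deeper hierarchy than the three quoted cases (level names are hypothetical):

       // BASE_TABLE (multi-tenant)
       //   -> GLOBAL_VIEW (PHOENIX_TTL set)
       //     -> MID_LEVEL_VIEW
       //       -> TENANT_VIEW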







[GitHub] [phoenix] stoty commented on pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #975:
URL: https://github.com/apache/phoenix/pull/975#issuecomment-731502296


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   5m 45s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  11m 29s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   2m 56s |  phoenix-core in 4.x has 950 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   3m  3s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m  1s |  phoenix-core: The patch generated 536 new + 1101 unchanged - 14 fixed = 1637 total (was 1115)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 15s |  phoenix-core generated 9 new + 950 unchanged - 0 fixed = 959 total (was 950)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 136m 49s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate ASF License warnings.  |
   |  |   | 174m  3s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Dead store to listOfInputSplit in org.apache.phoenix.mapreduce.PhoenixMultiViewInputFormat.getSplits(JobContext)  At PhoenixMultiViewInputFormat.java:[line 52] |
   |  |  Boxed value is unboxed and then immediately reboxed in org.apache.phoenix.mapreduce.PhoenixTTLDeleteJobMapper.deleteExpiredRows(PhoenixConnection, PTable, String, Configuration, Mapper$Context, ViewInfoTracker)  At PhoenixTTLDeleteJobMapper.java:[line 159] |
   |  |  Invocation of toString on Throwable.getStackTrace() in org.apache.phoenix.mapreduce.PhoenixTTLDeleteJobMapper.initMultiViewJobStatusTracker(Configuration)  At PhoenixTTLDeleteJobMapper.java:[line 70] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.PhoenixTTLTool.parseArgs(String[])  At PhoenixTTLTool.java:[line 113] |
   |  |  Possible null pointer dereference of cmdLine in org.apache.phoenix.mapreduce.PhoenixTTLTool.parseOptions(String[]) on exception path  Dereferenced at PhoenixTTLTool.java:[line 182] |
   |  |  Integral value cast to double and then passed to Math.ceil in org.apache.phoenix.mapreduce.util.DefaultMultiViewSplitStrategy.generateSplits(List, Configuration)  At DefaultMultiViewSplitStrategy.java:[line 39] |
   |  |  Exception is caught when Exception is not thrown in org.apache.phoenix.mapreduce.util.DefaultPhoenixMultiViewListProvider.getPhoenixMultiViewList(Configuration)  At DefaultPhoenixMultiViewListProvider.java:[line 141] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 451] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 481] |
   | Failed junit tests | phoenix.end2end.DropIndexedColsIT |
   |   | phoenix.end2end.index.GlobalMutableNonTxIndexIT |
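
   On the "Invocation of toString on Throwable.getStackTrace()" row above: getStackTrace() returns an array, so toString() yields an array identity string rather than a readable trace. With SLF4J, which the quoted code already uses, passing the Throwable as the last argument logs the full trace; a sketch:

       // SLF4J prints the complete stack trace when the exception is the final argument
       LOGGER.error("Failed to initialize the multi-view job status tracker", e);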
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/2/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/975 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile |
   | uname | Linux 2797e9a950bf 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / e57fcc8 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/2/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/2/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/2/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/2/testReport/ |
   | Max. process+thread count | 6596 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/2/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] yanxinyi commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r528027130



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultPhoenixMultiViewListProvider.java
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.nio.charset.StandardCharsets;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class DefaultPhoenixMultiViewListProvider implements PhoenixMultiViewListProvider {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(DefaultPhoenixMultiViewListProvider.class);
+
+    public List<ViewInfoWritable> getPhoenixMultiViewList(Configuration configuration) {
+        List<ViewInfoWritable> viewInfoWritables = new ArrayList<>();
+
+        String query = PhoenixMultiInputUtil.getFetchViewQuery(configuration);
+        boolean isQueryMore = configuration.get(
+                PhoenixConfigurationUtil.MAPREDUCE_PHOENIX_TTL_DELETE_JOB_ALL_VIEWS) != null;
+        int limit = PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(configuration);
+        try (PhoenixConnection connection = (PhoenixConnection)
+                ConnectionUtil.getInputConnection(configuration)){
+            try (Statement stmt = connection.createStatement()) {
+                do {
+                    ResultSet viewRs = stmt.executeQuery(query);
+                    String schema = null;
+                    String tableName = null;
+                    String tenantId = null;
+                    String fullTableName = null;
+
+                    while (viewRs.next()) {
+                        schema = viewRs.getString(2);
+                        tableName = viewRs.getString(3);
+                        tenantId = viewRs.getString(1);
+                        fullTableName = tableName;
+                        Long viewTtlValue = viewRs.getLong(4);
+
+                        if (schema != null && schema.length() > 0) {
+                            fullTableName = SchemaUtil.getTableName(schema, tableName);
+                        }
+
+                        boolean skip = false;
+                        PTable pTable = null;
+                        try {
+                            pTable = PhoenixRuntime.getTable(connection, tenantId, fullTableName);
+                            // we currently only support up to three levels
+                            // CASE 1 : BASE_TABLE -> GLOBAL_VIEW -> TENANT_VIEW
+                            // CASE 2 : BASE_TABLE -> TENANT_VIEW
+                            // CASE 3 : BASE_TABLE -> VIEW
+                            PTable parentTable = PhoenixRuntime.getTable(connection, null,
+                                    pTable.getParentName().toString());
+                            if (parentTable.getType() == PTableType.VIEW &&
+                                    parentTable.getPhoenixTTL() > 0) {
+                                skip = true;
+                            }
+                        } catch (Exception e) {
+                            skip = true;
+                            LOGGER.error(String.format("Had an issue to process the view: %s, tenantId:" +
+                                    "see error %s ", fullTableName, tenantId, e.getMessage()));
+                        }
+
+                        if (!skip) {
+                            ViewInfoWritable viewInfoTracker = new ViewInfoTracker(
+                                    tenantId,
+                                    fullTableName,
+                                    viewTtlValue,
+                                    pTable.getPhysicalName().getString(),
+                                    false
+
+                            );
+                            viewInfoWritables.add(viewInfoTracker);
+
+                            List<PTable> allIndexesOnView = pTable.getIndexes();
+                            for (PTable viewIndexTable : allIndexesOnView) {
+                                String indexName = viewIndexTable.getTableName().getString();
+                                String indexSchema = viewIndexTable.getSchemaName().getString();
+                                if (indexName.contains(QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR)) {
+                                    indexName = SchemaUtil.getTableNameFromFullName(indexName,
+                                            QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR);
+                                }
+                                indexName = SchemaUtil.getTableNameFromFullName(indexName);
+                                indexName = SchemaUtil.getTableName(indexSchema, indexName);
+                                ViewInfoWritable viewInfoTrackerForIndexEntry = new ViewInfoTracker(
+                                        tenantId,
+                                        fullTableName,
+                                        viewTtlValue,
+                                        indexName,
+                                        true
+
+                                );
+                                viewInfoWritables.add(viewInfoTrackerForIndexEntry);
+                            }
+                        }
+                    }
+                    if (isQueryMore) {
+                        if (fullTableName == null) {

Review comment:
       The fullTableName check tells us whether we have finished the full scan of syscat or not.
   If we still have more rows to query, the current fullTableName cannot be null, right? If there are no more rows to scan, viewRs.next() returns no rows, and since we reset fullTableName to null at the start of every iteration, it stays null. A minimal sketch of that sentinel pattern is below.
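
   (buildQueryMoreQuery is a hypothetical helper standing in for the real query-more construction; stmt, query, and isQueryMore are the names from the diff above.)

   ```java
   // The sentinel is reset per batch; it stays null only when the batch is empty.
   String fullTableName;
   do {
       fullTableName = null;
       try (ResultSet viewRs = stmt.executeQuery(query)) {
           while (viewRs.next()) {
               fullTableName = viewRs.getString(3); // stays non-null while rows remain
               // ... build ViewInfoTracker entries for the view and its indexes ...
           }
       }
       if (fullTableName == null) {
           break; // empty batch: the full syscat scan is done
       }
       query = buildQueryMoreQuery(fullTableName); // hypothetical: next batch after this key
   } while (isQueryMore);
   ```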




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r528025843



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultMultiViewSplitStrategy.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.PhoenixMultiViewInputSplit;
+
+import java.util.List;
+
+import static org.apache.phoenix.mapreduce.ViewTTLTool.DEFAULT_MAPPER_SPLIT_SIZE;
+
+public class DefaultMultiViewSplitStrategy implements MultiViewSplitStrategy {
+
+    public List<InputSplit> generateSplits(List<ViewInfoWritable> views, Configuration configuration) {
+        int numViewsInSplit = PhoenixConfigurationUtil.getMultiViewSplitSize(configuration);
+

Review comment:
       Below is the logic that checks whether we have a fresh cluster or a case with no views.
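
   The diff above is truncated before that guard, so as a hedged sketch (an assumption about its exact shape), it could be:

   ```java
   // A fresh cluster, or a run whose filter matched no views, yields an empty
   // list; return zero splits instead of dividing work among non-existent views.
   if (views == null || views.isEmpty()) {
       return Lists.newArrayListWithExpectedSize(0);
   }
   ```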




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #975:
URL: https://github.com/apache/phoenix/pull/975#issuecomment-730155763


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  docker  |   5m 56s |  Docker failed to build yetus/phoenix:955047a0b.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/phoenix/pull/975 |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
jpisaac commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r527746539



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixMultiViewInputFormat.java
##########
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.phoenix.mapreduce.util.DefaultPhoenixMultiViewListProvider;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMultiViewListProvider;
+import org.apache.phoenix.mapreduce.util.DefaultMultiViewSplitStrategy;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable;
+import org.apache.phoenix.mapreduce.util.MultiViewSplitStrategy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+public class PhoenixMultiViewInputFormat<T extends Writable> extends InputFormat<NullWritable,T> {
+    private static final Logger LOGGER = LoggerFactory.getLogger(PhoenixMultiViewInputFormat.class);
+
+    public PhoenixMultiViewInputFormat() {
+    }
+

Review comment:
       Nit: remove empty constructor

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixMultiViewInputFormat.java
##########
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.phoenix.mapreduce.util.DefaultPhoenixMultiViewListProvider;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMultiViewListProvider;
+import org.apache.phoenix.mapreduce.util.DefaultMultiViewSplitStrategy;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable;
+import org.apache.phoenix.mapreduce.util.MultiViewSplitStrategy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+

Review comment:
       Nit: Class comments

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLDeleteJobMapper.java
##########
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixResultSet;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.ViewInfoTracker;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable.ViewInfoJobState;
+import org.apache.phoenix.mapreduce.util.MultiViewJobStatusTracker;
+import org.apache.phoenix.mapreduce.util.DefaultMultiViewJobStatusTracker;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.Properties;
+
+public class ViewTTLDeleteJobMapper extends Mapper<NullWritable, ViewInfoTracker, NullWritable, NullWritable> {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLDeleteJobMapper.class);
+    private MultiViewJobStatusTracker multiViewJobStatusTracker;
+    private static final int DEFAULT_MAX_RETRIES = 3;
+    private static final int DEFAULT_RETRY_SLEEP_TIME_IN_MS = 10000;
+
+    private void initMultiViewJobStatusTracker(Configuration config) throws Exception {
+        try {
+            Class<?> defaultViewDeletionTrackerClass = DefaultMultiViewJobStatusTracker.class;
+            if (config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ) != null) {
+                LOGGER.info("Using customized tracker class : " +
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+                defaultViewDeletionTrackerClass = Class.forName(
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+            } else {
+                LOGGER.info("Using default tracker class ");
+            }
+            this.multiViewJobStatusTracker = (MultiViewJobStatusTracker) defaultViewDeletionTrackerClass.newInstance();
+        } catch (Exception e) {
+            LOGGER.error("Getting exception While initializing initMultiViewJobStatusTracker with error message");
+            LOGGER.error("stack trace" + e.getStackTrace().toString());
+            throw e;
+        }
+    }
+
+    @Override
+    protected void map(NullWritable key, ViewInfoTracker value, Context context) throws IOException  {
+        try {
+            final Configuration config = context.getConfiguration();
+
+            if (this.multiViewJobStatusTracker == null) {
+                initMultiViewJobStatusTracker(config);
+            }
+
+            LOGGER.debug(String.format("Deleting from view %s, TenantID %s, and TTL value: %d",
+                    value.getViewName(), value.getTenantId(), value.getPhoenixTtl()));
+
+            deletingExpiredRows(value, config, context);
+
+        } catch (SQLException e) {
+            LOGGER.error("Mapper got an exception while deleting expired rows : " + e.getMessage() );
+            throw new IOException(e.getMessage(), e.getCause());
+        } catch (Exception e) {
+            LOGGER.error("Getting IOException while running View TTL Deletion Job mapper with error : "
+                    + e.getMessage());
+            throw new IOException(e.getMessage(), e.getCause());
+        }
+    }
+
+    private void deletingExpiredRows(ViewInfoTracker value, Configuration config, Context context) throws Exception {
+        try (PhoenixConnection connection = (PhoenixConnection) ConnectionUtil.getInputConnection(config)) {
+            if (value.getTenantId() != null && !value.getTenantId().equals("NULL")) {
+                Properties props = new Properties();
+                props.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, value.getTenantId());
+
+                try (PhoenixConnection tenantConnection = (PhoenixConnection)
+                        DriverManager.getConnection(connection.getURL(), props)) {
+                    deletingExpiredRows(tenantConnection, value, config, context);
+                }
+            } else {
+                deletingExpiredRows(connection, value, config, context);
+            }
+        }
+    }
+
+    private void deletingExpiredRows(PhoenixConnection connection, ViewInfoTracker viewInfoTracker,
+                                     Configuration config, Context context) throws Exception {
+        try {
+            PTable ptable = PhoenixRuntime.getTable(connection, viewInfoTracker.getViewName());
+            String deleteIfExpiredStatement = "SELECT /*+ NO_INDEX */ count(*) FROM " + viewInfoTracker.getViewName();
+
+            if (viewInfoTracker.isIndexRelation()) {
+                ptable = PhoenixRuntime.getTable(connection, viewInfoTracker.getRelationName());
+                deleteIfExpiredStatement = "SELECT count(*) FROM " + viewInfoTracker.getRelationName();
+            }
+
+            deletingExpiredRows(connection, ptable, deleteIfExpiredStatement, config, context, viewInfoTracker);
+
+        } catch (Exception e) {
+            LOGGER.error(String.format("Had an issue to process the view: %s, " +
+                    "see error %s ", viewInfoTracker.toString(),e.getMessage()));
+        }
+    }
+
+    /*
+     * Each Mapper that receives a PhoenixMultiViewInputSplit will execute a DeleteMutation/Scan
+     *  (with the DELETE_PHOENIX_TTL_EXPIRED attribute) per view, for all the views and view indexes in the split.
+     * Each DeleteMutation is bounded by the view start and stop keys for the region and
+     *  carries the TTL attributes and the Delete hint.
+     */
+    private boolean deletingExpiredRows(PhoenixConnection connection, PTable pTable,
+                                        String deleteIfExpiredStatement, Configuration config,
+                                        Context context, ViewInfoTracker viewInfoTracker) throws Exception {
+
+        try (PhoenixStatement pstmt = new PhoenixStatement(connection).unwrap(PhoenixStatement.class)) {
+            String sourceTableName = pTable.getTableName().getString();
+            this.multiViewJobStatusTracker.updateJobStatus(viewInfoTracker, 0,
+                    ViewInfoJobState.PREP.getValue(), config, 0, context.getJobName(), sourceTableName);
+            final QueryPlan queryPlan = pstmt.optimizeQuery(deleteIfExpiredStatement);
+            final Scan scan = queryPlan.getContext().getScan();
+            byte[] emptyColumnFamilyName = SchemaUtil.getEmptyColumnFamily(pTable);
+            byte[] emptyColumnName =
+                    pTable.getEncodingScheme() == PTable.QualifierEncodingScheme.NON_ENCODED_QUALIFIERS ?
+                            QueryConstants.EMPTY_COLUMN_BYTES :
+                            pTable.getEncodingScheme().encode(QueryConstants.ENCODED_EMPTY_COLUMN_NAME);
+
+            scan.setAttribute(BaseScannerRegionObserver.EMPTY_COLUMN_FAMILY_NAME, emptyColumnFamilyName);
+            scan.setAttribute(BaseScannerRegionObserver.EMPTY_COLUMN_QUALIFIER_NAME, emptyColumnName);
+            scan.setAttribute(BaseScannerRegionObserver.DELETE_PHOENIX_TTL_EXPIRED, PDataType.TRUE_BYTES);
+            scan.setAttribute(BaseScannerRegionObserver.MASK_PHOENIX_TTL_EXPIRED, PDataType.FALSE_BYTES);

Review comment:
       Will need to remove this attribute, as there are additional checks: both attributes cannot be set at the same time.
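
   A sketch of the suggested change, keeping only the delete attribute:

   ```java
   // Set only the delete attribute; the server-side checks reject scans that
   // carry both DELETE_PHOENIX_TTL_EXPIRED and MASK_PHOENIX_TTL_EXPIRED.
   scan.setAttribute(BaseScannerRegionObserver.DELETE_PHOENIX_TTL_EXPIRED, PDataType.TRUE_BYTES);
   ```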

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLDeleteJobMapper.java
##########
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixResultSet;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.ViewInfoTracker;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable.ViewInfoJobState;
+import org.apache.phoenix.mapreduce.util.MultiViewJobStatusTracker;
+import org.apache.phoenix.mapreduce.util.DefaultMultiViewJobStatusTracker;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.Properties;
+
+public class ViewTTLDeleteJobMapper extends Mapper<NullWritable, ViewInfoTracker, NullWritable, NullWritable> {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLDeleteJobMapper.class);
+    private MultiViewJobStatusTracker multiViewJobStatusTracker;
+    private static final int DEFAULT_MAX_RETRIES = 3;
+    private static final int DEFAULT_RETRY_SLEEP_TIME_IN_MS = 10000;
+
+    private void initMultiViewJobStatusTracker(Configuration config) throws Exception {
+        try {
+            Class<?> defaultViewDeletionTrackerClass = DefaultMultiViewJobStatusTracker.class;
+            if (config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ) != null) {
+                LOGGER.info("Using customized tracker class : " +
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+                defaultViewDeletionTrackerClass = Class.forName(
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+            } else {
+                LOGGER.info("Using default tracker class ");
+            }
+            this.multiViewJobStatusTracker = (MultiViewJobStatusTracker) defaultViewDeletionTrackerClass.newInstance();

Review comment:
       Is it necessary to cast it to the interface?
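
   One alternative, as a hedged sketch: resolve the configured class as a subtype of the interface up front, so a misconfigured class fails fast at lookup time rather than at the assignment.

   ```java
   Class<? extends MultiViewJobStatusTracker> trackerClass = Class.forName(
           config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ))
           .asSubclass(MultiViewJobStatusTracker.class);
   this.multiViewJobStatusTracker = trackerClass.newInstance();
   ```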

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLTool.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.Properties;
+
+public class ViewTTLTool extends Configured implements Tool {

Review comment:
       nit: Class comments. Also, should the naming be PhoenixTTL?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultPhoenixMultiViewListProvider.java
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.nio.charset.StandardCharsets;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class DefaultPhoenixMultiViewListProvider implements PhoenixMultiViewListProvider {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(DefaultPhoenixMultiViewListProvider.class);
+
+    public List<ViewInfoWritable> getPhoenixMultiViewList(Configuration configuration) {
+        List<ViewInfoWritable> viewInfoWritables = new ArrayList<>();
+
+        String query = PhoenixMultiInputUtil.getFetchViewQuery(configuration);
+        boolean isQueryMore = configuration.get(
+                PhoenixConfigurationUtil.MAPREDUCE_PHOENIX_TTL_DELETE_JOB_ALL_VIEWS) != null;
+        int limit = PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(configuration);
+        try (PhoenixConnection connection = (PhoenixConnection)
+                ConnectionUtil.getInputConnection(configuration)){
+            try (Statement stmt = connection.createStatement()) {
+                do {
+                    ResultSet viewRs = stmt.executeQuery(query);
+                    String schema = null;
+                    String tableName = null;
+                    String tenantId = null;
+                    String fullTableName = null;
+
+                    while (viewRs.next()) {
+                        schema = viewRs.getString(2);
+                        tableName = viewRs.getString(3);
+                        tenantId = viewRs.getString(1);
+                        fullTableName = tableName;
+                        Long viewTtlValue = viewRs.getLong(4);

Review comment:
       nit: keep the getXXX() reads in column-index order?
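
   A small sketch of the reads in column-index order (using a primitive long to avoid needless boxing):

   ```java
   tenantId = viewRs.getString(1);
   schema = viewRs.getString(2);
   tableName = viewRs.getString(3);
   long viewTtlValue = viewRs.getLong(4);
   ```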

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixMultiViewInputSplit.java
##########
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.util.ViewInfoTracker;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+public class PhoenixMultiViewInputSplit extends InputSplit implements Writable {

Review comment:
       nit: Class comments

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLDeleteJobMapper.java
##########
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixResultSet;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.ViewInfoTracker;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable.ViewInfoJobState;
+import org.apache.phoenix.mapreduce.util.MultiViewJobStatusTracker;
+import org.apache.phoenix.mapreduce.util.DefaultMultiViewJobStatusTracker;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.Properties;
+
+public class ViewTTLDeleteJobMapper extends Mapper<NullWritable, ViewInfoTracker, NullWritable, NullWritable> {

Review comment:
       nit: class comments, and do we want to change it to PhoenixTTL instead of ViewTTL?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixMultiViewInputFormat.java
##########
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.phoenix.mapreduce.util.DefaultPhoenixMultiViewListProvider;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMultiViewListProvider;
+import org.apache.phoenix.mapreduce.util.DefaultMultiViewSplitStrategy;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable;
+import org.apache.phoenix.mapreduce.util.MultiViewSplitStrategy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+public class PhoenixMultiViewInputFormat<T extends Writable> extends InputFormat<NullWritable,T> {
+    private static final Logger LOGGER = LoggerFactory.getLogger(PhoenixMultiViewInputFormat.class);
+
+    public PhoenixMultiViewInputFormat() {
+    }
+
+    @Override public List<InputSplit> getSplits(JobContext context) throws IOException {
+        List<InputSplit> listOfInputSplit = new ArrayList<>();
+        try {
+            final Configuration configuration = context.getConfiguration();
+            Class<?> defaultDeletionMultiInputStrategyClazz = DefaultPhoenixMultiViewListProvider.class;

Review comment:
       nit: remove "deletion" from variable naming, since it is not specific to phoenix-ttl

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixMultiViewReader.java
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.List;
+
+public class PhoenixMultiViewReader<T extends Writable> extends RecordReader<NullWritable,T> {

Review comment:
       nit: class comments

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixMultiViewInputSplit.java
##########
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.util.ViewInfoTracker;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+public class PhoenixMultiViewInputSplit extends InputSplit implements Writable {
+
+    List<ViewInfoWritable> viewInfoTrackerList;
+
+    public PhoenixMultiViewInputSplit() {
+        this.viewInfoTrackerList = new ArrayList<>();
+    }
+
+    public PhoenixMultiViewInputSplit(List<ViewInfoWritable> viewInfoTracker) {
+        this.viewInfoTrackerList = viewInfoTracker;
+    }
+
+    @Override public void write(DataOutput output) throws IOException {
+        WritableUtils.writeVInt(output, this.viewInfoTrackerList.size());
+        for (ViewInfoWritable viewInfoWritable : this.viewInfoTrackerList) {
+            ViewInfoTracker viewInfoTracker = (ViewInfoTracker)viewInfoWritable;

Review comment:
       Can u add a check for the instance type, that way it is future-proof
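
   A sketch of that check, assuming the loop body writes each tracker as in the diff above:

   ```java
   for (ViewInfoWritable viewInfoWritable : this.viewInfoTrackerList) {
       if (!(viewInfoWritable instanceof ViewInfoTracker)) {
           throw new IOException("Unexpected ViewInfoWritable type: "
                   + viewInfoWritable.getClass().getName());
       }
       ((ViewInfoTracker) viewInfoWritable).write(output);
   }
   ```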

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLDeleteJobMapper.java
##########
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixResultSet;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.ViewInfoTracker;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable.ViewInfoJobState;
+import org.apache.phoenix.mapreduce.util.MultiViewJobStatusTracker;
+import org.apache.phoenix.mapreduce.util.DefaultMultiViewJobStatusTracker;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.Properties;
+
+public class ViewTTLDeleteJobMapper extends Mapper<NullWritable, ViewInfoTracker, NullWritable, NullWritable> {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLDeleteJobMapper.class);
+    private MultiViewJobStatusTracker multiViewJobStatusTracker;
+    private static final int DEFAULT_MAX_RETRIES = 3;
+    private static final int DEFAULT_RETRY_SLEEP_TIME_IN_MS = 10000;
+
+    private void initMultiViewJobStatusTracker(Configuration config) throws Exception {
+        try {
+            Class<?> defaultViewDeletionTrackerClass = DefaultMultiViewJobStatusTracker.class;
+            if (config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ) != null) {
+                LOGGER.info("Using customized tracker class : " +
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+                defaultViewDeletionTrackerClass = Class.forName(
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+            } else {
+                LOGGER.info("Using default tracker class ");
+            }
+            this.multiViewJobStatusTracker = (MultiViewJobStatusTracker) defaultViewDeletionTrackerClass.newInstance();
+        } catch (Exception e) {
+            LOGGER.error("Getting exception While initializing initMultiViewJobStatusTracker with error message");
+            LOGGER.error("stack trace" + e.getStackTrace().toString());
+            throw e;
+        }
+    }
+
+    @Override
+    protected void map(NullWritable key, ViewInfoTracker value, Context context) throws IOException  {
+        try {
+            final Configuration config = context.getConfiguration();
+
+            if (this.multiViewJobStatusTracker == null) {
+                initMultiViewJobStatusTracker(config);
+            }
+
+            LOGGER.debug(String.format("Deleting from view %s, TenantID %s, and TTL value: %d",
+                    value.getViewName(), value.getTenantId(), value.getPhoenixTtl()));
+
+            deletingExpiredRows(value, config, context);
+
+        } catch (SQLException e) {
+            LOGGER.error("Mapper got an exception while deleting expired rows : " + e.getMessage() );
+            throw new IOException(e.getMessage(), e.getCause());
+        } catch (Exception e) {
+            LOGGER.error("Getting IOException while running View TTL Deletion Job mapper with error : "
+                    + e.getMessage());
+            throw new IOException(e.getMessage(), e.getCause());
+        }
+    }
+
+    private void deletingExpiredRows(ViewInfoTracker value, Configuration config, Context context) throws Exception {

Review comment:
       nit: naming - "deleteExpiredRows"

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLTool.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.Properties;
+
+public class ViewTTLTool extends Configured implements Tool {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLTool.class);
+
+    public static enum MR_COUNTER_METRICS {
+        FAILED,
+        SUCCEED
+    }
+
+    public static final String ADDING_DELETION_MARKS_FOR_ALL_VIEWS = "ADDING_DELETION_MARKS_FOR_ALL_VIEWS";
+

Review comment:
       nit: remove empty lines?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLDeleteJobMapper.java
##########
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixResultSet;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.ViewInfoTracker;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable.ViewInfoJobState;
+import org.apache.phoenix.mapreduce.util.MultiViewJobStatusTracker;
+import org.apache.phoenix.mapreduce.util.DefaultMultiViewJobStatusTracker;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.Properties;
+
+public class ViewTTLDeleteJobMapper extends Mapper<NullWritable, ViewInfoTracker, NullWritable, NullWritable> {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLDeleteJobMapper.class);
+    private MultiViewJobStatusTracker multiViewJobStatusTracker;
+    private static final int DEFAULT_MAX_RETRIES = 3;
+    private static final int DEFAULT_RETRY_SLEEP_TIME_IN_MS = 10000;
+
+    private void initMultiViewJobStatusTracker(Configuration config) throws Exception {
+        try {
+            Class<?> defaultViewDeletionTrackerClass = DefaultMultiViewJobStatusTracker.class;
+            if (config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ) != null) {
+                LOGGER.info("Using customized tracker class : " +
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+                defaultViewDeletionTrackerClass = Class.forName(
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+            } else {
+                LOGGER.info("Using default tracker class ");
+            }
+            this.multiViewJobStatusTracker = (MultiViewJobStatusTracker) defaultViewDeletionTrackerClass.newInstance();
+        } catch (Exception e) {
+            LOGGER.error("Getting exception While initializing initMultiViewJobStatusTracker with error message");
+            LOGGER.error("stack trace" + e.getStackTrace().toString());
+            throw e;
+        }
+    }
+
+    @Override
+    protected void map(NullWritable key, ViewInfoTracker value, Context context) throws IOException  {
+        try {
+            final Configuration config = context.getConfiguration();
+
+            if (this.multiViewJobStatusTracker == null) {
+                initMultiViewJobStatusTracker(config);
+            }
+
+            LOGGER.debug(String.format("Deleting from view %s, TenantID %s, and TTL value: %d",
+                    value.getViewName(), value.getTenantId(), value.getPhoenixTtl()));
+
+            deletingExpiredRows(value, config, context);
+
+        } catch (SQLException e) {
+            LOGGER.error("Mapper got an exception while deleting expired rows : " + e.getMessage() );
+            throw new IOException(e.getMessage(), e.getCause());
+        } catch (Exception e) {
+            LOGGER.error("Getting IOException while running View TTL Deletion Job mapper with error : "
+                    + e.getMessage());
+            throw new IOException(e.getMessage(), e.getCause());
+        }
+    }
+
+    private void deletingExpiredRows(ViewInfoTracker value, Configuration config, Context context) throws Exception {
+        try (PhoenixConnection connection = (PhoenixConnection) ConnectionUtil.getInputConnection(config)) {
+            if (value.getTenantId() != null && !value.getTenantId().equals("NULL")) {
+                Properties props = new Properties();
+                props.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, value.getTenantId());
+
+                try (PhoenixConnection tenantConnection = (PhoenixConnection)
+                        DriverManager.getConnection(connection.getURL(), props)) {
+                    deletingExpiredRows(tenantConnection, value, config, context);
+                }
+            } else {
+                deletingExpiredRows(connection, value, config, context);
+            }
+        }
+    }
+
+    private void deletingExpiredRows(PhoenixConnection connection, ViewInfoTracker viewInfoTracker,
+                                     Configuration config, Context context) throws Exception {
+        try {
+            PTable ptable = PhoenixRuntime.getTable(connection, viewInfoTracker.getViewName());
+            String deleteIfExpiredStatement = "SELECT /*+ NO_INDEX */ count(*) FROM " + viewInfoTracker.getViewName();
+
+            if (viewInfoTracker.isIndexRelation()) {
+                ptable = PhoenixRuntime.getTable(connection, viewInfoTracker.getRelationName());
+                deleteIfExpiredStatement = "SELECT count(*) FROM " + viewInfoTracker.getRelationName();
+            }
+
+            deletingExpiredRows(connection, ptable, deleteIfExpiredStatement, config, context, viewInfoTracker);
+
+        } catch (Exception e) {
+            LOGGER.error(String.format("Had an issue to process the view: %s, " +
+                    "see error %s ", viewInfoTracker.toString(),e.getMessage()));
+        }
+    }
+
+    /*
+     * Each mapper that receives a MultiPhoenixViewInputSplit executes a DeleteMutation/Scan
+     * (with the DELETE_TTL_EXPIRED attribute) per view, for all the views and view indexes
+     * in the split. Each DeleteMutation is bounded by the view's start and stop keys for
+     * the region, the TTL attributes, and the delete hint.
+     */
+    private boolean deletingExpiredRows(PhoenixConnection connection, PTable pTable,
+                                        String deleteIfExpiredStatement, Configuration config,
+                                        Context context, ViewInfoTracker viewInfoTracker) throws Exception {
+
+        try (PhoenixStatement pstmt = new PhoenixStatement(connection).unwrap(PhoenixStatement.class)) {
+            String sourceTableName = pTable.getTableName().getString();
+            this.multiViewJobStatusTracker.updateJobStatus(viewInfoTracker, 0,
+                    ViewInfoJobState.PREP.getValue(), config, 0, context.getJobName(), sourceTableName);
+            final QueryPlan queryPlan = pstmt.optimizeQuery(deleteIfExpiredStatement);
+            final Scan scan = queryPlan.getContext().getScan();
+            byte[] emptyColumnFamilyName = SchemaUtil.getEmptyColumnFamily(pTable);
+            byte[] emptyColumnName =
+                    pTable.getEncodingScheme() == PTable.QualifierEncodingScheme.NON_ENCODED_QUALIFIERS ?
+                            QueryConstants.EMPTY_COLUMN_BYTES :
+                            pTable.getEncodingScheme().encode(QueryConstants.ENCODED_EMPTY_COLUMN_NAME);
+
+            scan.setAttribute(BaseScannerRegionObserver.EMPTY_COLUMN_FAMILY_NAME, emptyColumnFamilyName);
+            scan.setAttribute(BaseScannerRegionObserver.EMPTY_COLUMN_QUALIFIER_NAME, emptyColumnName);
+            scan.setAttribute(BaseScannerRegionObserver.DELETE_PHOENIX_TTL_EXPIRED, PDataType.TRUE_BYTES);
+            scan.setAttribute(BaseScannerRegionObserver.MASK_PHOENIX_TTL_EXPIRED, PDataType.FALSE_BYTES);
+            scan.setAttribute(BaseScannerRegionObserver.PHOENIX_TTL, Bytes.toBytes(viewInfoTracker.getPhoenixTtl()));

Review comment:
       Can you also set the attribute BaseScannerRegionObserver.PHOENIX_TTL_SCAN_TABLE_NAME?
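       Perhaps something like this sketch, assuming the attribute takes the table
       name as bytes (the exact contract is not visible in this diff):

           // Hypothetical sketch: tell the region observer which table the
           // TTL scan targets; the expected encoding is an assumption here.
           scan.setAttribute(BaseScannerRegionObserver.PHOENIX_TTL_SCAN_TABLE_NAME,
                   Bytes.toBytes(pTable.getName().getString()));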

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLTool.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.Properties;
+
+public class ViewTTLTool extends Configured implements Tool {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLTool.class);
+
+    public static enum MR_COUNTER_METRICS {
+        FAILED,
+        SUCCEED
+    }
+
+    public static final String ADDING_DELETION_MARKS_FOR_ALL_VIEWS = "ADDING_DELETION_MARKS_FOR_ALL_VIEWS";
+
+    public static final int DEFAULT_MAPPER_SPLIT_SIZE = 10;
+
+    public static final int DEFAULT_QUERY_BATCH_SIZE = 100;
+
+    private static final Option DELETE_ALL_VIEW_OPTION = new Option("a", "all", false,

Review comment:
       nit: use plural => DELETE_ALL_VIEWS_OPTION

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultMultiViewSplitStrategy.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.PhoenixMultiViewInputSplit;
+
+import java.util.List;
+
+import static org.apache.phoenix.mapreduce.ViewTTLTool.DEFAULT_MAPPER_SPLIT_SIZE;
+
+public class DefaultMultiViewSplitStrategy implements MultiViewSplitStrategy {
+
+    public List<InputSplit> generateSplits(List<ViewInfoWritable> views, Configuration configuration) {
+        int numViewsInSplit = PhoenixConfigurationUtil.getMultiViewSplitSize(configuration);
+
+        if (numViewsInSplit < 1) {

Review comment:
       numViewSplit <= 0, else can run into possible divide by zero later

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLTool.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.Properties;
+
+public class ViewTTLTool extends Configured implements Tool {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLTool.class);
+
+    public static enum MR_COUNTER_METRICS {
+        FAILED,
+        SUCCEED
+    }
+
+    public static final String ADDING_DELETION_MARKS_FOR_ALL_VIEWS = "ADDING_DELETION_MARKS_FOR_ALL_VIEWS";
+
+    public static final int DEFAULT_MAPPER_SPLIT_SIZE = 10;
+
+    public static final int DEFAULT_QUERY_BATCH_SIZE = 100;
+
+    private static final Option DELETE_ALL_VIEW_OPTION = new Option("a", "all", false,
+            "Delete all views from all tables.");
+    private static final Option VIEW_NAME_OPTION = new Option("v", "view", true,
+            "Delete Phoenix View Name");
+    private static final Option TENANT_ID_OPTION = new Option("i", "id", true,
+            "Delete an view based on the tenant id.");
+    private static final Option JOB_PRIORITY_OPTION = new Option("p", "job-priority", true,
+            "Define job priority from 0(highest) to 4");
+    private static final Option SPLIT_SIZE_OPTION = new Option("s", "split-size-per-mapper", true,
+            "Define split size for each mapper.");
+    private static final Option BATCH_SIZE_OPTION = new Option("b", "batch-size-for-query-more", true,
+            "Define batch size for fetching views metadata from syscat.");
+    private static final Option RUN_FOREGROUND_OPTION = new Option("runfg",
+            "run-foreground", false, "If specified, runs ViewTTLTool " +
+            "in Foreground. Default - Runs the build in background");
+
+    private static final Option HELP_OPTION = new Option("h", "help", false, "Help");
+
+    Configuration configuration;
+    Connection connection;
+
+    private String viewName;
+    private String tenantId;
+    private String jobName;
+    private boolean isDeletingAllViews;
+    private JobPriority jobPriority;
+    private boolean isForeground;
+    private int splitSize;
+    private int batchSize;
+    private Job job;
+
+    public void parseArgs(String[] args) {
+        CommandLine cmdLine;
+        try {
+            cmdLine = parseOptions(args);
+        } catch (IllegalStateException e) {
+            printHelpAndExit(e.getMessage(), getOptions());
+            throw e;
+        }
+
+        if (getConf() == null) {
+            setConf(HBaseConfiguration.create());
+        }
+
+        if (cmdLine.hasOption(DELETE_ALL_VIEW_OPTION.getOpt())) {
+            this.isDeletingAllViews = true;
+        } else if (cmdLine.hasOption(VIEW_NAME_OPTION.getOpt())) {
+            viewName = cmdLine.getOptionValue(VIEW_NAME_OPTION.getOpt());
+            this.isDeletingAllViews = false;
+        }
+
+        if (cmdLine.hasOption(TENANT_ID_OPTION.getOpt())) {
+            tenantId = cmdLine.getOptionValue((TENANT_ID_OPTION.getOpt()));
+        }
+
+        jobPriority = getJobPriority(cmdLine);
+        if (cmdLine.hasOption(SPLIT_SIZE_OPTION.getOpt())) {
+            splitSize = Integer.valueOf(cmdLine.getOptionValue(SPLIT_SIZE_OPTION.getOpt()));
+        } else {
+            splitSize = DEFAULT_MAPPER_SPLIT_SIZE;
+        }
+
+        if (cmdLine.hasOption(BATCH_SIZE_OPTION.getOpt())) {
+            batchSize = Integer.valueOf(cmdLine.getOptionValue(BATCH_SIZE_OPTION.getOpt()));
+        } else {
+            batchSize = DEFAULT_QUERY_BATCH_SIZE;
+        }
+
+        isForeground = cmdLine.hasOption(RUN_FOREGROUND_OPTION.getOpt());
+    }
+
+    public String getJobPriority() {
+        return this.jobPriority.toString();
+    }
+
+    private JobPriority getJobPriority(CommandLine cmdLine) {
+        String jobPriorityOption = cmdLine.getOptionValue(JOB_PRIORITY_OPTION.getOpt());
+        if (jobPriorityOption == null) {
+            return JobPriority.NORMAL;
+        }
+
+        switch (jobPriorityOption) {
+            case "0" : return JobPriority.VERY_HIGH;
+            case "1" : return JobPriority.HIGH;
+            case "2" : return JobPriority.NORMAL;
+            case "3" : return JobPriority.LOW;
+            case "4" : return JobPriority.VERY_LOW;
+            default:
+                return JobPriority.NORMAL;
+        }
+    }
+
+    public Job getJob() {
+        return this.job;
+    }
+
+    public boolean isDeletingAllViews() {
+        return this.isDeletingAllViews;
+    }
+
+    public String getTenantId() {
+        return this.tenantId;
+    }
+
+    public String getViewName() {
+        return this.viewName;
+    }
+
+    public int getSplitSize() {
+        return this.splitSize;
+    }
+
+    public int getBatchSize() {
+        return this.batchSize;
+    }
+
+    public CommandLine parseOptions(String[] args) {
+        final Options options = getOptions();
+        CommandLineParser parser = new PosixParser();
+        CommandLine cmdLine = null;
+        try {
+            cmdLine = parser.parse(options, args);
+        } catch (ParseException e) {
+            printHelpAndExit("Error parsing command line options: " + e.getMessage(), options);
+        }
+
+        if (!cmdLine.hasOption(DELETE_ALL_VIEW_OPTION.getOpt()) &&
+                !cmdLine.hasOption(VIEW_NAME_OPTION.getOpt()) &&
+                !cmdLine.hasOption(TENANT_ID_OPTION.getOpt())) {
+            throw new IllegalStateException("No deletion job is specified, " +
+                    "please indicate deletion job for ALL/TABLE/VIEW/TENANT level");
+        }
+
+        if (cmdLine.hasOption(HELP_OPTION.getOpt())) {
+            printHelpAndExit(options, 0);
+        }
+
+        this.jobPriority = getJobPriority(cmdLine);
+
+        return cmdLine;
+    }
+
+    private Options getOptions() {
+        final Options options = new Options();
+        options.addOption(DELETE_ALL_VIEW_OPTION);
+        options.addOption(VIEW_NAME_OPTION);
+        options.addOption(TENANT_ID_OPTION);
+        options.addOption(HELP_OPTION);
+        options.addOption(JOB_PRIORITY_OPTION);
+        options.addOption(RUN_FOREGROUND_OPTION);
+        options.addOption(SPLIT_SIZE_OPTION);
+        options.addOption(BATCH_SIZE_OPTION);
+
+        return options;
+    }
+
+    private void printHelpAndExit(String errorMessage, Options options) {
+        System.err.println(errorMessage);

Review comment:
       Do we want to log it too?
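       For instance, a sketch that mirrors the message to the logger before
       printing usage (assuming the existing two-argument overload is kept;
       the exit code here is an assumption):

           private void printHelpAndExit(String errorMessage, Options options) {
               // Surface the failure in the tool's log as well as on stderr.
               LOGGER.error(errorMessage);
               System.err.println(errorMessage);
               printHelpAndExit(options, 1);
           }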

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLTool.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.Properties;
+
+public class ViewTTLTool extends Configured implements Tool {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLTool.class);
+
+    public static enum MR_COUNTER_METRICS {
+        FAILED,
+        SUCCEED
+    }
+
+    public static final String ADDING_DELETION_MARKS_FOR_ALL_VIEWS = "ADDING_DELETION_MARKS_FOR_ALL_VIEWS";
+
+    public static final int DEFAULT_MAPPER_SPLIT_SIZE = 10;
+
+    public static final int DEFAULT_QUERY_BATCH_SIZE = 100;
+
+    private static final Option DELETE_ALL_VIEW_OPTION = new Option("a", "all", false,
+            "Delete all views from all tables.");
+    private static final Option VIEW_NAME_OPTION = new Option("v", "view", true,
+            "Delete Phoenix View Name");
+    private static final Option TENANT_ID_OPTION = new Option("i", "id", true,
+            "Delete an view based on the tenant id.");
+    private static final Option JOB_PRIORITY_OPTION = new Option("p", "job-priority", true,
+            "Define job priority from 0(highest) to 4");
+    private static final Option SPLIT_SIZE_OPTION = new Option("s", "split-size-per-mapper", true,
+            "Define split size for each mapper.");
+    private static final Option BATCH_SIZE_OPTION = new Option("b", "batch-size-for-query-more", true,
+            "Define batch size for fetching views metadata from syscat.");
+    private static final Option RUN_FOREGROUND_OPTION = new Option("runfg",
+            "run-foreground", false, "If specified, runs ViewTTLTool " +
+            "in Foreground. Default - Runs the build in background");
+
+    private static final Option HELP_OPTION = new Option("h", "help", false, "Help");
+
+    Configuration configuration;
+    Connection connection;

Review comment:
       Any particular reason these have default visibility rather than private?
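       A sketch of the narrowed visibility (unless tests rely on
       package-private access, which would be worth a comment):

           private Configuration configuration;
           private Connection connection;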

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultMultiViewSplitStrategy.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.PhoenixMultiViewInputSplit;
+
+import java.util.List;
+
+import static org.apache.phoenix.mapreduce.ViewTTLTool.DEFAULT_MAPPER_SPLIT_SIZE;
+
+public class DefaultMultiViewSplitStrategy implements MultiViewSplitStrategy {
+
+    public List<InputSplit> generateSplits(List<ViewInfoWritable> views, Configuration configuration) {
+        int numViewsInSplit = PhoenixConfigurationUtil.getMultiViewSplitSize(configuration);
+
+        if (numViewsInSplit < 1) {
+            numViewsInSplit = DEFAULT_MAPPER_SPLIT_SIZE;
+        }
+
+        int numberOfMappers = views.size() / numViewsInSplit;
+        if (views.size() % numViewsInSplit > 0) {
+            numberOfMappers++;
+        }
+
+        final List<InputSplit> psplits = Lists.newArrayListWithExpectedSize(numberOfMappers);

Review comment:
       nit: camelCase variable naming

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLTool.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.Properties;
+
+public class ViewTTLTool extends Configured implements Tool {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLTool.class);
+
+    public static enum MR_COUNTER_METRICS {
+        FAILED,
+        SUCCEED
+    }
+
+    public static final String ADDING_DELETION_MARKS_FOR_ALL_VIEWS = "ADDING_DELETION_MARKS_FOR_ALL_VIEWS";

Review comment:
       Do you want to rename this variable and its value to DELETE_ALL_VIEWS?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultPhoenixMultiViewListProvider.java
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.nio.charset.StandardCharsets;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class DefaultPhoenixMultiViewListProvider implements PhoenixMultiViewListProvider {

Review comment:
       nit: class comments
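       For example, a Javadoc along these lines (wording is a suggestion,
       not taken from the patch):

           /**
            * Default implementation of {@link PhoenixMultiViewListProvider} that
            * queries the syscat for views (and their view indexes) with a
            * PHOENIX_TTL value set, and returns them as {@link ViewInfoWritable}s
            * for the multi-view input format.
            */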

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultMultiViewJobStatusTracker.java
##########
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable.ViewInfoJobState;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class DefaultMultiViewJobStatusTracker implements MultiViewJobStatusTracker {

Review comment:
       nit: class comments

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultPhoenixMultiViewListProvider.java
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.nio.charset.StandardCharsets;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class DefaultPhoenixMultiViewListProvider implements PhoenixMultiViewListProvider {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(DefaultPhoenixMultiViewListProvider.class);
+
+    public List<ViewInfoWritable> getPhoenixMultiViewList(Configuration configuration) {
+        List<ViewInfoWritable> viewInfoWritables = new ArrayList<>();
+
+        String query = PhoenixMultiInputUtil.getFetchViewQuery(configuration);
+        boolean isQueryMore = configuration.get(
+                PhoenixConfigurationUtil.MAPREDUCE_PHOENIX_TTL_DELETE_JOB_ALL_VIEWS) != null;
+        int limit = PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(configuration);
+        try (PhoenixConnection connection = (PhoenixConnection)
+                ConnectionUtil.getInputConnection(configuration)){
+            try (Statement stmt = connection.createStatement()) {
+                do {
+                    ResultSet viewRs = stmt.executeQuery(query);
+                    String schema = null;
+                    String tableName = null;
+                    String tenantId = null;
+                    String fullTableName = null;
+
+                    while (viewRs.next()) {
+                        schema = viewRs.getString(2);
+                        tableName = viewRs.getString(3);
+                        tenantId = viewRs.getString(1);
+                        fullTableName = tableName;
+                        Long viewTtlValue = viewRs.getLong(4);
+
+                        if (schema != null && schema.length() > 0) {
+                            fullTableName = SchemaUtil.getTableName(schema, tableName);
+                        }
+
+                        boolean skip = false;
+                        PTable pTable = null;
+                        try {
+                            pTable = PhoenixRuntime.getTable(connection, tenantId, fullTableName);
+                            // we currently only support up to three levels
+                            // CASE 1 : BASE_TABLE -> GLOBAL_VIEW -> TENANT_VIEW
+                            // CASE 2 : BASE_TABLE -> TENANT_VIEW
+                            // CASE 2 : BASE_TABLE -> VIEW
+                            PTable parentTable = PhoenixRuntime.getTable(connection, null,
+                                    pTable.getParentName().toString());
+                            if (parentTable.getType() == PTableType.VIEW &&

Review comment:
       Can you add comments here, please?
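       For example, a sketch of the kind of comment that would help (the
       rationale below is inferred from the surrounding code, so please
       correct it if the intent differs):

           // If the parent is itself a view with PHOENIX_TTL set, the TTL is
           // handled at the parent level, so skip this view to avoid issuing
           // duplicate delete scans over the same rows.
           if (parentTable.getType() == PTableType.VIEW &&
                   parentTable.getPhoenixTTL() > 0) {
               skip = true;
           }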

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultMultiViewSplitStrategy.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.PhoenixMultiViewInputSplit;
+
+import java.util.List;
+
+import static org.apache.phoenix.mapreduce.ViewTTLTool.DEFAULT_MAPPER_SPLIT_SIZE;
+
+public class DefaultMultiViewSplitStrategy implements MultiViewSplitStrategy {

Review comment:
       nit: class comments

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLTool.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.Properties;
+
+public class ViewTTLTool extends Configured implements Tool {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLTool.class);
+
+    public static enum MR_COUNTER_METRICS {
+        FAILED,
+        SUCCEED
+    }
+
+    public static final String ADDING_DELETION_MARKS_FOR_ALL_VIEWS = "ADDING_DELETION_MARKS_FOR_ALL_VIEWS";
+
+    public static final int DEFAULT_MAPPER_SPLIT_SIZE = 10;
+
+    public static final int DEFAULT_QUERY_BATCH_SIZE = 100;
+
+    private static final Option DELETE_ALL_VIEW_OPTION = new Option("a", "all", false,
+            "Delete all views from all tables.");
+    private static final Option VIEW_NAME_OPTION = new Option("v", "view", true,
+            "Delete Phoenix View Name");
+    private static final Option TENANT_ID_OPTION = new Option("i", "id", true,
+            "Delete an view based on the tenant id.");
+    private static final Option JOB_PRIORITY_OPTION = new Option("p", "job-priority", true,
+            "Define job priority from 0(highest) to 4");
+    private static final Option SPLIT_SIZE_OPTION = new Option("s", "split-size-per-mapper", true,
+            "Define split size for each mapper.");
+    private static final Option BATCH_SIZE_OPTION = new Option("b", "batch-size-for-query-more", true,
+            "Define batch size for fetching views metadata from syscat.");
+    private static final Option RUN_FOREGROUND_OPTION = new Option("runfg",
+            "run-foreground", false, "If specified, runs ViewTTLTool " +
+            "in Foreground. Default - Runs the build in background");
+
+    private static final Option HELP_OPTION = new Option("h", "help", false, "Help");
+
+    Configuration configuration;
+    Connection connection;
+
+    private String viewName;
+    private String tenantId;
+    private String jobName;
+    private boolean isDeletingAllViews;
+    private JobPriority jobPriority;
+    private boolean isForeground;
+    private int splitSize;
+    private int batchSize;
+    private Job job;
+
+    public void parseArgs(String[] args) {
+        CommandLine cmdLine;
+        try {
+            cmdLine = parseOptions(args);
+        } catch (IllegalStateException e) {
+            printHelpAndExit(e.getMessage(), getOptions());
+            throw e;
+        }
+
+        if (getConf() == null) {
+            setConf(HBaseConfiguration.create());
+        }
+
+        if (cmdLine.hasOption(DELETE_ALL_VIEW_OPTION.getOpt())) {
+            this.isDeletingAllViews = true;
+        } else if (cmdLine.hasOption(VIEW_NAME_OPTION.getOpt())) {
+            viewName = cmdLine.getOptionValue(VIEW_NAME_OPTION.getOpt());
+            this.isDeletingAllViews = false;
+        }
+
+        if (cmdLine.hasOption(TENANT_ID_OPTION.getOpt())) {
+            tenantId = cmdLine.getOptionValue((TENANT_ID_OPTION.getOpt()));
+        }
+
+        jobPriority = getJobPriority(cmdLine);
+        if (cmdLine.hasOption(SPLIT_SIZE_OPTION.getOpt())) {
+            splitSize = Integer.valueOf(cmdLine.getOptionValue(SPLIT_SIZE_OPTION.getOpt()));
+        } else {
+            splitSize = DEFAULT_MAPPER_SPLIT_SIZE;
+        }
+
+        if (cmdLine.hasOption(BATCH_SIZE_OPTION.getOpt())) {
+            batchSize = Integer.valueOf(cmdLine.getOptionValue(BATCH_SIZE_OPTION.getOpt()));
+        } else {
+            batchSize = DEFAULT_QUERY_BATCH_SIZE;
+        }
+
+        isForeground = cmdLine.hasOption(RUN_FOREGROUND_OPTION.getOpt());
+    }
+
+    public String getJobPriority() {
+        return this.jobPriority.toString();
+    }
+
+    private JobPriority getJobPriority(CommandLine cmdLine) {
+        String jobPriorityOption = cmdLine.getOptionValue(JOB_PRIORITY_OPTION.getOpt());
+        if (jobPriorityOption == null) {
+            return JobPriority.NORMAL;
+        }
+
+        switch (jobPriorityOption) {
+            case "0" : return JobPriority.VERY_HIGH;
+            case "1" : return JobPriority.HIGH;
+            case "2" : return JobPriority.NORMAL;
+            case "3" : return JobPriority.LOW;
+            case "4" : return JobPriority.VERY_LOW;
+            default:
+                return JobPriority.NORMAL;
+        }
+    }
+
+    public Job getJob() {
+        return this.job;
+    }
+
+    public boolean isDeletingAllViews() {
+        return this.isDeletingAllViews;
+    }
+
+    public String getTenantId() {
+        return this.tenantId;
+    }
+
+    public String getViewName() {
+        return this.viewName;
+    }
+
+    public int getSplitSize() {
+        return this.splitSize;
+    }
+
+    public int getBatchSize() {
+        return this.batchSize;
+    }
+
+    public CommandLine parseOptions(String[] args) {
+        final Options options = getOptions();
+        CommandLineParser parser = new PosixParser();
+        CommandLine cmdLine = null;
+        try {
+            cmdLine = parser.parse(options, args);
+        } catch (ParseException e) {
+            printHelpAndExit("Error parsing command line options: " + e.getMessage(), options);
+        }
+
+        if (!cmdLine.hasOption(DELETE_ALL_VIEW_OPTION.getOpt()) &&
+                !cmdLine.hasOption(VIEW_NAME_OPTION.getOpt()) &&
+                !cmdLine.hasOption(TENANT_ID_OPTION.getOpt())) {
+            throw new IllegalStateException("No deletion job is specified, " +
+                    "please indicate deletion job for ALL/TABLE/VIEW/TENANT level");
+        }
+
+        if (cmdLine.hasOption(HELP_OPTION.getOpt())) {
+            printHelpAndExit(options, 0);
+        }
+
+        this.jobPriority = getJobPriority(cmdLine);

Review comment:
       This is getting called in multiple places, can we consolidate?
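       For example (a sketch): parseOptions(args) already assigns
       this.jobPriority, so parseArgs(...) could drop its own call and rely
       on the field:

           public void parseArgs(String[] args) {
               // parseOptions(args) resolves and stores this.jobPriority once,
               // so no second getJobPriority(cmdLine) call is needed here.
               CommandLine cmdLine = parseOptions(args);
               // ... handle the remaining options as before ...
           }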

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixConfigurationUtil.java
##########
@@ -166,6 +166,27 @@
 
     public static final String MAPREDUCE_JOB_TYPE = "phoenix.mapreduce.jobtype";
 
+    // group number of views per mapper to run the deletion job
+    public static final String MAPREDUCE_MULTI_INPUT_MAPPER_SPLIT_SIZE = "phoenix.mapreduce.multi.input.split.size";
+
+    public static final String MAPREDUCE_MULTI_INPUT_QUERY_BATCH_SIZE = "phoenix.mapreduce.multi.input.batch.size";
+
+    // phoenix ttl data deletion job for a specific view
+    public static final String MAPREDUCE_PHOENIX_TTL_DELETE_JOB_PER_VIEW = "phoenix.mapreduce.view_ttl.view";

Review comment:
       nit: variable name and value do not match

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultMultiViewSplitStrategy.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.PhoenixMultiViewInputSplit;
+
+import java.util.List;
+
+import static org.apache.phoenix.mapreduce.ViewTTLTool.DEFAULT_MAPPER_SPLIT_SIZE;
+
+public class DefaultMultiViewSplitStrategy implements MultiViewSplitStrategy {
+
+    public List<InputSplit> generateSplits(List<ViewInfoWritable> views, Configuration configuration) {
+        int numViewsInSplit = PhoenixConfigurationUtil.getMultiViewSplitSize(configuration);
+
+        if (numViewsInSplit < 1) {
+            numViewsInSplit = DEFAULT_MAPPER_SPLIT_SIZE;
+        }
+
+        int numberOfMappers = views.size() / numViewsInSplit;
+        if (views.size() % numViewsInSplit > 0) {
+            numberOfMappers++;
+        }
+
+        final List<InputSplit> psplits = Lists.newArrayListWithExpectedSize(numberOfMappers);
+        // Split the views into splits
+
+        for (int i = 0; i < numberOfMappers; i++) {
+            psplits.add(new PhoenixMultiViewInputSplit(views.subList(
+                    i * numViewsInSplit, getUpperBound(numViewsInSplit, i, views.size()))));
+        }
+
+        return psplits;
+    }
+
+    public int getUpperBound(int numViewsInSplit, int i, int viewSize) {

Review comment:
       Does it need to be public?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultMultiViewSplitStrategy.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.PhoenixMultiViewInputSplit;
+
+import java.util.List;
+
+import static org.apache.phoenix.mapreduce.ViewTTLTool.DEFAULT_MAPPER_SPLIT_SIZE;
+
+public class DefaultMultiViewSplitStrategy implements MultiViewSplitStrategy {
+
+    public List<InputSplit> generateSplits(List<ViewInfoWritable> views, Configuration configuration) {
+        int numViewsInSplit = PhoenixConfigurationUtil.getMultiViewSplitSize(configuration);
+

Review comment:
       Check for empty views?
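       For example, a sketch of an early return for the empty case (assuming
       java.util.Collections is imported):

           // Bail out early when there is nothing to split; this also avoids
           // the modulo/division on an empty view list below.
           if (views == null || views.isEmpty()) {
               return Collections.emptyList();
           }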

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultPhoenixMultiViewListProvider.java
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.nio.charset.StandardCharsets;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class DefaultPhoenixMultiViewListProvider implements PhoenixMultiViewListProvider {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(DefaultPhoenixMultiViewListProvider.class);
+
+    public List<ViewInfoWritable> getPhoenixMultiViewList(Configuration configuration) {
+        List<ViewInfoWritable> viewInfoWritables = new ArrayList<>();
+
+        String query = PhoenixMultiInputUtil.getFetchViewQuery(configuration);
+        boolean isQueryMore = configuration.get(
+                PhoenixConfigurationUtil.MAPREDUCE_PHOENIX_TTL_DELETE_JOB_ALL_VIEWS) != null;
+        int limit = PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(configuration);
+        try (PhoenixConnection connection = (PhoenixConnection)
+                ConnectionUtil.getInputConnection(configuration)){
+            try (Statement stmt = connection.createStatement()) {
+                do {
+                    ResultSet viewRs = stmt.executeQuery(query);
+                    String schema = null;
+                    String tableName = null;
+                    String tenantId = null;
+                    String fullTableName = null;
+
+                    while (viewRs.next()) {
+                        schema = viewRs.getString(2);
+                        tableName = viewRs.getString(3);
+                        tenantId = viewRs.getString(1);
+                        fullTableName = tableName;
+                        Long viewTtlValue = viewRs.getLong(4);
+
+                        if (schema != null && schema.length() > 0) {
+                            fullTableName = SchemaUtil.getTableName(schema, tableName);
+                        }
+
+                        boolean skip = false;
+                        PTable pTable = null;
+                        try {
+                            pTable = PhoenixRuntime.getTable(connection, tenantId, fullTableName);
+                            // we currently only support up to three levels

Review comment:
       Why only up to 3 levels?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultPhoenixMultiViewListProvider.java
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.nio.charset.StandardCharsets;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class DefaultPhoenixMultiViewListProvider implements PhoenixMultiViewListProvider {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(DefaultPhoenixMultiViewListProvider.class);
+
+    public List<ViewInfoWritable> getPhoenixMultiViewList(Configuration configuration) {
+        List<ViewInfoWritable> viewInfoWritables = new ArrayList<>();
+
+        String query = PhoenixMultiInputUtil.getFetchViewQuery(configuration);
+        boolean isQueryMore = configuration.get(
+                PhoenixConfigurationUtil.MAPREDUCE_PHOENIX_TTL_DELETE_JOB_ALL_VIEWS) != null;
+        int limit = PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(configuration);
+        try (PhoenixConnection connection = (PhoenixConnection)
+                ConnectionUtil.getInputConnection(configuration)){
+            try (Statement stmt = connection.createStatement()) {
+                do {
+                    ResultSet viewRs = stmt.executeQuery(query);
+                    String schema = null;
+                    String tableName = null;
+                    String tenantId = null;
+                    String fullTableName = null;
+
+                    while (viewRs.next()) {
+                        schema = viewRs.getString(2);
+                        tableName = viewRs.getString(3);
+                        tenantId = viewRs.getString(1);
+                        fullTableName = tableName;
+                        Long viewTtlValue = viewRs.getLong(4);
+
+                        if (schema != null && schema.length() > 0) {
+                            fullTableName = SchemaUtil.getTableName(schema, tableName);
+                        }
+
+                        boolean skip = false;
+                        PTable pTable = null;
+                        try {
+                            pTable = PhoenixRuntime.getTable(connection, tenantId, fullTableName);
+                            // we currently only support up to three levels
+                            // CASE 1 : BASE_TABLE -> GLOBAL_VIEW -> TENANT_VIEW
+                            // CASE 2 : BASE_TABLE -> TENANT_VIEW
+                            // CASE 2 : BASE_TABLE -> VIEW
+                            PTable parentTable = PhoenixRuntime.getTable(connection, null,
+                                    pTable.getParentName().toString());
+                            if (parentTable.getType() == PTableType.VIEW &&
+                                    parentTable.getPhoenixTTL() > 0) {
+                                skip = true;
+                            }
+                        } catch (Exception e) {
+                            skip = true;
+                            LOGGER.error(String.format("Had an issue to process the view: %s, tenantId:" +
+                                    "see error %s ", fullTableName, tenantId, e.getMessage()));
+                        }
+
+                        if (!skip) {
+                            ViewInfoWritable viewInfoTracker = new ViewInfoTracker(
+                                    tenantId,
+                                    fullTableName,
+                                    viewTtlValue,
+                                    pTable.getPhysicalName().getString(),
+                                    false
+
+                            );
+                            viewInfoWritables.add(viewInfoTracker);
+
+                            List<PTable> allIndexesOnView = pTable.getIndexes();
+                            for (PTable viewIndexTable : allIndexesOnView) {
+                                String indexName = viewIndexTable.getTableName().getString();
+                                String indexSchema = viewIndexTable.getSchemaName().getString();
+                                if (indexName.contains(QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR)) {
+                                    indexName = SchemaUtil.getTableNameFromFullName(indexName,
+                                            QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR);
+                                }
+                                indexName = SchemaUtil.getTableNameFromFullName(indexName);
+                                indexName = SchemaUtil.getTableName(indexSchema, indexName);
+                                ViewInfoWritable viewInfoTrackerForIndexEntry = new ViewInfoTracker(
+                                        tenantId,
+                                        fullTableName,
+                                        viewTtlValue,
+                                        indexName,
+                                        true
+
+                                );
+                                viewInfoWritables.add(viewInfoTrackerForIndexEntry);
+                            }
+                        }
+                    }
+                    if (isQueryMore) {
+                        if (fullTableName == null) {

Review comment:
       fullTableName is not being used anywhere after this, so why the check?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r528000931



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLTool.java
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.Properties;
+
+public class ViewTTLTool extends Configured implements Tool {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLTool.class);
+
+    public enum MR_COUNTER_METRICS {
+        FAILED,
+        SUCCEED
+    }
+
+    public static final String ADDING_DELETION_MARKS_FOR_ALL_VIEWS = "ADDING_DELETION_MARKS_FOR_ALL_VIEWS";
+
+    public static final int DEFAULT_MAPPER_SPLIT_SIZE = 10;
+
+    public static final int DEFAULT_QUERY_BATCH_SIZE = 100;
+
+    private static final Option DELETE_ALL_VIEW_OPTION = new Option("a", "all", false,
+            "Delete all views from all tables.");
+    private static final Option VIEW_NAME_OPTION = new Option("v", "view", true,
+            "Name of the Phoenix view to run the deletion job against.");
+    private static final Option TENANT_ID_OPTION = new Option("i", "id", true,
+            "Delete a view based on the tenant id.");
+    private static final Option JOB_PRIORITY_OPTION = new Option("p", "job-priority", true,
+            "Define job priority from 0 (highest) to 4 (lowest).");
+    private static final Option SPLIT_SIZE_OPTION = new Option("s", "split-size-per-mapper", true,
+            "Define split size for each mapper.");
+    private static final Option BATCH_SIZE_OPTION = new Option("b", "batch-size-for-query-more", true,
+            "Define batch size for fetching views metadata from syscat.");
+    private static final Option RUN_FOREGROUND_OPTION = new Option("runfg",
+            "run-foreground", false, "If specified, runs ViewTTLTool " +
+            "in the foreground. By default the job runs in the background.");
+
+    private static final Option HELP_OPTION = new Option("h", "help", false, "Help");
+
+    Configuration configuration;
+    Connection connection;
+
+    private String viewName;
+    private String tenantId;
+    private String jobName;
+    private boolean isDeletingAllViews;
+    private JobPriority jobPriority;
+    private boolean isForeground;
+    private int splitSize;
+    private int batchSize;
+    private Job job;
+
+    public void parseArgs(String[] args) {
+        CommandLine cmdLine;
+        try {
+            cmdLine = parseOptions(args);
+        } catch (IllegalStateException e) {
+            printHelpAndExit(e.getMessage(), getOptions());
+            throw e;
+        }
+
+        if (getConf() == null) {
+            setConf(HBaseConfiguration.create());
+        }
+
+        if (cmdLine.hasOption(DELETE_ALL_VIEW_OPTION.getOpt())) {
+            this.isDeletingAllViews = true;
+        } else if (cmdLine.hasOption(VIEW_NAME_OPTION.getOpt())) {
+            viewName = cmdLine.getOptionValue(VIEW_NAME_OPTION.getOpt());
+            this.isDeletingAllViews = false;
+        }
+
+        if (cmdLine.hasOption(TENANT_ID_OPTION.getOpt())) {
+            tenantId = cmdLine.getOptionValue((TENANT_ID_OPTION.getOpt()));
+        }
+
+        jobPriority = getJobPriority(cmdLine);
+        if (cmdLine.hasOption(SPLIT_SIZE_OPTION.getOpt())) {
+            splitSize = Integer.parseInt(cmdLine.getOptionValue(SPLIT_SIZE_OPTION.getOpt()));
+        } else {
+            splitSize = DEFAULT_MAPPER_SPLIT_SIZE;
+        }
+
+        if (cmdLine.hasOption(BATCH_SIZE_OPTION.getOpt())) {
+            batchSize = Integer.parseInt(cmdLine.getOptionValue(BATCH_SIZE_OPTION.getOpt()));
+        } else {
+            batchSize = DEFAULT_QUERY_BATCH_SIZE;
+        }
+
+        isForeground = cmdLine.hasOption(RUN_FOREGROUND_OPTION.getOpt());
+    }
+
+    public String getJobPriority() {
+        return this.jobPriority.toString();
+    }
+
+    private JobPriority getJobPriority(CommandLine cmdLine) {
+        String jobPriorityOption = cmdLine.getOptionValue(JOB_PRIORITY_OPTION.getOpt());
+        if (jobPriorityOption == null) {
+            return JobPriority.NORMAL;
+        }
+
+        switch (jobPriorityOption) {
+            case "0" : return JobPriority.VERY_HIGH;
+            case "1" : return JobPriority.HIGH;
+            case "2" : return JobPriority.NORMAL;
+            case "3" : return JobPriority.LOW;
+            case "4" : return JobPriority.VERY_LOW;
+            default:
+                return JobPriority.NORMAL;
+        }
+    }
+
+    public Job getJob() {
+        return this.job;
+    }
+
+    public boolean isDeletingAllViews() {
+        return this.isDeletingAllViews;
+    }
+
+    public String getTenantId() {
+        return this.tenantId;
+    }
+
+    public String getViewName() {
+        return this.viewName;
+    }
+
+    public int getSplitSize() {
+        return this.splitSize;
+    }
+
+    public int getBatchSize() {
+        return this.batchSize;
+    }
+
+    public CommandLine parseOptions(String[] args) {
+        final Options options = getOptions();
+        CommandLineParser parser = new PosixParser();
+        CommandLine cmdLine = null;
+        try {
+            cmdLine = parser.parse(options, args);
+        } catch (ParseException e) {
+            printHelpAndExit("Error parsing command line options: " + e.getMessage(), options);
+        }
+
+        // Handle -h/--help before validating other options so that asking for
+        // help alone does not trigger the missing-option error below.
+        if (cmdLine.hasOption(HELP_OPTION.getOpt())) {
+            printHelpAndExit(options, 0);
+        }
+
+        if (!cmdLine.hasOption(DELETE_ALL_VIEW_OPTION.getOpt()) &&
+                !cmdLine.hasOption(VIEW_NAME_OPTION.getOpt()) &&
+                !cmdLine.hasOption(TENANT_ID_OPTION.getOpt())) {
+            throw new IllegalStateException("No deletion job is specified; " +
+                    "please specify a deletion job at the ALL/TABLE/VIEW/TENANT level");
+        }
+
+        this.jobPriority = getJobPriority(cmdLine);
+
+        return cmdLine;
+    }
+
+    private Options getOptions() {
+        final Options options = new Options();
+        options.addOption(DELETE_ALL_VIEW_OPTION);
+        options.addOption(VIEW_NAME_OPTION);
+        options.addOption(TENANT_ID_OPTION);
+        options.addOption(HELP_OPTION);
+        options.addOption(JOB_PRIORITY_OPTION);
+        options.addOption(RUN_FOREGROUND_OPTION);
+        options.addOption(SPLIT_SIZE_OPTION);
+        options.addOption(BATCH_SIZE_OPTION);
+
+        return options;
+    }
+
+    private void printHelpAndExit(String errorMessage, Options options) {
+        System.err.println(errorMessage);

Review comment:
       added
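
       For reference, the overload pair this error path feeds into looks roughly like
       the following; this is a sketch only, with the exit codes inferred from the call
       sites above (printHelpAndExit(options, 0) for help) rather than quoted from the
       patch:

    import org.apache.commons.cli.HelpFormatter;
    import org.apache.commons.cli.Options;

    class HelpExitSketch {
        // Error path: report the problem on stderr, then print usage and exit non-zero.
        private void printHelpAndExit(String errorMessage, Options options) {
            System.err.println(errorMessage);
            printHelpAndExit(options, 1);
        }

        // Help path: print usage and exit with the given status (0 for -h/--help).
        private void printHelpAndExit(Options options, int exitCode) {
            HelpFormatter formatter = new HelpFormatter();
            formatter.printHelp("help", options);
            System.exit(exitCode);
        }
    }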







[GitHub] [phoenix] yanxinyi commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r528034033



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultPhoenixMultiViewListProvider.java
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.ViewUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.nio.charset.StandardCharsets;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.List;
+
+
+public class DefaultPhoenixMultiViewListProvider implements PhoenixMultiViewListProvider {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(DefaultPhoenixMultiViewListProvider.class);
+
+    public List<ViewInfoWritable> getPhoenixMultiViewList(Configuration configuration) {
+        List<ViewInfoWritable> viewInfoWritables = new ArrayList<>();
+
+        String query = PhoenixMultiInputUtil.getFetchViewQuery(configuration);
+        boolean isQueryMore = configuration.get(
+                PhoenixConfigurationUtil.MAPREDUCE_PHOENIX_TTL_DELETE_JOB_ALL_VIEWS) != null;
+        int limit = PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(configuration);
+        try (PhoenixConnection connection = (PhoenixConnection)
+                ConnectionUtil.getInputConnection(configuration)){
+            try (Statement stmt = connection.createStatement()) {
+                do {
+                    ResultSet viewRs = stmt.executeQuery(query);
+                    String schema = null;
+                    String tableName = null;
+                    String tenantId = null;
+                    String fullTableName = null;
+
+                    while (viewRs.next()) {
+                        schema = viewRs.getString(2);
+                        tableName = viewRs.getString(3);
+                        tenantId = viewRs.getString(1);
+                        fullTableName = tableName;
+                        long viewTtlValue = viewRs.getLong(4);
+
+                        if (schema != null && schema.length() > 0) {
+                            fullTableName = SchemaUtil.getTableName(schema, tableName);
+                        }
+
+                        boolean skip = false;
+                        PTable pTable = null;
+                        try {
+                            pTable = PhoenixRuntime.getTable(connection, tenantId, fullTableName);
+                            // we currently only support up to three levels
+                            // CASE 1 : BASE_TABLE -> GLOBAL_VIEW -> TENANT_VIEW
+                            // CASE 2 : BASE_TABLE -> TENANT_VIEW
+                            // CASE 3 : BASE_TABLE -> VIEW
+                            PTable parentTable = PhoenixRuntime.getTable(connection, null,
+                                    pTable.getParentName().toString());
+                            if (parentTable.getType() == PTableType.VIEW &&

Review comment:
       added
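
       For context, the truncated condition above distinguishes the three supported
       hierarchies by inspecting the parent's type; a hedged sketch of that idea follows
       (the exact condition in the patch may differ):

    import java.sql.SQLException;

    import org.apache.phoenix.jdbc.PhoenixConnection;
    import org.apache.phoenix.schema.PTable;
    import org.apache.phoenix.schema.PTableType;
    import org.apache.phoenix.util.PhoenixRuntime;

    class ViewLevelSketch {
        // Sketch: a view whose parent is itself a VIEW is at the third level
        // (BASE_TABLE -> GLOBAL_VIEW -> TENANT_VIEW); a parent of type TABLE
        // means the view sits directly on the base table (cases 2 and 3).
        static boolean isThirdLevelView(PhoenixConnection connection, PTable view)
                throws SQLException {
            PTable parent = PhoenixRuntime.getTable(connection, null,
                    view.getParentName().toString());
            return parent.getType() == PTableType.VIEW;
        }
    }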







[GitHub] [phoenix] yanxinyi commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r528026223



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/DefaultMultiViewSplitStrategy.java
##########
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce.util;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.phoenix.mapreduce.PhoenixMultiViewInputSplit;
+
+import java.util.List;
+
+import static org.apache.phoenix.mapreduce.ViewTTLTool.DEFAULT_MAPPER_SPLIT_SIZE;
+
+public class DefaultMultiViewSplitStrategy implements MultiViewSplitStrategy {
+
+    public List<InputSplit> generateSplits(List<ViewInfoWritable> views, Configuration configuration) {
+        int numViewsInSplit = PhoenixConfigurationUtil.getMultiViewSplitSize(configuration);
+
+        if (numViewsInSplit < 1) {

Review comment:
       `< 1` and `<= 0` are the same thing for ints, right?
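
       Side note: the QA bot later flags an "integral value cast to double and then
       passed to Math.ceil" in this class; a pure-integer ceiling sidesteps both the
       cast and the spotbugs warning. A minimal sketch (the method name is illustrative):

    class SplitMathSketch {
        // Integer ceiling division: the number of splits needed so that each split
        // holds at most numViewsInSplit views. For positive inputs this equals
        // (int) Math.ceil((double) totalViews / numViewsInSplit), without the
        // int-to-double round trip.
        static int numberOfSplits(int totalViews, int numViewsInSplit) {
            return (totalViews + numViewsInSplit - 1) / numViewsInSplit;
        }
    }

       For example, 25 views with 10 views per mapper yields 3 splits, and 20 views
       yields exactly 2.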







[GitHub] [phoenix] yanxinyi commented on a change in pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #975:
URL: https://github.com/apache/phoenix/pull/975#discussion_r527994427



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/ViewTTLDeleteJobMapper.java
##########
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixResultSet;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.mapreduce.util.ConnectionUtil;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.mapreduce.util.ViewInfoTracker;
+import org.apache.phoenix.mapreduce.util.ViewInfoWritable.ViewInfoJobState;
+import org.apache.phoenix.mapreduce.util.MultiViewJobStatusTracker;
+import org.apache.phoenix.mapreduce.util.DefaultMultiViewJobStatusTracker;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.Properties;
+
+public class ViewTTLDeleteJobMapper extends Mapper<NullWritable, ViewInfoTracker, NullWritable, NullWritable> {
+    private static final Logger LOGGER = LoggerFactory.getLogger(ViewTTLDeleteJobMapper.class);
+    private MultiViewJobStatusTracker multiViewJobStatusTracker;
+    private static final int DEFAULT_MAX_RETRIES = 3;
+    private static final int DEFAULT_RETRY_SLEEP_TIME_IN_MS = 10000;
+
+    private void initMultiViewJobStatusTracker(Configuration config) throws Exception {
+        try {
+            Class<?> defaultViewDeletionTrackerClass = DefaultMultiViewJobStatusTracker.class;
+            if (config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ) != null) {
+                LOGGER.info("Using customized tracker class : " +
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+                defaultViewDeletionTrackerClass = Class.forName(
+                        config.get(PhoenixConfigurationUtil.MAPREDUCE_MULTI_INPUT_MAPPER_TRACKER_CLAZZ));
+            } else {
+                LOGGER.info("Using default tracker class ");
+            }
+            this.multiViewJobStatusTracker = (MultiViewJobStatusTracker) defaultViewDeletionTrackerClass.newInstance();

Review comment:
       My IDE complains about an incompatible type if I don't have the cast here.
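
       The cast is needed because Class.forName returns Class<?>; one way to avoid a
       raw cast is Class.asSubclass, which performs the type check once at lookup time.
       A sketch, assuming the tracker class has a public no-arg constructor
       (trackerClassName stands in for the configuration value):

    import org.apache.phoenix.mapreduce.util.MultiViewJobStatusTracker;

    class TrackerLoadSketch {
        // asSubclass verifies that the loaded class implements the interface,
        // so the instantiation below needs no unchecked cast at the call site.
        static MultiViewJobStatusTracker loadTracker(String trackerClassName)
                throws Exception {
            Class<? extends MultiViewJobStatusTracker> trackerClass =
                    Class.forName(trackerClassName)
                         .asSubclass(MultiViewJobStatusTracker.class);
            return trackerClass.newInstance();
        }
    }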







[GitHub] [phoenix] yanxinyi closed pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
yanxinyi closed pull request #975:
URL: https://github.com/apache/phoenix/pull/975


   





[GitHub] [phoenix] stoty commented on pull request #975: PHOENIX-5592 MapReduce job to asynchronously delete rows where the VI…

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #975:
URL: https://github.com/apache/phoenix/pull/975#issuecomment-734054201


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  11m 16s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   3m 28s |  phoenix-core in 4.x has 950 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   3m 36s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m  5s |  phoenix-core: The patch generated 524 new + 1101 unchanged - 14 fixed = 1625 total (was 1115)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 32s |  phoenix-core generated 5 new + 950 unchanged - 0 fixed = 955 total (was 950)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 147m  2s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate ASF License warnings.  |
   |  |   | 181m 22s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Possible null pointer dereference of cmdLine in org.apache.phoenix.mapreduce.PhoenixTTLTool.parseOptions(String[]) on exception path  Dereferenced at PhoenixTTLTool.java:cmdLine in org.apache.phoenix.mapreduce.PhoenixTTLTool.parseOptions(String[]) on exception path  Dereferenced at PhoenixTTLTool.java:[line 185] |
   |  |  Integral value cast to double and then passed to Math.ceil in org.apache.phoenix.mapreduce.util.DefaultMultiViewSplitStrategy.generateSplits(List, Configuration)  At DefaultMultiViewSplitStrategy.java:and then passed to Math.ceil in org.apache.phoenix.mapreduce.util.DefaultMultiViewSplitStrategy.generateSplits(List, Configuration)  At DefaultMultiViewSplitStrategy.java:[line 40] |
   |  |  Exception is caught when Exception is not thrown in org.apache.phoenix.mapreduce.util.DefaultPhoenixMultiViewListProvider.getPhoenixMultiViewList(Configuration)  At DefaultPhoenixMultiViewListProvider.java:is not thrown in org.apache.phoenix.mapreduce.util.DefaultPhoenixMultiViewListProvider.getPhoenixMultiViewList(Configuration)  At DefaultPhoenixMultiViewListProvider.java:[line 139] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(Configuration)  At PhoenixConfigurationUtil.java:org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewQueryMoreSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 451] |
   |  |  Boxing/unboxing to parse a primitive org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewSplitSize(Configuration)  At PhoenixConfigurationUtil.java:org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getMultiViewSplitSize(Configuration)  At PhoenixConfigurationUtil.java:[line 481] |
   | Failed junit tests | phoenix.end2end.index.ViewIndexIT |
   |   | phoenix.end2end.UpsertSelectIT |
   |   | phoenix.end2end.index.GlobalIndexOptimizationIT |
   |   | TEST-[RangeScanIT_0] |
   |   | phoenix.end2end.index.GlobalMutableNonTxIndexWithLazyPostBatchWriteIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/4/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/975 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile |
   | uname | Linux d99da53f93ef 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 7ac4dff |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/4/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/4/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/4/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/4/testReport/ |
   | Max. process+thread count | 6562 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-975/4/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

