Posted to issues@phoenix.apache.org by GitBox <gi...@apache.org> on 2020/09/11 16:46:29 UTC

[GitHub] [phoenix] jpisaac opened a new pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

jpisaac opened a new pull request #878:
URL: https://github.com/apache/phoenix/pull/878


   1. Added configuration classes and interfaces for multi-tenant workloads


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-757079606


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 10 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m  6s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   0m 54s |  phoenix-pherf in 4.x has 42 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   1m  1s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  the patch passed  |
   | -1 :x: |  javac  |   0m 34s |  phoenix-pherf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  checkstyle  |   0m 51s |  phoenix-pherf: The patch generated 756 new + 826 unchanged - 53 fixed = 1582 total (was 879)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
    | -1 :x: |  whitespace  |   0m  1s |  The patch has 13 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  9s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 16s |  phoenix-pherf generated 25 new + 32 unchanged - 0 fixed = 57 total (was 32)  |
   | -1 :x: |  spotbugs  |   1m  8s |  phoenix-pherf generated 3 new + 41 unchanged - 1 fixed = 44 total (was 42)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 30s |  phoenix-pherf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate ASF License warnings.  |
   |  |   |  42m 25s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-pherf |
    |  |  Found reliance on default encoding in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat): String.getBytes()  At PhoenixUtil.java:[line 557] |
    |  |  Return value of TenantOperationFactory.getPhoenixUtil() ignored, but method has no side effect  At TenantOperationWorkHandler.java:[line 59] |
    |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.UpsertOperationSupplier$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:[line 81] is not discharged |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile spotbugs hbaseanti checkstyle |
   | uname | Linux 4ab9f513e864 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 2a530da |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | javac | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/diff-compile-javac-phoenix-pherf.txt |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/diff-checkstyle-phoenix-pherf.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/whitespace-eol.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/whitespace-tabs.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-pherf.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/new-spotbugs-phoenix-pherf.html |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/testReport/ |
   | Max. process+thread count | 1629 (vs. ulimit of 30000) |
   | modules | C: phoenix-pherf U: phoenix-pherf |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-691203281


   @ChinmaySKulkarni @yanxinyi @gokceni 
   
   I am breaking this PR up into multiple commits so that it is easier to review.
   This is the first of those PRs.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r487289670



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,29 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       Please add the Apache license header.
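
       For reference, the standard ASF license header as a Java block comment:

       /*
        * Licensed to the Apache Software Foundation (ASF) under one
        * or more contributor license agreements.  See the NOTICE file
        * distributed with this work for additional information
        * regarding copyright ownership.  The ASF licenses this file
        * to you under the Apache License, Version 2.0 (the
        * "License"); you may not use this file except in compliance
        * with the License.  You may obtain a copy of the License at
        *
        *     http://www.apache.org/licenses/LICENSE-2.0
        *
        * Unless required by applicable law or agreed to in writing, software
        * distributed under the License is distributed on an "AS IS" BASIS,
        * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        * See the License for the specific language governing permissions and
        * limitations under the License.
        */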




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-805475768


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   4m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 10 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m  1s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   0m 56s |  phoenix-pherf in 4.x has 42 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   1m  2s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 35s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 56s |  phoenix-pherf: The patch generated 759 new + 1019 unchanged - 54 fixed = 1778 total (was 1073)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
    | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  9s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 17s |  phoenix-pherf generated 25 new + 32 unchanged - 0 fixed = 57 total (was 32)  |
   | -1 :x: |  spotbugs  |   1m  6s |  phoenix-pherf generated 9 new + 41 unchanged - 1 fixed = 50 total (was 42)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 56s |  phoenix-pherf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate ASF License warnings.  |
   |  |   |  40m 50s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-pherf |
    |  |  Found reliance on default encoding in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat): String.getBytes()  At PhoenixUtil.java:[line 557] |
    |  |  input must be non-null but is marked as nullable  At IdleTimeOperationSupplier.java:[lines 52-74] |
    |  |  input must be non-null but is marked as nullable  At PreScenarioOperationSupplier.java:[lines 51-80] |
    |  |  input must be non-null but is marked as nullable  At QueryOperationSupplier.java:[lines 54-87] |
    |  |  Possible null pointer dereference in org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkHandler.onEvent(TenantOperationWorkload$TenantOperationEvent) due to return value of called method  Dereferenced at TenantOperationWorkHandler.java:[line 58] |
    |  |  Return value of TenantOperationFactory.getPhoenixUtil() ignored, but method has no side effect  At TenantOperationWorkHandler.java:[line 59] |
    |  |  input must be non-null but is marked as nullable  At UpsertOperationSupplier.java:[lines 56-136] |
    |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.UpsertOperationSupplier$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:[line 81] is not discharged |
    |  |  input must be non-null but is marked as nullable  At UserDefinedOperationSupplier.java:[lines 44-46] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/10/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile spotbugs hbaseanti checkstyle |
   | uname | Linux cb076b502a1a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 7198196 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/10/artifact/yetus-general-check/output/diff-checkstyle-phoenix-pherf.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/10/artifact/yetus-general-check/output/whitespace-eol.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/10/artifact/yetus-general-check/output/whitespace-tabs.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/10/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-pherf.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/10/artifact/yetus-general-check/output/new-spotbugs-phoenix-pherf.html |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/10/testReport/ |
   | Max. process+thread count | 1729 (vs. ulimit of 30000) |
   | modules | C: phoenix-pherf U: phoenix-pherf |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/10/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r490416650



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {

Review comment:
       @ChinmaySKulkarni I added the headers. Since you commented on this, let me know if I missed anything.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r490412173



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+
+@XmlType
+public class Noop {

Review comment:
       @ChinmaySKulkarni This holds the idle time to be used for waiting. I modeled it as an operation, so it follows the same pattern as the other operations.
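
       A minimal sketch of that idea; the `idleTime` field name and the millisecond unit are assumptions here, not necessarily the actual members of the class:

       import javax.xml.bind.annotation.XmlAttribute;
       import javax.xml.bind.annotation.XmlType;

       // Sketch only: models "wait for a while" as just another operation in the
       // scenario, so it can be scheduled like upserts and queries.
       @XmlType
       public class Noop {
           private String id;
           private long idleTime; // how long to stay idle, in milliseconds (assumed)

           @XmlAttribute
           public String getId() { return id; }
           public void setId(String id) { this.id = id; }

           @XmlAttribute
           public long getIdleTime() { return idleTime; }
           public void setIdleTime(long idleTime) { this.idleTime = idleTime; }
       }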




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r488166131



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Upsert.java
##########
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import org.apache.phoenix.pherf.rules.RulesApplier;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class Upsert {
+
+    private String id;
+    private String upsertGroup;
+    private String statement;
+    private List<Column> columns;
+    private Pattern pattern;
+    private long timeoutDuration = Long.MAX_VALUE;
+
+    public Upsert() {
+    	pattern = Pattern.compile("\\[.*?\\]");
+    }
+    
+
+    public String getDynamicStatement(RulesApplier ruleApplier, Scenario scenario) throws Exception {
+    	String ret = this.statement;
+    	String needQuotes = "";
+    	Matcher m = pattern.matcher(ret);
+        while(m.find()) {
+        	String dynamicField = m.group(0).replace("[", "").replace("]", "");
+        	Column dynamicColumn = ruleApplier.getRule(dynamicField, scenario);
+			needQuotes = (dynamicColumn.getType() == DataTypeMapping.CHAR || dynamicColumn
+					.getType() == DataTypeMapping.VARCHAR) ? "'" : "";
+			ret = ret.replace("[" + dynamicField + "]",
+					needQuotes + ruleApplier.getDataValue(dynamicColumn).getValue() + needQuotes);
+     }

Review comment:
       nit: the indentation in this block is inconsistent; please clean it up.
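
       A possible reformatting, whitespace only with no intended behavior change:

       // Same logic as above, with consistent 4-space indentation and no tabs.
       Matcher m = pattern.matcher(ret);
       while (m.find()) {
           String dynamicField = m.group(0).replace("[", "").replace("]", "");
           Column dynamicColumn = ruleApplier.getRule(dynamicField, scenario);
           needQuotes = (dynamicColumn.getType() == DataTypeMapping.CHAR
                   || dynamicColumn.getType() == DataTypeMapping.VARCHAR) ? "'" : "";
           ret = ret.replace("[" + dynamicField + "]",
                   needQuotes + ruleApplier.getDataValue(dynamicColumn).getValue() + needQuotes);
       }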




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r487291399



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;
+    }
+
+    public int getBatchSize() {
+        return batchSize;
+    }
+
+    public void setBatchSize(int batchSize) {

Review comment:
       I didn't see this setter called anywhere. Where do we actually set the batch size value?
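
       (For context: these configuration classes are JAXB-bound, so setters like this are typically invoked by the XML unmarshaller rather than called directly from application code. A minimal, illustrative sketch; the element names below are assumptions, not the actual Pherf scenario schema:)

       import java.io.StringReader;
       import javax.xml.bind.JAXBContext;
       import javax.xml.bind.JAXBElement;
       import javax.xml.transform.stream.StreamSource;
       import org.apache.phoenix.pherf.configuration.LoadProfile;

       public class LoadProfileUnmarshalSketch {
           public static void main(String[] args) throws Exception {
               // Hypothetical XML fragment; JAXB calls setBatchSize()/setNumOperations()
               // while populating the bound object.
               String xml = "<loadProfile><batchSize>100</batchSize>"
                       + "<numOperations>1000</numOperations></loadProfile>";
               JAXBContext ctx = JAXBContext.newInstance(LoadProfile.class);
               JAXBElement<LoadProfile> element = ctx.createUnmarshaller()
                       .unmarshal(new StreamSource(new StringReader(xml)), LoadProfile.class);
               System.out.println(element.getValue().getBatchSize()); // prints 100
           }
       }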




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r545472778



##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationIT.java
##########
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.end2end.BaseHBaseManagedTimeIT;
+import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
+import org.apache.phoenix.pherf.PherfConstants;
+import org.apache.phoenix.pherf.XMLConfigParserTest;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.schema.SchemaReader;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationEventGenerator;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.NoopTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.PreScenarioTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.QueryTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UpsertTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UserDefinedOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactoryTest;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationInfo;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.xml.bind.UnmarshalException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+public class TenantOperationIT extends MultiTenantOperationBaseIT {
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationIT.class);
+
+    @Test
+    public void testVariousOperations() throws Exception {
+        int numTenantGroups = 3;
+        int numOpGroups = 5;
+        int numRuns = 10;
+        int numOperations = 10;
+
+        PhoenixUtil pUtil = PhoenixUtil.create();
+        DataModel model = readTestDataModel("/scenario/test_mt_workload.xml");
+        for (Scenario scenario : model.getScenarios()) {
+            LOGGER.debug(String.format("Testing %s", scenario.getName()));
+            LoadProfile loadProfile = scenario.getLoadProfile();
+            assertTrue("tenant group size is not as expected: ",
+                    loadProfile.getTenantDistribution().size() == numTenantGroups);
+            assertTrue("operation group size is not as expected: ",
+                    loadProfile.getOpDistribution().size() == numOpGroups);
+
+            TenantOperationFactory opFactory = new TenantOperationFactory(pUtil, model, scenario);
+            TenantOperationEventGenerator evtGen = new TenantOperationEventGenerator(
+                    opFactory.getOperationsForScenario(), model, scenario);
+
+            assertTrue("operation group size from the factory is not as expected: ",
+                    opFactory.getOperationsForScenario().size() == numOpGroups);
+
+            int numRowsInserted = 0;
+            for (int i = 0; i < numRuns; i++) {
+                int ops = numOperations;
+                loadProfile.setNumOperations(ops);
+                while (ops-- > 0) {
+                    TenantOperationInfo info = evtGen.next();
+                    TenantOperationImpl op = opFactory.getOperation(info);
+                    int row = TestOperationGroup.valueOf(info.getOperationGroupId()).ordinal();

Review comment:
       can't we just use the enum value instead of referring to its ordinal here?
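
       A rough sketch of that suggestion (the map name and value are hypothetical; the point is the lookup by enum constant):

       // Key per-group expectations by the enum constant itself rather than
       // using ordinal() as a positional index.
       java.util.EnumMap<TestOperationGroup, Integer> expectedRowsPerGroup =
               new java.util.EnumMap<>(TestOperationGroup.class);
       expectedRowsPerGroup.put(TestOperationGroup.op1, 1); // illustrative value
       TestOperationGroup group = TestOperationGroup.valueOf(info.getOperationGroupId());
       Integer expectedRows = expectedRowsPerGroup.get(group);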

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Upsert.java
##########
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import org.apache.phoenix.pherf.rules.RulesApplier;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class Upsert {
+
+    private String id;
+    private String upsertGroup;
+    private String statement;
+    private List<Column> columns;
+    private boolean useGlobalConnection;
+    private Pattern pattern;
+    private long timeoutDuration = Long.MAX_VALUE;
+
+    public Upsert() {
+    	pattern = Pattern.compile("\\[.*?\\]");
+    }
+
+    public String getDynamicStatement(RulesApplier ruleApplier, Scenario scenario)

Review comment:
       Can `Query`, `Upsert`, and other such classes derive from a common base class or implement a common interface? A lot of the behavior seems common (at least the method signatures, if not the implementations).
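
       One possible shape for the shared abstraction; the interface name and exact method set are assumptions that mirror the members visible on `Upsert` in this diff:

       // Sketch only: a common contract that Upsert, Query, etc. could implement,
       // so callers can handle dynamically-templated statements uniformly.
       public interface ScenarioStatement {
           String getId();
           long getTimeoutDuration();
           String getDynamicStatement(RulesApplier ruleApplier, Scenario scenario) throws Exception;
       }
       // e.g. `public class Upsert implements ScenarioStatement { ... }`, and likewise for Query.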

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.
+
+                    Connection conn = null;
+                    PreparedStatement statement = null;
+                    ResultSet rs = null;
+                    Long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    Long resultRowCount = 0L;
+                    Long queryElapsedTime = 0L;
+                    String queryIteration = opName + ":" + opCounter;
+                    try {
+                        conn = phoenixUtil.getConnection(tenantId);
+                        conn.setAutoCommit(true);
+                        // TODO dynamic statements
+                        //final String statementString = query.getDynamicStatement(rulesApplier, scenario);
+                        statement = conn.prepareStatement(query.getStatement());
+                        boolean isQuery = statement.execute();
+                        if (isQuery) {
+                            rs = statement.getResultSet();
+                            boolean isSelectCountStatement = query.getStatement().toUpperCase().trim().contains("COUNT(") ? true : false;
+                            org.apache.hadoop.hbase.util.Pair<Long, Long>
+                                    r = phoenixUtil.getResults(query, rs, queryIteration, isSelectCountStatement, startTime);
+                            resultRowCount = r.getFirst();
+                            queryElapsedTime = r.getSecond();
+                        } else {
+                            conn.commit();
+                        }
+                    } catch (Exception e) {
+                        LOGGER.error("Exception while executing query iteration " + queryIteration, e);
+                    } finally {
+                        try {
+                            if (rs != null) rs.close();
+                            if (statement != null) statement.close();
+                            if (conn != null) conn.close();
+
+                        } catch (Throwable t) {
+                            // swallow;
+                        }
+                    }
+                    return new OperationStats(input, startTime, 0, resultRowCount, queryElapsedTime);
+                }
+            };
+        }
+    }
+
+    class UpsertTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+
+                    final int batchSize = loadProfile.getBatchSize();
+                    final boolean useBatchApi = batchSize != 0;
+                    final int rowCount = useBatchApi ? batchSize : 1;
+
+                    final UpsertOperation operation = (UpsertOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final Upsert upsert = operation.getUpsert();
+                    final String tableName = input.getTableName();
+                    final String scenarioName = input.getScenarioName();
+                    final List<Column> columns = upsert.getColumn();
+
+                    final String opName = String.format("%s:%s:%s:%s:%s",
+                            scenarioName, tableName, opGroup, tenantGroup, tenantId);
+
+                    long rowsCreated = 0;
+                    long startTime = 0, duration, totalDuration;
+                    SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
+                    try (Connection connection = phoenixUtil.getConnection(tenantId)) {
+                        connection.setAutoCommit(true);
+                        startTime = EnvironmentEdgeManager.currentTimeMillis();
+                        String sql = phoenixUtil.buildSql(columns, tableName);
+                        PreparedStatement stmt = null;
+                        try {
+                            stmt = connection.prepareStatement(sql);
+                            for (long i = rowCount; i > 0; i--) {
+                                LOGGER.debug("Operation " + opName + " executing ");
+                                stmt = phoenixUtil.buildStatement(rulesApplier, scenario, columns, stmt, simpleDateFormat);
+                                if (useBatchApi) {
+                                    stmt.addBatch();
+                                } else {
+                                    rowsCreated += stmt.executeUpdate();
+                                }
+                            }
+                        } catch (SQLException e) {
+                            LOGGER.error("Operation " + opName + " failed with exception ", e);

Review comment:
       It might be better to just let the outer catch block handle the exception, since you're rethrowing it anyway, and log there instead.
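
       Roughly what that could look like (variable names follow the diff above; the single outer catch, rather than the nested one, is the suggested change):

       try (Connection connection = phoenixUtil.getConnection(tenantId)) {
           connection.setAutoCommit(true);
           startTime = EnvironmentEdgeManager.currentTimeMillis();
           String sql = phoenixUtil.buildSql(columns, tableName);
           try (PreparedStatement stmt = connection.prepareStatement(sql)) {
               for (long i = rowCount; i > 0; i--) {
                   PreparedStatement bound = phoenixUtil.buildStatement(
                           rulesApplier, scenario, columns, stmt, simpleDateFormat);
                   if (useBatchApi) {
                       bound.addBatch();
                   } else {
                       rowsCreated += bound.executeUpdate();
                   }
               }
           }
       } catch (SQLException e) {
           // Single place to log (and rethrow if callers need the failure).
           LOGGER.error("Operation " + opName + " failed with exception ", e);
       }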

##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationIT.java
##########
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.end2end.BaseHBaseManagedTimeIT;
+import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
+import org.apache.phoenix.pherf.PherfConstants;
+import org.apache.phoenix.pherf.XMLConfigParserTest;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.schema.SchemaReader;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationEventGenerator;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.NoopTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.PreScenarioTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.QueryTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UpsertTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UserDefinedOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactoryTest;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationInfo;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.xml.bind.UnmarshalException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+public class TenantOperationIT extends MultiTenantOperationBaseIT {
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationIT.class);
+
+    @Test
+    public void testVariousOperations() throws Exception {
+        int numTenantGroups = 3;
+        int numOpGroups = 5;
+        int numRuns = 10;
+        int numOperations = 10;
+
+        PhoenixUtil pUtil = PhoenixUtil.create();
+        DataModel model = readTestDataModel("/scenario/test_mt_workload.xml");
+        for (Scenario scenario : model.getScenarios()) {
+            LOGGER.debug(String.format("Testing %s", scenario.getName()));
+            LoadProfile loadProfile = scenario.getLoadProfile();
+            assertTrue("tenant group size is not as expected: ",

Review comment:
       nit: Use assertEquals() instead
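
       For example, keeping the same expected values and messages (nothing new assumed here):

       assertEquals("tenant group size is not as expected",
               numTenantGroups, loadProfile.getTenantDistribution().size());
       assertEquals("operation group size is not as expected",
               numOpGroups, loadProfile.getOpDistribution().size());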

##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/MultiTenantOperationBaseIT.java
##########
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
+import org.apache.phoenix.pherf.PherfConstants;
+import org.apache.phoenix.pherf.XMLConfigParserTest;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.schema.SchemaReader;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.Workload;
+import org.junit.BeforeClass;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.xml.bind.UnmarshalException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+public class MultiTenantOperationBaseIT extends ParallelStatsDisabledIT {
+    static enum TestOperationGroup {
+        op1, op2, op3, op4, op5

Review comment:
       As I understand it, there is an inherent assumption about what each operation group does (i.e. upsert vs. no-op, etc.). Can you rename the enum values and/or add comments to clarify this?
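
       Something along these lines could make the assumption explicit; the group-to-operation mapping in the comments is purely illustrative, since the real assignment comes from the test scenario XML:

       // Sketch only: same five groups, with comments documenting what each one
       // is assumed to exercise (hypothetical mapping).
       static enum TestOperationGroup {
           op1,  // e.g. upsert operations
           op2,  // e.g. point/aggregate queries
           op3,  // e.g. a second query group
           op4,  // e.g. idle/no-op operations
           op5   // e.g. user-defined operations
       }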

##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationIT.java
##########
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.end2end.BaseHBaseManagedTimeIT;
+import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
+import org.apache.phoenix.pherf.PherfConstants;
+import org.apache.phoenix.pherf.XMLConfigParserTest;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.schema.SchemaReader;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationEventGenerator;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.NoopTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.PreScenarioTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.QueryTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UpsertTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UserDefinedOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactoryTest;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationInfo;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.xml.bind.UnmarshalException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+public class TenantOperationIT extends MultiTenantOperationBaseIT {
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationIT.class);
+
+    @Test
+    public void testVariousOperations() throws Exception {
+        int numTenantGroups = 3;
+        int numOpGroups = 5;
+        int numRuns = 10;
+        int numOperations = 10;
+
+        PhoenixUtil pUtil = PhoenixUtil.create();
+        DataModel model = readTestDataModel("/scenario/test_mt_workload.xml");
+        for (Scenario scenario : model.getScenarios()) {
+            LOGGER.debug(String.format("Testing %s", scenario.getName()));
+            LoadProfile loadProfile = scenario.getLoadProfile();
+            assertTrue("tenant group size is not as expected: ",
+                    loadProfile.getTenantDistribution().size() == numTenantGroups);
+            assertTrue("operation group size is not as expected: ",
+                    loadProfile.getOpDistribution().size() == numOpGroups);
+
+            TenantOperationFactory opFactory = new TenantOperationFactory(pUtil, model, scenario);
+            TenantOperationEventGenerator evtGen = new TenantOperationEventGenerator(
+                    opFactory.getOperationsForScenario(), model, scenario);
+
+            assertTrue("operation group size from the factory is not as expected: ",
+                    opFactory.getOperationsForScenario().size() == numOpGroups);
+
+            int numRowsInserted = 0;
+            for (int i = 0; i < numRuns; i++) {
+                int ops = numOperations;
+                loadProfile.setNumOperations(ops);
+                while (ops-- > 0) {
+                    TenantOperationInfo info = evtGen.next();
+                    TenantOperationImpl op = opFactory.getOperation(info);
+                    int row = TestOperationGroup.valueOf(info.getOperationGroupId()).ordinal();
+                    OperationStats stats = op.getMethod().apply(info);
+                    LOGGER.info(pUtil.getGSON().toJson(stats));
+                    if (info.getOperation().getType() == Operation.OperationType.PRE_RUN) continue;
+                    switch (row) {
+                    case 0:

Review comment:
       Why not use switch on the enum values themselves rather than the ordinal?
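
       Roughly like this (a sketch only; the per-case assertions stay as they are in the patch):

       ```java
       switch (TestOperationGroup.valueOf(info.getOperationGroupId())) {
       case op1:
           // assertions for the first operation group, unchanged
           break;
       case op2:
           // ...
           break;
       default:
           Assert.fail("Unknown operation group id " + info.getOperationGroupId());
       }
       ```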

##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationWorkloadIT.java
##########
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.clearspring.analytics.util.Lists;
+import com.google.common.collect.Maps;
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.Workload;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkload.TenantOperationEvent;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetAddress;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertTrue;
+
+public class TenantOperationWorkloadIT extends MultiTenantOperationBaseIT {
+
+    private static class EventCountingWorkHandler implements
+            WorkHandler<TenantOperationEvent>, LifecycleAware {
+        private final String handlerId;
+        private final TenantOperationFactory tenantOperationFactory;
+        private static final Logger LOGGER = LoggerFactory.getLogger(EventCountingWorkHandler.class);
+        private final Map<String, CountDownLatch> latches;
+        public EventCountingWorkHandler(TenantOperationFactory tenantOperationFactory,
+                String handlerId, Map<String, CountDownLatch> latches) {
+            this.handlerId = handlerId;
+            this.tenantOperationFactory = tenantOperationFactory;
+            this.latches = latches;
+        }
+
+        @Override public void onStart() {}
+
+        @Override public void onShutdown() {}
+
+        @Override public void onEvent(TenantOperationEvent event)
+                throws Exception {
+            TenantOperationInfo input = event.getTenantOperationInfo();
+            TenantOperationImpl op = tenantOperationFactory.getOperation(input);
+            OperationStats stats = op.getMethod().apply(input);
+            LOGGER.info(tenantOperationFactory.getPhoenixUtil().getGSON().toJson(stats));
+            assertTrue(stats.getStatus() == 0);
+            latches.get(handlerId).countDown();
+        }
+    }
+
+    @Test
+    public void testWorkloadWithOneHandler() throws Exception {
+        int numOpGroups = 5;
+        int numHandlers = 1;
+        int totalOperations = 50;
+        int perHandlerCount = 50;
+
+        ExecutorService executor = null;
+        try {
+            executor = Executors.newFixedThreadPool(numHandlers);
+            PhoenixUtil pUtil = PhoenixUtil.create();
+            DataModel model = readTestDataModel("/scenario/test_mt_workload.xml");
+            for (Scenario scenario : model.getScenarios()) {
+                // Set the total number of operations for this load profile
+                scenario.getLoadProfile().setNumOperations(totalOperations);
+                TenantOperationFactory opFactory = new TenantOperationFactory(pUtil, model, scenario);
+                assertTrue("operation group size from the factory is not as expected: ",

Review comment:
       ditto

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+    private static final int MIN_BATCH_SIZE = 1;
+    private static final String DEFAULT_TENANT_ID_FMT = "00D%s%07d";
+    private static final int DEFAULT_GROUP_ID_LEN = 5;
+    private static final int DEFAULT_TENANT_ID_LEN = 15;
+
+    // Holds the batch size to be used in upserts.
+    private int batchSize;
+    // Holds the number of operations to be generated.
+    private long numOperations;
+    /**
+     * Holds the format to be used when generating tenantIds.
+     * TenantId format should typically have 2 parts -
+     * 1. string fmt - that hold the tenant group id.
+     * 2. int fmt - that holds a random number between 1 and max tenants
+     * for e.g DEFAULT_TENANT_ID_FMT = "00D%s%07d";
+     */
+    private String tenantIdFormat;
+    private int groupIdLength;
+    private int tenantIdLength;
+    // Holds the desired tenant distribution for this load.
+    List<TenantGroup> tenantDistribution;
+    // Holds the desired operation distribution for this load.
+    List<OperationGroup> opDistribution;

Review comment:
       Change these lists to `private`?

##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationWorkloadIT.java
##########
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.clearspring.analytics.util.Lists;
+import com.google.common.collect.Maps;
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.Workload;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkload.TenantOperationEvent;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetAddress;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertTrue;
+
+public class TenantOperationWorkloadIT extends MultiTenantOperationBaseIT {
+
+    private static class EventCountingWorkHandler implements
+            WorkHandler<TenantOperationEvent>, LifecycleAware {
+        private final String handlerId;
+        private final TenantOperationFactory tenantOperationFactory;
+        private static final Logger LOGGER = LoggerFactory.getLogger(EventCountingWorkHandler.class);
+        private final Map<String, CountDownLatch> latches;
+        public EventCountingWorkHandler(TenantOperationFactory tenantOperationFactory,
+                String handlerId, Map<String, CountDownLatch> latches) {
+            this.handlerId = handlerId;
+            this.tenantOperationFactory = tenantOperationFactory;
+            this.latches = latches;
+        }
+
+        @Override public void onStart() {}
+
+        @Override public void onShutdown() {}
+
+        @Override public void onEvent(TenantOperationEvent event)
+                throws Exception {
+            TenantOperationInfo input = event.getTenantOperationInfo();
+            TenantOperationImpl op = tenantOperationFactory.getOperation(input);
+            OperationStats stats = op.getMethod().apply(input);
+            LOGGER.info(tenantOperationFactory.getPhoenixUtil().getGSON().toJson(stats));
+            assertTrue(stats.getStatus() == 0);

Review comment:
       Use assertEquals instead
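
       i.e. something like the line below, assuming `getStatus()` returns an int status code (plus the corresponding static import):

       ```java
       assertEquals(0, stats.getStatus());
       ```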

##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationWorkloadIT.java
##########
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.clearspring.analytics.util.Lists;
+import com.google.common.collect.Maps;
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.Workload;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkload.TenantOperationEvent;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetAddress;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertTrue;
+
+public class TenantOperationWorkloadIT extends MultiTenantOperationBaseIT {

Review comment:
       Please add class-level comments for all of these new classes
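
       For instance (suggested wording only):

       ```java
       /**
        * Integration test that runs the multi-tenant {@link TenantOperationWorkload}
        * against the sample load profile and verifies that every generated event is
        * handed off to the registered {@link WorkHandler}s.
        */
       public class TenantOperationWorkloadIT extends MultiTenantOperationBaseIT {
       ```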

##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationWorkloadIT.java
##########
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.clearspring.analytics.util.Lists;
+import com.google.common.collect.Maps;
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.Workload;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkload.TenantOperationEvent;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetAddress;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertTrue;
+
+public class TenantOperationWorkloadIT extends MultiTenantOperationBaseIT {
+
+    private static class EventCountingWorkHandler implements
+            WorkHandler<TenantOperationEvent>, LifecycleAware {
+        private final String handlerId;
+        private final TenantOperationFactory tenantOperationFactory;
+        private static final Logger LOGGER = LoggerFactory.getLogger(EventCountingWorkHandler.class);
+        private final Map<String, CountDownLatch> latches;
+        public EventCountingWorkHandler(TenantOperationFactory tenantOperationFactory,
+                String handlerId, Map<String, CountDownLatch> latches) {
+            this.handlerId = handlerId;
+            this.tenantOperationFactory = tenantOperationFactory;
+            this.latches = latches;
+        }
+
+        @Override public void onStart() {}
+
+        @Override public void onShutdown() {}
+
+        @Override public void onEvent(TenantOperationEvent event)
+                throws Exception {
+            TenantOperationInfo input = event.getTenantOperationInfo();
+            TenantOperationImpl op = tenantOperationFactory.getOperation(input);
+            OperationStats stats = op.getMethod().apply(input);
+            LOGGER.info(tenantOperationFactory.getPhoenixUtil().getGSON().toJson(stats));
+            assertTrue(stats.getStatus() == 0);
+            latches.get(handlerId).countDown();
+        }
+    }
+
+    @Test
+    public void testWorkloadWithOneHandler() throws Exception {
+        int numOpGroups = 5;
+        int numHandlers = 1;
+        int totalOperations = 50;
+        int perHandlerCount = 50;
+
+        ExecutorService executor = null;
+        try {
+            executor = Executors.newFixedThreadPool(numHandlers);
+            PhoenixUtil pUtil = PhoenixUtil.create();
+            DataModel model = readTestDataModel("/scenario/test_mt_workload.xml");
+            for (Scenario scenario : model.getScenarios()) {
+                // Set the total number of operations for this load profile
+                scenario.getLoadProfile().setNumOperations(totalOperations);
+                TenantOperationFactory opFactory = new TenantOperationFactory(pUtil, model, scenario);
+                assertTrue("operation group size from the factory is not as expected: ",
+                        opFactory.getOperationsForScenario().size() == numOpGroups);
+
+                // populate the handlers and countdown latches.
+                String handlerId = String.format("%s.%d", InetAddress.getLocalHost().getHostName(), numHandlers);
+                List<WorkHandler> workers = Lists.newArrayList();
+                Map<String, CountDownLatch> latches = Maps.newConcurrentMap();
+                workers.add(new EventCountingWorkHandler(opFactory, handlerId, latches));
+                latches.put(handlerId, new CountDownLatch(perHandlerCount));
+                // submit the workload
+                Workload workload = new TenantOperationWorkload(pUtil, model, scenario, workers, properties);
+                Future status = executor.submit(workload.execute());
+                // Just make sure there are no exceptions
+                status.get();
+
+                // Wait for the handlers to count down
+                for (Map.Entry<String, CountDownLatch> latch : latches.entrySet()) {
+                    assertTrue(latch.getValue().await(60, TimeUnit.SECONDS));
+                }
+            }
+        } finally {
+            if (executor != null) {
+                executor.shutdown();
+            }
+        }
+    }
+
+    @Test
+    public void testWorkloadWithManyHandlers() throws Exception {
+        int numOpGroups = 5;
+        int numHandlers = 5;
+        int totalOperations = 500;
+        int perHandlerCount = 50;
+
+        ExecutorService executor = Executors.newFixedThreadPool(numHandlers);
+        PhoenixUtil pUtil = PhoenixUtil.create();
+        DataModel model = readTestDataModel("/scenario/test_mt_workload.xml");
+        for (Scenario scenario : model.getScenarios()) {
+            // Set the total number of operations for this load profile
+            scenario.getLoadProfile().setNumOperations(totalOperations);
+            TenantOperationFactory opFactory = new TenantOperationFactory(pUtil, model, scenario);
+            assertTrue("operation group size from the factory is not as expected: ",

Review comment:
       ditto

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/rules/RulesApplier.java
##########
@@ -59,13 +61,34 @@
 
     private Map<Column,RuleBasedDataGenerator> columnRuleBasedDataGeneratorMap = new HashMap<>();
 
+    // Support for multiple models, but rules are only relevant each model
+    // TODO : This is a step towards getting the above comment fixed.

Review comment:
       I didn't understand these comments. Can you please clarify? 

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/rules/RulesApplier.java
##########
@@ -422,9 +446,15 @@ private void populateModelList() {
         if (!modelList.isEmpty()) {
             return;
         }
-        
+
         // Support for multiple models, but rules are only relevant each model
-        for (DataModel model : parser.getDataModels()) {
+        // TODO : This is a step towards getting the above comment fixed.

Review comment:
       Same here. I didn't understand which comment we are fixing.

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java
##########
@@ -335,7 +353,7 @@ public void executeScenarioDdl(List<Ddl> ddls, String tenantId, DataLoadTimeSumm
      * @param tableName
      * @throws InterruptedException
      */
-    private void waitForAsyncIndexToFinish(String tableName) throws InterruptedException {
+    public void waitForAsyncIndexToFinish(String tableName) throws InterruptedException {

Review comment:
       Is this change necessary?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java
##########
@@ -45,14 +57,15 @@
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_SCHEM;
 
 public class PhoenixUtil {

Review comment:
       nit: Make class `final` and add a private constructor if it doesn't exist since this is a Util.
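
       A sketch of the suggestion (it assumes callers keep obtaining instances through the existing `create()` factory rather than a public constructor):

       ```java
       public final class PhoenixUtil {

           private PhoenixUtil() {
               // no public construction; use create()
           }
           // ... existing members unchanged ...
       }
       ```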

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {

Review comment:
       Can we break this constructor up into smaller methods too, instead of putting it all in the constructor body itself?
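
       For example, the bloom-filter setup could move into its own helper, roughly as below (a sketch; the helper name is illustrative, and the operation-list population could be extracted the same way):

       ```java
       public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
           this.phoenixUtil = phoenixUtil;
           this.model = model;
           this.scenario = scenario;
           this.parser = null;
           this.rulesApplier = new RulesApplier(model);
           this.loadProfile = scenario.getLoadProfile();
           this.tenantsLoaded = buildTenantsLoadedFilter(loadProfile);
           // ... populate operationList via a similar helper ...
       }

       // Tracks which tenant views have already been created/initialized.
       private static BloomFilter<TenantView> buildTenantsLoadedFilter(LoadProfile loadProfile) {
           Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
               @Override public void funnel(TenantView tenantView, PrimitiveSink into) {
                   into.putString(tenantView.getTenantId(), Charsets.UTF_8)
                           .putString(tenantView.getViewName(), Charsets.UTF_8);
               }
           };
           int numTenants = 0;
           for (TenantGroup tg : loadProfile.getTenantDistribution()) {
               numTenants += tg.getNumTenants();
           }
           return BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
       }
       ```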

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/OperationStats.java
##########
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt;
+
+import org.apache.phoenix.pherf.result.ResultValue;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationInfo;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Holds metrics + contextual info on the operation run.
+ */
+public class OperationStats {
+    private final String modelName;
+    private final String scenarioName;
+    private final String tableName;
+    private final String tenantId;
+    private final String tenantGroup;
+    private final String operationGroup;
+    private final Operation.OperationType opType;
+    private String handlerId;

Review comment:
       Can't `handlerId` be `final` as well?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.
+
+                    Connection conn = null;
+                    PreparedStatement statement = null;
+                    ResultSet rs = null;
+                    Long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    Long resultRowCount = 0L;
+                    Long queryElapsedTime = 0L;
+                    String queryIteration = opName + ":" + opCounter;
+                    try {
+                        conn = phoenixUtil.getConnection(tenantId);
+                        conn.setAutoCommit(true);
+                        // TODO dynamic statements
+                        //final String statementString = query.getDynamicStatement(rulesApplier, scenario);
+                        statement = conn.prepareStatement(query.getStatement());
+                        boolean isQuery = statement.execute();
+                        if (isQuery) {
+                            rs = statement.getResultSet();
+                            boolean isSelectCountStatement = query.getStatement().toUpperCase().trim().contains("COUNT(") ? true : false;
+                            org.apache.hadoop.hbase.util.Pair<Long, Long>
+                                    r = phoenixUtil.getResults(query, rs, queryIteration, isSelectCountStatement, startTime);
+                            resultRowCount = r.getFirst();
+                            queryElapsedTime = r.getSecond();
+                        } else {
+                            conn.commit();
+                        }
+                    } catch (Exception e) {
+                        LOGGER.error("Exception while executing query iteration " + queryIteration, e);
+                    } finally {
+                        try {
+                            if (rs != null) rs.close();
+                            if (statement != null) statement.close();
+                            if (conn != null) conn.close();
+
+                        } catch (Throwable t) {
+                            // swallow;
+                        }
+                    }
+                    return new OperationStats(input, startTime, 0, resultRowCount, queryElapsedTime);
+                }
+            };
+        }
+    }
+
+    class UpsertTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+
+                    final int batchSize = loadProfile.getBatchSize();
+                    final boolean useBatchApi = batchSize != 0;
+                    final int rowCount = useBatchApi ? batchSize : 1;
+
+                    final UpsertOperation operation = (UpsertOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final Upsert upsert = operation.getUpsert();
+                    final String tableName = input.getTableName();
+                    final String scenarioName = input.getScenarioName();
+                    final List<Column> columns = upsert.getColumn();
+
+                    final String opName = String.format("%s:%s:%s:%s:%s",
+                            scenarioName, tableName, opGroup, tenantGroup, tenantId);
+
+                    long rowsCreated = 0;
+                    long startTime = 0, duration, totalDuration;
+                    SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
+                    try (Connection connection = phoenixUtil.getConnection(tenantId)) {
+                        connection.setAutoCommit(true);
+                        startTime = EnvironmentEdgeManager.currentTimeMillis();
+                        String sql = phoenixUtil.buildSql(columns, tableName);
+                        PreparedStatement stmt = null;
+                        try {

Review comment:
       Use try-with-resources.
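
       Roughly like this (a sketch, assuming the statement is prepared from the `sql` built just above; the binding/execution logic stays as in the patch):

       ```java
       String sql = phoenixUtil.buildSql(columns, tableName);
       try (PreparedStatement stmt = connection.prepareStatement(sql)) {
           // ... bind generated values and execute, as in the patch ...
       }
       // stmt is closed automatically, even if an exception is thrown
       ```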

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationEventGenerator.java
##########
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.sun.org.apache.xpath.internal.operations.Mod;
+import org.apache.commons.math3.distribution.EnumeratedDistribution;
+import org.apache.commons.math3.util.Pair;
+import org.apache.phoenix.pherf.PherfConstants;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.OperationGroup;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+
+/**
+ * A perf load event generator based on the supplied load profile.
+ */
+
+public class TenantOperationEventGenerator
+        implements EventGenerator<TenantOperationInfo> {
+
+    private static class WeightedRandomSampler {
+        private final Random RANDOM = new Random();
+        private final LoadProfile loadProfile;
+        private final String modelName;
+        private final String scenarioName;
+        private final String tableName;
+        private final EnumeratedDistribution<String> distribution;
+
+        private final Map<String, TenantGroup> tenantGroupMap = Maps.newHashMap();
+        private final Map<String, Operation> operationMap = Maps.newHashMap();
+        private final Map<String, OperationGroup> operationGroupMap = Maps.newHashMap();
+
+        public WeightedRandomSampler(List<Operation> operationList, DataModel model, Scenario scenario) {
+            this.modelName = model.getName();
+            this.scenarioName = scenario.getName();
+            this.tableName = scenario.getTableName();
+            this.loadProfile = scenario.getLoadProfile();
+
+            for (Operation op : operationList) {
+                for (OperationGroup og : loadProfile.getOpDistribution()) {
+                    if (op.getId().compareTo(og.getId()) == 0) {
+                        operationMap.put(op.getId(), op);
+                        operationGroupMap.put(op.getId(), og);
+                    }
+                }
+            }
+            Preconditions.checkArgument(!operationMap.isEmpty(),
+                    "Operation list and load profile operation do not match");
+
+            double totalTenantGroupWeight = 0.0f;

Review comment:
       nit: Extract some of these steps into their own methods? This will make unit testing easier too.
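
       For example, the id-to-operation matching could live in a small static helper that can be unit tested on its own (a sketch; the helper name is illustrative):

       ```java
       private static Map<String, Pair<Operation, OperationGroup>> matchOperationsToGroups(
               List<Operation> operationList, LoadProfile loadProfile) {
           Map<String, Pair<Operation, OperationGroup>> byId = Maps.newHashMap();
           for (Operation op : operationList) {
               for (OperationGroup og : loadProfile.getOpDistribution()) {
                   if (op.getId().compareTo(og.getId()) == 0) {
                       byId.put(op.getId(), new Pair<>(op, og));
                   }
               }
           }
           return byId;
       }
       ```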

##########
File path: phoenix-pherf/src/test/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationEventGeneratorTest.java
##########
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import org.apache.phoenix.pherf.XMLConfigParserTest;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.xml.bind.UnmarshalException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public class TenantOperationEventGeneratorTest {
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationEventGeneratorTest.class);
+    private enum TestOperationGroup {
+        op1, op2, op3, op4, op5
+    }
+
+    private enum TestTenantGroup {
+        tg1, tg2, tg3
+    }
+
+    public DataModel readTestDataModel(String resourceName) throws Exception {
+        URL scenarioUrl = XMLConfigParserTest.class.getResource(resourceName);
+        assertNotNull(scenarioUrl);
+        Path p = Paths.get(scenarioUrl.toURI());
+        try {
+            return XMLConfigParser.readDataModel(p);
+        } catch (UnmarshalException e) {
+            // If we don't parse the DTD, the variable 'name' won't be defined in the XML
+            LOGGER.warn("Caught expected exception", e);
+        }
+        return null;

Review comment:
       Are we handling this null value in the caller?
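
       e.g. the callers could fail fast right after reading the model (sketch):

       ```java
       DataModel model = readTestDataModel("/scenario/test_mt_workload.xml");
       assertNotNull("could not parse the test data model", model);
       ```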

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Query.java
##########
@@ -51,20 +51,24 @@ public Query() {
     public String getStatement() {
         return statement;
     }
-    
-    public String getDynamicStatement(RulesApplier ruleApplier, Scenario scenario) throws Exception {
-    	String ret = this.statement;
-    	String needQuotes = "";
-    	Matcher m = pattern.matcher(ret);
-        while(m.find()) {
-        	String dynamicField = m.group(0).replace("[", "").replace("]", "");
-        	Column dynamicColumn = ruleApplier.getRule(dynamicField, scenario);
-			needQuotes = (dynamicColumn.getType() == DataTypeMapping.CHAR || dynamicColumn
-					.getType() == DataTypeMapping.VARCHAR) ? "'" : "";
-			ret = ret.replace("[" + dynamicField + "]",
-					needQuotes + ruleApplier.getDataValue(dynamicColumn).getValue() + needQuotes);
-     }
-      	return ret;    	
+
+    public String getDynamicStatement(RulesApplier ruleApplier, Scenario scenario)

Review comment:
       Is this method to generate queries on the fly or something else? Can you add a comment?
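
       From the implementation it looks like it substitutes the `[fieldName]` placeholders with rule-generated values; if so, a class-level Javadoc along these lines would help (suggested wording only):

       ```java
       /**
        * Builds a concrete statement for this scenario by replacing each [fieldName]
        * placeholder in the configured statement with a value generated by the
        * supplied RulesApplier (quoting CHAR/VARCHAR values as needed).
        */
       ```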

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/ResourceList.java
##########
@@ -74,9 +76,11 @@ public ResourceList(String rootResourceDir) {
     private Collection<Path> getResourcesPaths(
             final Pattern pattern) throws Exception {
 
-        final String classPath = System.getProperty("java.class.path", ".");
+        //final String classPath = System.getProperty("java.class.path", ".");
+        // TODO remove

Review comment:
       Do we want to remove this comment and also the one above it?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.
+
+                    Connection conn = null;
+                    PreparedStatement statement = null;
+                    ResultSet rs = null;
+                    Long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    Long resultRowCount = 0L;
+                    Long queryElapsedTime = 0L;
+                    String queryIteration = opName + ":" + opCounter;
+                    try {
+                        conn = phoenixUtil.getConnection(tenantId);
+                        conn.setAutoCommit(true);
+                        // TODO dynamic statements
+                        //final String statementString = query.getDynamicStatement(rulesApplier, scenario);
+                        statement = conn.prepareStatement(query.getStatement());
+                        boolean isQuery = statement.execute();
+                        if (isQuery) {
+                            rs = statement.getResultSet();
+                            boolean isSelectCountStatement = query.getStatement().toUpperCase().trim().contains("COUNT(") ? true : false;
+                            org.apache.hadoop.hbase.util.Pair<Long, Long>
+                                    r = phoenixUtil.getResults(query, rs, queryIteration, isSelectCountStatement, startTime);
+                            resultRowCount = r.getFirst();
+                            queryElapsedTime = r.getSecond();
+                        } else {
+                            conn.commit();
+                        }
+                    } catch (Exception e) {
+                        LOGGER.error("Exception while executing query iteration " + queryIteration, e);
+                    } finally {
+                        try {
+                            if (rs != null) rs.close();
+                            if (statement != null) statement.close();
+                            if (conn != null) conn.close();
+
+                        } catch (Throwable t) {
+                            // swallow;

Review comment:
       nit: You can use Closeables.closeQuietly() for these
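
       As an alternative sketch (a different approach from the utility named above, since
       Connection, PreparedStatement and ResultSet are AutoCloseable rather than Closeable),
       the same cleanup could be expressed with try-with-resources; all identifiers below are
       taken from the surrounding hunk:

           try (Connection conn = phoenixUtil.getConnection(tenantId);
                PreparedStatement statement = conn.prepareStatement(query.getStatement())) {
               conn.setAutoCommit(true);
               if (statement.execute()) {
                   try (ResultSet rs = statement.getResultSet()) {
                       // consume results via phoenixUtil.getResults(...) as above
                   }
               } else {
                   conn.commit();
               }
           } catch (Exception e) {
               LOGGER.error("Exception while executing query iteration " + queryIteration, e);
           }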

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.
+
+                    Connection conn = null;
+                    PreparedStatement statement = null;
+                    ResultSet rs = null;
+                    Long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    Long resultRowCount = 0L;
+                    Long queryElapsedTime = 0L;
+                    String queryIteration = opName + ":" + opCounter;
+                    try {
+                        conn = phoenixUtil.getConnection(tenantId);
+                        conn.setAutoCommit(true);
+                        // TODO dynamic statements
+                        //final String statementString = query.getDynamicStatement(rulesApplier, scenario);
+                        statement = conn.prepareStatement(query.getStatement());
+                        boolean isQuery = statement.execute();
+                        if (isQuery) {
+                            rs = statement.getResultSet();
+                            boolean isSelectCountStatement = query.getStatement().toUpperCase().trim().contains("COUNT(") ? true : false;
+                            org.apache.hadoop.hbase.util.Pair<Long, Long>
+                                    r = phoenixUtil.getResults(query, rs, queryIteration, isSelectCountStatement, startTime);
+                            resultRowCount = r.getFirst();
+                            queryElapsedTime = r.getSecond();
+                        } else {
+                            conn.commit();
+                        }
+                    } catch (Exception e) {
+                        LOGGER.error("Exception while executing query iteration " + queryIteration, e);
+                    } finally {
+                        try {
+                            if (rs != null) rs.close();
+                            if (statement != null) statement.close();
+                            if (conn != null) conn.close();
+
+                        } catch (Throwable t) {
+                            // swallow;
+                        }
+                    }
+                    return new OperationStats(input, startTime, 0, resultRowCount, queryElapsedTime);
+                }
+            };
+        }
+    }
+
+    class UpsertTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+
+                    final int batchSize = loadProfile.getBatchSize();
+                    final boolean useBatchApi = batchSize != 0;
+                    final int rowCount = useBatchApi ? batchSize : 1;
+
+                    final UpsertOperation operation = (UpsertOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final Upsert upsert = operation.getUpsert();
+                    final String tableName = input.getTableName();
+                    final String scenarioName = input.getScenarioName();
+                    final List<Column> columns = upsert.getColumn();
+
+                    final String opName = String.format("%s:%s:%s:%s:%s",
+                            scenarioName, tableName, opGroup, tenantGroup, tenantId);
+
+                    long rowsCreated = 0;
+                    long startTime = 0, duration, totalDuration;
+                    SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
+                    try (Connection connection = phoenixUtil.getConnection(tenantId)) {
+                        connection.setAutoCommit(true);

Review comment:
       We call commit() later anyway. Is setting this necessary? If we run UPSERT SELECTs or DELETEs with auto-commit, that will change where they execute (client side vs. server side), which might be undesirable.
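
       A minimal sketch of the alternative this implies, assuming explicit commits are
       acceptable for this workload (identifiers are taken from the surrounding hunk; the
       batch-API path is elided):

           try (Connection connection = phoenixUtil.getConnection(tenantId)) {
               // Leave auto-commit off so UPSERT SELECT / DELETE stay client-driven.
               connection.setAutoCommit(false);
               PreparedStatement stmt = connection.prepareStatement(sql);
               try {
                   for (long i = rowCount; i > 0; i--) {
                       stmt = phoenixUtil.buildStatement(rulesApplier, scenario, columns, stmt,
                               simpleDateFormat);
                       rowsCreated += stmt.executeUpdate();
                   }
                   // One explicit commit after the writes instead of per-statement auto-commit.
                   connection.commit();
               } finally {
                   stmt.close();
               }
           }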

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationWorkHandler.java
##########
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkload.TenantOperationEvent;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * TODO Documentation

Review comment:
       The class documentation is still a TODO; please fill it in.
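
       One possible wording, inferred only from the imports and the event type visible in
       this hunk (a suggestion, not the author's description):

           /**
            * A {@link WorkHandler} that consumes {@link TenantOperationEvent}s published for a
            * given {@link Scenario} and executes the corresponding tenant operation, reporting
            * the resulting {@link OperationStats}.
            */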

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {

Review comment:
       nit: Might be worth refactoring all these anonymous classes into small named classes to cut down the boilerplate.
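
       For illustration, a minimal sketch of one way to do that for the no-op case; the
       class name is illustrative and not part of the patch:

           private static final class NoopOperationImpl implements NoopOperation {
               private final Noop noOp;

               NoopOperationImpl(Noop noOp) {
                   this.noOp = noOp;
               }

               @Override public Noop getNoop() {
                   return noOp;
               }

               @Override public String getId() {
                   return noOp.getId();
               }

               @Override public OperationType getType() {
                   return OperationType.NO_OP;
               }
           }

       The constructor loop would then reduce to operationList.add(new NoopOperationImpl(noOp)),
       and the same pattern applies to the upsert, query and UDF cases.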

##########
File path: phoenix-pherf/src/test/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationEventGeneratorTest.java
##########
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import org.apache.phoenix.pherf.XMLConfigParserTest;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.xml.bind.UnmarshalException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public class TenantOperationEventGeneratorTest {
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationEventGeneratorTest.class);
+    private enum TestOperationGroup {
+        op1, op2, op3, op4, op5
+    }
+
+    private enum TestTenantGroup {
+        tg1, tg2, tg3
+    }
+
+    public DataModel readTestDataModel(String resourceName) throws Exception {
+        URL scenarioUrl = XMLConfigParserTest.class.getResource(resourceName);
+        assertNotNull(scenarioUrl);
+        Path p = Paths.get(scenarioUrl.toURI());
+        try {
+            return XMLConfigParser.readDataModel(p);
+        } catch (UnmarshalException e) {
+            // If we don't parse the DTD, the variable 'name' won't be defined in the XML
+            LOGGER.warn("Caught expected exception", e);
+        }
+        return null;
+    }
+
+    /**
+     * Case 1 : where some operations have zero weight
+     * Case 2 : where some tenant groups have zero weight
+     * Case 3 : where no operations and tenant groups have zero weight
+     * Case 4 : where some combinations of operation and tenant groups have zero weight
+     *
+     * @throws Exception
+     */
+    @Test
+    public void testVariousEventGeneration() throws Exception {
+        int numRuns = 10;
+        int numOperations = 100000;
+        int allowedVariance = 1000;
+        int normalizedOperations = (numOperations * numRuns) / 10000;
+        int numTenantGroups = 3;
+        int numOpGroups = 5;
+
+        PhoenixUtil pUtil = PhoenixUtil.create();
+        DataModel model = readTestDataModel("/scenario/test_evt_gen1.xml");
+        for (Scenario scenario : model.getScenarios()) {
+            LOGGER.debug(String.format("Testing %s", scenario.getName()));
+            LoadProfile loadProfile = scenario.getLoadProfile();
+            assertTrue("tenant group size is not as expected: ",

Review comment:
       nit: assertEquals throughout
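
       For example, the truncated assertion above could become the following; the expected
       and actual expressions are assumptions based on the variables visible in this test:

           assertEquals("tenant group size is not as expected",
                   numTenantGroups, loadProfile.getTenantDistribution().size());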

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/OperationStats.java
##########
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt;
+
+import org.apache.phoenix.pherf.result.ResultValue;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationInfo;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Holds metrics + contextual info on the operation run.
+ */
+public class OperationStats {

Review comment:
       Shouldn't OperationStats really be different per Operation? Operation is an interface, whereas this is a concrete class. Each operation type might have its own stats, no? Maybe make this an abstract class instead and have each operation type implement its own stats class that extends it?
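
       A rough sketch of that idea, with illustrative names only (the real fields live in
       the patch's OperationStats):

           public abstract class OperationStats {
               private final TenantOperationInfo input;
               private final long startTime;
               private final long durationMs;

               protected OperationStats(TenantOperationInfo input, long startTime, long durationMs) {
                   this.input = input;
                   this.startTime = startTime;
                   this.durationMs = durationMs;
               }

               public TenantOperationInfo getInput() { return input; }
               public long getStartTime() { return startTime; }
               public long getDurationMs() { return durationMs; }
           }

           class QueryOperationStats extends OperationStats {
               private final long resultRowCount; // query-specific metric

               QueryOperationStats(TenantOperationInfo input, long startTime, long durationMs,
                       long resultRowCount) {
                   super(input, startTime, durationMs);
                   this.resultRowCount = resultRowCount;
               }

               long getResultRowCount() { return resultRowCount; }
           }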

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.
+
+                    Connection conn = null;
+                    PreparedStatement statement = null;
+                    ResultSet rs = null;
+                    Long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    Long resultRowCount = 0L;
+                    Long queryElapsedTime = 0L;
+                    String queryIteration = opName + ":" + opCounter;
+                    try {
+                        conn = phoenixUtil.getConnection(tenantId);
+                        conn.setAutoCommit(true);
+                        // TODO dynamic statements
+                        //final String statementString = query.getDynamicStatement(rulesApplier, scenario);
+                        statement = conn.prepareStatement(query.getStatement());
+                        boolean isQuery = statement.execute();
+                        if (isQuery) {
+                            rs = statement.getResultSet();
+                            boolean isSelectCountStatement = query.getStatement().toUpperCase().trim().contains("COUNT(") ? true : false;
+                            org.apache.hadoop.hbase.util.Pair<Long, Long>
+                                    r = phoenixUtil.getResults(query, rs, queryIteration, isSelectCountStatement, startTime);
+                            resultRowCount = r.getFirst();
+                            queryElapsedTime = r.getSecond();
+                        } else {
+                            conn.commit();
+                        }
+                    } catch (Exception e) {
+                        LOGGER.error("Exception while executing query iteration " + queryIteration, e);
+                    } finally {
+                        try {
+                            if (rs != null) rs.close();
+                            if (statement != null) statement.close();
+                            if (conn != null) conn.close();
+
+                        } catch (Throwable t) {
+                            // swallow;
+                        }
+                    }
+                    return new OperationStats(input, startTime, 0, resultRowCount, queryElapsedTime);
+                }
+            };
+        }
+    }
+
+    class UpsertTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+
+                    final int batchSize = loadProfile.getBatchSize();
+                    final boolean useBatchApi = batchSize != 0;
+                    final int rowCount = useBatchApi ? batchSize : 1;
+
+                    final UpsertOperation operation = (UpsertOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final Upsert upsert = operation.getUpsert();
+                    final String tableName = input.getTableName();
+                    final String scenarioName = input.getScenarioName();
+                    final List<Column> columns = upsert.getColumn();
+
+                    final String opName = String.format("%s:%s:%s:%s:%s",
+                            scenarioName, tableName, opGroup, tenantGroup, tenantId);
+
+                    long rowsCreated = 0;
+                    long startTime = 0, duration, totalDuration;
+                    SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
+                    try (Connection connection = phoenixUtil.getConnection(tenantId)) {
+                        connection.setAutoCommit(true);
+                        startTime = EnvironmentEdgeManager.currentTimeMillis();
+                        String sql = phoenixUtil.buildSql(columns, tableName);
+                        PreparedStatement stmt = null;
+                        try {
+                            stmt = connection.prepareStatement(sql);
+                            for (long i = rowCount; i > 0; i--) {
+                                LOGGER.debug("Operation " + opName + " executing ");
+                                stmt = phoenixUtil.buildStatement(rulesApplier, scenario, columns, stmt, simpleDateFormat);
+                                if (useBatchApi) {
+                                    stmt.addBatch();
+                                } else {
+                                    rowsCreated += stmt.executeUpdate();
+                                }
+                            }
+                        } catch (SQLException e) {
+                            LOGGER.error("Operation " + opName + " failed with exception ", e);
+                            throw e;
+                        } finally {
+                            // Need to keep the statement open to send the remaining batch of updates
+                            if (!useBatchApi && stmt != null) {
+                                stmt.close();
+                            }
+                            if (connection != null) {
+                                if (useBatchApi && stmt != null) {
+                                    int[] results = stmt.executeBatch();
+                                    for (int x = 0; x < results.length; x++) {
+                                        int result = results[x];
+                                        if (result < 1) {
+                                            final String msg =
+                                                    "Failed to write update in batch (update count="
+                                                            + result + ")";
+                                            throw new RuntimeException(msg);
+                                        }
+                                        rowsCreated += result;
+                                    }
+                                    // Close the statement after our last batch execution.
+                                    stmt.close();
+                                }
+
+                                try {
+                                    connection.commit();
+                                    duration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                                    LOGGER.info("Writer ( " + Thread.currentThread().getName()
+                                            + ") committed Final Batch. Duration (" + duration + ") Ms");
+                                    connection.close();
+                                } catch (SQLException e) {
+                                    // Swallow since we are closing anyway
+                                    e.printStackTrace();
+                                }
+                            }
+                        }
+                    } catch (SQLException throwables) {
+                        throw new RuntimeException(throwables);
+                    } catch (Exception e) {
+                        throw new RuntimeException(e);
+                    }
+
+                    totalDuration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                    return new OperationStats(input, startTime, 0, rowsCreated, totalDuration);
+                }
+            };
+        }
+    }
+
+    class PreScenarioTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+                @Override public OperationStats apply(final TenantOperationInfo input) {
+                    final PreScenarioOperation operation = (PreScenarioOperation) input.getOperation();
+                    final String tenantId = input.getTenantId();
+                    final String tableName = scenario.getTableName();
+
+                    long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    if (!operation.getPreScenarioDdls().isEmpty()) {
+                        try (Connection conn = phoenixUtil.getConnection(tenantId)) {
+                            for (Ddl ddl : scenario.getPreScenarioDdls()) {
+                                LOGGER.info("\nExecuting DDL:" + ddl + " on tenantId:" + tenantId);
+                                phoenixUtil.executeStatement(ddl.toString(), conn);
+                                if (ddl.getStatement().toUpperCase().contains(phoenixUtil.ASYNC_KEYWORD)) {
+                                    phoenixUtil.waitForAsyncIndexToFinish(ddl.getTableName());
+                                }
+                            }
+                        } catch (SQLException throwables) {
+                            throw new RuntimeException(throwables);
+                        } catch (Exception e) {
+                            throw new RuntimeException(e);
+                        }
+                    }
+                    long totalDuration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                    return new OperationStats(input, startTime,0, operation.getPreScenarioDdls().size(), totalDuration);
+
+                }
+            };
+        }
+    }
+
+    @VisibleForTesting
+    class NoopTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+                @Override public OperationStats apply(final TenantOperationInfo input) {
+
+                    final NoopOperation operation = (NoopOperation) input.getOperation();
+                    final Noop noop = operation.getNoop();
+
+                    long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    // Sleep for the specified time to simulate idle time.
+                    try {
+                        TimeUnit.MILLISECONDS.sleep(noop.getIdleTime());
+                        long duration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                        return new OperationStats(input, startTime, 0, 0, duration);
+                    } catch (InterruptedException e) {
+                        e.printStackTrace();
+                        long duration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                        return new OperationStats(input, startTime,-1, 0, duration);
+                    }
+                }
+            };
+        }
+    }
+
+    class UserDefinedOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+                @Override public OperationStats apply(final TenantOperationInfo input) {
+                    // TODO : implement user defined operation invocation.

Review comment:
       Can you create a JIRA for this and link it in the TODO so we can keep track of it?
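       A possible shape for the updated TODO; the JIRA id below is only a placeholder until the tracking issue is actually filed:

           // TODO (PHOENIX-XXXX): implement user defined operation invocation.
           //      Replace XXXX with the id of the follow-up JIRA once it is created.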

##########
File path: phoenix-pherf/src/test/java/org/apache/phoenix/pherf/ConfigurationParserTest.java
##########
@@ -122,22 +125,61 @@ public void testConfigReader() {
         }
     }
 
-    private URL getResourceUrl() {
-        URL resourceUrl = getClass().getResource("/scenario/test_scenario.xml");
+    @Test
+    public void testWorkloadWithLoadProfile() throws Exception {
+        String testResourceName = "/scenario/test_scenario_with_load_profile.xml";
+        Set<String> scenarioNames = Sets.newHashSet("scenario_11", "scenario_12");
+        List<Scenario> scenarioList = getScenarios(testResourceName);
+        Scenario target = null;
+        for (Scenario scenario : scenarioList) {
+            if (scenarioNames.contains(scenario.getName())) {
+                target = scenario;
+            }
+            assertNotNull("Could not find scenario: " + scenario.getName(), target);
+        }
+
+        Scenario testScenarioWithLoadProfile = scenarioList.get(0);
+        LoadProfile loadProfile = testScenarioWithLoadProfile.getLoadProfile();
+        assertTrue("batch size not as expected: ",

Review comment:
       Nit: Use assertEquals() everywhere
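       For illustration, a self-contained JUnit 4 sketch of the suggested style; the expected value here is a placeholder, not the one from test_scenario_with_load_profile.xml:

           import static org.junit.Assert.assertEquals;
           import org.junit.Test;

           public class AssertStyleSketch {
               @Test
               public void reportsExpectedVsActual() {
                   int batchSize = 1; // stand-in for loadProfile.getBatchSize()
                   // assertTrue("batch size not as expected: ", batchSize == 1);  // only says true/false
                   assertEquals("batch size not as expected", 1, batchSize);       // reports expected vs. actual
               }
           }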

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());

Review comment:
       nit: Logger.error(String, e) should be sufficient
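       For reference, a sketch of the simplified call using the identifiers from the hunk above; SLF4J prints the stack trace when the throwable is passed as the last argument, so fillInStackTrace() is not needed:

           } catch (Exception e) {
               LOGGER.error(String.format("Failed to initialize tenant. [%s, %s]",
                       tenantView.getTenantId(), tenantView.getViewName()), e);
           }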

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.
+
+                    Connection conn = null;
+                    PreparedStatement statement = null;
+                    ResultSet rs = null;
+                    Long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    Long resultRowCount = 0L;
+                    Long queryElapsedTime = 0L;
+                    String queryIteration = opName + ":" + opCounter;
+                    try {
+                        conn = phoenixUtil.getConnection(tenantId);
+                        conn.setAutoCommit(true);
+                        // TODO dynamic statements
+                        //final String statementString = query.getDynamicStatement(rulesApplier, scenario);
+                        statement = conn.prepareStatement(query.getStatement());
+                        boolean isQuery = statement.execute();
+                        if (isQuery) {
+                            rs = statement.getResultSet();
+                            boolean isSelectCountStatement = query.getStatement().toUpperCase().trim().contains("COUNT(") ? true : false;
+                            org.apache.hadoop.hbase.util.Pair<Long, Long>
+                                    r = phoenixUtil.getResults(query, rs, queryIteration, isSelectCountStatement, startTime);
+                            resultRowCount = r.getFirst();
+                            queryElapsedTime = r.getSecond();
+                        } else {
+                            conn.commit();
+                        }
+                    } catch (Exception e) {
+                        LOGGER.error("Exception while executing query iteration " + queryIteration, e);
+                    } finally {
+                        try {
+                            if (rs != null) rs.close();
+                            if (statement != null) statement.close();
+                            if (conn != null) conn.close();
+
+                        } catch (Throwable t) {
+                            // swallow;
+                        }
+                    }
+                    return new OperationStats(input, startTime, 0, resultRowCount, queryElapsedTime);
+                }
+            };
+        }
+    }
+
+    class UpsertTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+
+                    final int batchSize = loadProfile.getBatchSize();
+                    final boolean useBatchApi = batchSize != 0;
+                    final int rowCount = useBatchApi ? batchSize : 1;
+
+                    final UpsertOperation operation = (UpsertOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final Upsert upsert = operation.getUpsert();
+                    final String tableName = input.getTableName();
+                    final String scenarioName = input.getScenarioName();
+                    final List<Column> columns = upsert.getColumn();
+
+                    final String opName = String.format("%s:%s:%s:%s:%s",
+                            scenarioName, tableName, opGroup, tenantGroup, tenantId);
+
+                    long rowsCreated = 0;
+                    long startTime = 0, duration, totalDuration;
+                    SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
+                    try (Connection connection = phoenixUtil.getConnection(tenantId)) {
+                        connection.setAutoCommit(true);
+                        startTime = EnvironmentEdgeManager.currentTimeMillis();
+                        String sql = phoenixUtil.buildSql(columns, tableName);
+                        PreparedStatement stmt = null;
+                        try {
+                            stmt = connection.prepareStatement(sql);
+                            for (long i = rowCount; i > 0; i--) {
+                                LOGGER.debug("Operation " + opName + " executing ");
+                                stmt = phoenixUtil.buildStatement(rulesApplier, scenario, columns, stmt, simpleDateFormat);
+                                if (useBatchApi) {
+                                    stmt.addBatch();
+                                } else {
+                                    rowsCreated += stmt.executeUpdate();
+                                }
+                            }
+                        } catch (SQLException e) {
+                            LOGGER.error("Operation " + opName + " failed with exception ", e);
+                            throw e;
+                        } finally {
+                            // Need to keep the statement open to send the remaining batch of updates
+                            if (!useBatchApi && stmt != null) {
+                                stmt.close();
+                            }
+                            if (connection != null) {
+                                if (useBatchApi && stmt != null) {
+                                    int[] results = stmt.executeBatch();
+                                    for (int x = 0; x < results.length; x++) {
+                                        int result = results[x];
+                                        if (result < 1) {
+                                            final String msg =
+                                                    "Failed to write update in batch (update count="
+                                                            + result + ")";
+                                            throw new RuntimeException(msg);
+                                        }
+                                        rowsCreated += result;
+                                    }
+                                    // Close the statement after our last batch execution.
+                                    stmt.close();
+                                }
+
+                                try {
+                                    connection.commit();
+                                    duration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                                    LOGGER.info("Writer ( " + Thread.currentThread().getName()
+                                            + ") committed Final Batch. Duration (" + duration + ") Ms");
+                                    connection.close();
+                                } catch (SQLException e) {
+                                    // Swallow since we are closing anyway
+                                    e.printStackTrace();
+                                }
+                            }
+                        }
+                    } catch (SQLException throwables) {
+                        throw new RuntimeException(throwables);
+                    } catch (Exception e) {
+                        throw new RuntimeException(e);
+                    }
+
+                    totalDuration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                    return new OperationStats(input, startTime, 0, rowsCreated, totalDuration);
+                }
+            };
+        }
+    }
+
+    class PreScenarioTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+                @Override public OperationStats apply(final TenantOperationInfo input) {
+                    final PreScenarioOperation operation = (PreScenarioOperation) input.getOperation();
+                    final String tenantId = input.getTenantId();
+                    final String tableName = scenario.getTableName();
+
+                    long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    if (!operation.getPreScenarioDdls().isEmpty()) {
+                        try (Connection conn = phoenixUtil.getConnection(tenantId)) {
+                            for (Ddl ddl : scenario.getPreScenarioDdls()) {
+                                LOGGER.info("\nExecuting DDL:" + ddl + " on tenantId:" + tenantId);
+                                phoenixUtil.executeStatement(ddl.toString(), conn);
+                                if (ddl.getStatement().toUpperCase().contains(phoenixUtil.ASYNC_KEYWORD)) {
+                                    phoenixUtil.waitForAsyncIndexToFinish(ddl.getTableName());
+                                }
+                            }
+                        } catch (SQLException throwables) {
+                            throw new RuntimeException(throwables);
+                        } catch (Exception e) {
+                            throw new RuntimeException(e);
+                        }
+                    }
+                    long totalDuration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                    return new OperationStats(input, startTime,0, operation.getPreScenarioDdls().size(), totalDuration);
+
+                }
+            };
+        }
+    }
+
+    @VisibleForTesting
+    class NoopTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+                @Override public OperationStats apply(final TenantOperationInfo input) {
+
+                    final NoopOperation operation = (NoopOperation) input.getOperation();
+                    final Noop noop = operation.getNoop();
+
+                    long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    // Sleep for the specified time to simulate idle time.
+                    try {
+                        TimeUnit.MILLISECONDS.sleep(noop.getIdleTime());
+                        long duration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                        return new OperationStats(input, startTime, 0, 0, duration);
+                    } catch (InterruptedException e) {
+                        e.printStackTrace();

Review comment:
       Log instead of calling printStackTrace().
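       A sketch of how that catch block could look with logging; restoring the interrupt flag is an additional suggestion beyond the review comment itself:

           } catch (InterruptedException e) {
               // Log through SLF4J instead of printing to stderr, and restore the
               // interrupt flag so callers can still observe the interruption.
               LOGGER.warn("Noop operation interrupted while idling", e);
               Thread.currentThread().interrupt();
               long duration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
               return new OperationStats(input, startTime, -1, 0, duration);
           }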

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.
+
+                    Connection conn = null;
+                    PreparedStatement statement = null;
+                    ResultSet rs = null;
+                    Long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    Long resultRowCount = 0L;
+                    Long queryElapsedTime = 0L;
+                    String queryIteration = opName + ":" + opCounter;
+                    try {
+                        conn = phoenixUtil.getConnection(tenantId);
+                        conn.setAutoCommit(true);
+                        // TODO dynamic statements
+                        //final String statementString = query.getDynamicStatement(rulesApplier, scenario);

Review comment:
       Can you explain what you mean by 'dynamic statements'? Do we want to uncomment or remove this?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationImpl.java
##########
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.base.Function;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+
+/**
+ * An interface that implementers can use to provide a function that takes
+ * @see {@link TenantOperationInfo} as an input and gives @see {@link OperationStats} as output.
+ * This @see {@link Function} will invoked by the
+ * @see {@link TenantOperationWorkHandler#onEvent(TenantOperationWorkload.TenantOperationEvent)}
+ * when handling the events.
+ */
+public interface TenantOperationImpl {

Review comment:
       nit: Interface name probably shouldn't end with `Impl`
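       A possible rename, keeping the single-method shape shown in the hunk above (the new name is only a suggestion):

           public interface TenantOperation {
               Function<TenantOperationInfo, OperationStats> getMethod();
           }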

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {

Review comment:
       I meant class-level comments for all the new classes
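       For example, even a short placeholder like this on each new class would do (wording below is just an illustration, not a requirement):

```java
/**
 * Example class-level comment (placeholder wording): describes the load shape
 * of a multi-tenant scenario - batch size, number of operations, and how
 * tenant groups and operation groups are distributed.
 */
@XmlType
public class LoadProfile {
    // ... existing fields and accessors unchanged ...
}
```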

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.

Review comment:
       Are we doing this TODO?
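       If we do want it, I would expect it to look roughly like prepending EXPLAIN to the same statement and folding the plan rows into the stats (sketch only, not tied to the current OperationStats fields):

```java
// Sketch: capture the explain plan alongside the query stats.
// Assumes conn, query and opName from the surrounding code, plus a
// java.sql.Statement import; EXPLAIN returns one plan line per row.
StringBuilder plan = new StringBuilder();
try (Statement explainStmt = conn.createStatement();
     ResultSet planRs = explainStmt.executeQuery("EXPLAIN " + query.getStatement())) {
    while (planRs.next()) {
        plan.append(planRs.getString(1)).append('\n');
    }
}
LOGGER.info("Explain plan for " + opName + ":\n" + plan);
```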

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.
+
+                    Connection conn = null;
+                    PreparedStatement statement = null;
+                    ResultSet rs = null;
+                    Long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    Long resultRowCount = 0L;
+                    Long queryElapsedTime = 0L;
+                    String queryIteration = opName + ":" + opCounter;
+                    try {

Review comment:
       Use try-with-resources instead
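       Something along these lines, reusing the same helpers (sketch only, not a drop-in patch):

```java
// Sketch: same logic, but conn/statement/rs are closed automatically on all
// paths. phoenixUtil, tenantId, query, queryIteration, startTime,
// resultRowCount and queryElapsedTime are the variables already in scope.
try (Connection conn = phoenixUtil.getConnection(tenantId);
     PreparedStatement statement = conn.prepareStatement(query.getStatement())) {
    conn.setAutoCommit(true);
    if (statement.execute()) {
        try (ResultSet rs = statement.getResultSet()) {
            boolean isSelectCountStatement =
                    query.getStatement().toUpperCase().trim().contains("COUNT(");
            org.apache.hadoop.hbase.util.Pair<Long, Long> r = phoenixUtil.getResults(
                    query, rs, queryIteration, isSelectCountStatement, startTime);
            resultRowCount = r.getFirst();
            queryElapsedTime = r.getSecond();
        }
    } else {
        conn.commit();
    }
} catch (Exception e) {
    LOGGER.error("Exception while executing query iteration " + queryIteration, e);
}
```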

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationFactory.java
##########
@@ -0,0 +1,501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+import com.google.common.hash.BloomFilter;
+import com.google.common.hash.Funnel;
+import com.google.common.hash.PrimitiveSink;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Ddl;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Noop;
+import org.apache.phoenix.pherf.configuration.Query;
+import org.apache.phoenix.pherf.configuration.QuerySet;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.TenantGroup;
+import org.apache.phoenix.pherf.configuration.Upsert;
+import org.apache.phoenix.pherf.configuration.UserDefined;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.EventGenerator;
+import org.apache.phoenix.pherf.workload.mt.NoopOperation;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.PreScenarioOperation;
+import org.apache.phoenix.pherf.workload.mt.QueryOperation;
+import org.apache.phoenix.pherf.workload.mt.UpsertOperation;
+import org.apache.phoenix.pherf.workload.mt.UserDefinedOperation;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Factory class for operations.
+ * The class is responsible for creating new instances of various operation types.
+ * Operations typically implement @see {@link TenantOperationImpl}
+ * Operations that need to be executed are generated
+ * by @see {@link EventGenerator}
+ */
+public class TenantOperationFactory {
+
+    private static class TenantView {
+        private final String tenantId;
+        private final String viewName;
+
+        public TenantView(String tenantId, String viewName) {
+            this.tenantId = tenantId;
+            this.viewName = viewName;
+        }
+
+        public String getTenantId() {
+            return tenantId;
+        }
+
+        public String getViewName() {
+            return viewName;
+        }
+    }
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationFactory.class);
+    private final PhoenixUtil phoenixUtil;
+    private final DataModel model;
+    private final Scenario scenario;
+    private final XMLConfigParser parser;
+
+    private final RulesApplier rulesApplier;
+    private final LoadProfile loadProfile;
+    private final List<Operation> operationList = Lists.newArrayList();
+
+    private final BloomFilter<TenantView> tenantsLoaded;
+
+    public TenantOperationFactory(PhoenixUtil phoenixUtil, DataModel model, Scenario scenario) {
+        this.phoenixUtil = phoenixUtil;
+        this.model = model;
+        this.scenario = scenario;
+        this.parser = null;
+        this.rulesApplier = new RulesApplier(model);
+        this.loadProfile = this.scenario.getLoadProfile();
+        Funnel<TenantView> tenantViewFunnel = new Funnel<TenantView>() {
+            @Override
+            public void funnel(TenantView tenantView, PrimitiveSink into) {
+                into.putString(tenantView.getTenantId(), Charsets.UTF_8)
+                        .putString(tenantView.getViewName(), Charsets.UTF_8);
+            }
+        };
+
+        int numTenants = 0;
+        for (TenantGroup tg : loadProfile.getTenantDistribution()) {
+            numTenants += tg.getNumTenants();
+        }
+
+        // This holds the info whether the tenant view was created (initialized) or not.
+        tenantsLoaded = BloomFilter.create(tenantViewFunnel, numTenants, 0.01);
+
+        // Read the scenario definition and load the various operations.
+        for (final Noop noOp : scenario.getNoop()) {
+            Operation noopOperation = new NoopOperation() {
+                @Override public Noop getNoop() {
+                    return noOp;
+                }
+                @Override public String getId() {
+                    return noOp.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.NO_OP;
+                }
+            };
+            operationList.add(noopOperation);
+        }
+
+        for (final Upsert upsert : scenario.getUpsert()) {
+            Operation upsertOp = new UpsertOperation() {
+                @Override public Upsert getUpsert() {
+                    return upsert;
+                }
+
+                @Override public String getId() {
+                    return upsert.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.UPSERT;
+                }
+            };
+            operationList.add(upsertOp);
+        }
+        for (final QuerySet querySet : scenario.getQuerySet()) {
+            for (final Query query : querySet.getQuery()) {
+                Operation queryOp = new QueryOperation() {
+                    @Override public Query getQuery() {
+                        return query;
+                    }
+
+                    @Override public String getId() {
+                        return query.getId();
+                    }
+
+                    @Override public OperationType getType() {
+                        return OperationType.SELECT;
+                    }
+                };
+                operationList.add(queryOp);
+            }
+        }
+
+        for (final UserDefined udf : scenario.getUdf()) {
+            Operation udfOperation = new UserDefinedOperation() {
+                @Override public UserDefined getUserFunction() {
+                    return udf;
+                }
+
+                @Override public String getId() {
+                    return udf.getId();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.USER_DEFINED;
+                }
+            };
+            operationList.add(udfOperation);
+        }
+    }
+
+    public PhoenixUtil getPhoenixUtil() {
+        return phoenixUtil;
+    }
+
+    public DataModel getModel() {
+        return model;
+    }
+
+    public Scenario getScenario() {
+        return scenario;
+    }
+
+    public List<Operation> getOperationsForScenario() {
+        return operationList;
+    }
+
+    public TenantOperationImpl getOperation(final TenantOperationInfo input) {
+        TenantView tenantView = new TenantView(input.getTenantId(), scenario.getTableName());
+
+        // Check if pre run ddls are needed.
+        if (!tenantsLoaded.mightContain(tenantView)) {
+            // Initialize the tenant using the pre scenario ddls.
+            final PreScenarioOperation operation = new PreScenarioOperation() {
+                @Override public List<Ddl> getPreScenarioDdls() {
+                    List<Ddl> ddls = scenario.getPreScenarioDdls();
+                    return ddls == null ? Lists.<Ddl>newArrayList() : ddls;
+                }
+
+                @Override public String getId() {
+                    return OperationType.PRE_RUN.name();
+                }
+
+                @Override public OperationType getType() {
+                    return OperationType.PRE_RUN;
+                }
+            };
+            // Initialize with the pre run operation.
+            TenantOperationInfo preRunSample = new TenantOperationInfo(
+                    input.getModelName(),
+                    input.getScenarioName(),
+                    input.getTableName(),
+                    input.getTenantGroupId(),
+                    Operation.OperationType.PRE_RUN.name(),
+                    input.getTenantId(), operation);
+
+            TenantOperationImpl impl = new PreScenarioTenantOperationImpl();
+            try {
+                // Run the initialization operation.
+                OperationStats stats = impl.getMethod().apply(preRunSample);
+                LOGGER.info(phoenixUtil.getGSON().toJson(stats));
+            } catch (Exception e) {
+                LOGGER.error(
+                        String.format("Failed to initialize tenant. [%s, %s] ",
+                                tenantView.tenantId,
+                                tenantView.viewName
+                        ), e.fillInStackTrace());
+            }
+            tenantsLoaded.put(tenantView);
+        }
+
+        switch (input.getOperation().getType()) {
+        case NO_OP:
+            return new NoopTenantOperationImpl();
+        case SELECT:
+            return new QueryTenantOperationImpl();
+        case UPSERT:
+            return new UpsertTenantOperationImpl();
+        case USER_DEFINED:
+            return new UserDefinedOperationImpl();
+        default:
+            throw new IllegalArgumentException("Unknown operation type");
+        }
+    }
+
+    class QueryTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+                    final QueryOperation operation = (QueryOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final String scenarioName = input.getScenarioName();
+                    final String tableName = input.getTableName();
+                    final Query query = operation.getQuery();
+                    final long opCounter = 1;
+
+                    String opName = String.format("%s:%s:%s:%s:%s", scenarioName, tableName,
+                            opGroup, tenantGroup, tenantId);
+                    LOGGER.info("\nExecuting query " + query.getStatement());
+                    // TODO add explain plan output to the stats.
+
+                    Connection conn = null;
+                    PreparedStatement statement = null;
+                    ResultSet rs = null;
+                    Long startTime = EnvironmentEdgeManager.currentTimeMillis();
+                    Long resultRowCount = 0L;
+                    Long queryElapsedTime = 0L;
+                    String queryIteration = opName + ":" + opCounter;
+                    try {
+                        conn = phoenixUtil.getConnection(tenantId);
+                        conn.setAutoCommit(true);
+                        // TODO dynamic statements
+                        //final String statementString = query.getDynamicStatement(rulesApplier, scenario);
+                        statement = conn.prepareStatement(query.getStatement());
+                        boolean isQuery = statement.execute();
+                        if (isQuery) {
+                            rs = statement.getResultSet();
+                            boolean isSelectCountStatement = query.getStatement().toUpperCase().trim().contains("COUNT(") ? true : false;
+                            org.apache.hadoop.hbase.util.Pair<Long, Long>
+                                    r = phoenixUtil.getResults(query, rs, queryIteration, isSelectCountStatement, startTime);
+                            resultRowCount = r.getFirst();
+                            queryElapsedTime = r.getSecond();
+                        } else {
+                            conn.commit();
+                        }
+                    } catch (Exception e) {
+                        LOGGER.error("Exception while executing query iteration " + queryIteration, e);
+                    } finally {
+                        try {
+                            if (rs != null) rs.close();
+                            if (statement != null) statement.close();
+                            if (conn != null) conn.close();
+
+                        } catch (Throwable t) {
+                            // swallow;
+                        }
+                    }
+                    return new OperationStats(input, startTime, 0, resultRowCount, queryElapsedTime);
+                }
+            };
+        }
+    }
+
+    class UpsertTenantOperationImpl implements TenantOperationImpl {
+
+        @Override public Function<TenantOperationInfo, OperationStats> getMethod() {
+            return new Function<TenantOperationInfo, OperationStats>() {
+
+                @Nullable @Override public OperationStats apply(@Nullable TenantOperationInfo input) {
+
+                    final int batchSize = loadProfile.getBatchSize();
+                    final boolean useBatchApi = batchSize != 0;
+                    final int rowCount = useBatchApi ? batchSize : 1;
+
+                    final UpsertOperation operation = (UpsertOperation) input.getOperation();
+                    final String tenantGroup = input.getTenantGroupId();
+                    final String opGroup = input.getOperationGroupId();
+                    final String tenantId = input.getTenantId();
+                    final Upsert upsert = operation.getUpsert();
+                    final String tableName = input.getTableName();
+                    final String scenarioName = input.getScenarioName();
+                    final List<Column> columns = upsert.getColumn();
+
+                    final String opName = String.format("%s:%s:%s:%s:%s",
+                            scenarioName, tableName, opGroup, tenantGroup, tenantId);
+
+                    long rowsCreated = 0;
+                    long startTime = 0, duration, totalDuration;
+                    SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
+                    try (Connection connection = phoenixUtil.getConnection(tenantId)) {
+                        connection.setAutoCommit(true);
+                        startTime = EnvironmentEdgeManager.currentTimeMillis();
+                        String sql = phoenixUtil.buildSql(columns, tableName);
+                        PreparedStatement stmt = null;
+                        try {
+                            stmt = connection.prepareStatement(sql);
+                            for (long i = rowCount; i > 0; i--) {
+                                LOGGER.debug("Operation " + opName + " executing ");
+                                stmt = phoenixUtil.buildStatement(rulesApplier, scenario, columns, stmt, simpleDateFormat);
+                                if (useBatchApi) {
+                                    stmt.addBatch();
+                                } else {
+                                    rowsCreated += stmt.executeUpdate();
+                                }
+                            }
+                        } catch (SQLException e) {
+                            LOGGER.error("Operation " + opName + " failed with exception ", e);
+                            throw e;
+                        } finally {
+                            // Need to keep the statement open to send the remaining batch of updates
+                            if (!useBatchApi && stmt != null) {
+                                stmt.close();
+                            }
+                            if (connection != null) {
+                                if (useBatchApi && stmt != null) {
+                                    int[] results = stmt.executeBatch();
+                                    for (int x = 0; x < results.length; x++) {
+                                        int result = results[x];
+                                        if (result < 1) {
+                                            final String msg =
+                                                    "Failed to write update in batch (update count="
+                                                            + result + ")";
+                                            throw new RuntimeException(msg);
+                                        }
+                                        rowsCreated += result;
+                                    }
+                                    // Close the statement after our last batch execution.
+                                    stmt.close();
+                                }
+
+                                try {
+                                    connection.commit();
+                                    duration = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+                                    LOGGER.info("Writer ( " + Thread.currentThread().getName()
+                                            + ") committed Final Batch. Duration (" + duration + ") Ms");
+                                    connection.close();
+                                } catch (SQLException e) {
+                                    // Swallow since we are closing anyway
+                                    e.printStackTrace();

Review comment:
       Log instead
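       e.g. keep swallowing it (we are closing anyway), but leave a trace in the logs rather than stderr:

```java
LOGGER.warn("Commit/close failed in writer " + Thread.currentThread().getName(), e);
```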

##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationIT.java
##########
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.end2end.BaseHBaseManagedTimeIT;
+import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
+import org.apache.phoenix.pherf.PherfConstants;
+import org.apache.phoenix.pherf.XMLConfigParserTest;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.schema.SchemaReader;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationEventGenerator;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.NoopTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.PreScenarioTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.QueryTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UpsertTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UserDefinedOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactoryTest;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationInfo;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.xml.bind.UnmarshalException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+public class TenantOperationIT extends MultiTenantOperationBaseIT {
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationIT.class);
+
+    @Test
+    public void testVariousOperations() throws Exception {
+        int numTenantGroups = 3;
+        int numOpGroups = 5;
+        int numRuns = 10;
+        int numOperations = 10;
+
+        PhoenixUtil pUtil = PhoenixUtil.create();
+        DataModel model = readTestDataModel("/scenario/test_mt_workload.xml");
+        for (Scenario scenario : model.getScenarios()) {
+            LOGGER.debug(String.format("Testing %s", scenario.getName()));
+            LoadProfile loadProfile = scenario.getLoadProfile();
+            assertTrue("tenant group size is not as expected: ",

Review comment:
       Same for other such instances 

##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/PherfMainIT.java
##########
@@ -50,7 +51,7 @@
     @Rule
     public final ExpectedSystemExit exit = ExpectedSystemExit.none();
 
-    @Test
+    @Ignore

Review comment:
       Same question

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+
+@XmlType
+public class Noop {

Review comment:
       Ping @jpisaac I think we should still consider renaming the class so it is clear that it is introduced for the sole purpose of adding wait time. Maybe call it `IdleOp`




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r490415418



##########
File path: phoenix-pherf/src/test/resources/scenario/test_scenario_with_load_profile.xml
##########
@@ -0,0 +1,362 @@
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

Review comment:
       @yanxinyi This file is used in testWorkloadWithLoadProfile in ConfigurationParserTest. So I think it makes sense to keep it in the test/resources folder.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r487289670



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,29 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       please add apache license 

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/OperationGroup.java
##########
@@ -0,0 +1,26 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       nit: please add apache license
   
   

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;

Review comment:
       Why are we setting this to a negative number? Shouldn't it be at least 1?
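       If the sentinel is intentional (e.g. meaning "no batching configured"), a named constant would make that clearer; otherwise something like this seems safer (just a suggestion, not taken from the PR):

```java
// Hypothetical default: make the intent explicit instead of relying on
// Integer.MIN_VALUE as an implicit sentinel.
private static final int DEFAULT_BATCH_SIZE = 1;

public LoadProfile() {
    this.batchSize = DEFAULT_BATCH_SIZE;
}
```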

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;
+    }
+
+    public int getBatchSize() {
+        return batchSize;
+    }
+
+    public void setBatchSize(int batchSize) {

Review comment:
       I didn't see this set method being called anywhere. Where do we actually set the batch size value?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;
+    }
+
+    public int getBatchSize() {
+        return batchSize;
+    }
+
+    public void setBatchSize(int batchSize) {

Review comment:
       I didn't see anywhere calling this set method. Where is the place that we are setting this batch size value?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,29 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       please add apache license 

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/OperationGroup.java
##########
@@ -0,0 +1,26 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       nit: please add apache license
   
   

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;

Review comment:
       why we are setting this to a negative number? Should be at lease 1?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;
+    }
+
+    public int getBatchSize() {
+        return batchSize;
+    }
+
+    public void setBatchSize(int batchSize) {

Review comment:
       I didn't see anywhere calling this set method. Where is the place that we are setting this batch size value?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,29 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       please add apache license 

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/OperationGroup.java
##########
@@ -0,0 +1,26 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       nit: please add apache license
   
   

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;

Review comment:
       why we are setting this to a negative number? Should be at lease 1?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;
+    }
+
+    public int getBatchSize() {
+        return batchSize;
+    }
+
+    public void setBatchSize(int batchSize) {

Review comment:
       I didn't see anywhere calling this set method. Where is the place that we are setting this batch size value?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r488133844



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+
+@XmlType
+public class Noop {

Review comment:
       I think this is for a no-op operation that simulates idle time.
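
   Editor's note: that reading is consistent with the IdleTimeOperationSupplier that appears in the later spotbugs output, i.e. Noop would be the scenario element describing idle time between operations. A hedged sketch of what such a JAXB-bound element could look like (the idleTime attribute name is an assumption for illustration; the quoted hunk is truncated before the fields):

    import javax.xml.bind.annotation.XmlAttribute;
    import javax.xml.bind.annotation.XmlType;

    // Sketch only: a "no-op" configuration element that models idle time rather
    // than issuing any query or upsert.
    @XmlType
    public class NoopSketch {
        private long idleTime;   // assumed: milliseconds to stay idle

        @XmlAttribute
        public long getIdleTime() { return idleTime; }

        public void setIdleTime(long idleTime) { this.idleTime = idleTime; }
    }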




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r538002203



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/rules/RulesApplier.java
##########
@@ -138,9 +161,10 @@ public DataValue getDataForRule(Scenario scenario, Column phxMetaColumn) throws
             // Assume the first rule map
             Map<DataTypeMapping, List> ruleMap = modelList.get(0);
             List<Column> ruleList = ruleMap.get(phxMetaColumn.getType());
+            //LOGGER.info(String.format("Did not found a correct override column rule, %s, %s", phxMetaColumn.getName(), phxMetaColumn.getType()));

Review comment:
       nit: remove unused statement




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r537949628



##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/MultiTenantOperationBaseIT.java
##########
@@ -0,0 +1,68 @@
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;

Review comment:
       nit: add the Apache license header




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r490415606



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Upsert.java
##########
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import org.apache.phoenix.pherf.rules.RulesApplier;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class Upsert {
+
+    private String id;
+    private String upsertGroup;
+    private String statement;
+    private List<Column> columns;
+    private Pattern pattern;
+    private long timeoutDuration = Long.MAX_VALUE;
+
+    public Upsert() {
+    	pattern = Pattern.compile("\\[.*?\\]");
+    }
+    
+
+    public String getDynamicStatement(RulesApplier ruleApplier, Scenario scenario) throws Exception {
+    	String ret = this.statement;
+    	String needQuotes = "";
+    	Matcher m = pattern.matcher(ret);
+        while(m.find()) {
+        	String dynamicField = m.group(0).replace("[", "").replace("]", "");
+        	Column dynamicColumn = ruleApplier.getRule(dynamicField, scenario);
+			needQuotes = (dynamicColumn.getType() == DataTypeMapping.CHAR || dynamicColumn
+					.getType() == DataTypeMapping.VARCHAR) ? "'" : "";
+			ret = ret.replace("[" + dynamicField + "]",
+					needQuotes + ruleApplier.getDataValue(dynamicColumn).getValue() + needQuotes);
+     }

Review comment:
       Will try and fix that!!
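
   Editor's note on the hunk above: getDynamicStatement() scans the configured statement for [column] placeholders, asks the RulesApplier for a value for each, and quotes CHAR/VARCHAR values. Below is a self-contained sketch of that substitution, with the rules engine replaced by a plain Map and made-up table/column names, just to illustrate the mechanism:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class DynamicStatementSketch {
        // Same placeholder pattern as the hunk above: anything inside [ ].
        private static final Pattern FIELD = Pattern.compile("\\[.*?\\]");

        static String resolve(String statement, Map<String, String> values) {
            Matcher m = FIELD.matcher(statement);
            StringBuffer sb = new StringBuffer();
            while (m.find()) {
                String field = m.group(0).replace("[", "").replace("]", "");
                m.appendReplacement(sb, Matcher.quoteReplacement(values.get(field)));
            }
            m.appendTail(sb);
            return sb.toString();
        }

        public static void main(String[] args) {
            Map<String, String> values = new HashMap<>();
            values.put("ID", "42");          // INTEGER -> no quotes
            values.put("FIELD1", "'abc'");   // VARCHAR -> quoted, as in the hunk
            System.out.println(resolve(
                    "UPSERT INTO MY_TABLE (ID, FIELD1) VALUES ([ID], [FIELD1])", values));
            // prints: UPSERT INTO MY_TABLE (ID, FIELD1) VALUES (42, 'abc')
        }
    }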




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-786922359


   @ChinmaySKulkarni @yanxinyi Rebased it to 4.x tip


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r488120623



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+
+@XmlType
+public class Noop {

Review comment:
       Why do we need this NoOp class?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+    public static int MIN_BATCH_SIZE = 1;

Review comment:
       nit: can this be private?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {

Review comment:
       Can you add header comments for all newly introduced classes?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r487289953



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/OperationGroup.java
##########
@@ -0,0 +1,26 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       nit: please add the Apache license header
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-737602734


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 9 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  11m 25s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   0m 50s |  phoenix-pherf in 4.x has 42 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   0m 57s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 30s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 48s |  phoenix-pherf: The patch generated 676 new + 826 unchanged - 53 fixed = 1502 total (was 879)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 12 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  7s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 15s |  phoenix-pherf generated 31 new + 32 unchanged - 0 fixed = 63 total (was 32)  |
   | -1 :x: |  spotbugs  |   1m  2s |  phoenix-pherf generated 9 new + 41 unchanged - 1 fixed = 50 total (was 42)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 51s |  phoenix-pherf in the patch failed.  |
   | -1 :x: |  asflicense  |   0m  9s |  The patch generated 5 ASF License warnings.  |
   |  |   |  24m 24s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-pherf |
   |  |  Found reliance on default encoding in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat):in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat): String.getBytes()  At PhoenixUtil.java:[line 557] |
   |  |  Should org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$NoopTenantOperationImpl be a _static_ inner class?  At TenantOperationFactory.java:inner class?  At TenantOperationFactory.java:[lines 459-462] |
   |  |  Dead store to tableName in org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$PreScenarioTenantOperationImpl$1.apply(TenantOperationInfo)  At TenantOperationFactory.java:org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$PreScenarioTenantOperationImpl$1.apply(TenantOperationInfo)  At TenantOperationFactory.java:[line 432] |
   |  |  input must be non-null but is marked as nullable  At TenantOperationFactory.java:is marked as nullable  At TenantOperationFactory.java:[lines 278-327] |
   |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$QueryTenantOperationImpl$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement on checked exception  Obligation to clean up resource created at TenantOperationFactory.java:up java.sql.Statement on checked exception  Obligation to clean up resource created at TenantOperationFactory.java:[line 303] is not discharged |
   |  |  input must be non-null but is marked as nullable  At TenantOperationFactory.java:is marked as nullable  At TenantOperationFactory.java:[lines 340-419] |
   |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$UpsertTenantOperationImpl$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement  Obligation to clean up resource created at TenantOperationFactory.java:up java.sql.Statement  Obligation to clean up resource created at TenantOperationFactory.java:[line 365] is not discharged |
   |  |  Should org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$UserDefinedOperationImpl be a _static_ inner class?  At TenantOperationFactory.java:inner class?  At TenantOperationFactory.java:[lines 484-487] |
   |  |  Return value of TenantOperationFactory.getPhoenixUtil() ignored, but method has no side effect  At TenantOperationWorkHandler.java:but method has no side effect  At TenantOperationWorkHandler.java:[line 54] |
   | Failed junit tests | phoenix.pherf.PherfTest |
   |   | phoenix.pherf.RuleGeneratorTest |
   |   | phoenix.pherf.ResourceTest |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/3/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile spotbugs hbaseanti checkstyle |
   | uname | Linux 76cc49e66698 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / b8cb658 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/3/artifact/yetus-general-check/output/diff-checkstyle-phoenix-pherf.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/3/artifact/yetus-general-check/output/whitespace-tabs.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/3/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-pherf.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/3/artifact/yetus-general-check/output/new-spotbugs-phoenix-pherf.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/3/artifact/yetus-general-check/output/patch-unit-phoenix-pherf.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/3/testReport/ |
   | asflicense | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/3/artifact/yetus-general-check/output/patch-asflicense-problems.txt |
   | Max. process+thread count | 102 (vs. ulimit of 30000) |
   | modules | C: phoenix-pherf U: phoenix-pherf |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/3/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r542922448



##########
File path: phoenix-pherf/src/test/resources/datamodel/test_schema_mt_view.sql
##########
@@ -0,0 +1,27 @@
+/*
+  -- Licensed to the Apache Software Foundation (ASF) under one
+  -- or more contributor license agreements.  See the NOTICE file
+  -- distributed with this work for additional information
+  -- regarding copyright ownership.  The ASF licenses this file
+  -- to you under the Apache License, Version 2.0 (the
+  -- "License"); you may not use this file except in compliance
+  -- with the License.  You may obtain a copy of the License at
+  --
+  -- http://www.apache.org/licenses/LICENSE-2.0
+  --
+  -- Unless required by applicable law or agreed to in writing, software
+  -- distributed under the License is distributed on an "AS IS" BASIS,
+  -- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  -- See the License for the specific language governing permissions and
+  -- limitations under the License.
+*/
+
+CREATE VIEW IF NOT EXISTS PHERF.TEST_GLOBAL_VIEW (
+    GID CHAR(15) NOT NULL,
+    FIELD1 VARCHAR,
+    OTHER_INT INTEGER
+    CONSTRAINT PK PRIMARY KEY
+    (
+        GID
+    )
+) AS SELECT * FROM PHERF.TEST_MULTI_TENANT_TABLE WHERE IDENTIFIER = 'EV1'

Review comment:
       I didn't find the DDL for PHERF.TEST_MULTI_TENANT_TABLE




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-755574885


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   6m 49s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 9 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  15m 21s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   1m  8s |  phoenix-pherf in 4.x has 42 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   1m 16s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   8m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  the patch passed  |
   | -1 :x: |  javac  |   0m 46s |  phoenix-pherf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  checkstyle  |   0m 40s |  phoenix-pherf: The patch generated 742 new + 824 unchanged - 57 fixed = 1566 total (was 881)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 13 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m 10s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 18s |  phoenix-pherf generated 25 new + 32 unchanged - 0 fixed = 57 total (was 32)  |
   | -1 :x: |  spotbugs  |   1m 24s |  phoenix-pherf generated 3 new + 41 unchanged - 1 fixed = 44 total (was 42)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 36s |  phoenix-pherf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate ASF License warnings.  |
   |  |   |  55m 19s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-pherf |
   |  |  Found reliance on default encoding in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat):in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat): String.getBytes()  At PhoenixUtil.java:[line 557] |
   |  |  Return value of TenantOperationFactory.getPhoenixUtil() ignored, but method has no side effect  At TenantOperationWorkHandler.java:but method has no side effect  At TenantOperationWorkHandler.java:[line 59] |
   |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.UpsertOperationSupplier$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:[line 81] is not discharged |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/5/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile spotbugs hbaseanti checkstyle |
   | uname | Linux 1ff5e5911e99 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 43b56d4 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | javac | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/5/artifact/yetus-general-check/output/diff-compile-javac-phoenix-pherf.txt |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/5/artifact/yetus-general-check/output/diff-checkstyle-phoenix-pherf.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/5/artifact/yetus-general-check/output/whitespace-eol.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/5/artifact/yetus-general-check/output/whitespace-tabs.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/5/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-pherf.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/5/artifact/yetus-general-check/output/new-spotbugs-phoenix-pherf.html |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/5/testReport/ |
   | Max. process+thread count | 1742 (vs. ulimit of 30000) |
   | modules | C: phoenix-pherf U: phoenix-pherf |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/5/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-805466418


   @yanxinyi @ChinmaySKulkarni  for 4.x


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-691203281






----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r542780893



##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationIT.java
##########
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.lmax.disruptor.LifecycleAware;
+import com.lmax.disruptor.WorkHandler;
+import org.apache.phoenix.end2end.BaseHBaseManagedTimeIT;
+import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
+import org.apache.phoenix.pherf.PherfConstants;
+import org.apache.phoenix.pherf.XMLConfigParserTest;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.LoadProfile;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.schema.SchemaReader;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+import org.apache.phoenix.pherf.workload.mt.Operation;
+import org.apache.phoenix.pherf.workload.mt.OperationStats;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationEventGenerator;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.NoopTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.PreScenarioTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.QueryTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UpsertTenantOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory.UserDefinedOperationImpl;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactoryTest;
+import org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationInfo;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.xml.bind.UnmarshalException;
+import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+public class TenantOperationIT extends MultiTenantOperationBaseIT {
+    private static final Logger LOGGER = LoggerFactory.getLogger(TenantOperationIT.class);

Review comment:
       nit: do you want to remove the logging here, since we are not testing logging?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-805379456


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 10 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m 55s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   1m  0s |  phoenix-pherf in 4.x has 42 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   1m  6s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 37s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 39s |  phoenix-pherf: The patch generated 759 new + 1017 unchanged - 54 fixed = 1776 total (was 1071)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 9 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  9s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 17s |  phoenix-pherf generated 25 new + 32 unchanged - 0 fixed = 57 total (was 32)  |
   | -1 :x: |  spotbugs  |   1m 11s |  phoenix-pherf generated 9 new + 41 unchanged - 1 fixed = 50 total (was 42)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 33s |  phoenix-pherf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate ASF License warnings.  |
   |  |   |  39m 59s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-pherf |
   |  |  Found reliance on default encoding in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat):in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat): String.getBytes()  At PhoenixUtil.java:[line 557] |
   |  |  input must be non-null but is marked as nullable  At IdleTimeOperationSupplier.java:is marked as nullable  At IdleTimeOperationSupplier.java:[lines 52-74] |
   |  |  input must be non-null but is marked as nullable  At PreScenarioOperationSupplier.java:is marked as nullable  At PreScenarioOperationSupplier.java:[lines 51-80] |
   |  |  input must be non-null but is marked as nullable  At QueryOperationSupplier.java:is marked as nullable  At QueryOperationSupplier.java:[lines 54-87] |
   |  |  Possible null pointer dereference in org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkHandler.onEvent(TenantOperationWorkload$TenantOperationEvent) due to return value of called method  Dereferenced at TenantOperationWorkHandler.java:org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkHandler.onEvent(TenantOperationWorkload$TenantOperationEvent) due to return value of called method  Dereferenced at TenantOperationWorkHandler.java:[line 58] |
   |  |  Return value of TenantOperationFactory.getPhoenixUtil() ignored, but method has no side effect  At TenantOperationWorkHandler.java:but method has no side effect  At TenantOperationWorkHandler.java:[line 59] |
   |  |  input must be non-null but is marked as nullable  At UpsertOperationSupplier.java:is marked as nullable  At UpsertOperationSupplier.java:[lines 56-136] |
   |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.UpsertOperationSupplier$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:[line 81] is not discharged |
   |  |  input must be non-null but is marked as nullable  At UserDefinedOperationSupplier.java:is marked as nullable  At UserDefinedOperationSupplier.java:[lines 44-46] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/9/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile spotbugs hbaseanti checkstyle |
   | uname | Linux 83b7773edaf8 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 7198196 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/9/artifact/yetus-general-check/output/diff-checkstyle-phoenix-pherf.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/9/artifact/yetus-general-check/output/whitespace-eol.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/9/artifact/yetus-general-check/output/whitespace-tabs.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/9/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-pherf.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/9/artifact/yetus-general-check/output/new-spotbugs-phoenix-pherf.html |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/9/testReport/ |
   | Max. process+thread count | 1715 (vs. ulimit of 30000) |
   | modules | C: phoenix-pherf U: phoenix-pherf |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/9/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-742885157


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   4m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 9 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  10m 53s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   0m 50s |  phoenix-pherf in 4.x has 42 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   0m 57s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  the patch passed  |
   | -1 :x: |  javac  |   0m 29s |  phoenix-pherf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  checkstyle  |   0m 48s |  phoenix-pherf: The patch generated 682 new + 822 unchanged - 57 fixed = 1504 total (was 879)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 12 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  7s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 16s |  phoenix-pherf generated 31 new + 32 unchanged - 0 fixed = 63 total (was 32)  |
   | -1 :x: |  spotbugs  |   1m  4s |  phoenix-pherf generated 9 new + 41 unchanged - 1 fixed = 50 total (was 42)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 52s |  phoenix-pherf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate ASF License warnings.  |
   |  |   |  27m 57s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-pherf |
   |  |  Found reliance on default encoding in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat):in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat): String.getBytes()  At PhoenixUtil.java:[line 557] |
   |  |  Should org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$NoopTenantOperationImpl be a _static_ inner class?  At TenantOperationFactory.java:inner class?  At TenantOperationFactory.java:[lines 461-464] |
   |  |  Dead store to tableName in org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$PreScenarioTenantOperationImpl$1.apply(TenantOperationInfo)  At TenantOperationFactory.java:org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$PreScenarioTenantOperationImpl$1.apply(TenantOperationInfo)  At TenantOperationFactory.java:[line 434] |
   |  |  input must be non-null but is marked as nullable  At TenantOperationFactory.java:is marked as nullable  At TenantOperationFactory.java:[lines 279-329] |
   |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$QueryTenantOperationImpl$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement on checked exception  Obligation to clean up resource created at TenantOperationFactory.java:up java.sql.Statement on checked exception  Obligation to clean up resource created at TenantOperationFactory.java:[line 305] is not discharged |
   |  |  input must be non-null but is marked as nullable  At TenantOperationFactory.java:is marked as nullable  At TenantOperationFactory.java:[lines 342-421] |
   |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$UpsertTenantOperationImpl$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement  Obligation to clean up resource created at TenantOperationFactory.java:up java.sql.Statement  Obligation to clean up resource created at TenantOperationFactory.java:[line 367] is not discharged |
   |  |  Should org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationFactory$UserDefinedOperationImpl be a _static_ inner class?  At TenantOperationFactory.java:inner class?  At TenantOperationFactory.java:[lines 486-489] |
   |  |  Return value of TenantOperationFactory.getPhoenixUtil() ignored, but method has no side effect  At TenantOperationWorkHandler.java:but method has no side effect  At TenantOperationWorkHandler.java:[line 52] |
   | Failed junit tests | phoenix.pherf.PherfTest |
   |   | phoenix.pherf.ResourceTest |
   |   | phoenix.pherf.RuleGeneratorTest |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/4/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile spotbugs hbaseanti checkstyle |
   | uname | Linux 19a3f8c4f3a8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / a3b6d0b |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | javac | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/4/artifact/yetus-general-check/output/diff-compile-javac-phoenix-pherf.txt |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/4/artifact/yetus-general-check/output/diff-checkstyle-phoenix-pherf.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/4/artifact/yetus-general-check/output/whitespace-tabs.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/4/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-pherf.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/4/artifact/yetus-general-check/output/new-spotbugs-phoenix-pherf.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/4/artifact/yetus-general-check/output/patch-unit-phoenix-pherf.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/4/testReport/ |
   | Max. process+thread count | 102 (vs. ulimit of 30000) |
   | modules | C: phoenix-pherf U: phoenix-pherf |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/4/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r538760233



##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationWorkloadIT.java
##########
@@ -0,0 +1,141 @@
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;

Review comment:
       same here 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi merged pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi merged pull request #878:
URL: https://github.com/apache/phoenix/pull/878


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-694380032


   > Had a quick glance and looks good overall. Is it helpful to add interfaces for some of the `configuration` classes instead of directly adding solid implementations?
   
   @ChinmaySKulkarni All configuration classes are mapped to XML files (configs/definitions), so turning them into interfaces would not add much value: they typically have only getters and setters, and there is only one concrete implementation, which matches the underlying XML structure.
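
   For illustration, here is a minimal sketch of the kind of JAXB-mapped configuration bean being described — nothing but state plus getters/setters mirroring the XML. The class and attribute names are invented for the example; this is not the actual Pherf class.

import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlType;

// Illustrative JAXB-bound configuration bean: only state and accessors,
// so an interface layered on top of it would add little.
@XmlType
public class ExampleProfile {

    private int batchSize;

    @XmlAttribute
    public int getBatchSize() {
        return batchSize;
    }

    public void setBatchSize(int batchSize) {
        this.batchSize = batchSize;
    }
}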


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-700177324


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  docker  |   3m 23s |  Docker failed to build yetus/phoenix:871ed211e.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-730410080


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 4 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  11m 47s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   0m 53s |  phoenix-pherf in 4.x has 42 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 33s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 31s |  phoenix-pherf: The patch generated 645 new + 662 unchanged - 33 fixed = 1307 total (was 695)  |
   | -1 :x: |  markdownlint  |   0m  2s |  The patch generated 56 new + 0 unchanged - 0 fixed = 56 total (was 0)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 5 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 13 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  4s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 16s |  phoenix-pherf generated 28 new + 32 unchanged - 0 fixed = 60 total (was 32)  |
   | -1 :x: |  spotbugs  |   1m  7s |  phoenix-pherf generated 7 new + 42 unchanged - 0 fixed = 49 total (was 42)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 39s |  phoenix-pherf in the patch failed.  |
   | -1 :x: |  asflicense  |   0m  9s |  The patch generated 2 ASF License warnings.  |
   |  |   |  25m 31s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-pherf |
   |  |  Found reliance on default encoding in org.apache.phoenix.pherf.workload.continuous.tenantoperation.TenantOperationFactory.buildStatement(Scenario, List, PreparedStatement, SimpleDateFormat):in org.apache.phoenix.pherf.workload.continuous.tenantoperation.TenantOperationFactory.buildStatement(Scenario, List, PreparedStatement, SimpleDateFormat): String.getBytes()  At TenantOperationFactory.java:[line 550] |
   |  |  Should org.apache.phoenix.pherf.workload.continuous.tenantoperation.TenantOperationFactory$NoopTenantOperationImpl be a _static_ inner class?  At TenantOperationFactory.java:inner class?  At TenantOperationFactory.java:[lines 444-447] |
   |  |  Dead store to tableName in org.apache.phoenix.pherf.workload.continuous.tenantoperation.TenantOperationFactory$PreScenarioTenantOperationImpl$1.apply(TenantOperationInfo)  At TenantOperationFactory.java:org.apache.phoenix.pherf.workload.continuous.tenantoperation.TenantOperationFactory$PreScenarioTenantOperationImpl$1.apply(TenantOperationInfo)  At TenantOperationFactory.java:[line 418] |
   |  |  input must be non-null but is marked as nullable  At TenantOperationFactory.java:is marked as nullable  At TenantOperationFactory.java:[lines 264-313] |
   |  |  org.apache.phoenix.pherf.workload.continuous.tenantoperation.TenantOperationFactory$QueryTenantOperationImpl$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement on checked exception  Obligation to clean up resource created at TenantOperationFactory.java:up java.sql.Statement on checked exception  Obligation to clean up resource created at TenantOperationFactory.java:[line 289] is not discharged |
   |  |  input must be non-null but is marked as nullable  At TenantOperationFactory.java:is marked as nullable  At TenantOperationFactory.java:[lines 326-405] |
   |  |  org.apache.phoenix.pherf.workload.continuous.tenantoperation.TenantOperationFactory$UpsertTenantOperationImpl$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement  Obligation to clean up resource created at TenantOperationFactory.java:up java.sql.Statement  Obligation to clean up resource created at TenantOperationFactory.java:[line 351] is not discharged |
   | Failed junit tests | phoenix.pherf.ResourceTest |
   |   | phoenix.pherf.PherfTest |
   |   | phoenix.pherf.RuleGeneratorTest |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Optional Tests | dupname asflicense markdownlint javac javadoc unit xml compile spotbugs hbaseanti checkstyle |
   | uname | Linux 3533df368186 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / ed7f1a6 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/artifact/yetus-general-check/output/diff-checkstyle-phoenix-pherf.txt |
   | markdownlint | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/artifact/yetus-general-check/output/diff-patch-markdownlint.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/artifact/yetus-general-check/output/whitespace-eol.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/artifact/yetus-general-check/output/whitespace-tabs.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-pherf.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/artifact/yetus-general-check/output/new-spotbugs-phoenix-pherf.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/artifact/yetus-general-check/output/patch-unit-phoenix-pherf.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/testReport/ |
   | asflicense | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/artifact/yetus-general-check/output/patch-asflicense-problems.txt |
   | Max. process+thread count | 92 (vs. ulimit of 30000) |
   | modules | C: phoenix-pherf U: phoenix-pherf |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/2/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 markdownlint=0.22.0 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-786936851


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 10 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  13m 55s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   0m 54s |  phoenix-pherf in 4.x has 42 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   1m  1s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 35s |  the patch passed  |
   | -1 :x: |  checkstyle  |   0m 54s |  phoenix-pherf: The patch generated 752 new + 933 unchanged - 49 fixed = 1685 total (was 982)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 13 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  9s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 17s |  phoenix-pherf generated 25 new + 32 unchanged - 0 fixed = 57 total (was 32)  |
   | -1 :x: |  spotbugs  |   1m  8s |  phoenix-pherf generated 9 new + 41 unchanged - 1 fixed = 50 total (was 42)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 37s |  phoenix-pherf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate ASF License warnings.  |
   |  |   |  36m 40s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-pherf |
   |  |  Found reliance on default encoding in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat):in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat): String.getBytes()  At PhoenixUtil.java:[line 557] |
   |  |  input must be non-null but is marked as nullable  At IdleTimeOperationSupplier.java:is marked as nullable  At IdleTimeOperationSupplier.java:[lines 52-74] |
   |  |  input must be non-null but is marked as nullable  At PreScenarioOperationSupplier.java:is marked as nullable  At PreScenarioOperationSupplier.java:[lines 51-80] |
   |  |  input must be non-null but is marked as nullable  At QueryOperationSupplier.java:is marked as nullable  At QueryOperationSupplier.java:[lines 54-87] |
   |  |  Possible null pointer dereference in org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkHandler.onEvent(TenantOperationWorkload$TenantOperationEvent) due to return value of called method  Dereferenced at TenantOperationWorkHandler.java:org.apache.phoenix.pherf.workload.mt.tenantoperation.TenantOperationWorkHandler.onEvent(TenantOperationWorkload$TenantOperationEvent) due to return value of called method  Dereferenced at TenantOperationWorkHandler.java:[line 58] |
   |  |  Return value of TenantOperationFactory.getPhoenixUtil() ignored, but method has no side effect  At TenantOperationWorkHandler.java:but method has no side effect  At TenantOperationWorkHandler.java:[line 59] |
   |  |  input must be non-null but is marked as nullable  At UpsertOperationSupplier.java:is marked as nullable  At UpsertOperationSupplier.java:[lines 56-136] |
   |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.UpsertOperationSupplier$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:[line 81] is not discharged |
   |  |  input must be non-null but is marked as nullable  At UserDefinedOperationSupplier.java:is marked as nullable  At UserDefinedOperationSupplier.java:[lines 44-46] |
   | Failed junit tests | phoenix.pherf.workload.mt.tenantoperation.TenantOperationIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/7/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile spotbugs hbaseanti checkstyle |
   | uname | Linux 131d4912d2fd 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 8e44658 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/7/artifact/yetus-general-check/output/diff-checkstyle-phoenix-pherf.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/7/artifact/yetus-general-check/output/whitespace-eol.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/7/artifact/yetus-general-check/output/whitespace-tabs.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/7/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-pherf.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/7/artifact/yetus-general-check/output/new-spotbugs-phoenix-pherf.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/7/artifact/yetus-general-check/output/patch-unit-phoenix-pherf.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/7/testReport/ |
   | Max. process+thread count | 1823 (vs. ulimit of 30000) |
   | modules | C: phoenix-pherf U: phoenix-pherf |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/7/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r487289670



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,29 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       please add apache license 

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/OperationGroup.java
##########
@@ -0,0 +1,26 @@
+package org.apache.phoenix.pherf.configuration;

Review comment:
       nit: please add apache license
   
   

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;

Review comment:
       why are we setting this to a negative number? Shouldn't it be at least 1?

##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;
+    }
+
+    public int getBatchSize() {
+        return batchSize;
+    }
+
+    public void setBatchSize(int batchSize) {

Review comment:
       I didn't see this setter being called anywhere. Where is this batch size value actually set?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r538760994



##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationWorkloadIT.java
##########
@@ -0,0 +1,141 @@
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;
+
+import com.clearspring.analytics.util.Lists;
+import com.google.common.collect.Maps;

Review comment:
       please remember to use the Phoenix third-party (relocated) imports on the master branch
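
   For reference, this is the kind of import change being asked for: on the master branch Guava is consumed through the relocated phoenix-thirdparty package. The exact relocation prefix below is an assumption and should be verified against master.

// 4.x patch (direct Guava import, as in the diff above):
import com.google.common.collect.Maps;

// master (relocated third-party import; prefix assumed for illustration):
import org.apache.phoenix.thirdparty.com.google.common.collect.Maps;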




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r538002449



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java
##########
@@ -455,4 +473,156 @@ public String getExplainPlan(Query query, Scenario scenario, RulesApplier ruleAp
         }
         return buf.toString();
     }
+
+    public PreparedStatement buildStatement(RulesApplier rulesApplier, Scenario scenario, List<Column> columns,
+            PreparedStatement statement, SimpleDateFormat simpleDateFormat) throws Exception {
+
+        int count = 1;
+        for (Column column : columns) {
+            DataValue dataValue = rulesApplier.getDataForRule(scenario, column);
+            switch (column.getType()) {
+            case VARCHAR:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.VARCHAR);
+                } else {
+                    statement.setString(count, dataValue.getValue());
+                }
+                break;
+            case CHAR:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.CHAR);
+                } else {
+                    statement.setString(count, dataValue.getValue());
+                }
+                break;
+            case DECIMAL:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.DECIMAL);
+                } else {
+                    statement.setBigDecimal(count, new BigDecimal(dataValue.getValue()));
+                }
+                break;
+            case INTEGER:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.INTEGER);
+                } else {
+                    statement.setInt(count, Integer.parseInt(dataValue.getValue()));
+                }
+                break;
+            case UNSIGNED_LONG:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.OTHER);
+                } else {
+                    statement.setLong(count, Long.parseLong(dataValue.getValue()));
+                }
+                break;
+            case BIGINT:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.BIGINT);
+                } else {
+                    statement.setLong(count, Long.parseLong(dataValue.getValue()));
+                }
+                break;
+            case TINYINT:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.TINYINT);
+                } else {
+                    statement.setLong(count, Integer.parseInt(dataValue.getValue()));
+                }
+                break;
+            case DATE:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.DATE);
+                } else {
+                    Date
+                            date =

Review comment:
       nit: style issue




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r487290832



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;

Review comment:
       why are we setting this to a negative number? Shouldn't it be at least 1?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] jpisaac commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
jpisaac commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r487563654



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/LoadProfile.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+import java.util.List;
+
+@XmlType
+public class LoadProfile {
+
+    private int batchSize;
+    private int numOperations;
+    List<TenantGroup> tenantDistribution;
+    List<OperationGroup> opDistribution;
+
+    public LoadProfile() {
+        this.batchSize = Integer.MIN_VALUE;
+    }
+
+    public int getBatchSize() {
+        return batchSize;
+    }
+
+    public void setBatchSize(int batchSize) {

Review comment:
       XML serialization/deserialization uses the getters and setters to marshal/unmarshal the XML files into objects.
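
   For illustration, a minimal sketch of the JAXB calls that drive those getters and setters when a profile XML is loaded. It reuses the illustrative ExampleProfile bean from the earlier sketch; the file name is an assumption, and this is not the actual Pherf code path.

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

public class UnmarshalExample {
    public static void main(String[] args) throws Exception {
        // JAXB introspects the @XmlType-annotated bean and invokes its setters
        // (e.g. setBatchSize) while reading the corresponding XML attributes.
        JAXBContext context = JAXBContext.newInstance(ExampleProfile.class);
        Unmarshaller unmarshaller = context.createUnmarshaller();
        ExampleProfile profile =
                (ExampleProfile) unmarshaller.unmarshal(new File("example_profile.xml"));
        System.out.println("batchSize = " + profile.getBatchSize());
    }
}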




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r601909270



##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/PherfMainIT.java
##########
@@ -23,6 +23,7 @@
 import org.apache.phoenix.pherf.result.ResultValue;
 import org.apache.phoenix.pherf.result.file.ResultFileDetails;
 import org.apache.phoenix.pherf.result.impl.CSVFileResultHandler;
+import org.junit.Ignore;

Review comment:
       nit: can you remove this unused import in the next PR? No need to fix it now.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r488194630



##########
File path: phoenix-pherf/src/test/resources/scenario/test_scenario_with_load_profile.xml
##########
@@ -0,0 +1,362 @@
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

Review comment:
       I'd prefer to put this scenario file under the /phoenix-pherf/src/main/resources/scenario/ dir, since it is an example of how to run the workload rather than a test case.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r539735765



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/Noop.java
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.configuration;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlType;
+
+@XmlType
+public class Noop {

Review comment:
       Can we rename it so it reflects an operation aimed at injecting "idle/wait time"?
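
   For illustration, a minimal sketch of what such a renamed configuration element could look like. The IdleTime name echoes the IdleTimeOperationSupplier referenced in the later precommit reports; the duration attribute is purely an assumption.

package org.apache.phoenix.pherf.configuration;

import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlType;

// Hypothetical rename of Noop to convey an idle/wait-time operation.
@XmlType
public class IdleTime {

    // Assumed attribute: how long the workload should stay idle, in milliseconds.
    private long idleTimeMs;

    @XmlAttribute
    public long getIdleTimeMs() {
        return idleTimeMs;
    }

    public void setIdleTimeMs(long idleTimeMs) {
        this.idleTimeMs = idleTimeMs;
    }
}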




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] yanxinyi commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-767962186


   @jpisaac can you address the above comments and resolve the conflicts?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #878:
URL: https://github.com/apache/phoenix/pull/878#issuecomment-757079606


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 10 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m  6s |  4.x passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   0m 54s |  phoenix-pherf in 4.x has 42 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   1m  1s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  the patch passed  |
   | -1 :x: |  javac  |   0m 34s |  phoenix-pherf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  checkstyle  |   0m 51s |  phoenix-pherf: The patch generated 756 new + 826 unchanged - 53 fixed = 1582 total (was 879)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | -1 :x: |  whitespace  |   0m  1s |  The patch 13 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  9s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 16s |  phoenix-pherf generated 25 new + 32 unchanged - 0 fixed = 57 total (was 32)  |
   | -1 :x: |  spotbugs  |   1m  8s |  phoenix-pherf generated 3 new + 41 unchanged - 1 fixed = 44 total (was 42)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 30s |  phoenix-pherf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate ASF License warnings.  |
   |  |   |  42m 25s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-pherf |
   |  |  Found reliance on default encoding in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat):in org.apache.phoenix.pherf.util.PhoenixUtil.buildStatement(RulesApplier, Scenario, List, PreparedStatement, SimpleDateFormat): String.getBytes()  At PhoenixUtil.java:[line 557] |
   |  |  Return value of TenantOperationFactory.getPhoenixUtil() ignored, but method has no side effect  At TenantOperationWorkHandler.java:but method has no side effect  At TenantOperationWorkHandler.java:[line 59] |
   |  |  org.apache.phoenix.pherf.workload.mt.tenantoperation.UpsertOperationSupplier$1.apply(TenantOperationInfo) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:up java.sql.Statement  Obligation to clean up resource created at UpsertOperationSupplier.java:[line 81] is not discharged |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/878 |
   | JIRA Issue | PHOENIX-6118 |
   | Optional Tests | dupname asflicense javac javadoc unit xml compile spotbugs hbaseanti checkstyle |
   | uname | Linux 4ab9f513e864 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 2a530da |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | javac | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/diff-compile-javac-phoenix-pherf.txt |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/diff-checkstyle-phoenix-pherf.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/whitespace-eol.txt |
   | whitespace | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/whitespace-tabs.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-pherf.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/artifact/yetus-general-check/output/new-spotbugs-phoenix-pherf.html |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/testReport/ |
   | Max. process+thread count | 1629 (vs. ulimit of 30000) |
   | modules | C: phoenix-pherf U: phoenix-pherf |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-878/6/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r537953846



##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/workload/mt/tenantoperation/TenantOperationIT.java
##########
@@ -0,0 +1,113 @@
+package org.apache.phoenix.pherf.workload.mt.tenantoperation;

Review comment:
       nit: missing Apache license header
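
       For context, the nit refers to the standard ASF license header that Apache source files carry at the top, before the package declaration. The conventional wording, as used across the Phoenix codebase, is along these lines:

           /*
            * Licensed to the Apache Software Foundation (ASF) under one
            * or more contributor license agreements.  See the NOTICE file
            * distributed with this work for additional information
            * regarding copyright ownership.  The ASF licenses this file
            * to you under the Apache License, Version 2.0 (the
            * "License"); you may not use this file except in compliance
            * with the License.  You may obtain a copy of the License at
            *
            *   http://www.apache.org/licenses/LICENSE-2.0
            *
            * Unless required by applicable law or agreed to in writing,
            * software distributed under the License is distributed on an
            * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
            * KIND, either express or implied.  See the License for the
            * specific language governing permissions and limitations
            * under the License.
            */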







[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r537948431



##########
File path: phoenix-pherf/src/it/java/org/apache/phoenix/pherf/PherfMainIT.java
##########
@@ -50,7 +51,7 @@
     @Rule
     public final ExpectedSystemExit exit = ExpectedSystemExit.none();
 
-    @Test
+    @Ignore

Review comment:
       Why are we ignoring this test?
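
       For context, the hunk above replaces @Test with @Ignore, which removes the method from the set of tests JUnit 4 runs (it is not even reported as skipped). If the test has to stay disabled temporarily, a common alternative is to keep @Test and add @Ignore with a reason string so the suppression is self-documenting. The sketch below only illustrates that idiom; the class name, method name, and reason text are placeholders, not the actual test in PherfMainIT:

           import org.junit.Ignore;
           import org.junit.Test;

           public class PherfMainIgnoreSketch {
               // @Ignore with a reason keeps the test visible as "skipped" and records why.
               @Ignore("Temporarily disabled while PHOENIX-6118 reworks the Pherf workloads")
               @Test
               public void somePherfScenarioTest() {
                   // test body unchanged
               }
           }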







[GitHub] [phoenix] yanxinyi commented on a change in pull request #878: PHOENIX-6118: Multi Tenant Workloads using PHERF

Posted by GitBox <gi...@apache.org>.
yanxinyi commented on a change in pull request #878:
URL: https://github.com/apache/phoenix/pull/878#discussion_r601917412



##########
File path: phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java
##########
@@ -455,4 +473,156 @@ public String getExplainPlan(Query query, Scenario scenario, RulesApplier ruleAp
         }
         return buf.toString();
     }
+
+    public PreparedStatement buildStatement(RulesApplier rulesApplier, Scenario scenario, List<Column> columns,
+            PreparedStatement statement, SimpleDateFormat simpleDateFormat) throws Exception {
+
+        int count = 1;
+        for (Column column : columns) {
+            DataValue dataValue = rulesApplier.getDataForRule(scenario, column);
+            switch (column.getType()) {
+            case VARCHAR:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.VARCHAR);
+                } else {
+                    statement.setString(count, dataValue.getValue());
+                }
+                break;
+            case CHAR:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.CHAR);
+                } else {
+                    statement.setString(count, dataValue.getValue());
+                }
+                break;
+            case DECIMAL:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.DECIMAL);
+                } else {
+                    statement.setBigDecimal(count, new BigDecimal(dataValue.getValue()));
+                }
+                break;
+            case INTEGER:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.INTEGER);
+                } else {
+                    statement.setInt(count, Integer.parseInt(dataValue.getValue()));
+                }
+                break;
+            case UNSIGNED_LONG:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.OTHER);
+                } else {
+                    statement.setLong(count, Long.parseLong(dataValue.getValue()));
+                }
+                break;
+            case BIGINT:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.BIGINT);
+                } else {
+                    statement.setLong(count, Long.parseLong(dataValue.getValue()));
+                }
+                break;
+            case TINYINT:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.TINYINT);
+                } else {
+                    statement.setLong(count, Integer.parseInt(dataValue.getValue()));
+                }
+                break;
+            case DATE:
+                if (dataValue.getValue().equals("")) {
+                    statement.setNull(count, Types.DATE);
+                } else {
+                    Date
+                            date =

Review comment:
       Can you address this in the next PR?
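
       For context, the comment anchors on the split declaration ("Date" / "date =") at the end of the diff hunk above. The code past that point is not shown here, so the sketch below is only a guess at a tidier shape for the DATE branch, assuming the value is parsed with the scenario's SimpleDateFormat and bound as a java.sql.Date; it is not the code that was actually committed:

           import java.sql.PreparedStatement;
           import java.sql.Types;
           import java.text.SimpleDateFormat;

           final class DateBindingSketch {
               static void bindDate(PreparedStatement statement, int count, String value,
                       SimpleDateFormat simpleDateFormat) throws Exception {
                   if (value.equals("")) {
                       statement.setNull(count, Types.DATE);
                   } else {
                       // Keep the declaration and the parse together instead of
                       // wrapping "Date date =" across multiple lines.
                       java.util.Date parsed = simpleDateFormat.parse(value);
                       statement.setDate(count, new java.sql.Date(parsed.getTime()));
                   }
               }
           }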



