Posted to issues@phoenix.apache.org by GitBox <gi...@apache.org> on 2020/10/22 16:55:28 UTC

[GitHub] [phoenix] gjacoby126 opened a new pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

gjacoby126 opened a new pull request #935:
URL: https://github.com/apache/phoenix/pull/935


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-728303922


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  11m 10s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   9m 39s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m  9s |  root in 4.x has 1000 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 55s |  phoenix-core in 4.x has 946 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 25s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 30s |  root: The patch generated 395 new + 25081 unchanged - 238 fixed = 25476 total (was 25319)  |
   | +1 :green_heart: |  prototool  |   0m  2s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 44s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  6s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  9s |  phoenix-core generated 2 new + 945 unchanged - 1 fixed = 947 total (was 946)  |
   | -1 :x: |  spotbugs  |   4m  7s |  root generated 2 new + 999 unchanged - 1 fixed = 1001 total (was 1000)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 133m 43s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  The patch does not generate ASF License warnings.  |
   |  |   | 198m 23s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | Failed junit tests | phoenix.end2end.AlterTableWithViewsIT |
   |   | phoenix.end2end.ExplainPlanWithStatsEnabledIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/13/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool xml |
   | uname | Linux 2d10649caf18 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 110f5b7 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/13/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/13/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/13/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/13/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/13/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/13/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/13/testReport/ |
   | Max. process+thread count | 6653 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/13/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-730064049


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  10m 42s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |  10m 49s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m  6s |  root in 4.x has 1003 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 40s |  phoenix-core in 4.x has 949 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 24s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 22s |  root: The patch generated 338 new + 25077 unchanged - 257 fixed = 25415 total (was 25334)  |
   | +1 :green_heart: |  prototool  |   0m  2s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  javadoc  |   0m 44s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  5s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  3s |  phoenix-core generated 2 new + 948 unchanged - 1 fixed = 950 total (was 949)  |
   | -1 :x: |  spotbugs  |   4m  6s |  root generated 2 new + 1002 unchanged - 1 fixed = 1004 total (was 1003)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 151m 53s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  The patch does not generate ASF License warnings.  |
   |  |   | 216m 16s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | Failed junit tests | phoenix.end2end.AlterMultiTenantTableWithViewsIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/18/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux f73c56c70ae4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / ed7f1a6 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/18/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/18/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/18/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/18/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/18/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/18/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/18/testReport/ |
   | Max. process+thread count | 6680 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/18/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r520188127



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2037,7 +2049,10 @@ public void createTable(RpcController controller, CreateTableRequest request,
                     // view's property in case they are different from the parent
                     ViewUtil.addTagsToPutsForViewAlteredProperties(tableMetadata, parentTable);
                 }
-
+                //set the last DDL timestamp to the current server time since we're creating the
+                // table
+                tableMetadata.add(MetaDataUtil.getLastDDLTimestampUpdate(tableKey,
+                    clientTimeStamp, EnvironmentEdgeManager.currentTimeMillis()));

Review comment:
       Do we want to restrict this to just tables and views, i.e. the 'u' and 'v' table_types? The upgrade code only adds a timestamp for existing tables and views, not for indexes and SYSTEM tables, but here we do it for all types. That leaves an inconsistency between an index created before the 4.16 metadata upgrade (no timestamp) and an index created after the upgrade (has a timestamp), not to mention that fresh clusters will have a timestamp for all entities.
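
   (For illustration, a minimal sketch of the kind of gating being suggested here, assuming a `tableType` value is already resolved at this point in `createTable`; this is not code from the patch:)

   ```java
   // Hypothetical guard: only stamp LAST_DDL_TIMESTAMP for user tables ('u') and views ('v'),
   // mirroring what the upgrade code does for pre-existing objects.
   if (tableType == PTableType.TABLE || tableType == PTableType.VIEW) {
       tableMetadata.add(MetaDataUtil.getLastDDLTimestampUpdate(tableKey,
           clientTimeStamp, EnvironmentEdgeManager.currentTimeMillis()));
   }
   ```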







[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525571557



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior

Review comment:
       That doesn't seem like desirable behavior to me as a feature. As we discussed offline, I think diverged views should have been written to get all ancestor DDL changes. (Also, I think we should allow column projection in view definitions, e.g. SELECT COL1, COL2 FROM FOO rather than SELECT * FROM FOO, rather than having diverged views at all, but that's a whole new feature and out of scope for this discussion.)
   
   But setting all that aside, it's the _column drops_ that are the dangerous operations, because they can break existing queries, not _column adds_, which are always benign. (Don't care about a new column? Don't select it!) So if I were trying to allow for a view to split off from its parent and be semi-independent, it's the _drops_ I'd try to shield it from, not additions. 







[GitHub] [phoenix] shahrs87 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
shahrs87 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-726889819


   > I'm curious -- what is the source for the "split lines indent 8 spaces" rule you mention?
   
   I tried searching for the property name but couldn't find one, but other places in the Phoenix code base use 8 spaces, e.g. https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java#L883 and https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java#L1042
   Again, this is a very minor nit; please feel free to ignore it. :)
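
   (For illustration only, the continuation style being referred to, with split lines indented 8 spaces relative to the start of the statement; shown on a call from this patch, where the formatting rather than the call itself is the point:)

   ```java
   // 8-space continuation indent for a wrapped method call:
   long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
           dataTableFullName, startTS, conn);
   ```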





[GitHub] [phoenix] shahrs87 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
shahrs87 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r523037861



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
##########
@@ -1326,7 +1328,69 @@ public void testAddingColumnsToTablesAndViews() throws Exception {
             assertSequenceNumber(schemaName, viewName, PTable.INITIAL_SEQ_NUM + 1);
         }
     }
-	
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String tableDDL = "CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " ENTITY_ID integer NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (ENTITY_ID, COL1, COL2)"
+            + " ) " + generateDDLOptions("");
+
+        String columnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {

Review comment:
       Just one minor nit: DriverManager has a getConnection method without a Properties argument. We can use that in all the newly added test cases, since we don't override any properties here. If it is too much of a change, please feel free to ignore this as well.
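
   (A minimal sketch of the suggestion, using the single-argument `DriverManager.getConnection(String)` overload since `props` is never populated in these tests:)

   ```java
   // Equivalent when no connection properties are being overridden:
   try (Connection conn = DriverManager.getConnection(getUrl())) {
       conn.createStatement().execute(tableDDL);
   }
   ```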

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
##########
@@ -1326,7 +1328,69 @@ public void testAddingColumnsToTablesAndViews() throws Exception {
             assertSequenceNumber(schemaName, viewName, PTable.INITIAL_SEQ_NUM + 1);
         }
     }
-	
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String tableDDL = "CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " ENTITY_ID integer NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (ENTITY_ID, COL1, COL2)"
+            + " ) " + generateDDLOptions("");
+
+        String columnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testSetPropertyDoesntUpdateDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String tableDDL = "CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " ENTITY_ID integer NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (ENTITY_ID, COL1, COL2)"
+            + " ) " + generateDDLOptions("");
+
+        String setPropertyDDL = "ALTER TABLE " + dataTableFullName +
+            " SET UPDATE_CACHE_FREQUENCY=300000 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates

Review comment:
       Is this comment carried forward from the previous test and no longer relevant here?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior
+        // extends to DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);
+            //verify DDL timestamp changed
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnAddDDL);
+            //verify DDL timestamp changed because we added a column to the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,

Review comment:
       I understand it is not strictly required, but should we sleep for 1 millisecond just to make sure that 1 ms passes between the divergeDDL and viewColumnAddDDL executions? The same argument applies to the other DDL statements we execute later in this test.
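
   (A minimal sketch of the suggestion: a 1 ms pause before each subsequent DDL so the server-side timestamps are guaranteed to differ:)

   ```java
   // Hypothetical: ensure at least 1 ms elapses between consecutive DDL statements
   Thread.sleep(1);
   conn.createStatement().execute(viewColumnAddDDL);
   ```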

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior
+        // extends to DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);

Review comment:
       Does it make sense to add a check that dropping a column from the view didn't change the DDL timestamp of the base table?
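
   (A minimal sketch of that extra assertion, reusing the `getLastDDLTimestamp` helper already used in this test:)

   ```java
   // Hypothetical check: a view-only DDL should leave the base table's timestamp untouched
   long tableDDLTimestamp = getLastDDLTimestamp(conn, dataTableFullName);
   conn.createStatement().execute(divergeDDL);
   assertEquals(tableDDLTimestamp, getLastDDLTimestamp(conn, dataTableFullName));
   ```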

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior
+        // extends to DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);
+            //verify DDL timestamp changed
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnAddDDL);
+            //verify DDL timestamp changed because we added a column to the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnDropDDL);
+            //verify DDL timestamp changed because we dropped a column from the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(tableColumnAddDDL);
+            //verify DDL timestamp DID NOT change because we added a column from the base table
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+            conn.createStatement().execute(tableColumnDropDDL);
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampWithChildViews() throws Exception {
+        Assume.assumeTrue(isMultiTenant);

Review comment:
       Any reason why we want this test to run only if multiTenant is true? I am not that familiar with this test suite.
   If we want to ensure isMultiTenant is true, could we change the test name to testLastDDLTimestampWith_Tenant_ChildViews to be more specific?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior
+        // extends to DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);
+            //verify DDL timestamp changed
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnAddDDL);
+            //verify DDL timestamp changed because we added a column to the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnDropDDL);
+            //verify DDL timestamp changed because we dropped a column from the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(tableColumnAddDDL);
+            //verify DDL timestamp DID NOT change because we added a column from the base table
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+            conn.createStatement().execute(tableColumnDropDDL);
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampWithChildViews() throws Exception {
+        Assume.assumeTrue(isMultiTenant);
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String globalViewName = "V_" + generateUniqueName();
+        String tenantViewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
+        String tenantViewFullName = SchemaUtil.getTableName(schemaName, tenantViewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        //create a table with a child global view, who then has a child tenant view
+        String globalViewDDL =
+            "CREATE VIEW " + globalViewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String tenantViewDDL =
+            "CREATE VIEW " + tenantViewFullName + " AS SELECT * FROM " + globalViewFullName;
+
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        long tableDDLTimestamp, globalViewDDLTimestamp;
+
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(globalViewDDL);
+            tableDDLTimestamp = getLastDDLTimestamp(conn, dataTableFullName);
+            globalViewDDLTimestamp = getLastDDLTimestamp(conn, globalViewFullName);
+        }
+        props.setProperty(TENANT_ID_ATTRIB, TENANT1);
+        try (Connection tenantConn = DriverManager.getConnection(getUrl(), props)) {
+            tenantConn.createStatement().execute(tenantViewDDL);
+        }
+        // First, check that adding a child view didn't change the timestamps

Review comment:
       nit: could you please change the comment to be more specific?
   ` // First, check that adding a child view didn't change the timestamps of the base table `

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2802,7 +2827,8 @@ private MetaDataMutationResult mutateColumn(
                         getParentPhysicalTableName(table), table.getType());
 
                 result = mutator.validateAndAddMetadata(table, rowKeyMetaData, tableMetadata,
-                        region, invalidateList, locks, clientTimeStamp, clientVersion);
+                        region, invalidateList, locks, clientTimeStamp, clientVersion,
+                    isAddingOrDroppingColumns);

Review comment:
       nit: indentation mismatch.

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
##########
@@ -910,6 +911,41 @@ public void testTableDescriptorPriority() throws SQLException, IOException {
         }
     }
 
+    @Test
+    public void testCreateTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        final String schemaName = generateUniqueName();
+        final String tableName = generateUniqueName();
+        final String dataTableFullName = SchemaUtil.getTableName(schemaName, tableName);
+        String ddl =
+            "CREATE TABLE " + dataTableFullName + " (\n" + "ID1 VARCHAR(15) NOT NULL,\n"
+                + "ID2 VARCHAR(15) NOT NULL,\n" + "CREATED_DATE DATE,\n"
+                + "CREATION_TIME BIGINT,\n" + "LAST_USED DATE,\n"
+                + "CONSTRAINT PK PRIMARY KEY (ID1, ID2)) ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(ddl);
+            verifyLastDDLTimestamp(schemaName, tableName, dataTableFullName, startTS, conn);
+        }
+    }
+
+    public static long verifyLastDDLTimestamp(String schemaName, String tableName,
+                                              String dataTableFullName, long startTS, Connection conn) throws SQLException {
+        return verifyLastDDLTimestamp("", schemaName, tableName, dataTableFullName, startTS, conn);
+    }
+
+    public static long verifyLastDDLTimestamp(String tenantId, String schemaName, String tableName,
+                                              String dataTableFullName, long startTS, Connection conn) throws SQLException {

Review comment:
       I don't see any usage of the first 3 arguments `String tenantId, String schemaName, String tableName`.
   Maybe you used them in an earlier draft and then removed them?
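
   (If those identifiers really are unused, the helper could shrink to something like the sketch below. The direct SYSTEM.CATALOG query and the LAST_DDL_TIMESTAMP column name are assumptions about how the check might be written, not the patch's actual implementation:)

   ```java
   // Hypothetical simplified helper: assert the table's LAST_DDL_TIMESTAMP is at least startTS.
   // Assumes a schema-qualified table name.
   public static long verifyLastDDLTimestamp(String dataTableFullName, long startTS,
           Connection conn) throws SQLException {
       String query = "SELECT LAST_DDL_TIMESTAMP FROM SYSTEM.CATALOG"
           + " WHERE TABLE_SCHEM = ? AND TABLE_NAME = ? AND LAST_DDL_TIMESTAMP IS NOT NULL";
       try (PreparedStatement stmt = conn.prepareStatement(query)) {
           stmt.setString(1, SchemaUtil.getSchemaNameFromFullName(dataTableFullName));
           stmt.setString(2, SchemaUtil.getTableNameFromFullName(dataTableFullName));
           ResultSet rs = stmt.executeQuery();
           assertTrue("No LAST_DDL_TIMESTAMP found for " + dataTableFullName, rs.next());
           long ts = rs.getLong(1);
           assertTrue("LAST_DDL_TIMESTAMP should be >= " + startTS, ts >= startTS);
           return ts;
       }
   }
   ```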

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior
+        // extends to DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);
+            //verify DDL timestamp changed
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);

Review comment:
       According to the comment, shouldn't the argument for startTime be "viewDDLTimestamp + 1", since we are assuming that the timestamp should change?
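
       For illustration, the suggested form would mirror the bounds checks used after the other DDL statements in this test (a sketch of the suggestion, not the PR's final code):

       ```java
       // require the timestamp to strictly advance after the diverging DDL
       viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
           viewFullName, viewDDLTimestamp + 1, conn);
       ```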

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to its ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp.
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);
+            //verify DDL timestamp changed
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnAddDDL);
+            //verify DDL timestamp changed because we added a column to the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnDropDDL);
+            //verify DDL timestamp changed because we dropped a column from the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(tableColumnAddDDL);
+            //verify DDL timestamp DID NOT change because we added a column from the base table
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+            conn.createStatement().execute(tableColumnDropDDL);
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampWithChildViews() throws Exception {
+        Assume.assumeTrue(isMultiTenant);
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String globalViewName = "V_" + generateUniqueName();
+        String tenantViewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
+        String tenantViewFullName = SchemaUtil.getTableName(schemaName, tenantViewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        //create a table with a child global view, who then has a child tenant view
+        String globalViewDDL =
+            "CREATE VIEW " + globalViewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String tenantViewDDL =
+            "CREATE VIEW " + tenantViewFullName + " AS SELECT * FROM " + globalViewFullName;
+
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        long tableDDLTimestamp, globalViewDDLTimestamp;
+
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(globalViewDDL);
+            tableDDLTimestamp = getLastDDLTimestamp(conn, dataTableFullName);
+            globalViewDDLTimestamp = getLastDDLTimestamp(conn, globalViewFullName);
+        }
+        props.setProperty(TENANT_ID_ATTRIB, TENANT1);
+        try (Connection tenantConn = DriverManager.getConnection(getUrl(), props)) {
+            tenantConn.createStatement().execute(tenantViewDDL);
+        }
+        // First, check that adding a child view didn't change the timestamps
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            long newTableDDLTimestamp = getLastDDLTimestamp(conn, dataTableFullName);
+            assertEquals(tableDDLTimestamp, newTableDDLTimestamp);
+
+            long newGlobalViewDDLTimestamp = getLastDDLTimestamp(conn, globalViewFullName);
+            assertEquals(globalViewDDLTimestamp, newGlobalViewDDLTimestamp);
+        }
+        Thread.sleep(1);
+        //now add / drop a column from the tenant view and make sure it doesn't change its
+        // ancestors' timestamps
+        String tenantViewColumnAddDDL = "ALTER VIEW " + tenantViewFullName + " ADD COL3 varchar" +
+            "(50) " + "NULL ";
+        String tenantViewColumnDropDDL = "ALTER VIEW " + tenantViewFullName + " DROP COLUMN COL3 ";
+
+        try (Connection tenantConn = DriverManager.getConnection(getUrl(), props)) {
+            tenantConn.createStatement().execute(tenantViewColumnAddDDL);
+            long newTableDDLTimestamp = getLastDDLTimestamp(tenantConn, dataTableFullName);
+            assertEquals(tableDDLTimestamp, newTableDDLTimestamp);
+
+            long afterTenantColumnAddViewDDLTimestamp = getLastDDLTimestamp(tenantConn,
+                globalViewFullName);
+            assertEquals(globalViewDDLTimestamp, afterTenantColumnAddViewDDLTimestamp);
+
+            tenantConn.createStatement().execute(tenantViewColumnDropDDL);
+            //update the tenant view timestamp (we'll need it later)
+            long afterTenantColumnDropTableDDLTimestamp = getLastDDLTimestamp(tenantConn,
+                dataTableFullName);
+            assertEquals(tableDDLTimestamp, afterTenantColumnDropTableDDLTimestamp);
+
+            long afterTenantColumnDropViewDDLTimestamp = getLastDDLTimestamp(tenantConn,
+                globalViewFullName);
+            assertEquals(globalViewDDLTimestamp, afterTenantColumnDropViewDDLTimestamp);
+        }
+        Thread.sleep(1);
+        //now add / drop a column from the base table and make sure it changes the timestamps for
+        // both the global view (its child) and the tenant view (its grandchild)
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) " + "NULL ";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableColumnAddDDL);
+            tableDDLTimestamp = getLastDDLTimestamp(conn, dataTableFullName);
+            try (Connection tenantConn = DriverManager.getConnection(getUrl(), props)) {
+                long tenantViewDDLTimestamp = getLastDDLTimestamp(tenantConn,
+                    tenantViewFullName);
+                assertEquals(tableDDLTimestamp, tenantViewDDLTimestamp);
+            }
+            globalViewDDLTimestamp = getLastDDLTimestamp(conn,
+                globalViewFullName);
+            assertEquals(tableDDLTimestamp, globalViewDDLTimestamp);
+
+            conn.createStatement().execute(tableColumnDropDDL);
+            tableDDLTimestamp = getLastDDLTimestamp(conn, dataTableFullName);
+            try (Connection tenantConn = DriverManager.getConnection(getUrl(), props)) {
+                long tenantViewDDLTimestamp = getLastDDLTimestamp(tenantConn,
+                    tenantViewFullName);
+                assertEquals(tableDDLTimestamp, tenantViewDDLTimestamp);
+            }
+            globalViewDDLTimestamp = getLastDDLTimestamp(conn,
+                globalViewFullName);
+            assertEquals(tableDDLTimestamp, globalViewDDLTimestamp);
+        }
+
+        //now add / drop a column from the global view and make sure it doesn't change its
+        // parent (the base table) but does change the timestamp for its child (the tenant view)
+        String globalViewColumnAddDDL = "ALTER VIEW " + globalViewFullName + " ADD COL5 varchar" +
+            "(50) " + "NULL ";
+        String globalViewColumnDropDDL = "ALTER VIEW " + globalViewFullName + " DROP COLUMN COL5 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(globalViewColumnAddDDL);
+            globalViewDDLTimestamp = getLastDDLTimestamp(conn, globalViewFullName);
+            long newTableDDLTimestamp = getLastDDLTimestamp(conn,
+                dataTableFullName);
+            //table DDL timestamp shouldn't have changed
+            assertEquals(tableDDLTimestamp, newTableDDLTimestamp);
+            try (Connection tenantConn = DriverManager.getConnection(getUrl(), props)) {
+                long tenantViewDDLTimestamp = getLastDDLTimestamp(tenantConn,
+                    tenantViewFullName);
+                //but tenant timestamp should have changed
+                assertEquals(globalViewDDLTimestamp, tenantViewDDLTimestamp);
+            }
+
+            conn.createStatement().execute(globalViewColumnDropDDL);
+            globalViewDDLTimestamp = getLastDDLTimestamp(conn, globalViewFullName);
+            newTableDDLTimestamp = getLastDDLTimestamp(conn,
+                dataTableFullName);
+            //table DDL timestamp shouldn't have changed
+            assertEquals(tableDDLTimestamp, newTableDDLTimestamp);
+            try (Connection tenantConn = DriverManager.getConnection(getUrl(), props)) {
+                long tenantViewDDLTimestamp = getLastDDLTimestamp(tenantConn,
+                    tenantViewFullName);
+                //but tenant timestamp should have changed
+                assertEquals(globalViewDDLTimestamp, tenantViewDDLTimestamp);
+            }
+        }
+
+    }
+
+    public static long getLastDDLTimestamp(Connection conn, String dataTableFullName) throws SQLException {

Review comment:
       Should we move this helper method to CreateTableIT, since CreateTableIT#verifyLastDDLTimestamp does almost the same thing plus additional timestamp-bounds verification?
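
       For reference, a minimal sketch of what a shared helper in CreateTableIT could look like, assuming it simply reads the new LAST_DDL_TIMESTAMP column from SYSTEM.CATALOG (the query shape and placement are illustrative, not the patch's actual implementation):

       ```java
       // would live in CreateTableIT; needs java.sql.* imports, org.apache.phoenix.util.SchemaUtil,
       // and a static import of org.junit.Assert.assertTrue
       public static long getLastDDLTimestamp(Connection conn, String fullTableName)
               throws SQLException {
           // only the table/view header row carries the LAST_DDL_TIMESTAMP cell added by this patch
           String sql = "SELECT LAST_DDL_TIMESTAMP FROM SYSTEM.CATALOG"
               + " WHERE TABLE_SCHEM = ? AND TABLE_NAME = ?"
               + " AND LAST_DDL_TIMESTAMP IS NOT NULL";
           try (PreparedStatement stmt = conn.prepareStatement(sql)) {
               stmt.setString(1, SchemaUtil.getSchemaNameFromFullName(fullTableName));
               stmt.setString(2, SchemaUtil.getTableNameFromFullName(fullTableName));
               try (ResultSet rs = stmt.executeQuery()) {
                   assertTrue(rs.next());
                   return rs.getLong(1);
               }
           }
       }
       ```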

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
##########
@@ -2285,6 +2286,7 @@ public MetaDataResponse call(MetaDataService instance) throws IOException {
                     builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
                     if (parentTable!=null)
                         builder.setParentTable(PTableImpl.toProto(parentTable));
+                    builder.setAddingColumns(addingColumns);

Review comment:
       Thank you for educating me. :)

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
##########
@@ -107,6 +107,29 @@
             HColumnDescriptor.KEEP_DELETED_CELLS,
             HColumnDescriptor.REPLICATION_SCOPE);
 
+    public static Put getLastDDLTimestampUpdate(byte[] tableHeaderRowKey,
+                                                     long clientTimestamp,
+                                                     long lastDDLTimestamp) {
+        //use client timestamp as the timestamp of the Cell, to match the other Cells that might
+        // be created by this DDL. But the actual value will be a _server_ timestamp
+        Put p = new Put(tableHeaderRowKey, clientTimestamp);
+        byte[] lastDDLTimestampBytes = PLong.INSTANCE.toBytes(lastDDLTimestamp);
+        p.addColumn(PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES,
+            PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES, lastDDLTimestampBytes);
+        return p;
+    }
+
+    /**
+     * Checks if a table is meant to be queried directly (and hence is relevant to external
+     * systems tracking Phoenix schema)
+     * @param tableType
+     * @return True if a table or view, false otherwise (such as for an index, system table, or
+     * subquery)
+     */
+    public static boolean isTableQueryable(PTableType tableType) {

Review comment:
       I am OK with the name; I just wanted to learn more context. Thank you!
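
       For context, a body consistent with the javadoc above would presumably be along these lines (sketch only; the patch's actual code may differ):

       ```java
       public static boolean isTableQueryable(PTableType tableType) {
           // only user-facing entities: base tables and views
           return tableType == PTableType.TABLE || tableType == PTableType.VIEW;
       }
       ```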

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
##########
@@ -3063,60 +3063,62 @@ public boolean isViewReferenced() {
              */
             EncodedCQCounter cqCounterToBe = tableType == PTableType.VIEW ? NULL_COUNTER : cqCounter;
             PTable table = new PTableImpl.Builder()
-                    .setType(tableType)

Review comment:
       I know this is a very nitpicky point, but for split (continuation) lines the spacing rule is 8 spaces. For the body of conditional/loop statements I agree it is 4 spaces.
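
       As a quick illustration of the convention being described (the code itself is only an indentation example, reusing names from this diff):

       ```java
       // body of a conditional/loop statement: indented 4 spaces
       if (tableType == PTableType.VIEW) {
           cqCounterToBe = NULL_COUNTER;
       }
       // continuation of a split line: indented 8 spaces
       PTable table = new PTableImpl.Builder()
               .setType(tableType)
               .build();
       ```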




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-726986497


   Reran IndexMetaDataIT and ViewMetadataIT locally and they both passed. 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] shahrs87 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
shahrs87 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-726892294


   > I'm curious -- what is the source for the "split lines indent 8 spaces" rule you mention?
   
   Actually found the property. It uses this setting:
   `<setting id="org.eclipse.jdt.core.formatter.continuation_indentation" value="2"/>`
   The value 2 is a multiplier of the tabulation size setting:
   `<setting id="org.eclipse.jdt.core.formatter.tabulation.size" value="4"/>`
   That's how it comes to 8 spaces. :)


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r513080213



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
##########
@@ -387,6 +388,31 @@ public void testViewUsesTableLocalIndex() throws Exception {
         }
     }
 
+    @Test
+    public void testCreateViewTimestamp() throws Exception {
+        Properties props = new Properties();
+        final String schemaName = "S_" + generateUniqueName();
+        final String tableName = "T_" + generateUniqueName();
+        final String viewName = "V_" + generateUniqueName();
+        final String dataTableFullName = SchemaUtil.getTableName(schemaName, tableName);
+        final String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+        String tableDDL =
+            "CREATE TABLE " + dataTableFullName + " (\n" + "ID1 VARCHAR(15) NOT NULL,\n"
+                + "ID2 VARCHAR(15) NOT NULL,\n" + "CREATED_DATE DATE,\n"
+                + "CREATION_TIME BIGINT,\n" + "LAST_USED DATE,\n"
+                + "CONSTRAINT PK PRIMARY KEY (ID1, ID2)) ";
+        String viewDDL = "CREATE VIEW " + viewFullName  + " AS SELECT * " +
+            "FROM " + dataTableFullName;
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            conn.commit();

Review comment:
       Done




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-726976676


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  10m 40s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |  11m 12s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m  0s |  root in 4.x has 1000 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 43s |  phoenix-core in 4.x has 946 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 22s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 24s |  root: The patch generated 394 new + 25081 unchanged - 238 fixed = 25475 total (was 25319)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 43s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  8s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  5s |  phoenix-core generated 3 new + 945 unchanged - 1 fixed = 948 total (was 946)  |
   | -1 :x: |  spotbugs  |   4m  5s |  root generated 3 new + 999 unchanged - 1 fixed = 1002 total (was 1000)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 130m  1s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 11s |  The patch does not generate ASF License warnings.  |
   |  |   | 194m 45s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:[line 2604] is not discharged |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) passes a nonconstant String to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:[line 2604] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:[line 2604] is not discharged |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) passes a nonconstant String to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:[line 2604] |
   | Failed junit tests | phoenix.end2end.index.IndexMetadataIT |
   |   | phoenix.end2end.ViewMetadataIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/11/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool xml |
   | uname | Linux c47dbff0ba08 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 565b0ea |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/11/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/11/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/11/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/11/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/11/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/11/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/11/testReport/ |
   | Max. process+thread count | 6543 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/11/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] shahrs87 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
shahrs87 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-727086069


   @gjacoby126 Are any of the checkstyle, javadoc, spotbugs, or findbugs warnings relevant here? I know there is some noise, but I don't know whether any of them are related to this patch.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r513080295



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
##########
@@ -1326,7 +1327,40 @@ public void testAddingColumnsToTablesAndViews() throws Exception {
             assertSequenceNumber(schemaName, viewName, PTable.INITIAL_SEQ_NUM + 1);
         }
     }
-	
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String tableDDL = "CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " ENTITY_ID integer NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (ENTITY_ID, COL1, COL2)"
+            + " ) " + generateDDLOptions("");
+
+        String columnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }

Review comment:
       Done.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-715574319


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  10m 46s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   6m 49s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m 12s |  root in 4.x has 1008 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 43s |  phoenix-core in 4.x has 954 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 30s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 30s |  the patch passed  |
   | -1 :x: |  checkstyle  |   5m 35s |  root: The patch generated 116 new + 14751 unchanged - 41 fixed = 14867 total (was 14792)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 43s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  3s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m 14s |  phoenix-core generated 1 new + 954 unchanged - 0 fixed = 955 total (was 954)  |
   | -1 :x: |  spotbugs  |   4m 23s |  root generated 1 new + 1008 unchanged - 0 fixed = 1009 total (was 1008)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 159m 53s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 26s |  The patch does not generate ASF License warnings.  |
   |  |   | 215m  3s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 332] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 332] |
   | Failed junit tests | phoenix.end2end.IndexExtendedIT |
   |   | phoenix.end2end.UpgradeIT |
   |   | phoenix.end2end.index.ImmutableIndexIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/3/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool xml |
   | uname | Linux cd7751438141 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 605656c |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/3/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/3/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/3/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/3/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/3/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/3/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/3/testReport/ |
   | Max. process+thread count | 6864 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/3/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-728288768


   @ChinmaySKulkarni, @shahrs87, assuming the test runs are OK, are there any other changes you'd like, or is this ready to go? Thanks for your reviews.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-728333598


   The two test failures were both timeouts and passed locally. 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r521031241



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2037,7 +2049,10 @@ public void createTable(RpcController controller, CreateTableRequest request,
                     // view's property in case they are different from the parent
                     ViewUtil.addTagsToPutsForViewAlteredProperties(tableMetadata, parentTable);
                 }
-
+                //set the last DDL timestamp to the current server time since we're creating the
+                // table
+                tableMetadata.add(MetaDataUtil.getLastDDLTimestampUpdate(tableKey,
+                    clientTimeStamp, EnvironmentEdgeManager.currentTimeMillis()));

Review comment:
       Thanks, didn't realize those other types also went through that code path. Will add the restriction. 
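
       Presumably the restriction would reuse the same guard this patch applies in the column mutators, roughly as follows (a sketch only, assuming the created entity's PTableType is in scope as tableType):

       ```java
       // only stamp user-facing entities (tables and views); skip indexes, system tables, etc.
       if (MetaDataUtil.isTableQueryable(tableType)) {
           tableMetadata.add(MetaDataUtil.getLastDDLTimestampUpdate(tableKey,
               clientTimeStamp, EnvironmentEdgeManager.currentTimeMillis()));
       }
       ```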




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525336310



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
##########
@@ -94,7 +94,7 @@
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_13_0 = MIN_SYSTEM_TABLE_TIMESTAMP_4_11_0;
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_14_0 = MIN_TABLE_TIMESTAMP + 28;
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0 = MIN_TABLE_TIMESTAMP + 29;
-    public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 = MIN_TABLE_TIMESTAMP + 31;
+    public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 = MIN_TABLE_TIMESTAMP + 33;

Review comment:
       As I mentioned in a different comment, each syscat column's timestamp needs to be unique




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-729312736


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  10m 43s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |  11m 19s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m  0s |  root in 4.x has 1000 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 40s |  phoenix-core in 4.x has 946 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 27s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 32s |  root: The patch generated 340 new + 25058 unchanged - 258 fixed = 25398 total (was 25316)  |
   | +1 :green_heart: |  prototool  |   0m  2s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  javadoc  |   0m 43s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  2s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  6s |  phoenix-core generated 2 new + 945 unchanged - 1 fixed = 947 total (was 946)  |
   | -1 :x: |  spotbugs  |   4m 11s |  root generated 2 new + 999 unchanged - 1 fixed = 1001 total (was 1000)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 131m 36s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  The patch does not generate ASF License warnings.  |
   |  |   | 196m 34s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | Failed junit tests | phoenix.end2end.join.HashJoinGlobalIndexIT |
   |   | phoenix.end2end.index.AlterIndexIT |
   |   | phoenix.end2end.index.txn.RollbackIT |
   |   | phoenix.end2end.DropIndexedColsIT |
   |   | phoenix.end2end.IndexToolForNonTxGlobalIndexIT |
   |   | TEST-[GroupByIT_0] |
   |   | phoenix.end2end.UpsertSelectIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/16/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux ae73e107ab8f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / cd657db |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/16/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/16/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/16/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/16/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/16/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/16/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/16/testReport/ |
   | Max. process+thread count | 6186 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/16/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525588559



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2846,21 +2872,25 @@ private MetaDataMutationResult mutateColumn(
                 separateLocalAndRemoteMutations(region, tableMetadata, localMutations,
                         remoteMutations);
                 if (!remoteMutations.isEmpty()) {
-                    // there should only be remote mutations if we are adding a column to a view
+                    // there should only be remote mutations if we are updating the last ddl
+                    // timestamp for child views, or we are adding a column to a view
                     // that uses encoded column qualifiers (the remote mutations are to update the
                     // encoded column qualifier counter on the parent table)
-                    if (mutator.getMutateColumnType() == ColumnMutator.MutateColumnType.ADD_COLUMN
+                    if (childViews.size() > 0 || ( mutator.getMutateColumnType() == ColumnMutator.MutateColumnType.ADD_COLUMN

Review comment:
       I added the childViews.size() check when I was sending the DDL timestamp mutations remotely to the child view header rows, which I stopped doing in the last draft. That part of the if clause should just be removed, right? Or was the existing logic wrong if a child view existed?
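
       If that part of the clause is dropped, the check would presumably revert to roughly the following (sketch only; the elided half of the original condition is left as-is):

       ```java
       if (mutator.getMutateColumnType() == ColumnMutator.MutateColumnType.ADD_COLUMN
               /* ...original encoded-column-qualifier condition, unchanged... */) {
           // handle the remote mutations as before
       }
       ```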




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] shahrs87 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
shahrs87 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r522489964



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DropColumnMutator.java
##########
@@ -268,7 +272,19 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table,
             }
 
         }
-        tableMetaData.addAll(additionalTableMetaData);
+        if (isDroppingColumns) {
+            //We're changing the application-facing schema by dropping a column, so update the DDL
+            // timestamp to current _server_ timestamp
+            if (MetaDataUtil.isTableQueryable(table.getType())) {
+                long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
+                additionalTableMetaData.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
+                    clientTimeStamp, serverTimestamp));
+            }
+            //we don't need to update the DDL timestamp for any child views we may have, because
+            // when we look up a PTable for any of those child views, we'll take the max timestamp
+            // of the view and all its ancestors
+            tableMetaData.addAll(additionalTableMetaData);

Review comment:
       `tableMetaData.addAll(additionalTableMetaData);`
    The above line won't be executed if isDroppingColumns is false. Earlier it was executed unconditionally, whether or not we were dropping columns.
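
       In other words, if the pre-existing behaviour is to be preserved, the addAll presumably needs to stay outside the new guard, roughly like this (a sketch using only the calls already shown in this diff):

       ```java
       if (isDroppingColumns && MetaDataUtil.isTableQueryable(table.getType())) {
           // the client-visible schema changed, so stamp the header row with a server-side timestamp
           additionalTableMetaData.add(MetaDataUtil.getLastDDLTimestampUpdate(
               tableHeaderRowKey, clientTimeStamp, EnvironmentEdgeManager.currentTimeMillis()));
       }
       // always flush the accumulated metadata, whether or not columns were dropped
       tableMetaData.addAll(additionalTableMetaData);
       ```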

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -293,7 +293,9 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                                          List<ImmutableBytesPtr> invalidateList,
                                                          List<Region.RowLock> locks,
                                                          long clientTimeStamp,
-                                                         long clientVersion) {
+                                                         long clientVersion,
+                                                         boolean isAddingColumns,
+                                                         List<PTable> childViews) {

Review comment:
       Do we need to add childViews to the validateAndAddMetadata method? Maybe I am reading it wrong, but I don't see childViews being used in either AddColumnMutator#validateAndAddMetadata or DropColumnMutator#validateAndAddMetadata.

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DropColumnMutator.java
##########
@@ -268,7 +272,19 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table,
             }
 
         }
-        tableMetaData.addAll(additionalTableMetaData);
+        if (isDroppingColumns) {

Review comment:
       The changes here look exactly the same as the changes in AddColumnMutator. Can we create a helper method that accepts the table and additionalTableMetaData as arguments?
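
       For example, something along these lines could be shared by both mutators (a hypothetical helper; the name and placement are illustrative):

       ```java
       static void addLastDDLTimestampIfNeeded(PTable table, byte[] tableHeaderRowKey,
               long clientTimeStamp, List<Mutation> additionalTableMetaData) {
           // only tables and views get stamped; indexes and system tables are skipped
           if (MetaDataUtil.isTableQueryable(table.getType())) {
               additionalTableMetaData.add(MetaDataUtil.getLastDDLTimestampUpdate(
                   tableHeaderRowKey, clientTimeStamp, EnvironmentEdgeManager.currentTimeMillis()));
           }
       }
       ```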

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
##########
@@ -3063,60 +3063,62 @@ public boolean isViewReferenced() {
              */
             EncodedCQCounter cqCounterToBe = tableType == PTableType.VIEW ? NULL_COUNTER : cqCounter;
             PTable table = new PTableImpl.Builder()
-                    .setType(tableType)

Review comment:
       Here we are adding just one field to the builder. Can we please undo the formatting changes, which make the diff look larger than the actual change?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
##########
@@ -107,6 +107,29 @@
             HColumnDescriptor.KEEP_DELETED_CELLS,
             HColumnDescriptor.REPLICATION_SCOPE);
 
+    public static Put getLastDDLTimestampUpdate(byte[] tableHeaderRowKey,
+                                                     long clientTimestamp,
+                                                     long lastDDLTimestamp) {
+        //use client timestamp as the timestamp of the Cell, to match the other Cells that might
+        // be created by this DDL. But the actual value will be a _server_ timestamp
+        Put p = new Put(tableHeaderRowKey, clientTimestamp);
+        byte[] lastDDLTimestampBytes = PLong.INSTANCE.toBytes(lastDDLTimestamp);
+        p.addColumn(PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES,
+            PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES, lastDDLTimestampBytes);
+        return p;
+    }
+
+    /**
+     * Checks if a table is meant to be queried directly (and hence is relevant to external
+     * systems tracking Phoenix schema)
+     * @param tableType
+     * @return True if a table or view, false otherwise (such as for an index, system table, or
+     * subquery)
+     */
+    public static boolean isTableQueryable(PTableType tableType) {

Review comment:
       I am not that familiar with the change here, but isn't a system table directly queryable?
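   
   For context, going purely by the javadoc above, the check presumably boils down to something like the sketch below (not the actual patch body). System tables are of course directly queryable; the idea seems to be that only user-created tables and views are relevant to an external schema tracker:
   
       public static boolean isTableQueryable(PTableType tableType) {
           // only user-facing tables and views; indexes, system tables and subqueries excluded
           return tableType == PTableType.TABLE || tableType == PTableType.VIEW;
       }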

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
##########
@@ -2285,6 +2286,7 @@ public MetaDataResponse call(MetaDataService instance) throws IOException {
                     builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
                     if (parentTable!=null)
                         builder.setParentTable(PTableImpl.toProto(parentTable));
+                    builder.setAddingColumns(addingColumns);

Review comment:
       I am unable to understand the need for this addingColumns field in the builder. If we need it, then why not add a corresponding field to the DropColumn request?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-722648415


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  10m 38s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |  11m  5s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m  1s |  root in 4.x has 1000 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 43s |  phoenix-core in 4.x has 946 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 15s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 21s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 11s |  root: The patch generated 276 new + 24786 unchanged - 127 fixed = 25062 total (was 24913)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 41s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  5s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  6s |  phoenix-core generated 2 new + 945 unchanged - 1 fixed = 947 total (was 946)  |
   | -1 :x: |  spotbugs  |   4m  5s |  root generated 2 new + 999 unchanged - 1 fixed = 1001 total (was 1000)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 132m  0s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  The patch does not generate ASF License warnings.  |
   |  |   | 196m 25s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(PhoenixConnection) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:[line 2602] is not discharged |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(PhoenixConnection) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:[line 2602] is not discharged |
   | Failed junit tests | phoenix.end2end.index.SaltedIndexIT |
   |   | phoenix.end2end.ReverseScanIT |
   |   | phoenix.util.IndexScrutinyIT |
   |   | phoenix.end2end.index.MutableIndexReplicationIT |
   |   | phoenix.end2end.index.InvalidIndexStateClientSideIT |
   |   | phoenix.end2end.index.IndexCoprocIT |
   |   | phoenix.end2end.index.IndexVerificationResultRepositoryIT |
   |   | phoenix.end2end.UnionAllIT |
   |   | phoenix.end2end.index.IndexVerificationOutputRepositoryIT |
   |   | phoenix.end2end.index.GlobalImmutableNonTxIndexIT |
   |   | phoenix.end2end.SequenceIT |
   |   | phoenix.end2end.DeleteIT |
   |   | phoenix.end2end.index.MutableIndexSplitForwardScanIT |
   |   | phoenix.end2end.index.LocalImmutableNonTxIndexIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/8/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool xml |
   | uname | Linux a365babc095a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 20d2a6d |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/8/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/8/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/8/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/8/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/8/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/8/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/8/testReport/ |
   | Max. process+thread count | 6871 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/8/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525580079



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java
##########
@@ -654,10 +654,17 @@ public static PTable addDerivedColumnsFromParent(PTable view, PTable parentTable
         }
 
         long maxTableTimestamp = view.getTimeStamp();
+        long maxDDLTimestamp = view.getLastDDLTimestamp() != null ? view.getLastDDLTimestamp() : 0L;
         int numPKCols = view.getPKColumns().size();
-        // set the final table timestamp as the max timestamp of the view/view index or its
-        // ancestors
+        // set the final table timestamp and DDL timestamp as the respective max timestamps of the
+        // view/view index or its ancestors
         maxTableTimestamp = Math.max(maxTableTimestamp, parentTable.getTimeStamp());
+        //Diverged views no longer inherit ddl timestamps from their ancestors because they don't
+        // inherit column changes

Review comment:
       I'd be more inclined to: 
   Step 1. Update the last ddl timestamp on either add or drop column, and accept that in a future JIRA the schema registry will be redundantly updated in the (hopefully rare) event that a base table of a diverged view adds a column. Essentially this removes the diverged view check here in ViewUtil. (Easy)
   
   Optional Step 2. Then in a second JIRA change the diverged view logic to inherit changes on both add column and drop column so the updates aren't really redundant. (Harder, but leaves us in a better state)
   
   If there's some compelling reason to keep the existing logic, I'm interested to learn about it, but right now it doesn't seem to justify the enormous complexity (which just keeps compounding with time!) that it requires. 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r522541892



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -293,7 +293,9 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                                          List<ImmutableBytesPtr> invalidateList,
                                                          List<Region.RowLock> locks,
                                                          long clientTimeStamp,
-                                                         long clientVersion) {
+                                                         long clientVersion,
+                                                         boolean isAddingColumns,
+                                                         List<PTable> childViews) {

Review comment:
       Good catch, it was necessary in a prior draft but the code that used it was just removed. I'll remove the parameter. 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-728431268


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  10m 29s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |  11m 21s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m  2s |  root in 4.x has 1000 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 35s |  phoenix-core in 4.x has 946 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 22s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 31s |  root: The patch generated 340 new + 25061 unchanged - 258 fixed = 25401 total (was 25319)  |
   | +1 :green_heart: |  prototool  |   0m  2s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 42s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  4s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  2s |  phoenix-core generated 2 new + 945 unchanged - 1 fixed = 947 total (was 946)  |
   | -1 :x: |  spotbugs  |   4m  4s |  root generated 2 new + 999 unchanged - 1 fixed = 1001 total (was 1000)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 135m 35s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 10s |  The patch does not generate ASF License warnings.  |
   |  |   | 200m 30s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | Failed junit tests | phoenix.end2end.PointInTimeQueryIT |
   |   | phoenix.end2end.UpgradeIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/14/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool xml |
   | uname | Linux 29d23cc37380 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 110f5b7 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/14/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/14/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/14/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/14/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/14/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/14/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/14/testReport/ |
   | Max. process+thread count | 6458 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/14/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r521480773



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +401,19 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        if (isAddingColumns) {
+            //We're changing the application-facing schema by adding a column, so update the DDL
+            // timestamp
+            long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
+            additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
+                clientTimeStamp, serverTimestamp));
+            for (PTable viewTable : childViews) {

Review comment:
       @ChinmaySKulkarni - I see how inheritance logic could work for PTables, but how would we make that work for the JDBC metadata API? (see PhoenixDatabaseMetaData) I'm asserting throughout my tests that PTables and the metadata API give the same answers, but the metadata API is, I believe, just querying the table or view header row in System.Catalog. 
   
   (_Technically_, the JDBC metadata API is the "public" one and the PTables API is the "private" one, though in all the external Phoenix-based applications I work with for my day job, I've switched to using PTables because the JDBC metadata API is just too inefficient since it never caches results.)
   
   Also just want to note that in a follow-up JIRA to this, we're going to need to (optionally) call to an external schema registry when we create a table/view or add/remove a column from a table/view, and that _will_ need to be synchronous, because the schema needs to be in the schema registry before DML using that schema starts being processed by the replication pipeline.  




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r512334756



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
##########
@@ -387,6 +388,31 @@ public void testViewUsesTableLocalIndex() throws Exception {
         }
     }
 
+    @Test
+    public void testCreateViewTimestamp() throws Exception {
+        Properties props = new Properties();
+        final String schemaName = "S_" + generateUniqueName();
+        final String tableName = "T_" + generateUniqueName();
+        final String viewName = "V_" + generateUniqueName();
+        final String dataTableFullName = SchemaUtil.getTableName(schemaName, tableName);
+        final String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+        String tableDDL =
+            "CREATE TABLE " + dataTableFullName + " (\n" + "ID1 VARCHAR(15) NOT NULL,\n"
+                + "ID2 VARCHAR(15) NOT NULL,\n" + "CREATED_DATE DATE,\n"
+                + "CREATION_TIME BIGINT,\n" + "LAST_USED DATE,\n"
+                + "CONSTRAINT PK PRIMARY KEY (ID1, ID2)) ";
+        String viewDDL = "CREATE VIEW " + viewFullName  + " AS SELECT * " +
+            "FROM " + dataTableFullName;
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            conn.commit();

Review comment:
       nit: Don't need commit() since they are DDL statements.

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1217,50 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }

Review comment:
       Once we converge on the expected behavior for column inheritance from a parent to its child views, we should add some tests for those scenarios too.

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2100,6 +2115,13 @@ public void createTable(RpcController controller, CreateTableRequest request,
                     builder.setViewIndexIdType(PLong.INSTANCE.getSqlType());
                 }
                 builder.setMutationTime(currentTimeStamp);
+                //send the newly built table back because we generated the DDL timestamp server
+                // side and the client doesn't have it.
+                PTable newTable = buildTable(tableKey, cacheKey, region,
+                    clientTimeStamp, clientVersion);
+                if (newTable != null) {

Review comment:
       It might be better to just send the DDL timestamp from the server instead of the entire PTable. This is because on the client we set certain properties which may be derived from the parent, etc., so we probably can't blindly cache the PTable that is returned from the server anyhow.

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
##########
@@ -4416,10 +4423,15 @@ else if (columnToDrop.isViewReferenced()) {
                     // client-side cache as it would be too painful. Just let it pull it over from
                     // the server when needed.
                     if (tableColumnsToDrop.size() > 0) {
-                        if (removedIndexTableOrColumn)
+                        if (removedIndexTableOrColumn) {
                             connection.removeTable(tenantId, tableName, table.getParentName() == null ? null : table.getParentName().getString(), table.getTimeStamp());
-                        else
-                            connection.removeColumn(tenantId, SchemaUtil.getTableName(schemaName, tableName) , tableColumnsToDrop, result.getMutationTime(), seqNum, TransactionUtil.getResolvedTime(connection, result));
+                        }
+                        else {
+                            //replace the cache of this table with the updated one we got back
+                            // from the server
+                            connection.addTable(result.getTable(),

Review comment:
       Similar kind of concern here?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
##########
@@ -3048,7 +3048,13 @@ public boolean isViewReferenced() {
              * the counter as NULL_COUNTER for extra safety.
              */
             EncodedCQCounter cqCounterToBe = tableType == PTableType.VIEW ? NULL_COUNTER : cqCounter;
-            PTable table = new PTableImpl.Builder()
+            PTable table;
+            //better to use the table sent back from the server so we get an accurate DDL
+            // timestamp, which is server-generated.
+            if (result.getTable() != null ) {

Review comment:
       This seems risky since we are relying on the PTable returned from the server having all the same attributes set as the PTable we create on the client side. There are some that we set explicitly inside `MetaDataClient` which depend on the parent, so I'm not sure we would still have those set as expected.
   
   Instead, to be safe we can maybe `getDDLTimestamp()` from this returned PTable and set that in the builder. Better yet, we could just send the DDL timestamp in the server response rather than the entire PTable. We'd also have to add a setter and getter to the PTable builder for this new attribute.
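   
   A rough sketch of that alternative (the setLastDDLTimestamp builder method is hypothetical here, since as noted it would still need to be added):
   
       PTable table = new PTableImpl.Builder()
               .setType(tableType)
               // ... keep all the other client-side attributes exactly as before ...
               // only take the server-generated DDL timestamp from the response, rather
               // than replacing the whole client-built PTable with result.getTable()
               .setLastDDLTimestamp(result.getTable() != null
                       ? result.getTable().getLastDDLTimestamp() : null)
               .build();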

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
##########
@@ -1326,7 +1327,40 @@ public void testAddingColumnsToTablesAndViews() throws Exception {
             assertSequenceNumber(schemaName, viewName, PTable.INITIAL_SEQ_NUM + 1);
         }
     }
-	
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String tableDDL = "CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " ENTITY_ID integer NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (ENTITY_ID, COL1, COL2)"
+            + " ) " + generateDDLOptions("");
+
+        String columnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }

Review comment:
       Can we add a test which confirms that ALTER SET <properties> doesn't modify the timestamp?
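   
   A minimal sketch of such a test, reusing the table DDL and helpers from the test above (the UPDATE_CACHE_FREQUENCY property is just an illustrative choice):
   
       @Test
       public void testAlterSetPropertyDoesNotChangeDDLTimestamp() throws Exception {
           Properties props = new Properties();
           String alterDDL = "ALTER TABLE " + dataTableFullName
               + " SET UPDATE_CACHE_FREQUENCY=300000";
           long startTS = EnvironmentEdgeManager.currentTimeMillis();
           try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
               conn.createStatement().execute(tableDDL);
               long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName,
                   dataTableName, dataTableFullName, startTS, conn);
               Thread.sleep(1);
               conn.createStatement().execute(alterDDL);
               // the timestamp recorded at CREATE time should be unchanged
               PTable table = PhoenixRuntime.getTableNoCache(conn, dataTableFullName);
               assertEquals(tableDDLTimestamp, table.getLastDDLTimestamp().longValue());
           }
       }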

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
##########
@@ -910,6 +911,48 @@ public void testTableDescriptorPriority() throws SQLException, IOException {
         }
     }
 
+    @Test
+    public void testCreateTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        final String schemaName = generateUniqueName();
+        final String tableName = generateUniqueName();
+        final String dataTableFullName = SchemaUtil.getTableName(schemaName, tableName);
+        String ddl =
+            "CREATE TABLE " + dataTableFullName + " (\n" + "ID1 VARCHAR(15) NOT NULL,\n"
+                + "ID2 VARCHAR(15) NOT NULL,\n" + "CREATED_DATE DATE,\n"
+                + "CREATION_TIME BIGINT,\n" + "LAST_USED DATE,\n"
+                + "CONSTRAINT PK PRIMARY KEY (ID1, ID2)) ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(ddl);
+            verifyLastDDLTimestamp(schemaName, tableName, dataTableFullName, startTS, conn);
+        }
+    }
+
+    public static long verifyLastDDLTimestamp(String schemaName, String tableName,
+                                              String dataTableFullName, long startTS, Connection conn) throws SQLException {
+        long endTS = EnvironmentEdgeManager.currentTimeMillis();
+        //First try the JDBC metadata API
+        PhoenixDatabaseMetaData metadata = (PhoenixDatabaseMetaData) conn.getMetaData();
+        ResultSet rs = metadata.getTables("", schemaName, tableName, null);
+        assertTrue("No metadata returned", rs.next());
+        Long ddlTimestamp = rs.getLong(PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP);
+        assertNotNull("JDBC DDL timestamp is null!", ddlTimestamp);
+        assertTrue("JDBC DDL Timestamp not in the right range!",
+            ddlTimestamp >= startTS && ddlTimestamp <= endTS);
+        //Now try the PTable API
+        PTable table = PhoenixRuntime.getTableNoCache(conn, dataTableFullName);

Review comment:
       Do we need some tests to confirm tenant-view behavior? And then subsequently make this helper method work for resolving those?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +399,10 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        //We're changing the application-facing schema by adding a column, so update the DDL
+        // timestamp
+        additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,

Review comment:
       This will get triggered even for `ALTER TABLE/VIEW SET <property>`. I thought we didn't want to update the DDL timestamp in those cases?

##########
File path: pom.xml
##########
@@ -318,7 +318,7 @@
             <execution>
               <id>ParallelStatsDisabledTest</id>
               <configuration>
-                <reuseForks>true</reuseForks>
+                <reuseForks>false</reuseForks>

Review comment:
       What is this change for?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
##########
@@ -107,6 +107,18 @@
             HColumnDescriptor.KEEP_DELETED_CELLS,
             HColumnDescriptor.REPLICATION_SCOPE);
 
+    public static Mutation getLastDDLTimestampUpdate(byte[] tableHeaderRowKey,

Review comment:
       Can we add a simple unit test for this inside `MetaDataUtilTest`?
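   
   Something along these lines might work for MetaDataUtilTest (a sketch against the Put-returning version of the method shown earlier in this thread; the assertion details are assumptions):
   
       @Test
       public void testGetLastDDLTimestampUpdate() {
           byte[] tableHeaderRowKey = Bytes.toBytes("testRowKey");
           long clientTimestamp = 1000L;
           long lastDDLTimestamp = 2000L;
           Put p = MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
               clientTimestamp, lastDDLTimestamp);
           assertArrayEquals(tableHeaderRowKey, p.getRow());
           List<Cell> cells = p.get(PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES,
               PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES);
           assertEquals(1, cells.size());
           Cell cell = cells.get(0);
           // the Cell carries the client timestamp...
           assertEquals(clientTimestamp, cell.getTimestamp());
           // ...while the value is the PLong-encoded last DDL timestamp
           assertEquals(lastDDLTimestamp,
               ((Long) PLong.INSTANCE.toObject(CellUtil.cloneValue(cell))).longValue());
       }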




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r513080477



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +399,10 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        //We're changing the application-facing schema by adding a column, so update the DDL
+        // timestamp
+        additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,

Review comment:
       Done




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525573050



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherit any additional schema
+        // changes that are applied to their ancestors. This test make sure that this behavior
+        // extends to DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";

Review comment:
       I'll do as you suggest, but should it make a difference?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-730791845


   Thanks for the reviews, @ChinmaySKulkarni and @shahrs87 ! Squashed and merged. I've opened up a PR at #977 for the master branch and will also commit there if tests are green. 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r522549948



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
##########
@@ -2285,6 +2286,7 @@ public MetaDataResponse call(MetaDataService instance) throws IOException {
                     builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
                     if (parentTable!=null)
                         builder.setParentTable(PTableImpl.toProto(parentTable));
+                    builder.setAddingColumns(addingColumns);

Review comment:
       In a drop column request, the answer is always "true" so there's no need to set the variable; it's hard-coded when the MetadataEndpointImpl creates the DropColumnMutator. (AddColumnRequests can either be to add columns or alter table properties, and we need to distinguish to know whether to update the DDL timestamp; a drop is always a drop.)  
   
   I'll remove the (useless) check on the variable in DropColumnMutator to make that more clear.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r512841786



##########
File path: pom.xml
##########
@@ -318,7 +318,7 @@
             <execution>
               <id>ParallelStatsDisabledTest</id>
               <configuration>
-                <reuseForks>true</reuseForks>
+                <reuseForks>false</reuseForks>

Review comment:
       oops, didn't mean to push that one. Makes ParallelStatsDisabledTests more stable when running on a RAM-starved laptop, at the cost of some speed. 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525539735



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherit any additional schema
+        // changes that are applied to their ancestors. This test make sure that this behavior
+        // extends to DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";

Review comment:
       We should ideally drop a pre-existing column from the parent for this test rather than a column that was newly added to the parent.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-726405428


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  10m 32s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |  11m 22s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m  1s |  root in 4.x has 1000 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 42s |  phoenix-core in 4.x has 946 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 22s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 29s |  root: The patch generated 401 new + 25081 unchanged - 238 fixed = 25482 total (was 25319)  |
   | +1 :green_heart: |  prototool  |   0m  2s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 43s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  4s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  5s |  phoenix-core generated 3 new + 945 unchanged - 1 fixed = 948 total (was 946)  |
   | -1 :x: |  spotbugs  |   4m  5s |  root generated 3 new + 999 unchanged - 1 fixed = 1002 total (was 1000)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m 22s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 21s |  The patch does not generate ASF License warnings.  |
   |  |   |  64m 13s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:[line 2604] is not discharged |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) passes a nonconstant String to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:[line 2604] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:[line 2604] is not discharged |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) passes a nonconstant String to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:[line 2604] |
   | Failed junit tests | phoenix.util.PhoenixRuntimeTest |
   |   | phoenix.compile.WhereCompilerTest |
   |   | phoenix.compile.TenantSpecificViewIndexCompileTest |
   |   | phoenix.compile.WhereOptimizerTest |
   |   | phoenix.mapreduce.util.PhoenixConfigurationUtilTest |
   |   | phoenix.compile.ViewCompilerTest |
   |   | phoenix.compile.QueryCompilerTest |
   |   | phoenix.compile.QueryOptimizerTest |
   |   | phoenix.query.QueryPlanTest |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/9/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool xml |
   | uname | Linux e994088abb66 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 565b0ea |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/9/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/9/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/9/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/9/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/9/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/9/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/9/testReport/ |
   | Max. process+thread count | 486 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/9/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r526307958



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java
##########
@@ -654,10 +654,17 @@ public static PTable addDerivedColumnsFromParent(PTable view, PTable parentTable
         }
 
         long maxTableTimestamp = view.getTimeStamp();
+        long maxDDLTimestamp = view.getLastDDLTimestamp() != null ? view.getLastDDLTimestamp() : 0L;
         int numPKCols = view.getPKColumns().size();
-        // set the final table timestamp as the max timestamp of the view/view index or its
-        // ancestors
+        // set the final table timestamp and DDL timestamp as the respective max timestamps of the
+        // view/view index or its ancestors
         maxTableTimestamp = Math.max(maxTableTimestamp, parentTable.getTimeStamp());
+        //Diverged views no longer inherit ddl timestamps from their ancestors because they don't
+        // inherit column changes

Review comment:
       @ChinmaySKulkarni - Pushed up a commit that implements Step 1 described above. 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r512895873



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +399,10 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        //We're changing the application-facing schema by adding a column, so update the DDL
+        // timestamp
+        additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,

Review comment:
       Yes, thanks.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r512843496



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +399,10 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        //We're changing the application-facing schema by adding a column, so update the DDL
+        // timestamp
+        additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,

Review comment:
       SET property goes through the column-add logic? Thanks, I didn't know that. I'll change the logic so the DDL timestamp isn't updated when no columns are being added, and then add a test as you recommend above.
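
    A rough sketch of the guard being described (hypothetical; the names are taken from the
    AddColumnMutator diff quoted elsewhere in this thread, and this is not the final change):

    ```java
    // Only bump LAST_DDL_TIMESTAMP when the ALTER actually adds columns, so that
    // ALTER TABLE ... SET <property>, which flows through the same mutator, leaves it untouched.
    if (isAddingColumns) {
        long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
        additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(
            tableHeaderRowKey, clientTimeStamp, serverTimestamp));
    }
    ```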




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] shahrs87 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
shahrs87 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525324954



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
##########
@@ -1326,7 +1328,69 @@ public void testAddingColumnsToTablesAndViews() throws Exception {
             assertSequenceNumber(schemaName, viewName, PTable.INITIAL_SEQ_NUM + 1);
         }
     }
-	
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String tableDDL = "CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " ENTITY_ID integer NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (ENTITY_ID, COL1, COL2)"
+            + " ) " + generateDDLOptions("");
+
+        String columnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testSetPropertyDoesntUpdateDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String tableDDL = "CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " ENTITY_ID integer NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (ENTITY_ID, COL1, COL2)"
+            + " ) " + generateDDLOptions("");
+
+        String setPropertyDDL = "ALTER TABLE " + dataTableFullName +
+            " SET UPDATE_CACHE_FREQUENCY=300000 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates

Review comment:
       @gjacoby126  looks like this comment was missed ?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to its ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);
+            //verify DDL timestamp changed
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnAddDDL);
+            //verify DDL timestamp changed because we added a column to the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnDropDDL);
+            //verify DDL timestamp changed because we dropped a column from the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(tableColumnAddDDL);
+            //verify DDL timestamp DID NOT change because we added a column from the base table
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+            conn.createStatement().execute(tableColumnDropDDL);
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampWithChildViews() throws Exception {
+        Assume.assumeTrue(isMultiTenant);

Review comment:
       @gjacoby126  This is not a comment for changing anything. This is just for my knowledge.

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to its ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);
+            //verify DDL timestamp changed
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnAddDDL);
+            //verify DDL timestamp changed because we added a column to the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnDropDDL);
+            //verify DDL timestamp changed because we dropped a column from the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(tableColumnAddDDL);
+            //verify DDL timestamp DID NOT change because we added a column from the base table
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+            conn.createStatement().execute(tableColumnDropDDL);
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampWithChildViews() throws Exception {
+        Assume.assumeTrue(isMultiTenant);
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String globalViewName = "V_" + generateUniqueName();
+        String tenantViewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
+        String tenantViewFullName = SchemaUtil.getTableName(schemaName, tenantViewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        //create a table with a child global view, who then has a child tenant view
+        String globalViewDDL =
+            "CREATE VIEW " + globalViewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String tenantViewDDL =
+            "CREATE VIEW " + tenantViewFullName + " AS SELECT * FROM " + globalViewFullName;
+
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        long tableDDLTimestamp, globalViewDDLTimestamp;
+
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(globalViewDDL);
+            tableDDLTimestamp = getLastDDLTimestamp(conn, dataTableFullName);
+            globalViewDDLTimestamp = getLastDDLTimestamp(conn, globalViewFullName);
+        }
+        props.setProperty(TENANT_ID_ATTRIB, TENANT1);
+        try (Connection tenantConn = DriverManager.getConnection(getUrl(), props)) {
+            tenantConn.createStatement().execute(tenantViewDDL);
+        }
+        // First, check that adding a child view didn't change the timestamps

Review comment:
       @gjacoby126  looks like this comment was missed ?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
##########
@@ -1326,7 +1328,69 @@ public void testAddingColumnsToTablesAndViews() throws Exception {
             assertSequenceNumber(schemaName, viewName, PTable.INITIAL_SEQ_NUM + 1);
         }
     }
-	
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String tableDDL = "CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " ENTITY_ID integer NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (ENTITY_ID, COL1, COL2)"
+            + " ) " + generateDDLOptions("");
+
+        String columnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {

Review comment:
       @gjacoby126  looks like this comment was missed ?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525575243



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherit any additional schema
+        // changes that are applied to their ancestors. This test make sure that this behavior

Review comment:
       I agree, but as you said, changing the behavior of diverged views is out of scope for this discussion.
   
   As far as current behavior goes, since columns dropped from parents are always inherited by all views (diverged or not), the lastDDLTs should be updated even for diverged views in those cases, whereas your current changes won't do that.
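
    A minimal sketch of the scenario described above (hypothetical table/view names; the
    getLastDDLTimestamp helper is assumed to behave like the one used in the ITs quoted in this
    thread):

    ```java
    // The view diverges by dropping an inherited column, but a later DROP COLUMN on the
    // parent is still inherited by the diverged view, so its last DDL timestamp should advance.
    try (Connection conn = DriverManager.getConnection(getUrl())) {
        conn.createStatement().execute(
            "CREATE TABLE S.T (ID CHAR(1) NOT NULL PRIMARY KEY, COL1 INTEGER, COL2 BIGINT)");
        conn.createStatement().execute("CREATE VIEW S.V AS SELECT * FROM S.T");
        conn.createStatement().execute("ALTER VIEW S.V DROP COLUMN COL2");   // view diverges
        long divergedTs = getLastDDLTimestamp(conn, "S.V");
        conn.createStatement().execute("ALTER TABLE S.T DROP COLUMN COL1");  // pre-existing parent column
        assertTrue(getLastDDLTimestamp(conn, "S.V") > divergedTs);           // expected per this comment
    }
    ```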




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r512843496



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +399,10 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        //We're changing the application-facing schema by adding a column, so update the DDL
+        // timestamp
+        additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,

Review comment:
       SET property counts as adding a column? I didn't know that; I'll change the logic and add a test as you recommend above.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r521481685



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +401,19 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        if (isAddingColumns) {
+            //We're changing the application-facing schema by adding a column, so update the DDL
+            // timestamp
+            long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
+            additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
+                clientTimeStamp, serverTimestamp));
+            for (PTable viewTable : childViews) {

Review comment:
       Will make sure to exempt diverged columns, thanks for pointing that out. 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r521026323



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
##########
@@ -3707,15 +3709,20 @@ protected PhoenixConnection upgradeSystemCatalogIfRequired(PhoenixConnection met
             metaConnection = addColumnsIfNotExists(
                 metaConnection,
                 PhoenixDatabaseMetaData.SYSTEM_CATALOG,
-                MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 - 1,
+                MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 - 2,

Review comment:
       @ChinmaySKulkarni  The upgrade code requires that each new column has a distinct timestamp (for some reason)
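
    Sketched with placeholder column definitions, the pattern looks roughly like the following
    (the real column lists are truncated out of the quote above, and the exact
    addColumnsIfNotExists signature and ordering are assumptions):

    ```java
    // Each batch of columns added to SYSTEM.CATALOG during upgrade is written at its own
    // timestamp, counting down toward MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0, which is why adding
    // another 4.16 column shifts the earlier batch from "- 1" to "- 2".
    metaConnection = addColumnsIfNotExists(metaConnection,
        PhoenixDatabaseMetaData.SYSTEM_CATALOG,
        MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 - 2,
        "SOME_EARLIER_4_16_COLUMN BIGINT");      // placeholder for the existing column list
    metaConnection = addColumnsIfNotExists(metaConnection,
        PhoenixDatabaseMetaData.SYSTEM_CATALOG,
        MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 - 1,
        "LAST_DDL_TIMESTAMP BIGINT");            // placeholder for the column added by this PR
    ```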




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-729083958


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  5s |  https://github.com/apache/phoenix/pull/935 does not apply to 4.x. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/15/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525580744



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java
##########
@@ -654,10 +654,17 @@ public static PTable addDerivedColumnsFromParent(PTable view, PTable parentTable
         }
 
         long maxTableTimestamp = view.getTimeStamp();
+        long maxDDLTimestamp = view.getLastDDLTimestamp() != null ? view.getLastDDLTimestamp() : 0L;
         int numPKCols = view.getPKColumns().size();
-        // set the final table timestamp as the max timestamp of the view/view index or its
-        // ancestors
+        // set the final table timestamp and DDL timestamp as the respective max timestamps of the
+        // view/view index or its ancestors
         maxTableTimestamp = Math.max(maxTableTimestamp, parentTable.getTimeStamp());
+        //Diverged views no longer inherit ddl timestamps from their ancestors because they don't
+        // inherit column changes

Review comment:
       (Especially since this all appears to be _undocumented_ behavior.)




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r521529233



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +401,19 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        if (isAddingColumns) {
+            //We're changing the application-facing schema by adding a column, so update the DDL
+            // timestamp
+            long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
+            additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
+                clientTimeStamp, serverTimestamp));
+            for (PTable viewTable : childViews) {

Review comment:
       @ChinmaySKulkarni - is there an authoritative way to tell if a view is diverged? I don't see a PTable property. Is ViewUtil.isDivergedView(PTable) usable? Would I have to use ViewUtil.isDivergingView(PColumn, PTable) instead? Thanks!




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525452210



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
##########
@@ -747,4 +640,83 @@ private void verifyExpectedCellValue(byte[] rowKey, byte[] syscatBytes,
             assertArrayEquals(expectedDateTypeBytes, CellUtil.cloneValue(cell));
         }
     }
+
+    @Test
+    public void testLastDDLTimestampBootstrap() throws Exception {
+        //Create a table, view, and index
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullViewName = SchemaUtil.getTableName(schemaName, viewName);
+        try (Connection conn = getConnection(false, null)) {
+            conn.createStatement().execute(
+                "CREATE TABLE " + fullTableName
+                    + " (PK1 VARCHAR NOT NULL, PK2 VARCHAR, KV1 VARCHAR, KV2 VARCHAR CONSTRAINT " +
+                    "PK PRIMARY KEY(PK1, PK2)) ");
+            conn.createStatement().execute(
+                "CREATE VIEW " + fullViewName + " AS SELECT * FROM " + fullTableName);
+
+            //Now we null out any existing last ddl timestamps
+            nullDDLTimestamps(conn);
+            conn.commit();

Review comment:
       nit: We already have `conn.commit()` inside `nullDDLTimestamps()`

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
##########
@@ -91,122 +100,6 @@
 @Category(NeedsOwnMiniClusterTest.class)
 public class UpgradeIT extends ParallelStatsDisabledIT {
 
-    @Test
-    public void testMapTableToNamespaceDuringUpgrade()
-            throws SQLException, IOException, IllegalArgumentException, InterruptedException {
-        String[] strings = new String[] { "a", "b", "c", "d" };
-
-        try (Connection conn = DriverManager.getConnection(getUrl())) {
-            String schemaName = "TEST";
-            String phoenixFullTableName = schemaName + "." + generateUniqueName();
-            String indexName = "IDX_" + generateUniqueName();
-            String localIndexName = "LIDX_" + generateUniqueName();
-
-            String viewName = "VIEW_" + generateUniqueName();
-            String viewIndexName = "VIDX_" + generateUniqueName();
-
-            String[] tableNames = new String[] { phoenixFullTableName, schemaName + "." + indexName,
-                    schemaName + "." + localIndexName, "diff." + viewName, "test." + viewName, viewName};
-            String[] viewIndexes = new String[] { "diff." + viewIndexName, "test." + viewIndexName };
-            conn.createStatement().execute("CREATE TABLE " + phoenixFullTableName
-                    + "(k VARCHAR PRIMARY KEY, v INTEGER, f INTEGER, g INTEGER NULL, h INTEGER NULL)");
-            PreparedStatement upsertStmt = conn
-                    .prepareStatement("UPSERT INTO " + phoenixFullTableName + " VALUES(?, ?, 0, 0, 0)");
-            int i = 1;
-            for (String str : strings) {
-                upsertStmt.setString(1, str);
-                upsertStmt.setInt(2, i++);
-                upsertStmt.execute();
-            }
-            conn.commit();
-            // creating local index
-            conn.createStatement()
-                    .execute("create local index " + localIndexName + " on " + phoenixFullTableName + "(K)");
-            // creating global index
-            conn.createStatement().execute("create index " + indexName + " on " + phoenixFullTableName + "(k)");
-            // creating view in schema 'diff'
-            conn.createStatement().execute("CREATE VIEW diff." + viewName + " (col VARCHAR) AS SELECT * FROM " + phoenixFullTableName);
-            // creating view in schema 'test'
-            conn.createStatement().execute("CREATE VIEW test." + viewName + " (col VARCHAR) AS SELECT * FROM " + phoenixFullTableName);
-            conn.createStatement().execute("CREATE VIEW " + viewName + "(col VARCHAR) AS SELECT * FROM " + phoenixFullTableName);
-            // Creating index on views
-            conn.createStatement().execute("create index " + viewIndexName + "  on diff." + viewName + "(col)");
-            conn.createStatement().execute("create index " + viewIndexName + " on test." + viewName + "(col)");
-
-            // validate data
-            for (String tableName : tableNames) {
-                ResultSet rs = conn.createStatement().executeQuery("select * from " + tableName);
-                for (String str : strings) {
-                    assertTrue(rs.next());
-                    assertEquals(str, rs.getString(1));
-                }
-            }
-
-            // validate view Index data
-            for (String viewIndex : viewIndexes) {
-                ResultSet rs = conn.createStatement().executeQuery("select * from " + viewIndex);
-                for (String str : strings) {
-                    assertTrue(rs.next());
-                    assertEquals(str, rs.getString(2));
-                }
-            }
-
-            HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
-            assertTrue(admin.tableExists(phoenixFullTableName));
-            assertTrue(admin.tableExists(schemaName + QueryConstants.NAME_SEPARATOR + indexName));
-            assertTrue(admin.tableExists(MetaDataUtil.getViewIndexPhysicalName(Bytes.toBytes(phoenixFullTableName))));
-            Properties props = new Properties();
-            props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
-            props.setProperty(QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE, Boolean.toString(false));
-            admin.close();
-            PhoenixConnection phxConn = DriverManager.getConnection(getUrl(), props).unwrap(PhoenixConnection.class);
-            UpgradeUtil.upgradeTable(phxConn, phoenixFullTableName);
-            phxConn.close();
-            props = new Properties();
-            phxConn = DriverManager.getConnection(getUrl(), props).unwrap(PhoenixConnection.class);
-            // purge MetaDataCache except for system tables
-            phxConn.getMetaDataCache().pruneTables(new PMetaData.Pruner() {
-                @Override public boolean prune(PTable table) {
-                    return table.getType() != PTableType.SYSTEM;
-                }
-
-                @Override public boolean prune(PFunction function) {
-                    return false;
-                }
-            });
-            admin = phxConn.getQueryServices().getAdmin();
-            String hbaseTableName = SchemaUtil.getPhysicalTableName(Bytes.toBytes(phoenixFullTableName), true)
-                    .getNameAsString();
-            assertTrue(admin.tableExists(hbaseTableName));
-            assertTrue(admin.tableExists(Bytes.toBytes(hbaseTableName)));
-            assertTrue(admin.tableExists(schemaName + QueryConstants.NAMESPACE_SEPARATOR + indexName));
-            assertTrue(admin.tableExists(MetaDataUtil.getViewIndexPhysicalName(Bytes.toBytes(hbaseTableName))));
-            i = 0;
-            // validate data
-            for (String tableName : tableNames) {
-                ResultSet rs = phxConn.createStatement().executeQuery("select * from " + tableName);
-                for (String str : strings) {
-                    assertTrue(rs.next());
-                    assertEquals(str, rs.getString(1));
-                }
-            }
-            // validate view Index data
-            for (String viewIndex : viewIndexes) {
-                ResultSet rs = conn.createStatement().executeQuery("select * from " + viewIndex);
-                for (String str : strings) {
-                    assertTrue(rs.next());
-                    assertEquals(str, rs.getString(2));
-                }
-            }
-            PName tenantId = phxConn.getTenantId();
-            PName physicalName = PNameFactory.newName(hbaseTableName);
-            String newSchemaName = MetaDataUtil.getViewIndexSequenceSchemaName(physicalName, true);
-            String newSequenceName = MetaDataUtil.getViewIndexSequenceName(physicalName, tenantId, true);
-            verifySequenceValue(null, newSequenceName, newSchemaName, Short.MIN_VALUE + 3);
-            admin.close();
-        }
-    }
-
     @Test
     public void testMapMultiTenantTableToNamespaceDuringUpgrade() throws SQLException, SnapshotCreationException,

Review comment:
       Should we also move this test to `UpgradeNamespaceIT`?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
##########
@@ -387,6 +388,48 @@ public void testViewUsesTableLocalIndex() throws Exception {
         }
     }
 
+    @Test
+    public void testCreateViewTimestamp() throws Exception {
+        String tenantId = null;
+        createViewTimestampHelper(tenantId);
+    }
+
+    @Test
+    public void testCreateTenantViewTimestamp() throws Exception {
+        createViewTimestampHelper(TENANT1);
+    }
+
+    private void createViewTimestampHelper(String tenantId) throws SQLException {
+        Properties props = new Properties();
+        if (tenantId != null) {
+            props.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, tenantId);
+        } else {
+            tenantId = "";

Review comment:
       I think we can remove this `else` block since the reinitialized value for `tenantId` isn't being used anywhere 
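
    For reference, a sketch of the helper with the else block dropped (only the quoted prefix is
    shown; the rest of the method is assumed unchanged):

    ```java
    private void createViewTimestampHelper(String tenantId) throws SQLException {
        Properties props = new Properties();
        if (tenantId != null) {
            props.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, tenantId);
        }
        // ... remainder of the helper as in the PR
    }
    ```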

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to its ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            long tableDDLTimestamp = CreateTableIT.getLastDDLTimestamp(conn, dataTableFullName);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = CreateTableIT.getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);
+            //verify table DDL timestamp DID NOT change
+            assertEquals(tableDDLTimestamp, CreateTableIT.getLastDDLTimestamp(conn, dataTableFullName));
+            //verify view DDL timestamp changed
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, viewDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewColumnAddDDL);
+            //verify DDL timestamp changed because we added a column to the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, viewDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewColumnDropDDL);
+            //verify DDL timestamp changed because we dropped a column from the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, viewDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(tableColumnAddDDL);
+            //verify DDL timestamp DID NOT change because we added a column from the base table
+            assertEquals(viewDDLTimestamp, CreateTableIT.getLastDDLTimestamp(conn, viewFullName));
+            conn.createStatement().execute(tableColumnDropDDL);

Review comment:
       Column drop _should_ change the last DDL timestamp for child views as explained in my previous comment.

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +400,17 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        if (isAddingColumns) {
+            //We're changing the application-facing schema by adding a column, so update the DDL
+            // timestamp
+            long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
+            if (MetaDataUtil.isTableQueryable(table.getType())) {
+                additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
+                    clientTimeStamp, serverTimestamp));
+            }
+            //we don't need to update the DDL timestamp for child views, because when we look up

Review comment:
       Just for clarity, can we mention that this logic only applies to non-diverged views?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java
##########
@@ -654,10 +654,17 @@ public static PTable addDerivedColumnsFromParent(PTable view, PTable parentTable
         }
 
         long maxTableTimestamp = view.getTimeStamp();
+        long maxDDLTimestamp = view.getLastDDLTimestamp() != null ? view.getLastDDLTimestamp() : 0L;
         int numPKCols = view.getPKColumns().size();
-        // set the final table timestamp as the max timestamp of the view/view index or its
-        // ancestors
+        // set the final table timestamp and DDL timestamp as the respective max timestamps of the
+        // view/view index or its ancestors
         maxTableTimestamp = Math.max(maxTableTimestamp, parentTable.getTimeStamp());
+        //Diverged views no longer inherit ddl timestamps from their ancestors because they don't
+        // inherit column changes

Review comment:
       Except for when a column is dropped from some ancestor after the view diverged. In that case, they _do_ "inherit" that change, just like they would have before diverging.

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";

Review comment:
       We should ideally drop a pre-existing column from the parent for this test rather than a column that was newly added to the parent itself.
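   
   For example, something along these lines (a sketch only; COL5 is a hypothetical extra column added to the original CREATE TABLE so that a pre-existing parent column can be dropped after the view diverges):
   
   ```java
   // Hypothetical variant of this test's DDL: give the parent an extra non-PK
   // column up front...
   String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
       + " %s ID char(1) NOT NULL,"
       + " COL1 integer NOT NULL,"
       + " COL2 bigint,"
       + " COL5 varchar(50),"
       + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
       + " ) %s");
   // ...and, after the view has diverged, drop that pre-existing column from the
   // parent instead of one that was just added.
   String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL5";
   ```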

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2846,21 +2872,25 @@ private MetaDataMutationResult mutateColumn(
                 separateLocalAndRemoteMutations(region, tableMetadata, localMutations,
                         remoteMutations);
                 if (!remoteMutations.isEmpty()) {
-                    // there should only be remote mutations if we are adding a column to a view
+                    // there should only be remote mutations if we are updating the last ddl

Review comment:
       This comment is no longer valid, right?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
##########
@@ -387,6 +388,48 @@ public void testViewUsesTableLocalIndex() throws Exception {
         }
     }
 
+    @Test
+    public void testCreateViewTimestamp() throws Exception {
+        String tenantId = null;
+        createViewTimestampHelper(tenantId);

Review comment:
       nit: Don't need the `tenantId` variable here

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior

Review comment:
       This is not entirely true. I know this is frustrating (I was in the same boat), but in 4.15+ based on what I've gathered, here are the rules for column inheritance w.r.t. diverged and non-diverged views:
   
   ### Dropping columns from a parent
   - Any columns dropped from an ancestor that is a **physical base table** are "inherited" by its entire child view hierarchy. This is obvious since the physical column no longer exists and so any descendant view wouldn't be able to project anything for those dropped columns.
     - Since the shape of the child view changes in this case, we should probably use `max(view_DDL_ts, parent_DDL_ts_corresponding_to_col_drop)`. 
     - Not sure how easy a change it would be to store this timestamp corresponding to the column dropped from the parent.
   - Any columns dropped from an ancestor that is a **view** are "inherited" by its entire child view hierarchy **even if a child view has diverged**.
     - Same here
   
   ### Adding columns to a parent
   - Any columns added to an ancestor that is a **view or physical base table** are only inherited by its descendant views that have **not diverged**.
     - I think your changes already capture this.
   
   Overall, the rule seems to be:
   Once a view diverges from its parent, any columns added to an ancestor base table/view are no longer propagated to the view. A column dropped on an ancestor base table/view is, however, always propagated to the child view.
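   
   For illustration, here is a minimal JDBC sketch of those rules (the table/view names, column layout, and connection URL are hypothetical, not taken from this patch):
   
   ```java
   import java.sql.Connection;
   import java.sql.DriverManager;
   import java.sql.Statement;
   
   public class DivergedViewInheritanceSketch {
       public static void main(String[] args) throws Exception {
           // Placeholder connection URL for a local Phoenix cluster
           String jdbcUrl = "jdbc:phoenix:localhost";
           try (Connection conn = DriverManager.getConnection(jdbcUrl);
                Statement stmt = conn.createStatement()) {
               stmt.execute("CREATE TABLE T (ID CHAR(1) NOT NULL PRIMARY KEY, COL1 INTEGER, COL2 BIGINT)");
               stmt.execute("CREATE VIEW V AS SELECT * FROM T");
               // V diverges from T by dropping an inherited column
               stmt.execute("ALTER VIEW V DROP COLUMN COL2");
               // A column added to the ancestor is NOT propagated to the diverged
               // view: V does not pick up COL3
               stmt.execute("ALTER TABLE T ADD COL3 VARCHAR(50)");
               // A column dropped from the ancestor IS still propagated, even after
               // divergence: COL1 disappears from V as well
               stmt.execute("ALTER TABLE T DROP COLUMN COL1");
           }
       }
   }
   ```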

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DropColumnMutator.java
##########
@@ -268,6 +272,16 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table,
             }
 
         }
+        //We're changing the application-facing schema by dropping a column, so update the DDL
+        // timestamp to current _server_ timestamp
+        if (MetaDataUtil.isTableQueryable(table.getType())) {
+            long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
+            additionalTableMetaData.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
+                clientTimeStamp, serverTimestamp));
+        }
+        //we don't need to update the DDL timestamp for any child views we may have, because

Review comment:
       For clarity, can we mention that this logic also applies to diverged views?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2846,21 +2872,25 @@ private MetaDataMutationResult mutateColumn(
                 separateLocalAndRemoteMutations(region, tableMetadata, localMutations,
                         remoteMutations);
                 if (!remoteMutations.isEmpty()) {
-                    // there should only be remote mutations if we are adding a column to a view
+                    // there should only be remote mutations if we are updating the last ddl
+                    // timestamp for child views, or we are adding a column to a view
                     // that uses encoded column qualifiers (the remote mutations are to update the
                     // encoded column qualifier counter on the parent table)
-                    if (mutator.getMutateColumnType() == ColumnMutator.MutateColumnType.ADD_COLUMN
+                    if (childViews.size() > 0 || ( mutator.getMutateColumnType() == ColumnMutator.MutateColumnType.ADD_COLUMN

Review comment:
       Shouldn't this be done when the table/view has child views, even if `remoteMutations` is empty?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";

Review comment:
       Same here.

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java
##########
@@ -654,10 +654,17 @@ public static PTable addDerivedColumnsFromParent(PTable view, PTable parentTable
         }
 
         long maxTableTimestamp = view.getTimeStamp();
+        long maxDDLTimestamp = view.getLastDDLTimestamp() != null ? view.getLastDDLTimestamp() : 0L;
         int numPKCols = view.getPKColumns().size();
-        // set the final table timestamp as the max timestamp of the view/view index or its
-        // ancestors
+        // set the final table timestamp and DDL timestamp as the respective max timestamps of the
+        // view/view index or its ancestors
         maxTableTimestamp = Math.max(maxTableTimestamp, parentTable.getTimeStamp());
+        //Diverged views no longer inherit ddl timestamps from their ancestors because they don't
+        // inherit column changes

Review comment:
       Maybe we need to introduce another field in PTable to explicitly store any drop column DDLs being issued to a table/view. Then we could use that to get the lastDDLTs of diverged views.
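   
   A rough sketch of how such a field might be consumed when resolving a diverged view's timestamp (the getter name and this helper are hypothetical; nothing like this exists in PTable today):
   
   ```java
   // Hypothetical helper: if PTable also tracked the timestamp of the most recent
   // DROP COLUMN issued against a table/view (say via getLastColumnDropTimestamp()),
   // a diverged view's effective DDL timestamp could still reflect inherited drops.
   static long effectiveLastDdlTimestamp(long viewLastDdlTimestamp,
                                         Long parentLastColumnDropTimestamp) {
       // Column adds on ancestors are ignored once the view has diverged, but
       // column drops still propagate, so only the parent's last drop timestamp
       // is folded into the max.
       if (parentLastColumnDropTimestamp == null) {
           return viewLastDdlTimestamp;
       }
       return Math.max(viewLastDdlTimestamp, parentLastColumnDropTimestamp);
   }
   ```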

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
##########
@@ -107,6 +107,29 @@
             HColumnDescriptor.KEEP_DELETED_CELLS,
             HColumnDescriptor.REPLICATION_SCOPE);
 
+    public static Put getLastDDLTimestampUpdate(byte[] tableHeaderRowKey,
+                                                     long clientTimestamp,
+                                                     long lastDDLTimestamp) {
+        //use client timestamp as the timestamp of the Cell, to match the other Cells that might
+        // be created by this DDL. But the actual value will be a _server_ timestamp
+        Put p = new Put(tableHeaderRowKey, clientTimestamp);
+        byte[] lastDDLTimestampBytes = PLong.INSTANCE.toBytes(lastDDLTimestamp);
+        p.addColumn(PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES,
+            PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES, lastDDLTimestampBytes);
+        return p;
+    }
+
+    /**
+     * Checks if a table is meant to be queried directly (and hence is relevant to external
+     * systems tracking Phoenix schema)
+     * @param tableType
+     * @return True if a table or view, false otherwise (such as for an index, system table, or
+     * subquery)
+     */
+    public static boolean isTableQueryable(PTableType tableType) {

Review comment:
       I also think this name is a little confusing. Maybe something like `isTableTypeDirectlyQueried()` is more in line with what we mean




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-716834823


   Three tests failed (IndexExtendedIT, UpgradeIT, ImmutableIndexIT), but two were timeouts (the index tests), and all pass when run locally, so they appear to be flappers.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-728212707


   @shahrs87 - Of the three findbugs errors, the first is unclear to me, the second complains about a potential resource leak in the bootstrap code (a valid complaint), and the third complains about a potential SQL injection vulnerability (incorrect, but understandable). I've changed the code to use a PreparedStatement in a try-with-resources block that executes a constant string.
   
   I went through the checkstyle findings. The line-length check looks misconfigured (it's set to 80 but should be 100, and we routinely let lines a little longer than 100 slide). I did go through and fix a bunch of whitespace complaints around `if` and `for` blocks, plus marked some params final, to clear out some of the noise.
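   
   For reference, the general shape of that fix (a generic JDBC sketch with placeholder table and column names, not the actual UpgradeUtil code):
   
   ```java
   import java.sql.Connection;
   import java.sql.PreparedStatement;
   import java.sql.SQLException;
   
   class ConstantSqlUpsertSketch {
       // Built entirely from string literals, so the SQL is a compile-time constant
       // and findbugs has no non-constant statement execution to flag.
       // MY_TABLE and its columns are placeholders, not the real bootstrap UPSERT.
       private static final String UPSERT_SQL =
               "UPSERT INTO MY_TABLE (ID, LAST_DDL_TIMESTAMP) "
                   + "SELECT ID, PHOENIX_ROW_TIMESTAMP() FROM MY_TABLE";
   
       static void bootstrap(Connection conn) throws SQLException {
           // try-with-resources guarantees the PreparedStatement is closed even if
           // execute() throws, which addresses the resource-leak warning.
           try (PreparedStatement ps = conn.prepareStatement(UPSERT_SQL)) {
               ps.execute();
           }
       }
   }
   ```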


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-719090292


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 35s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  12m 44s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 39s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   8m  8s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   5m 44s |  root in 4.x has 1007 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   3m 47s |  phoenix-core in 4.x has 953 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   7m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 56s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 56s |  the patch passed  |
   | -1 :x: |  checkstyle  |   8m 18s |  root: The patch generated 261 new + 22473 unchanged - 125 fixed = 22734 total (was 22598)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   1m  4s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m 29s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   4m  5s |  phoenix-core generated 1 new + 952 unchanged - 1 fixed = 953 total (was 953)  |
   | -1 :x: |  spotbugs  |   5m 24s |  root generated 1 new + 1006 unchanged - 1 fixed = 1007 total (was 1007)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 136m 55s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 11s |  The patch does not generate ASF License warnings.  |
   |  |   | 207m 19s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   | Failed junit tests | phoenix.end2end.index.PartialIndexRebuilderIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/5/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool xml |
   | uname | Linux 276926976dc9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 2aaf2e2 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/5/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/5/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/5/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/5/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/5/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/5/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/5/testReport/ |
   | Max. process+thread count | 6771 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/5/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r521030508



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
##########
@@ -2586,4 +2588,18 @@ public static boolean isNoUpgradeSet(Properties props) {
     public static void doNotUpgradeOnFirstConnection(Properties props) {
         props.setProperty(DO_NOT_UPGRADE, String.valueOf(true));
     }
+
+    //When upgrading to Phoenix 4.16, make each existing table's DDL timestamp equal to its last
+    // updated row timestamp.
+    public static void bootstrapLastDDLTimestamp(PhoenixConnection metaConnection) throws SQLException  {
+        String pkCols = TENANT_ID + ", " + TABLE_SCHEM +
+            ", " + TABLE_NAME + ", " + COLUMN_NAME + ", " + COLUMN_FAMILY;
+        String upsertSql =
+            "UPSERT INTO " + SYSTEM_CATALOG_NAME + " (" + pkCols + ", " +
+        LAST_DDL_TIMESTAMP + ")" + " " +
+            "SELECT " + pkCols + ", PHOENIX_ROW_TIMESTAMP() FROM " + SYSTEM_CATALOG_NAME + " " +
+                "WHERE " + TABLE_TYPE + " " + " in " + "('u','v')";

Review comment:
       Because I affirmatively intend for it to affect only tables and views, rather than defining it negatively as "not system tables or indexes". I'll add a comment in the code though; good point.
   
   The reason, btw, is that only tables and views are relevant schema for _external_ systems trying to interpret Phoenix data. (This is a pre-req change for Phoenix change data capture, to allow Phoenix DML to be emitted as schema'ed messages into a message bus.) Indexes are an internal Phoenix optimization, and system tables are internal Phoenix config; no other system or external schema registry should care about them. 
   
   If some other kind of Phoenix schema object is created later, I'd want whoever creates it to opt in to having a DDL timestamp, rather than needing to remember to opt out.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-727071089


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  11m 38s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |   9m  5s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m 16s |  root in 4.x has 1000 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 53s |  phoenix-core in 4.x has 946 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 23s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 30s |  root: The patch generated 394 new + 25081 unchanged - 238 fixed = 25475 total (was 25319)  |
   | +1 :green_heart: |  prototool  |   0m  2s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 41s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  4s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  3s |  phoenix-core generated 3 new + 945 unchanged - 1 fixed = 948 total (was 946)  |
   | -1 :x: |  spotbugs  |   4m  7s |  root generated 3 new + 999 unchanged - 1 fixed = 1002 total (was 1000)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 131m 26s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  The patch does not generate ASF License warnings.  |
   |  |   | 196m  1s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:[line 2604] is not discharged |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) passes a nonconstant String to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:[line 2604] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) may fail to clean up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:up java.sql.Statement  Obligation to clean up resource created at UpgradeUtil.java:[line 2604] is not discharged |
   |  |  org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection) passes a nonconstant String to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:to an execute or addBatch method on an SQL statement  At UpgradeUtil.java:[line 2604] |
   | Failed junit tests | phoenix.end2end.index.DropColumnIT |
   |   | phoenix.end2end.BackwardCompatibilityIT |
   |   | phoenix.end2end.UpsertSelectIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/12/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool xml |
   | uname | Linux f1b085061fa0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / b97696b |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/12/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/12/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/12/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/12/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/12/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/12/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/12/testReport/ |
   | Max. process+thread count | 6281 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/12/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] shahrs87 edited a comment on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
shahrs87 edited a comment on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-726892294


   > I'm curious -- what is the source for the "split lines indent 8 spaces" rule you mention?
   
   Actually found it. It comes from this property:
   `<setting id="org.eclipse.jdt.core.formatter.continuation_indentation" value="2"/>`
   The value 2 is a multiplier of the tabulation size property:
   `<setting id="org.eclipse.jdt.core.formatter.tabulation.size" value="4"/>`
   That's how it comes to 8 spaces. :)


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r522551653



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
##########
@@ -3063,60 +3063,62 @@ public boolean isViewReferenced() {
              */
             EncodedCQCounter cqCounterToBe = tableType == PTableType.VIEW ? NULL_COUNTER : cqCounter;
             PTable table = new PTableImpl.Builder()
-                    .setType(tableType)

Review comment:
       Sorry for the inconvenience, but the formatting changes fix spacing that was previously incorrect. GitHub does have a "Hide whitespace changes" option under the settings icon near the top of the page, btw, to make the whitespace-only changes easier to ignore.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-729958844


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  10m 46s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |  11m 16s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m  0s |  root in 4.x has 1000 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 41s |  phoenix-core in 4.x has 946 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 23s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 32s |  root: The patch generated 301 new + 25102 unchanged - 220 fixed = 25403 total (was 25322)  |
   | +1 :green_heart: |  prototool  |   0m  2s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  javadoc  |   0m 43s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  4s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  6s |  phoenix-core generated 2 new + 945 unchanged - 1 fixed = 947 total (was 946)  |
   | -1 :x: |  spotbugs  |   4m  4s |  root generated 2 new + 999 unchanged - 1 fixed = 1001 total (was 1000)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 132m  9s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  The patch does not generate ASF License warnings.  |
   |  |   | 196m 57s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 330] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/17/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 8844591b7f86 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / 510ca96 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/17/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/17/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/17/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/17/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/17/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/17/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/17/testReport/ |
   | Max. process+thread count | 6450 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/17/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r527047479



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
##########
@@ -387,6 +388,48 @@ public void testViewUsesTableLocalIndex() throws Exception {
         }
     }
 
+    @Test
+    public void testCreateViewTimestamp() throws Exception {
+        String tenantId = null;
+        createViewTimestampHelper(tenantId);

Review comment:
       I could just pass null, but this seemed more self-documenting (the variable name makes it clear that I'm passing a null tenant id, rather than making someone look at the function definition to figure that out).




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525573661



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2846,21 +2872,25 @@ private MetaDataMutationResult mutateColumn(
                 separateLocalAndRemoteMutations(region, tableMetadata, localMutations,
                         remoteMutations);
                 if (!remoteMutations.isEmpty()) {
-                    // there should only be remote mutations if we are adding a column to a view
+                    // there should only be remote mutations if we are updating the last ddl

Review comment:
       Oops, forgot to undo that comment change. Thanks, will fix. 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525571557



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior

Review comment:
       That doesn't seem like desirable behavior to me as a feature. As we discussed offline, I think diverged views should have been written to get all ancestor DDL changes. (To be precise, I think we should allow column projection in view definitions, e.g. SELECT COL1, COL2 FROM FOO rather than SELECT * FROM FOO, rather than having diverged views at all, but that's a whole new feature and out of scope for this discussion.)
   
   But setting all that aside, it's the _column drops_ that are the dangerous operations, because they can break existing queries, not _column adds_, which are always benign. (Don't care about a new column? Don't select it!) So if I were trying to allow a view to split off from its parent and be semi-independent, it's the _drops_ I'd try to shield it from, not the additions.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r523105176



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to their ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);

Review comment:
       There's already a check in a different test that dropping a view's column doesn't change the base table's timestamp, but no harm in also having a check here since it's an inherited column. 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-717647493


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   5m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  10m 44s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |  10m 10s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   3m 59s |  root in 4.x has 1007 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 41s |  phoenix-core in 4.x has 953 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 24s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 22s |  the patch passed  |
   | -1 :x: |  checkstyle  |  10m 20s |  root: The patch generated 242 new + 22485 unchanged - 111 fixed = 22727 total (was 22596)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML file.  |
   | -1 :x: |  javadoc  |   0m 43s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  2s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  7s |  phoenix-core generated 2 new + 952 unchanged - 1 fixed = 954 total (was 953)  |
   | -1 :x: |  spotbugs  |   4m  5s |  root generated 2 new + 1006 unchanged - 1 fixed = 1008 total (was 1007)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 122m 19s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  The patch does not generate ASF License warnings.  |
   |  |   | 189m 46s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.AddColumnMutator.validateAndAddMetadata(PTable, byte[][], List, Region, List, List, long, long, boolean):in org.apache.phoenix.coprocessor.AddColumnMutator.validateAndAddMetadata(PTable, byte[][], List, Region, List, List, long, long, boolean): new String(byte[])  At AddColumnMutator.java:[line 341] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   | FindBugs | module:root |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.AddColumnMutator.validateAndAddMetadata(PTable, byte[][], List, Region, List, List, long, long, boolean):in org.apache.phoenix.coprocessor.AddColumnMutator.validateAndAddMetadata(PTable, byte[][], List, Region, List, List, long, long, boolean): new String(byte[])  At AddColumnMutator.java:[line 341] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   | Failed junit tests | phoenix.end2end.index.LocalMutableTxIndexIT |
   |   | phoenix.end2end.OrphanViewToolIT |
   |   | phoenix.end2end.UpsertSelectIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/4/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool xml |
   | uname | Linux 5c9c5c9e0156 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / ac0538b |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/4/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/4/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/4/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/4/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/4/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/4/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/4/testReport/ |
   | Max. process+thread count | 7059 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/4/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r527049260



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
##########
@@ -91,122 +100,6 @@
 @Category(NeedsOwnMiniClusterTest.class)
 public class UpgradeIT extends ParallelStatsDisabledIT {
 
-    @Test
-    public void testMapTableToNamespaceDuringUpgrade()
-            throws SQLException, IOException, IllegalArgumentException, InterruptedException {
-        String[] strings = new String[] { "a", "b", "c", "d" };
-
-        try (Connection conn = DriverManager.getConnection(getUrl())) {
-            String schemaName = "TEST";
-            String phoenixFullTableName = schemaName + "." + generateUniqueName();
-            String indexName = "IDX_" + generateUniqueName();
-            String localIndexName = "LIDX_" + generateUniqueName();
-
-            String viewName = "VIEW_" + generateUniqueName();
-            String viewIndexName = "VIDX_" + generateUniqueName();
-
-            String[] tableNames = new String[] { phoenixFullTableName, schemaName + "." + indexName,
-                    schemaName + "." + localIndexName, "diff." + viewName, "test." + viewName, viewName};
-            String[] viewIndexes = new String[] { "diff." + viewIndexName, "test." + viewIndexName };
-            conn.createStatement().execute("CREATE TABLE " + phoenixFullTableName
-                    + "(k VARCHAR PRIMARY KEY, v INTEGER, f INTEGER, g INTEGER NULL, h INTEGER NULL)");
-            PreparedStatement upsertStmt = conn
-                    .prepareStatement("UPSERT INTO " + phoenixFullTableName + " VALUES(?, ?, 0, 0, 0)");
-            int i = 1;
-            for (String str : strings) {
-                upsertStmt.setString(1, str);
-                upsertStmt.setInt(2, i++);
-                upsertStmt.execute();
-            }
-            conn.commit();
-            // creating local index
-            conn.createStatement()
-                    .execute("create local index " + localIndexName + " on " + phoenixFullTableName + "(K)");
-            // creating global index
-            conn.createStatement().execute("create index " + indexName + " on " + phoenixFullTableName + "(k)");
-            // creating view in schema 'diff'
-            conn.createStatement().execute("CREATE VIEW diff." + viewName + " (col VARCHAR) AS SELECT * FROM " + phoenixFullTableName);
-            // creating view in schema 'test'
-            conn.createStatement().execute("CREATE VIEW test." + viewName + " (col VARCHAR) AS SELECT * FROM " + phoenixFullTableName);
-            conn.createStatement().execute("CREATE VIEW " + viewName + "(col VARCHAR) AS SELECT * FROM " + phoenixFullTableName);
-            // Creating index on views
-            conn.createStatement().execute("create index " + viewIndexName + "  on diff." + viewName + "(col)");
-            conn.createStatement().execute("create index " + viewIndexName + " on test." + viewName + "(col)");
-
-            // validate data
-            for (String tableName : tableNames) {
-                ResultSet rs = conn.createStatement().executeQuery("select * from " + tableName);
-                for (String str : strings) {
-                    assertTrue(rs.next());
-                    assertEquals(str, rs.getString(1));
-                }
-            }
-
-            // validate view Index data
-            for (String viewIndex : viewIndexes) {
-                ResultSet rs = conn.createStatement().executeQuery("select * from " + viewIndex);
-                for (String str : strings) {
-                    assertTrue(rs.next());
-                    assertEquals(str, rs.getString(2));
-                }
-            }
-
-            HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
-            assertTrue(admin.tableExists(phoenixFullTableName));
-            assertTrue(admin.tableExists(schemaName + QueryConstants.NAME_SEPARATOR + indexName));
-            assertTrue(admin.tableExists(MetaDataUtil.getViewIndexPhysicalName(Bytes.toBytes(phoenixFullTableName))));
-            Properties props = new Properties();
-            props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
-            props.setProperty(QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE, Boolean.toString(false));
-            admin.close();
-            PhoenixConnection phxConn = DriverManager.getConnection(getUrl(), props).unwrap(PhoenixConnection.class);
-            UpgradeUtil.upgradeTable(phxConn, phoenixFullTableName);
-            phxConn.close();
-            props = new Properties();
-            phxConn = DriverManager.getConnection(getUrl(), props).unwrap(PhoenixConnection.class);
-            // purge MetaDataCache except for system tables
-            phxConn.getMetaDataCache().pruneTables(new PMetaData.Pruner() {
-                @Override public boolean prune(PTable table) {
-                    return table.getType() != PTableType.SYSTEM;
-                }
-
-                @Override public boolean prune(PFunction function) {
-                    return false;
-                }
-            });
-            admin = phxConn.getQueryServices().getAdmin();
-            String hbaseTableName = SchemaUtil.getPhysicalTableName(Bytes.toBytes(phoenixFullTableName), true)
-                    .getNameAsString();
-            assertTrue(admin.tableExists(hbaseTableName));
-            assertTrue(admin.tableExists(Bytes.toBytes(hbaseTableName)));
-            assertTrue(admin.tableExists(schemaName + QueryConstants.NAMESPACE_SEPARATOR + indexName));
-            assertTrue(admin.tableExists(MetaDataUtil.getViewIndexPhysicalName(Bytes.toBytes(hbaseTableName))));
-            i = 0;
-            // validate data
-            for (String tableName : tableNames) {
-                ResultSet rs = phxConn.createStatement().executeQuery("select * from " + tableName);
-                for (String str : strings) {
-                    assertTrue(rs.next());
-                    assertEquals(str, rs.getString(1));
-                }
-            }
-            // validate view Index data
-            for (String viewIndex : viewIndexes) {
-                ResultSet rs = conn.createStatement().executeQuery("select * from " + viewIndex);
-                for (String str : strings) {
-                    assertTrue(rs.next());
-                    assertEquals(str, rs.getString(2));
-                }
-            }
-            PName tenantId = phxConn.getTenantId();
-            PName physicalName = PNameFactory.newName(hbaseTableName);
-            String newSchemaName = MetaDataUtil.getViewIndexSequenceSchemaName(physicalName, true);
-            String newSequenceName = MetaDataUtil.getViewIndexSequenceName(physicalName, tenantId, true);
-            verifySequenceValue(null, newSequenceName, newSchemaName, Short.MIN_VALUE + 3);
-            admin.close();
-        }
-    }
-
     @Test
     public void testMapMultiTenantTableToNamespaceDuringUpgrade() throws SQLException, SnapshotCreationException,

Review comment:
       Done.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r522550707



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
##########
@@ -107,6 +107,29 @@
             HColumnDescriptor.KEEP_DELETED_CELLS,
             HColumnDescriptor.REPLICATION_SCOPE);
 
+    public static Put getLastDDLTimestampUpdate(byte[] tableHeaderRowKey,
+                                                     long clientTimestamp,
+                                                     long lastDDLTimestamp) {
+        //use client timestamp as the timestamp of the Cell, to match the other Cells that might
+        // be created by this DDL. But the actual value will be a _server_ timestamp
+        Put p = new Put(tableHeaderRowKey, clientTimestamp);
+        byte[] lastDDLTimestampBytes = PLong.INSTANCE.toBytes(lastDDLTimestamp);
+        p.addColumn(PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES,
+            PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES, lastDDLTimestampBytes);
+        return p;
+    }
+
+    /**
+     * Checks if a table is meant to be queried directly (and hence is relevant to external
+     * systems tracking Phoenix schema)
+     * @param tableType
+     * @return True if a table or view, false otherwise (such as for an index, system table, or
+     * subquery)
+     */
+    public static boolean isTableQueryable(PTableType tableType) {

Review comment:
       I struggled to find a good name here. "Queryable" was the best I could think of, by which I meant that tables and views are meant to be queried, but while you _can_ directly query an index and a system table, end users aren't really _supposed_ to. The internal schema of an index or a system table shouldn't be relevant to an external schema registry trying to track the schema of Phoenix tables. 
   
   Happy to hear other naming suggestions. 
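   
   A minimal sketch of the semantics described above, assuming the method simply whitelists the two user-facing PTableType values (only the signature comes from the patch; the body here is an assumption):
   
        public static boolean isTableQueryable(PTableType tableType) {
            // Only user-facing entities (tables and views) are relevant to an
            // external schema registry; indexes, system tables and subqueries are not.
            return tableType == PTableType.TABLE || tableType == PTableType.VIEW;
        }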




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-726881073


   @shahrs87 - I'm curious -- what is the source for the "split lines indent 8 spaces" rule you mention? You may very well be right, but I don't see anything in dev/PhoenixCodeTemplate.xml that has a value of 8 that looks relevant, and I do see `<setting id="org.eclipse.jdt.core.formatter.indentation.size" value="4"/>`


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r526307458



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to its ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp.
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";

Review comment:
       Switched the test to drop an original table column. (But did not add an additional test because of the open question about what diverged views should do.) 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r512847398



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
##########
@@ -3048,7 +3048,13 @@ public boolean isViewReferenced() {
              * the counter as NULL_COUNTER for extra safety.
              */
             EncodedCQCounter cqCounterToBe = tableType == PTableType.VIEW ? NULL_COUNTER : cqCounter;
-            PTable table = new PTableImpl.Builder()
+            PTable table;
+            //better to use the table sent back from the server so we get an accurate DDL
+            // timestamp, which is server-generated.
+            if (result.getTable() != null ) {

Review comment:
       It seems strange that System.Catalog is the source of truth and we've just been there on the server, yet the client copy controls. Will look closer at MetadataClient as you suggest to understand why. 
   
   Are we missing tests? Because assuming the client-side copy _should_ control, my replacing it with the server-side copy should have broken tests somewhere, and it doesn't appear to. 




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525578347



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1216,251 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parents, and so no longer inherits any additional schema
+        // changes that are applied to its ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp.
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";

Review comment:
       If the column is added to the base table after its child view diverges, the view won't inherit it anyway, so dropping the same column from the parent shouldn't change the lastDDLTs of the view either. Conversely, if an existing column is dropped, that should change the ts of the view. In fact this sounds like a good additional test to add.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r512338581



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +399,10 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        //We're changing the application-facing schema by adding a column, so update the DDL
+        // timestamp
+        additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,

Review comment:
       This will get triggered even for `ALTER TABLE/VIEW SET <property>`. I thought we didn't want to update the ddl timestamp in those cases.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r513080370



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
##########
@@ -910,6 +911,48 @@ public void testTableDescriptorPriority() throws SQLException, IOException {
         }
     }
 
+    @Test
+    public void testCreateTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        final String schemaName = generateUniqueName();
+        final String tableName = generateUniqueName();
+        final String dataTableFullName = SchemaUtil.getTableName(schemaName, tableName);
+        String ddl =
+            "CREATE TABLE " + dataTableFullName + " (\n" + "ID1 VARCHAR(15) NOT NULL,\n"
+                + "ID2 VARCHAR(15) NOT NULL,\n" + "CREATED_DATE DATE,\n"
+                + "CREATION_TIME BIGINT,\n" + "LAST_USED DATE,\n"
+                + "CONSTRAINT PK PRIMARY KEY (ID1, ID2)) ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(ddl);
+            verifyLastDDLTimestamp(schemaName, tableName, dataTableFullName, startTS, conn);
+        }
+    }
+
+    public static long verifyLastDDLTimestamp(String schemaName, String tableName,
+                                              String dataTableFullName, long startTS, Connection conn) throws SQLException {
+        long endTS = EnvironmentEdgeManager.currentTimeMillis();
+        //First try the JDBC metadata API
+        PhoenixDatabaseMetaData metadata = (PhoenixDatabaseMetaData) conn.getMetaData();
+        ResultSet rs = metadata.getTables("", schemaName, tableName, null);
+        assertTrue("No metadata returned", rs.next());
+        Long ddlTimestamp = rs.getLong(PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP);
+        assertNotNull("JDBC DDL timestamp is null!", ddlTimestamp);
+        assertTrue("JDBC DDL Timestamp not in the right range!",
+            ddlTimestamp >= startTS && ddlTimestamp <= endTS);
+        //Now try the PTable API
+        PTable table = PhoenixRuntime.getTableNoCache(conn, dataTableFullName);

Review comment:
       Done.

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
##########
@@ -107,6 +107,18 @@
             HColumnDescriptor.KEEP_DELETED_CELLS,
             HColumnDescriptor.REPLICATION_SCOPE);
 
+    public static Mutation getLastDDLTimestampUpdate(byte[] tableHeaderRowKey,

Review comment:
       Done




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r520170305



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +401,19 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        if (isAddingColumns) {
+            //We're changing the application-facing schema by adding a column, so update the DDL
+            // timestamp
+            long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
+            additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
+                clientTimeStamp, serverTimestamp));
+            for (PTable viewTable : childViews) {

Review comment:
       In splittable-SYSTEM.CATALOG world, we avoid making synchronous changes to child views when the parent changes, so if a column is added to the parent, we wouldn't add that to each child view's metadata. Instead on resolving the view, we'd combine the parent columns and inherit them that way. This was done for scalability in case child views span many SYSCAT regions. I'm wondering if we can use similar inheritance logic for lastDDLTs of child views. 
   
   This becomes more complicated for diverged views. In that case, any columns added to a parent view/physical table are inherited by its child views only if they haven't diverged. In those cases, the LastDDLTs for diverged views ideally shouldn't even be modified if a column is added to its parent.
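   
   One way to picture that inheritance approach, as a sketch only (accessor names such as getLastDDLTimestamp(), and the isDiverged() check, are assumptions rather than the patch's actual API):
   
        // Resolve the child view's effective DDL timestamp at read time instead of
        // rewriting every child row when the parent changes.
        // Assumes getLastDDLTimestamp() returns a nullable Long.
        static long effectiveLastDdlTimestamp(PTable view, PTable parent) {
            long own = view.getLastDDLTimestamp() == null ? 0L : view.getLastDDLTimestamp();
            if (parent == null || isDiverged(view)) {
                // A diverged view no longer inherits ancestor schema changes,
                // so its own timestamp stays authoritative.
                return own;
            }
            long inherited = parent.getLastDDLTimestamp() == null ? 0L : parent.getLastDDLTimestamp();
            return Math.max(own, inherited);
        }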

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
##########
@@ -94,7 +94,7 @@
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_13_0 = MIN_SYSTEM_TABLE_TIMESTAMP_4_11_0;
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_14_0 = MIN_TABLE_TIMESTAMP + 28;
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0 = MIN_TABLE_TIMESTAMP + 29;
-    public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 = MIN_TABLE_TIMESTAMP + 31;
+    public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 = MIN_TABLE_TIMESTAMP + 33;

Review comment:
       Why are we changing this?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DropColumnMutator.java
##########
@@ -268,7 +272,20 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table,
             }
 
         }
-        tableMetaData.addAll(additionalTableMetaData);
+        if (isDroppingColumns) {
+            //We're changing the application-facing schema by dropping a column, so update the DDL
+            // timestamp to current _server_ timestamp
+            long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
+            additionalTableMetaData.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
+                clientTimeStamp, serverTimestamp));
+            for (PTable viewTable : childViews) {

Review comment:
       Same concerns here with parent column inheritance and diverged views, etc.

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/AddColumnMutator.java
##########
@@ -399,6 +401,19 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table, byte[][] rowK
                                 rowKeyMetaData[TABLE_NAME_INDEX])));
             }
         }
+        if (isAddingColumns) {
+            //We're changing the application-facing schema by adding a column, so update the DDL
+            // timestamp
+            long serverTimestamp = EnvironmentEdgeManager.currentTimeMillis();
+            additionalTableMetadataMutations.add(MetaDataUtil.getLastDDLTimestampUpdate(tableHeaderRowKey,
+                clientTimeStamp, serverTimestamp));
+            for (PTable viewTable : childViews) {

Review comment:
       Another complication is that pre-4.15 clients do not have this logic to store only view-specific columns. They don't have EXCLUDED_COLUMN linking rows either, and the parent column combining logic in those cases is different too.

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
##########
@@ -2586,4 +2588,18 @@ public static boolean isNoUpgradeSet(Properties props) {
     public static void doNotUpgradeOnFirstConnection(Properties props) {
         props.setProperty(DO_NOT_UPGRADE, String.valueOf(true));
     }
+
+    //When upgrading to Phoenix 4.16, make each existing table's DDL timestamp equal to its last
+    // updated row timestamp.
+    public static void bootstrapLastDDLTimestamp(PhoenixConnection metaConnection) throws SQLException  {
+        String pkCols = TENANT_ID + ", " + TABLE_SCHEM +
+            ", " + TABLE_NAME + ", " + COLUMN_NAME + ", " + COLUMN_FAMILY;
+        String upsertSql =
+            "UPSERT INTO " + SYSTEM_CATALOG_NAME + " (" + pkCols + ", " +
+        LAST_DDL_TIMESTAMP + ")" + " " +
+            "SELECT " + pkCols + ", PHOENIX_ROW_TIMESTAMP() FROM " + SYSTEM_CATALOG_NAME + " " +
+                "WHERE " + TABLE_TYPE + " " + " in " + "('u','v')";

Review comment:
       nit: Instead of using `u` and `v`, use the PTableType API
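   
   For instance, assuming PTableType exposes its one-character serialized value through getSerializedValue() (worth double-checking against the actual API):
   
        String tableTypes = "('" + PTableType.TABLE.getSerializedValue() + "','"
                + PTableType.VIEW.getSerializedValue() + "')";   // evaluates to ('u','v')
        String upsertSql = "UPSERT INTO " + SYSTEM_CATALOG_NAME + " (" + pkCols + ", "
                + LAST_DDL_TIMESTAMP + ") SELECT " + pkCols
                + ", PHOENIX_ROW_TIMESTAMP() FROM " + SYSTEM_CATALOG_NAME
                + " WHERE " + TABLE_TYPE + " IN " + tableTypes;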

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
##########
@@ -3707,15 +3709,20 @@ protected PhoenixConnection upgradeSystemCatalogIfRequired(PhoenixConnection met
             metaConnection = addColumnsIfNotExists(
                 metaConnection,
                 PhoenixDatabaseMetaData.SYSTEM_CATALOG,
-                MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 - 1,
+                MIN_SYSTEM_TABLE_TIMESTAMP_4_16_0 - 2,

Review comment:
       I'm a little unclear on these changes. Shouldn't all these 4.16-specific columns be added at the 4.16 ts rather than `4.16 ts - something`?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
##########
@@ -2586,4 +2588,18 @@ public static boolean isNoUpgradeSet(Properties props) {
     public static void doNotUpgradeOnFirstConnection(Properties props) {
         props.setProperty(DO_NOT_UPGRADE, String.valueOf(true));
     }
+
+    //When upgrading to Phoenix 4.16, make each existing table's DDL timestamp equal to its last
+    // updated row timestamp.
+    public static void bootstrapLastDDLTimestamp(PhoenixConnection metaConnection) throws SQLException  {
+        String pkCols = TENANT_ID + ", " + TABLE_SCHEM +

Review comment:
       Can we add a log before and after this `UPSERT SELECT` to help track the upgrade progress?
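   
   A minimal sketch of that logging, assuming an SLF4J LOGGER is already available in UpgradeUtil the way it is elsewhere in the codebase:
   
        LOGGER.info("Bootstrapping LAST_DDL_TIMESTAMP for existing tables and views");
        try (PreparedStatement stmt = metaConnection.prepareStatement(upsertSql)) {
            int rows = stmt.executeUpdate();
            metaConnection.commit();
            LOGGER.info("Bootstrapped LAST_DDL_TIMESTAMP for {} SYSTEM.CATALOG rows", rows);
        }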

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2037,7 +2049,10 @@ public void createTable(RpcController controller, CreateTableRequest request,
                     // view's property in case they are different from the parent
                     ViewUtil.addTagsToPutsForViewAlteredProperties(tableMetadata, parentTable);
                 }
-
+                //set the last DDL timestamp to the current server time since we're creating the
+                // table
+                tableMetadata.add(MetaDataUtil.getLastDDLTimestampUpdate(tableKey,
+                    clientTimeStamp, EnvironmentEdgeManager.currentTimeMillis()));

Review comment:
       Do we want to restrict this to just tables and views, i.e. the 'u' and 'v' table_types? The upgrade code only adds a ts for existing tables and views, not for indexes and SYSTEM tables, whereas here we do it for all types. That would leave an inconsistency between an index created before upgrading (no ts) and an index created after upgrading (has ts), not to mention that fresh clusters will have a ts for all entities.
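   
   A sketch of that restriction, reusing the patch's own MetaDataUtil.isTableQueryable() helper (the exact call-site placement here is illustrative only):
   
        if (MetaDataUtil.isTableQueryable(tableType)) {
            // Only stamp user-facing entities so pre- and post-upgrade metadata agree.
            tableMetadata.add(MetaDataUtil.getLastDDLTimestampUpdate(tableKey,
                clientTimeStamp, EnvironmentEdgeManager.currentTimeMillis()));
        }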

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
##########
@@ -2586,4 +2588,18 @@ public static boolean isNoUpgradeSet(Properties props) {
     public static void doNotUpgradeOnFirstConnection(Properties props) {
         props.setProperty(DO_NOT_UPGRADE, String.valueOf(true));
     }
+
+    //When upgrading to Phoenix 4.16, make each existing table's DDL timestamp equal to its last
+    // updated row timestamp.
+    public static void bootstrapLastDDLTimestamp(PhoenixConnection metaConnection) throws SQLException  {
+        String pkCols = TENANT_ID + ", " + TABLE_SCHEM +
+            ", " + TABLE_NAME + ", " + COLUMN_NAME + ", " + COLUMN_FAMILY;
+        String upsertSql =
+            "UPSERT INTO " + SYSTEM_CATALOG_NAME + " (" + pkCols + ", " +
+        LAST_DDL_TIMESTAMP + ")" + " " +
+            "SELECT " + pkCols + ", PHOENIX_ROW_TIMESTAMP() FROM " + SYSTEM_CATALOG_NAME + " " +
+                "WHERE " + TABLE_TYPE + " " + " in " + "('u','v')";

Review comment:
       Also, can we add a comment about why we don't do this for system tables and indexes? Might be better to invert the condition and say `WHERE TABLE_TYPE NOT IN ('s', 'i')`
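   
   A short sketch of the inverted predicate, again preferring PTableType constants over hard-coded literals (the accessor name is the same assumption as above):
   
        // Skip entities whose schema isn't tracked externally: SYSTEM tables and indexes.
        String whereClause = " WHERE " + TABLE_TYPE + " NOT IN ('"
                + PTableType.SYSTEM.getSerializedValue() + "','"
                + PTableType.INDEX.getSerializedValue() + "')";   // i.e. NOT IN ('s','i')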




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-722692891


   @ChinmaySKulkarni - After fixing the test failure in UpgradeIT, we got another test run.  It looks like in this latest run the minicluster got into a bad state where tables couldn't be created, and this led to a bunch of failures. I've rerun them all locally and they look fine. 
   
   As far as I know, this patch should be ready to go for 4.x branch. Anything else that needs to be changed or added? Thanks!


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r527049509



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
##########
@@ -387,6 +388,48 @@ public void testViewUsesTableLocalIndex() throws Exception {
         }
     }
 
+    @Test
+    public void testCreateViewTimestamp() throws Exception {
+        String tenantId = null;
+        createViewTimestampHelper(tenantId);
+    }
+
+    @Test
+    public void testCreateTenantViewTimestamp() throws Exception {
+        createViewTimestampHelper(TENANT1);
+    }
+
+    private void createViewTimestampHelper(String tenantId) throws SQLException {
+        Properties props = new Properties();
+        if (tenantId != null) {
+            props.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, tenantId);
+        } else {
+            tenantId = "";

Review comment:
       Done.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 merged pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 merged pull request #935:
URL: https://github.com/apache/phoenix/pull/935


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #935:
URL: https://github.com/apache/phoenix/pull/935#issuecomment-730240713


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ 4.x Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  12m 40s |  4.x passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  4.x passed  |
   | +1 :green_heart: |  checkstyle  |  11m 14s |  4.x passed  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  4.x passed  |
   | +0 :ok: |  spotbugs  |   4m  5s |  root in 4.x has 1003 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   2m 43s |  phoenix-core in 4.x has 949 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 26s |  the patch passed  |
   | -1 :x: |  checkstyle  |  11m 21s |  root: The patch generated 338 new + 25077 unchanged - 257 fixed = 25415 total (was 25334)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  javadoc  |   0m 46s |  phoenix-core generated 3 new + 97 unchanged - 3 fixed = 100 total (was 100)  |
   | -1 :x: |  javadoc  |   1m  3s |  root generated 3 new + 129 unchanged - 3 fixed = 132 total (was 132)  |
   | -1 :x: |  spotbugs  |   3m  6s |  phoenix-core generated 2 new + 948 unchanged - 1 fixed = 950 total (was 949)  |
   | -1 :x: |  spotbugs  |   4m  7s |  root generated 2 new + 1002 unchanged - 1 fixed = 1004 total (was 1003)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 137m  6s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 17s |  The patch does not generate ASF License warnings.  |
   |  |   | 204m  8s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | FindBugs | module:root |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LAST_DDL_TIMESTAMP_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 331] |
   |  |  A prepared statement is generated from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:from a nonconstant String in org.apache.phoenix.util.UpgradeUtil.bootstrapLastDDLTimestamp(Connection)  At UpgradeUtil.java:[line 2604] |
   | Failed junit tests | phoenix.end2end.IndexBuildTimestampIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/19/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/935 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 3dd6dc9b6fba 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x / ed7f1a6 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/19/artifact/yetus-general-check/output/diff-checkstyle-root.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/19/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/19/artifact/yetus-general-check/output/diff-javadoc-javadoc-root.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/19/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/19/artifact/yetus-general-check/output/new-spotbugs-root.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/19/artifact/yetus-general-check/output/patch-unit-root.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/19/testReport/ |
   | Max. process+thread count | 6417 (vs. ulimit of 30000) |
   | modules | C: phoenix-core . U: . |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-935/19/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.4 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r512906673



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
##########
@@ -3048,7 +3048,13 @@ public boolean isViewReferenced() {
              * the counter as NULL_COUNTER for extra safety.
              */
             EncodedCQCounter cqCounterToBe = tableType == PTableType.VIEW ? NULL_COUNTER : cqCounter;
-            PTable table = new PTableImpl.Builder()
+            PTable table;
+            //better to use the table sent back from the server so we get an accurate DDL
+            // timestamp, which is server-generated.
+            if (result.getTable() != null ) {

Review comment:
       I'm not sure that would be covered by a test since we'd only see it if the test were to actually look at the cached PTable rather than say doing a `PhoenixRuntime.getTableNoCache()` right? Since we're changing what is cached, we should add some tests around this now to make sure the PTable has all expected fields set.
   
   I see some tests that should cover this code path inside AlterTableWithViewsIT (like `testAlterPropertiesOfParentTable()`) and also some in ViewTTLIT, but I'm not sure to what extent they would capture this. 
   
   I'm mostly worried about things like setting certain properties based on the parent PTable or overriding null properties with the default values, etc. (see [this](https://github.com/apache/phoenix/blob/ac0538b7f5456dd4f91b948c1a51c0042e115955/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L3093-L3104) and [this](https://github.com/apache/phoenix/blob/ac0538b7f5456dd4f91b948c1a51c0042e115955/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L3071-L3074)). That's where I'm not sure that the PTable returned from the server will be identical to the one that's created by the client today, since I don't see any code in MetaDataEndpointImpl that does this.
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] ChinmaySKulkarni commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
ChinmaySKulkarni commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r512347866



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
##########
@@ -3048,7 +3048,13 @@ public boolean isViewReferenced() {
              * the counter as NULL_COUNTER for extra safety.
              */
             EncodedCQCounter cqCounterToBe = tableType == PTableType.VIEW ? NULL_COUNTER : cqCounter;
-            PTable table = new PTableImpl.Builder()
+            PTable table;
+            //better to use the table sent back from the server so we get an accurate DDL
+            // timestamp, which is server-generated.
+            if (result.getTable() != null ) {

Review comment:
       This seems risky since we are relying on the fact that the PTable returned from the server has all the necessary attributes set as the PTable we create on the client-side. There are some that we set explicitly inside `MetaDataClient` which depend on the parent so I'm not sure we still have those set as expected.
   
   Instead, to be safe we can maybe `getDDLTimestamp()` from this returned PTable and set that in the builder. Better yet, we could just send the DDL timestamp in the server response rather than the entire PTable. We could then use the setter for this attribute when creating the PTable from its builder.
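   
   A hypothetical sketch of that alternative, keeping the client-built PTable and copying only the server-generated timestamp; the builder setter and getter names below are assumptions, not the patch's actual API:
   
        PTableImpl.Builder builder = new PTableImpl.Builder()
                // ... keep all of the existing client-side attributes exactly as today ...
                ;
        if (result.getTable() != null) {
            // Assumed accessor/setter pair for the new LAST_DDL_TIMESTAMP attribute.
            builder.setLastDDLTimestamp(result.getTable().getLastDDLTimestamp());
        }
        PTable table = builder.build();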




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r525334220



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
##########
@@ -1216,5 +1218,248 @@ public void testDroppingIndexedColDropsViewIndex() throws Exception {
             assertNull(results.next());
         }
     }
-    
+
+    @Test
+    public void testAddThenDropColumnTableDDLTimestamp() throws Exception {
+        Properties props = new Properties();
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint NOT NULL,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1, COL2)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String columnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String columnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        long startTS = EnvironmentEdgeManager.currentTimeMillis();
+        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.createStatement().execute(tableDDL);
+            //first get the original DDL timestamp when we created the table
+            long tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, dataTableName,
+                dataTableFullName, startTS,
+                conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(viewDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            //now add a column and make sure the timestamp updates
+            conn.createStatement().execute(columnAddDDL);
+            tableDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1, conn);
+            Thread.sleep(1);
+            conn.createStatement().execute(columnDropDDL);
+            CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName,
+                tableDDLTimestamp + 1 , conn);
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampForDivergedViews() throws Exception {
+        //Phoenix allows users to "drop" columns from views that are inherited from their ancestor
+        // views or tables. These columns are then excluded from the view schema, and the view is
+        // considered "diverged" from its parent, so it no longer inherits any additional schema
+        // changes that are applied to its ancestors. This test makes sure that this behavior
+        // extends to the DDL timestamp.
+        String schemaName = SCHEMA1;
+        String dataTableName = "T_" + generateUniqueName();
+        String viewName = "V_" + generateUniqueName();
+        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
+        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
+
+        String tableDDL = generateDDL("CREATE TABLE IF NOT EXISTS " + dataTableFullName + " ("
+            + " %s ID char(1) NOT NULL,"
+            + " COL1 integer NOT NULL,"
+            + " COL2 bigint,"
+            + " CONSTRAINT NAME_PK PRIMARY KEY (%s ID, COL1)"
+            + " ) %s");
+
+        String viewDDL = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName;
+
+        String divergeDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL2";
+        String viewColumnAddDDL = "ALTER VIEW " + viewFullName + " ADD COL3 varchar(50) NULL ";
+        String viewColumnDropDDL = "ALTER VIEW " + viewFullName + " DROP COLUMN COL3 ";
+        String tableColumnAddDDL = "ALTER TABLE " + dataTableFullName + " ADD COL4 varchar" +
+            "(50) NULL";
+        String tableColumnDropDDL = "ALTER TABLE " + dataTableFullName + " DROP COLUMN COL4 ";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(tableDDL);
+            conn.createStatement().execute(viewDDL);
+            long viewDDLTimestamp = getLastDDLTimestamp(conn, viewFullName);
+            Thread.sleep(1);
+            conn.createStatement().execute(divergeDDL);
+            //verify DDL timestamp changed
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnAddDDL);
+            //verify DDL timestamp changed because we added a column to the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(viewColumnDropDDL);
+            //verify DDL timestamp changed because we dropped a column from the view
+            viewDDLTimestamp = CreateTableIT.verifyLastDDLTimestamp(schemaName, viewName,
+                viewFullName, viewDDLTimestamp, conn);
+            conn.createStatement().execute(tableColumnAddDDL);
+            //verify DDL timestamp DID NOT change because we added a column to the base table
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+            conn.createStatement().execute(tableColumnDropDDL);
+            assertEquals(viewDDLTimestamp, getLastDDLTimestamp(conn, viewFullName));
+        }
+    }
+
+    @Test
+    public void testLastDDLTimestampWithChildViews() throws Exception {
+        Assume.assumeTrue(isMultiTenant);

Review comment:
       @shahrs87 - the test deals in part with tenant-owned views, which don't make sense on a non-multi-tenant table.
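   
   For context, here's a minimal sketch of what a tenant-owned view involves; the JDBC URL, table name, and view name are placeholders. A tenant-specific connection can only create views over a table declared MULTI_TENANT, which is why the test has to be skipped for the non-multi-tenant variants.
   
```java
// Sketch only: why tenant-owned views require a MULTI_TENANT base table.
// The URL, table, and view names are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TenantViewSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:localhost:2181";

        // Global connection: the base table must be declared MULTI_TENANT, with the
        // tenant id as the leading PK column, for tenant-specific views to exist at all.
        try (Connection global = DriverManager.getConnection(url)) {
            global.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS BASE_T (TENANT_ID VARCHAR NOT NULL, "
              + "ID VARCHAR NOT NULL, COL1 INTEGER "
              + "CONSTRAINT PK PRIMARY KEY (TENANT_ID, ID)) MULTI_TENANT=true");
        }

        // Tenant-specific connection: any view created here is owned by that tenant.
        Properties props = new Properties();
        props.setProperty("TenantId", "tenant1");
        try (Connection tenantConn = DriverManager.getConnection(url, props)) {
            tenantConn.createStatement().execute(
                "CREATE VIEW TENANT_V AS SELECT * FROM BASE_T");
        }
    }
}
```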






[GitHub] [phoenix] gjacoby126 commented on a change in pull request #935: PHOENIX-6186 - Store last DDL timestamp in System.Catalog

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #935:
URL: https://github.com/apache/phoenix/pull/935#discussion_r522549122



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DropColumnMutator.java
##########
@@ -268,7 +272,19 @@ public MetaDataMutationResult validateAndAddMetadata(PTable table,
             }
 
         }
-        tableMetaData.addAll(additionalTableMetaData);
+        if (isDroppingColumns) {

Review comment:
       It's not really general-purpose enough to put in a standalone utility method in MetaDataUtil, and since it's only three lines repeated in two places, I figured the minor DRY (Don't Repeat Yourself) violation wasn't worth working around; all the alternatives I could think of seemed a little ugly.
   
   For example, I could put a static utility method in one of the mutators, but then one mutator would depend on the other, which couples classes that shouldn't be coupled.
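   
   For concreteness, the rejected alternative would look roughly like the hypothetical helper below. The method name, signature, and the idea that it wraps the repeated three lines are assumptions for illustration only; the actual repeated code lives in the two mutators and isn't reproduced here.
   
```java
// Hypothetical sketch only: a shared helper of the kind discussed above. The real
// repeated lines live in the column mutators and are not reproduced here.
import java.util.ArrayList;
import java.util.List;

final class MetaDataUtilSketch {
    private MetaDataUtilSketch() { }

    // Conditionally append the additional metadata mutations, as both call sites do today.
    static <T> void addAdditionalMetadata(List<T> tableMetaData,
                                          List<T> additionalTableMetaData,
                                          boolean shouldAdd) {
        if (shouldAdd) {
            tableMetaData.addAll(additionalTableMetaData);
        }
    }

    public static void main(String[] args) {
        List<String> metadata = new ArrayList<>();
        List<String> additional = new ArrayList<>();
        additional.add("LAST_DDL_TIMESTAMP cell");
        addAdditionalMetadata(metadata, additional, true);
        System.out.println(metadata); // [LAST_DDL_TIMESTAMP cell]
    }
}
```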



