Posted to issues@phoenix.apache.org by GitBox <gi...@apache.org> on 2021/03/18 16:11:51 UTC

[GitHub] [phoenix] gokceni opened a new pull request #1170: PHOENIX-6247 Separating logical and physical table names

gokceni opened a new pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170


   





[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605277820



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1008,6 +1030,11 @@ public final PName getTableName() {
         return tableName;
     }
 
+    @Override
+    public final PName getPhysicalTableName() {

Review comment:
       It is a bit complicated. getPhysicalName is mostly used for views, while getPhysicalTableName is mostly for non-views and maps to a column in SYSTEM.CATALOG (syscat). Let me see what I can do.
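
   For context, a rough sketch of how the two accessors are meant to differ (illustrative only, not the exact implementation; view and table here are assumed PTable instances):

       // getPhysicalName(): mostly used for views; for a view it resolves to
       // the physical name of the base table.
       PName viewPhysical = view.getPhysicalName();

       // getPhysicalTableName(): mostly for non-views; backed by the new
       // PHYSICAL_TABLE_NAME column in SYSTEM.CATALOG, so it can differ from
       // getTableName() once the underlying HBase table has been renamed.
       PName tablePhysical = table.getPhysicalTableName();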







[GitHub] [phoenix] gokceni closed pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni closed pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170


   





[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r599244939



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java
##########
@@ -147,8 +157,14 @@ public void addTable(PTable table, long resolvedTime) throws SQLException {
         for (PTable index : table.getIndexes()) {
             metaData.put(index.getKey(), tableRefFactory.makePTableRef(index, this.timeKeeper.getCurrentTime(), resolvedTime));
         }
+        if (table.getPhysicalTableName() != null && !table.getPhysicalTableName().getString().equals(table.getTableName().getString())) {
+            String physicalTableName =  table.getPhysicalTableName().getString().replace(
+                    QueryConstants.NAMESPACE_SEPARATOR, QueryConstants.NAME_SEPARATOR);
+            String physicalTableFullName = SchemaUtil.getTableName(table.getSchemaName() != null ? table.getSchemaName().getString() : null, physicalTableName);
+            this.physicalNameToLogicalTableMap.put(physicalTableFullName, key);
+        }
     }
-
+    private static final Logger LOGGER = LoggerFactory.getLogger(PTableImpl.class);

Review comment:
       removed

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java
##########
@@ -21,18 +21,24 @@
 
 import java.sql.SQLException;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
 
+import com.google.common.base.Strings;

Review comment:
       removed







[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-812226102


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 15s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  15m 17s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 10s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 15s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 12s |  phoenix-core: The patch generated 284 new + 13755 unchanged - 151 fixed = 14039 total (was 13906)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 31s |  phoenix-core generated 5 new + 941 unchanged - 0 fixed = 946 total (was 941)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 194m 47s |  phoenix-core in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate ASF License warnings.  |
   |  |   | 238m 23s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback): String.getBytes()  At MetaDataEndpointImpl.java:[line 1929] |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1237] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2250] |
   |  |  Call to String.equals(org.apache.phoenix.schema.PName) in org.apache.phoenix.schema.MetaDataClient.evaluateStmtProperties(MetaDataClient$MetaProperties, MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:[line 5419] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/13/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 6abf6bf7fa1b 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / ee4ce9f |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/13/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/13/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/13/testReport/ |
   | Max. process+thread count | 5161 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/13/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605273917



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
##########
@@ -678,13 +691,13 @@ public static SequenceKey getOldViewIndexSequenceKey(String tenantId, PName phys
         return new SequenceKey(isNamespaceMapped ? tenantId : null, schemaName, tableName, nSaltBuckets);
     }
 
-    public static String getViewIndexSequenceSchemaName(PName physicalName, boolean isNamespaceMapped) {
+    public static String getViewIndexSequenceSchemaName(PName logicalName, boolean isNamespaceMapped) {

Review comment:
       Will change to logicalBaseTableName







[GitHub] [phoenix] swaroopak commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
swaroopak commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r597846333



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
##########
@@ -44,6 +44,7 @@
 
 import javax.annotation.Nullable;
 
+import org.apache.hadoop.hbase.util.Bytes;

Review comment:
       nit: unused import?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
##########
@@ -678,39 +678,39 @@ public static SequenceKey getOldViewIndexSequenceKey(String tenantId, PName phys
         return new SequenceKey(isNamespaceMapped ? tenantId : null, schemaName, tableName, nSaltBuckets);
     }
 
-    public static String getViewIndexSequenceSchemaName(PName physicalName, boolean isNamespaceMapped) {
+    public static String getViewIndexSequenceSchemaName(PName logicalName, boolean isNamespaceMapped) {
         if (!isNamespaceMapped) {
-            String baseTableName = SchemaUtil.getParentTableNameFromIndexTable(physicalName.getString(),
+            String baseTableName = SchemaUtil.getParentTableNameFromIndexTable(logicalName.getString(),
                 MetaDataUtil.VIEW_INDEX_TABLE_PREFIX);
             return SchemaUtil.getSchemaNameFromFullName(baseTableName);
         } else {
-            return SchemaUtil.getSchemaNameFromFullName(physicalName.toString());
+            return SchemaUtil.getSchemaNameFromFullName(logicalName.toString());
         }
 
     }
 
-    public static String getViewIndexSequenceName(PName physicalName, PName tenantId, boolean isNamespaceMapped) {
-        return SchemaUtil.getTableNameFromFullName(physicalName.toString()) + VIEW_INDEX_SEQUENCE_NAME_PREFIX;
+    public static String getViewIndexSequenceName(PName logicalName, PName tenantId, boolean isNamespaceMapped) {
+        return SchemaUtil.getTableNameFromFullName(logicalName.toString()) + VIEW_INDEX_SEQUENCE_NAME_PREFIX;
     }
 
     /**
      *
      * @param tenantId No longer used, but kept in signature for backwards compatibility
-     * @param physicalName Name of physical view index table
+     * @param logicalName Name of physical view index table
      * @param nSaltBuckets Number of salt buckets
      * @param isNamespaceMapped Is namespace mapping enabled
      * @return SequenceKey for the ViewIndexId
      */
-    public static SequenceKey getViewIndexSequenceKey(String tenantId, PName physicalName, int nSaltBuckets,
+    public static SequenceKey getViewIndexSequenceKey(String tenantId, PName logicalName, int nSaltBuckets,
             boolean isNamespaceMapped) {
         // Create global sequence of the form: <prefixed base table name>.
         // We can't use a tenant-owned or escaped sequence because of collisions,
         // with other view indexes that may be global or owned by other tenants that
         // also use this same physical view index table. It's also much easier
         // to cleanup when the physical table is dropped, as we can delete
         // all global sequences leading with <prefix> + physical name.
-        String schemaName = getViewIndexSequenceSchemaName(physicalName, isNamespaceMapped);

Review comment:
       I think the comment above won't be relevant anymore?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1851,112 +1905,126 @@ public static PTable createFromProto(PTableProtos.PTable table) {
     }
 
     public static PTableProtos.PTable toProto(PTable table) {
-      PTableProtos.PTable.Builder builder = PTableProtos.PTable.newBuilder();
-      if(table.getTenantId() != null){
-        builder.setTenantId(ByteStringer.wrap(table.getTenantId().getBytes()));
-      }
-      builder.setSchemaNameBytes(ByteStringer.wrap(table.getSchemaName().getBytes()));
-      builder.setTableNameBytes(ByteStringer.wrap(table.getTableName().getBytes()));
-      builder.setTableType(ProtobufUtil.toPTableTypeProto(table.getType()));
-      if (table.getType() == PTableType.INDEX) {
-        if(table.getIndexState() != null) {
-          builder.setIndexState(table.getIndexState().getSerializedValue());
-        }
-        if(table.getViewIndexId() != null) {
-          builder.setViewIndexId(table.getViewIndexId());
-          builder.setViewIndexIdType(table.getviewIndexIdType().getSqlType());
-		}
-        if(table.getIndexType() != null) {
-            builder.setIndexType(ByteStringer.wrap(new byte[]{table.getIndexType().getSerializedValue()}));
-        }
-      }
-      builder.setSequenceNumber(table.getSequenceNumber());
-      builder.setTimeStamp(table.getTimeStamp());
-      PName tmp = table.getPKName();
-      if (tmp != null) {
-        builder.setPkNameBytes(ByteStringer.wrap(tmp.getBytes()));
-      }
-      Integer bucketNum = table.getBucketNum();
-      int offset = 0;
-      if(bucketNum == null){
-        builder.setBucketNum(NO_SALTING);
-      } else {
-        offset = 1;
-        builder.setBucketNum(bucketNum);
-      }
-      List<PColumn> columns = table.getColumns();
-      int columnSize = columns.size();
-      for (int i = offset; i < columnSize; i++) {
-          PColumn column = columns.get(i);
-          builder.addColumns(PColumnImpl.toProto(column));
-      }
-      List<PTable> indexes = table.getIndexes();
-      for (PTable curIndex : indexes) {
-        builder.addIndexes(toProto(curIndex));
-      }
-      builder.setIsImmutableRows(table.isImmutableRows());
-      // TODO remove this field in 5.0 release
-      if (table.getParentName() != null) {
-          builder.setDataTableNameBytes(ByteStringer.wrap(table.getParentTableName().getBytes()));
-      }
-      if (table.getParentName() !=null) {
-          builder.setParentNameBytes(ByteStringer.wrap(table.getParentName().getBytes()));
-      }
-      if (table.getDefaultFamilyName()!= null) {
-        builder.setDefaultFamilyName(ByteStringer.wrap(table.getDefaultFamilyName().getBytes()));
-      }
-      builder.setDisableWAL(table.isWALDisabled());
-      builder.setMultiTenant(table.isMultiTenant());
-      builder.setStoreNulls(table.getStoreNulls());
-      if (table.getTransactionProvider() != null) {
-          builder.setTransactionProvider(table.getTransactionProvider().getCode());
-      }
-      if(table.getType() == PTableType.VIEW){
-        builder.setViewType(ByteStringer.wrap(new byte[]{table.getViewType().getSerializedValue()}));
-      }
-      if(table.getViewStatement()!=null){
-        builder.setViewStatement(ByteStringer.wrap(PVarchar.INSTANCE.toBytes(table.getViewStatement())));
-      }
-      for (int i = 0; i < table.getPhysicalNames().size(); i++) {
-        builder.addPhysicalNames(ByteStringer.wrap(table.getPhysicalNames().get(i).getBytes()));
-      }
-      builder.setBaseColumnCount(table.getBaseColumnCount());
-      builder.setRowKeyOrderOptimizable(table.rowKeyOrderOptimizable());
-      builder.setUpdateCacheFrequency(table.getUpdateCacheFrequency());
-      builder.setIndexDisableTimestamp(table.getIndexDisableTimestamp());
-      builder.setIsNamespaceMapped(table.isNamespaceMapped());
-      if (table.getAutoPartitionSeqName() != null) {
-          builder.setAutoParititonSeqName(table.getAutoPartitionSeqName());
-      }
-      builder.setIsAppendOnlySchema(table.isAppendOnlySchema());
-      if (table.getImmutableStorageScheme() != null) {
-          builder.setStorageScheme(ByteStringer.wrap(new byte[]{table.getImmutableStorageScheme().getSerializedMetadataValue()}));
-      }
-      if (table.getEncodedCQCounter() != null) {
-          Map<String, Integer> values = table.getEncodedCQCounter().values();
-          for (Entry<String, Integer> cqCounter : values.entrySet()) {
-              org.apache.phoenix.coprocessor.generated.PTableProtos.EncodedCQCounter.Builder cqBuilder = org.apache.phoenix.coprocessor.generated.PTableProtos.EncodedCQCounter.newBuilder();
-              cqBuilder.setColFamily(cqCounter.getKey());
-              cqBuilder.setCounter(cqCounter.getValue());
-              builder.addEncodedCQCounters(cqBuilder.build());
-          }
-      }
-      if (table.getEncodingScheme() != null) {
-          builder.setEncodingScheme(ByteStringer.wrap(new byte[]{table.getEncodingScheme().getSerializedMetadataValue()}));
-      }
-      if (table.useStatsForParallelization() != null) {
-          builder.setUseStatsForParallelization(table.useStatsForParallelization());
-      }
-      builder.setPhoenixTTL(table.getPhoenixTTL());
-      builder.setPhoenixTTLHighWaterMark(table.getPhoenixTTLHighWaterMark());
-      builder.setViewModifiedUpdateCacheFrequency(table.hasViewModifiedUpdateCacheFrequency());
-      builder.setViewModifiedUseStatsForParallelization(table.hasViewModifiedUseStatsForParallelization());
-      builder.setViewModifiedPhoenixTTL(table.hasViewModifiedPhoenixTTL());
-      if (table.getLastDDLTimestamp() != null) {
-          builder.setLastDDLTimestamp(table.getLastDDLTimestamp());
-      }
-      builder.setChangeDetectionEnabled(table.isChangeDetectionEnabled());
-      return builder.build();
+        PTableProtos.PTable.Builder builder = PTableProtos.PTable.newBuilder();

Review comment:
       Is this only an indentation change?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexScrutinyMapper.java
##########
@@ -343,18 +343,17 @@ private int getTableTtl() throws SQLException, IOException {
 
     @VisibleForTesting
     public static String getSourceTableName(PTable pSourceTable, boolean isNamespaceEnabled) {
-        String sourcePhysicalName = pSourceTable.getPhysicalName().getString();

Review comment:
       pSourceTable.getPhysicalName().getString() can still be extracted into a local variable, since it is called in multiple places.
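
   For example, a sketch of the suggested extraction (mirroring the line this diff removes):

       String sourcePhysicalName = pSourceTable.getPhysicalName().getString();
       // ... then reuse sourcePhysicalName below instead of repeating the call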

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
##########
@@ -105,6 +105,7 @@
 import org.apache.phoenix.schema.ColumnFamilyNotFoundException;
 import org.apache.phoenix.schema.PColumn;
 import org.apache.phoenix.schema.PColumnFamily;
+import org.apache.phoenix.schema.PNameFactory;

Review comment:
       nit: unused import?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1559,7 +1588,17 @@ public PName getParentTableName() {
     @Override
     public PName getParentName() {
         // a view on a table will not have a parent name but will have a physical table name (which is the parent)
-        return (type!=PTableType.VIEW || parentName!=null) ? parentName : getPhysicalName();
+        // Update to above comment: we will return logical name of view parent base table
+        return (type!=PTableType.VIEW || parentName!=null) ? parentName : (parentLogicalName != null? parentLogicalName : getPhysicalName());
+    }
+
+    @Override
+    public PName getParentLogicalName() {

Review comment:
       curious, what does it return for a table?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1583,10 +1622,15 @@ public synchronized boolean getIndexMaintainers(ImmutableBytesWritable ptr, Phoe
         ptr.set(indexMaintainersPtr.get(), indexMaintainersPtr.getOffset(), indexMaintainersPtr.getLength());
         return indexMaintainersPtr.getLength() > 0;
     }
-
+    private static final Logger LOGGER = LoggerFactory.getLogger(PTableImpl.class);

Review comment:
       nit: this should be at the beginning of the class. 

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -728,6 +729,12 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
      * (use @getPhysicalTableName for this case) 
      */
     PName getParentTableName();
+
+    /**
+     * @return the logical name of the parent. In case of the view index, it is the _IDX_+logical name of base table

Review comment:
       A comment covering hierarchical views would be helpful as well:
   for something like table --> view1 --> view2, what would the parent logical name be in each case?
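
   For illustration, a sketch of the kind of comment being asked for. Note that resolving to the base table rather than the immediate parent is the patch's stated behavior elsewhere in this review, not confirmed here:

       /**
        * e.g. for table --> view1 --> view2, both view1.getParentLogicalName()
        * and view2.getParentLogicalName() return the logical name of the base
        * table (not the immediate parent view); for a view index it is
        * _IDX_ + the logical name of the base table.
        */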

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java
##########
@@ -147,8 +157,14 @@ public void addTable(PTable table, long resolvedTime) throws SQLException {
         for (PTable index : table.getIndexes()) {
             metaData.put(index.getKey(), tableRefFactory.makePTableRef(index, this.timeKeeper.getCurrentTime(), resolvedTime));
         }
+        if (table.getPhysicalTableName() != null && !table.getPhysicalTableName().getString().equals(table.getTableName().getString())) {
+            String physicalTableName =  table.getPhysicalTableName().getString().replace(
+                    QueryConstants.NAMESPACE_SEPARATOR, QueryConstants.NAME_SEPARATOR);
+            String physicalTableFullName = SchemaUtil.getTableName(table.getSchemaName() != null ? table.getSchemaName().getString() : null, physicalTableName);
+            this.physicalNameToLogicalTableMap.put(physicalTableFullName, key);
+        }
     }
-
+    private static final Logger LOGGER = LoggerFactory.getLogger(PTableImpl.class);

Review comment:
       nit: this should be at the beginning of the class

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
##########
@@ -328,6 +328,10 @@ private static boolean isExistingTableMappedToPhoenixName(String name) {
                 SEPARATOR_BYTE_ARRAY, Bytes.toBytes(familyName));
     }
 
+    public static PName getTableName(PName schemaName, PName tableName) {

Review comment:
       nit: getTablePName?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
##########
@@ -1331,7 +1331,7 @@ private static void syncGlobalIndexesForTable(ConnectionQueryServices cqs, PTabl
      */
     private static void syncViewIndexTable(ConnectionQueryServices cqs, PTable baseTable, HColumnDescriptor defaultColFam,
             Map<String, Object> syncedProps, Set<HTableDescriptor> tableDescsToSync) throws SQLException {
-        String viewIndexName = MetaDataUtil.getViewIndexPhysicalName(baseTable.getPhysicalName().getString());
+        String viewIndexName = MetaDataUtil.getViewIndexPhysicalName(baseTable.getName().getString());

Review comment:
       curious, what does getName() return?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;

Review comment:
       nit: avoid importing *
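
   i.e. import the individual classes instead, something like the following (the exact set depends on what the test actually uses):

       import org.apache.phoenix.util.ByteUtil;
       import org.apache.phoenix.util.PropertiesUtil;
       import org.apache.phoenix.util.QueryUtil;
       import org.apache.phoenix.util.ReadOnlyProps;
       import org.apache.phoenix.util.SchemaUtil;
       import org.apache.phoenix.util.StringUtil;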







[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-811570174


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 19s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  15m 35s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 10s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 17s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 13s |  phoenix-core: The patch generated 232 new + 13807 unchanged - 99 fixed = 14039 total (was 13906)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 32s |  phoenix-core generated 5 new + 941 unchanged - 0 fixed = 946 total (was 941)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 200m 50s |  phoenix-core in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate ASF License warnings.  |
   |  |   | 244m 39s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback): String.getBytes()  At MetaDataEndpointImpl.java:[line 1929] |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1237] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2250] |
   |  |  Call to String.equals(org.apache.phoenix.schema.PName) in org.apache.phoenix.schema.MetaDataClient.evaluateStmtProperties(MetaDataClient$MetaProperties, MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:[line 5419] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/12/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 78b9ec382688 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / ee4ce9f |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/12/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/12/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/12/testReport/ |
   | Max. process+thread count | 4864 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/12/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] gjacoby126 commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-811502865


   @ChinmaySKulkarni - this is a significant revision by @gokceni of how Phoenix approaches metadata naming; would appreciate your opinion as well.





[GitHub] [phoenix] gjacoby126 commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605253952



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -583,6 +583,7 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
     PName getName();
     PName getSchemaName();
     PName getTableName();
+    PName getPhysicalTableName();

Review comment:
       Distinction between getPhysicalTableName and getPhysicalName can be confusing

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2152,10 +2201,15 @@ public void createTable(RpcController controller, CreateTableRequest request,
         }
     }
 
-    private long getViewIndexSequenceValue(PhoenixConnection connection, String tenantIdStr, PTable parentTable, PName physicalName) throws SQLException {
+    private long getViewIndexSequenceValue(PhoenixConnection connection, String tenantIdStr, PTable parentTable) throws SQLException {
         int nSequenceSaltBuckets = connection.getQueryServices().getSequenceSaltBuckets();
-
-        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+        // parentTable is parent of the view index which is the view.
+        // Since parent is the view, the parentTable.getParentLogicalName() returns the logical full name of the base table
+        PName parentName = parentTable.getParentLogicalName();

Review comment:
       I'm a bit confused here. In the original logic we passed in a physical name, but now we pass in a logical name if parentTable (the view) has one defined, and a physical name if it doesn't. From the design doc you shared with me, it sounds like it should usually be a constant logical name?
   
   It's actually really important that we use the same table name here in all cases -- that all indexes on all views on a particular physical base table use the same sequence to generate view index ids. Otherwise you can get collisions between view index ids. 
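
   To spell out the invariant (a sketch against the getViewIndexSequenceKey signature in this patch; view1/view2 and the name variables are assumptions):

       // view1 and view2 are different views over the same physical base table.
       // Whichever name is passed in, both calls must resolve to the SAME
       // sequence key, or two view indexes could be handed the same view index id.
       SequenceKey k1 = MetaDataUtil.getViewIndexSequenceKey(tenantId1, nameFromView1, nSaltBuckets, isNamespaceMapped);
       SequenceKey k2 = MetaDataUtil.getViewIndexSequenceKey(tenantId2, nameFromView2, nSaltBuckets, isNamespaceMapped);
       // required: k1.equals(k2)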

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1008,6 +1030,11 @@ public final PName getTableName() {
         return tableName;
     }
 
+    @Override
+    public final PName getPhysicalTableName() {

Review comment:
       As mentioned elsewhere, why both the existing getPhysicalName and a new getPhysicalTableName? Can they be consolidated? Or at least named something clearer?

##########
File path: phoenix-core/src/main/protobuf/PTable.proto
##########
@@ -111,6 +111,9 @@ message PTable {
   optional bool viewModifiedPhoenixTTL = 44;
   optional int64 lastDDLTimestamp = 45;
   optional bool changeDetectionEnabled = 46;
+  optional bytes physicalTableNameBytes = 47;
+  optional bool isModifiable = 48;

Review comment:
       where is isModifiable set?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1559,7 +1586,29 @@ public PName getParentTableName() {
     @Override
     public PName getParentName() {
         // a view on a table will not have a parent name but will have a physical table name (which is the parent)
-        return (type!=PTableType.VIEW || parentName!=null) ? parentName : getPhysicalName();
+        // Update to above comment: we will return logical name of view parent base table

Review comment:
       Why the parent base table and not the immediate parent?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1586,7 +1635,12 @@ public synchronized boolean getIndexMaintainers(ImmutableBytesWritable ptr, Phoe
 
     @Override
     public PName getPhysicalName() {
+        // For views, physicalName is base table name. There might be a case where the Phoenix table is pointing to another physical table.

Review comment:
       For views, physicalName is the base table _physical table name_ or _logical_ table name?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1559,7 +1586,29 @@ public PName getParentTableName() {
     @Override
     public PName getParentName() {
         // a view on a table will not have a parent name but will have a physical table name (which is the parent)
-        return (type!=PTableType.VIEW || parentName!=null) ? parentName : getPhysicalName();
+        // Update to above comment: we will return logical name of view parent base table

Review comment:
       nit: remove "update to above comment"

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
##########
@@ -678,13 +691,13 @@ public static SequenceKey getOldViewIndexSequenceKey(String tenantId, PName phys
         return new SequenceKey(isNamespaceMapped ? tenantId : null, schemaName, tableName, nSaltBuckets);
     }
 
-    public static String getViewIndexSequenceSchemaName(PName physicalName, boolean isNamespaceMapped) {
+    public static String getViewIndexSequenceSchemaName(PName logicalName, boolean isNamespaceMapped) {

Review comment:
       should be logicalParentName or logicalBaseTableName to make it clear that this is not the logical name of the view, but the suffix of the _IDX_ table so that all view indexes of the same base table get the same view index sequence. 
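
   i.e. a sketch of just the rename:

       public static String getViewIndexSequenceSchemaName(PName logicalBaseTableName, boolean isNamespaceMapped) {
           // body unchanged
       }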







[GitHub] [phoenix] lhofhansl commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
lhofhansl commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r599809055



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
##########
@@ -443,7 +445,52 @@ public void testImportOneIndexTable(String tableName, boolean localIndex) throws
             checkIndexTableIsVerified(indexTableName);
         }
     }
-    
+
+    @Test
+    public void testImportWithDifferentPhysicalName() throws Exception {
+        String tableName = generateUniqueName();
+        String indexTableName = String.format("%s_IDX", tableName);
+        Statement stmt = conn.createStatement();
+        stmt.execute("CREATE TABLE " + tableName + "(ID INTEGER NOT NULL PRIMARY KEY, "
+                + "FIRST_NAME VARCHAR, LAST_NAME VARCHAR)");
+        String ddl = "CREATE  INDEX " + indexTableName + " ON " + tableName + "(FIRST_NAME ASC)";
+        stmt.execute(ddl);
+        String newTableName = LogicalTableNameIT.NEW_TABLE_PREFIX + generateUniqueName();
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(tableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(tableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(newTableName));
+        }
+        LogicalTableNameIT.renameAndDropPhysicalTable(conn, null, null, tableName, newTableName);
+
+        FileSystem fs = FileSystem.get(getUtility().getConfiguration());
+        FSDataOutputStream outputStream = fs.create(new Path("/tmp/input4.csv"));
+        PrintWriter printWriter = new PrintWriter(outputStream);
+        printWriter.println("1,FirstName 1,LastName 1");
+        printWriter.println("2,FirstName 2,LastName 2");
+        printWriter.close();
+
+        CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
+        csvBulkLoadTool.setConf(getUtility().getConfiguration());
+        int exitCode = csvBulkLoadTool.run(
+                new String[] { "--input", "/tmp/input4.csv", "--table", tableName,
+                        "--index-table", indexTableName, "--zookeeper", zkQuorum });
+        assertEquals(0, exitCode);
+
+        ResultSet rs = stmt.executeQuery("SELECT * FROM " + tableName);
+        assertFalse(rs.next());

Review comment:
       This also assumes that the Phoenix query planner chooses a specific plan. Better to hint the query with /*+ NO_INDEX */ if you do not want the index to be used.
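
   For example (a sketch, assuming the intent is to force a scan of the data table):

       ResultSet rs = stmt.executeQuery("SELECT /*+ NO_INDEX */ * FROM " + tableName);
       assertFalse(rs.next());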







[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605085645



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,820 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.curator.shaded.com.google.common.base.Joiner;
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.curator.shaded.com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.regionserver.ScanInfoUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+
+@RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
+public class LogicalTableNameIT extends BaseTest {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static synchronized void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(ScanInfoUtil.PHOENIX_MAX_LOOKBACK_AGE_CONF_KEY, Integer.toString(60*60*1000)); // An hour
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(true, rs.next());
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, NEW_TABLE_PREFIX+tableName));
+                // Drop row and check
+                conn.createStatement().execute("DELETE from " + fullTableName + " WHERE PK1='PK10'");
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(false, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(false, rs.next());
+
+                conn2.createStatement().execute("DROP TABLE " + fullTableName);
+                // check that the physical data table is dropped
+                Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+                assertEquals(false, admin.tableExists(TableName.valueOf(SchemaUtil.getTableName(schemaName,NEW_TABLE_PREFIX + tableName))));
+
+                // check that index is dropped
+                assertEquals(false, admin.tableExists(TableName.valueOf(fullIndexName)));
+
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, indexName));
+                List<Job> completedJobs = IndexScrutinyToolBaseIT.runScrutinyTool(
+                        schemaName, tableName, indexName, 1L,
+                        IndexScrutinyTool.SourceTable.DATA_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+                if (createChildAfterTransform) {
+                    assertEquals(3, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(0, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                } else {
+                    // Since we didn't build the index, we expect 1 missing index row
+                    assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(1, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                }
+            }
+        }
+    }
+
+    private  HashMap<String, ArrayList<String>> test_IndexTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName, byte[] verifiedBytes) throws Exception {
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        conn.setAutoCommit(true);
+        createTable(conn, fullTableName);
+        createIndexOnTable(conn, fullTableName, indexName);
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table for index and add 1 more row
+        String newTableName = "NEW_IDXTBL_" + generateUniqueName();
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices()
+                .getAdmin()) {
+            String snapshotName = new StringBuilder(indexName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullIndexName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("V13"),
+                        QueryConstants.SEPARATOR_BYTE_ARRAY, Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        verifiedBytes);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT * FROM " + fullIndexName;
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, indexName, newTableName);
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalIndexTableName() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = test_IndexTableChange(conn, conn2, schemaName, tableName, indexName, IndexRegionObserver.VERIFIED_BYTES);
+
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // create another index and drop the first index and validate the second one
+                String indexName2 = "IDX2_" + generateUniqueName();
+                String fullIndexName2 = SchemaUtil.getTableName(schemaName, indexName2);
+                if (createChildAfterTransform) {
+                    createIndexOnTable(conn2, fullTableName, indexName2);
+                }
+                dropIndex(conn2, fullTableName, indexName);
+                if (!createChildAfterTransform) {
+                    createIndexOnTable(conn2, fullTableName, indexName2);
+                }
+                // The new index doesn't have the new row
+                expected.remove("PK3");
+                validateIndex(conn, fullIndexName2, false, expected);
+                validateIndex(conn2, fullIndexName2, false, expected);
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalIndexTableName_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                test_IndexTableChange(conn, conn2, schemaName, tableName, indexName, IndexRegionObserver.VERIFIED_BYTES);
+                List<Job> completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, tableName, indexName, 1L,
+                                IndexScrutinyTool.SourceTable.INDEX_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+
+                // Since we didn't build the index, we expect 1 missing index row
+                assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                assertEquals(1, counters.findCounter(INVALID_ROW_COUNT).getValue());
+
+                // Try with unverified bytes
+                String tableName2 = "TBL_" + generateUniqueName();
+                String indexName2 = "IDX_" + generateUniqueName();
+                test_IndexTableChange(conn, conn2, schemaName, tableName2, indexName2, IndexRegionObserver.UNVERIFIED_BYTES);
+
+                completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, tableName2, indexName2, 1L,
+                                IndexScrutinyTool.SourceTable.INDEX_TABLE_SOURCE);
+
+                job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                counters = job.getCounters();
+
+                // The extra row was written with UNVERIFIED bytes, so scrutiny skips it instead of counting it as invalid
+                assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                assertEquals(0, counters.findCounter(INVALID_ROW_COUNT).getValue());
+
+            }
+        }
+    }
+
+    private HashMap<String, ArrayList<String>> testWithViewsAndIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String viewName1, String v1_indexName1, String v1_indexName2, String viewName2, String v2_indexName1) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullViewName1 = SchemaUtil.getTableName(schemaName, viewName1);
+        String fullViewName2 = SchemaUtil.getTableName(schemaName, viewName2);
+        createTable(conn, fullTableName);
+        HashMap<String, ArrayList<String>> expected = new HashMap<>();
+        if (!createChildAfterTransform) {
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName1);
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName2);
+            createViewAndIndex(conn, schemaName, tableName, viewName2, v2_indexName1);
+            expected.putAll(populateView(conn, fullViewName1, 1,2));
+            expected.putAll(populateView(conn, fullViewName2, 10,2));
+        }
+
+        // Create another hbase table and add 1 more row
+        String newTableName = "NEW_TBL_" + generateUniqueName();
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices()
+                .getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"),
+                        Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("VIEW_COL1"),
+                        Bytes.toBytes("VIEW_COL1_3"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("VIEW_COL2"),
+                        Bytes.toBytes("VIEW_COL2_3"));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4", "VIEW_COL1_3", "VIEW_COL2_3"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        if (!createChildAfterTransform) {
+            assertTrue(rs1.next());
+        }
+
+        // Rename table to point to hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        conn.unwrap(PhoenixConnection.class).getQueryServices().clearCache();
+        if (createChildAfterTransform) {
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName1);
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName2);
+            createViewAndIndex(conn, schemaName, tableName, viewName2, v2_indexName1);
+            expected.putAll(populateView(conn, fullViewName1, 1,2));
+            expected.putAll(populateView(conn, fullViewName2, 10,2));
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+
+    private PhoenixTestBuilder.SchemaBuilder createGlobalViewAndTenantView() throws Exception {
+        int numOfRows = 5;
+        PhoenixTestBuilder.SchemaBuilder.TableOptions tableOptions = PhoenixTestBuilder.SchemaBuilder.TableOptions.withDefaults();
+        tableOptions.getTableColumns().clear();
+        tableOptions.getTableColumnTypes().clear();
+        tableOptions.setTableProps(" MULTI_TENANT=true, COLUMN_ENCODED_BYTES=0 "+this.dataTableDdl);
+
+        PhoenixTestBuilder.SchemaBuilder.GlobalViewOptions globalViewOptions = PhoenixTestBuilder.SchemaBuilder.GlobalViewOptions.withDefaults();
+
+        PhoenixTestBuilder.SchemaBuilder.GlobalViewIndexOptions globalViewIndexOptions =
+                PhoenixTestBuilder.SchemaBuilder.GlobalViewIndexOptions.withDefaults();
+        globalViewIndexOptions.setLocal(false);
+
+        PhoenixTestBuilder.SchemaBuilder.TenantViewOptions tenantViewOptions = new PhoenixTestBuilder.SchemaBuilder.TenantViewOptions();
+        tenantViewOptions.setTenantViewColumns(asList("ZID", "COL7", "COL8", "COL9"));
+        tenantViewOptions.setTenantViewColumnTypes(asList("CHAR(15)", "VARCHAR", "VARCHAR", "VARCHAR"));
+
+        PhoenixTestBuilder.SchemaBuilder.OtherOptions testCaseWhenAllCFMatchAndAllDefault = new PhoenixTestBuilder.SchemaBuilder.OtherOptions();
+        testCaseWhenAllCFMatchAndAllDefault.setTestName("testCaseWhenAllCFMatchAndAllDefault");
+        testCaseWhenAllCFMatchAndAllDefault
+                .setTableCFs(Lists.newArrayList((String) null, null, null));
+        testCaseWhenAllCFMatchAndAllDefault
+                .setGlobalViewCFs(Lists.newArrayList((String) null, null, null));
+        testCaseWhenAllCFMatchAndAllDefault
+                .setTenantViewCFs(Lists.newArrayList((String) null, null, null, null));
+
+        // Define the test schema.
+        PhoenixTestBuilder.SchemaBuilder schemaBuilder = null;
+        if (!createChildAfterTransform) {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withTableOptions(tableOptions).withGlobalViewOptions(globalViewOptions)
+                    .withGlobalViewIndexOptions(globalViewIndexOptions)
+                    .withTenantViewOptions(tenantViewOptions)
+                    .withOtherOptions(testCaseWhenAllCFMatchAndAllDefault).build();
+        }  else {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withTableOptions(tableOptions).build();
+        }
+
+        PTable table = schemaBuilder.getBaseTable();
+        String schemaName = table.getSchemaName().getString();
+        String tableName = table.getTableName().getString();
+        String newBaseTableName = "NEW_TBL_" + tableName;
+        String fullNewBaseTableName = SchemaUtil.getTableName(schemaName, newBaseTableName);
+        String fullTableName = table.getName().getString();
+
+        try (Connection conn = getConnection(props)) {
+
+            try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+                String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+                admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+                admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewBaseTableName));
+            }
+
+            renameAndDropPhysicalTable(conn, null, schemaName, tableName, newBaseTableName);
+        }
+
+        // TODO: this still creates a new table.
+        if (createChildAfterTransform) {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withDataOptions(schemaBuilder.getDataOptions())
+                    .withTableOptions(tableOptions)
+                    .withGlobalViewOptions(globalViewOptions)
+                    .withGlobalViewIndexOptions(globalViewIndexOptions)
+                    .withTenantViewOptions(tenantViewOptions)
+                    .withOtherOptions(testCaseWhenAllCFMatchAndAllDefault).build();
+        }
+
+        // Define the test data.
+        PhoenixTestBuilder.DataSupplier dataSupplier = new PhoenixTestBuilder.DataSupplier() {
+
+            @Override public List<Object> getValues(int rowIndex) {
+                Random rnd = new Random();
+                String id = String.format(ViewTTLIT.ID_FMT, rowIndex);
+                String zid = String.format(ViewTTLIT.ZID_FMT, rowIndex);
+                String col4 = String.format(ViewTTLIT.COL4_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col5 = String.format(ViewTTLIT.COL5_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col6 = String.format(ViewTTLIT.COL6_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col7 = String.format(ViewTTLIT.COL7_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col8 = String.format(ViewTTLIT.COL8_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col9 = String.format(ViewTTLIT.COL9_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+
+                return Lists.newArrayList(
+                        new Object[] { id, zid, col4, col5, col6, col7, col8, col9 });
+            }
+        };
+
+        // Create a test data reader/writer for the above schema.
+        PhoenixTestBuilder.DataWriter dataWriter = new PhoenixTestBuilder.BasicDataWriter();
+        List<String> columns =
+                Lists.newArrayList("ID", "ZID", "COL4", "COL5", "COL6", "COL7", "COL8", "COL9");
+        List<String> rowKeyColumns = Lists.newArrayList("ID", "ZID");
+
+        String tenantConnectUrl =
+                getUrl() + ';' + TENANT_ID_ATTRIB + '=' + schemaBuilder.getDataOptions().getTenantId();
+
+        try (Connection tenantConnection = DriverManager.getConnection(tenantConnectUrl)) {
+            tenantConnection.setAutoCommit(true);
+            dataWriter.setConnection(tenantConnection);
+            dataWriter.setDataSupplier(dataSupplier);
+            dataWriter.setUpsertColumns(columns);
+            dataWriter.setRowKeyColumns(rowKeyColumns);
+            dataWriter.setTargetEntity(schemaBuilder.getEntityTenantViewName());
+            dataWriter.upsertRows(1, numOfRows);
+            com.google.common.collect.Table<String, String, Object> upsertedData = dataWriter.getDataTable();
+
+            PhoenixTestBuilder.DataReader dataReader = new PhoenixTestBuilder.BasicDataReader();
+            dataReader.setValidationColumns(columns);
+            dataReader.setRowKeyColumns(rowKeyColumns);
+            dataReader.setDML(String.format("SELECT %s from %s", Joiner.on(",").join(columns),
+                    schemaBuilder.getEntityTenantViewName()));
+            dataReader.setTargetEntity(schemaBuilder.getEntityTenantViewName());
+            dataReader.setConnection(tenantConnection);
+            dataReader.readRows();
+            com.google.common.collect.Table<String, String, Object> fetchedData
+                    = dataReader.getDataTable();
+            assertNotNull("Fetched data should not be null", fetchedData);
+            ViewTTLIT.verifyRowsBeforeTTLExpiration(upsertedData, fetchedData);
+
+        }
+        return schemaBuilder;
+    }
+
+    @Test
+    public void testWith2LevelViewsBaseTablePhysicalNameChange() throws Exception {
+        // TODO: use namespace in one of the cases
+        PhoenixTestBuilder.SchemaBuilder schemaBuilder = createGlobalViewAndTenantView();
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithViews() throws Exception {
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                String schemaName = "S_" + generateUniqueName();
+                String tableName = "TBL_" + generateUniqueName();
+                String view1Name = "VW1_" + generateUniqueName();
+                String view1IndexName1 = "VW1IDX1_" + generateUniqueName();
+                String view1IndexName2 = "VW1IDX2_" + generateUniqueName();
+                String fullView1IndexName1 = SchemaUtil.getTableName(schemaName, view1IndexName1);
+                String fullView1IndexName2 =  SchemaUtil.getTableName(schemaName, view1IndexName2);
+                String view2Name = "VW2_" + generateUniqueName();
+                String view2IndexName1 = "VW2IDX1_" + generateUniqueName();
+                String fullView1Name = SchemaUtil.getTableName(schemaName, view1Name);
+                String fullView2Name = SchemaUtil.getTableName(schemaName, view2Name);
+                String fullView2IndexName1 =  SchemaUtil.getTableName(schemaName, view2IndexName1);
+
+                HashMap<String, ArrayList<String>> expected = testWithViewsAndIndex_BaseTableChange(conn, conn2, schemaName, tableName, view1Name, view1IndexName1, view1IndexName2, view2Name, view2IndexName1);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, view1Name, view1IndexName1);
+                IndexToolIT.runIndexTool(true, false, schemaName, view1Name, view1IndexName2);
+                IndexToolIT.runIndexTool(true, false, schemaName, view2Name, view2IndexName1);
+
+                SingleCellIndexIT.dumpTable("_IDX_" + SchemaUtil.getTableName(schemaName, tableName));
+                validateIndex(conn, fullView1IndexName1, true, expected);
+                validateIndex(conn2, fullView1IndexName2, true, expected);
+
+                // Add row and check
+                populateView(conn, fullView2Name, 20, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullView2IndexName1 + " WHERE \":PK1\"='PK20'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2Name  + " WHERE PK1='PK20'");
+                assertEquals(true, rs.next());
+
+                // Drop row and check
+                conn.createStatement().execute("DELETE from " + fullView2Name + " WHERE PK1='PK20'");
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullView2IndexName1 + " WHERE \":PK1\"='PK20'");
+                assertEquals(false, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2Name  + " WHERE PK1='PK20'");
+                assertEquals(false, rs.next());
+
+                conn2.createStatement().execute("DROP VIEW " + fullView2Name);
+                // check that this view is dropped but the other is there
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView1Name);
+                assertEquals(true, rs.next());
+                boolean failed = true;
+                try {
+                    rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2Name);
+                    rs.next();
+                    failed = false;
+                } catch (SQLException e){
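+                    // Expected: the view was dropped, so this query must fail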
+
+                }
+                assertEquals(true, failed);
+
+                // check that first index is there but second index is dropped
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullView1IndexName1);
+                assertEquals(true, rs.next());
+                try {
+                    rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2IndexName1);
+                    rs.next();
+                    failed = false;
+                } catch (SQLException e){
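+                    // Expected: the view index was dropped along with its view, so this query must fail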
+
+                }
+                assertEquals(true, failed);
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithViews_runScrutiny() throws Exception {
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                String schemaName = "S_" + generateUniqueName();
+                String tableName = "TBL_" + generateUniqueName();
+                String view1Name = "VW1_" + generateUniqueName();
+                String view1IndexName1 = "VW1IDX1_" + generateUniqueName();
+                String view1IndexName2 = "VW1IDX2_" + generateUniqueName();
+                String view2Name = "VW2_" + generateUniqueName();
+                String view2IndexName1 = "VW2IDX1_" + generateUniqueName();
+
+                testWithViewsAndIndex_BaseTableChange(conn, conn2, schemaName, tableName, view1Name,
+                        view1IndexName1, view1IndexName2, view2Name, view2IndexName1);
+
+                List<Job> completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, view2Name, view2IndexName1, 1L,
+                                IndexScrutinyTool.SourceTable.DATA_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+                if (createChildAfterTransform) {
+                    assertEquals(3, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(2, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                } else {
+                    // Since we didn't build the index, we expect 1 missing index row, plus 2 invalid rows that belong to the other index
+                    assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(3, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                }
+
+            }
+        }
+    }
+
+    private void createTable(Connection conn, String tableName) throws Exception {
+        String createTableSql = "CREATE TABLE " + tableName + " (PK1 VARCHAR NOT NULL, V1 VARCHAR, V2 INTEGER, V3 INTEGER "
+                + "CONSTRAINT NAME_PK PRIMARY KEY(PK1)) COLUMN_ENCODED_BYTES=0 " + dataTableDdl;
+        LOGGER.debug(createTableSql);
+        conn.createStatement().execute(createTableSql);
+    }
+
+    private void createIndexOnTable(Connection conn, String tableName, String indexName)
+            throws SQLException {
+        String createIndexSql = "CREATE INDEX " + indexName + " ON " + tableName + " (V1) INCLUDE (V2, V3) ";
+        LOGGER.debug(createIndexSql);
+        conn.createStatement().execute(createIndexSql);
+    }
+
+    private void dropIndex(Connection conn, String tableName, String indexName)
+            throws SQLException {
+        String sql = "DROP INDEX " + indexName + " ON " + tableName ;
+        conn.createStatement().execute(sql);
+    }
+
+    private HashMap<String, ArrayList<String>> populateTable(Connection conn, String tableName, int startNum, int numOfRows)
+            throws SQLException {
+        String upsert = "UPSERT INTO " + tableName + " (PK1, V1,  V2, V3) VALUES (?,?,?,?)";
+        PreparedStatement upsertStmt = conn.prepareStatement(upsert);
+        HashMap<String, ArrayList<String>> result = new HashMap<>();
+        for (int i=startNum; i < startNum + numOfRows; i++) {
+            ArrayList<String> row = new ArrayList<>();
+            upsertStmt.setString(1, "PK" + i);
+            row.add("PK" + i);
+            upsertStmt.setString(2, "V1" + i);
+            row.add("V1" + i);
+            upsertStmt.setInt(3, i);
+            row.add(String.valueOf(i));
+            upsertStmt.setInt(4, i + 1);
+            row.add(String.valueOf(i + 1));
+            upsertStmt.executeUpdate();
+            result.put("PK" + i, row);
+        }
+        return result;
+    }
+
+    private HashMap<String, ArrayList<String>> populateView(Connection conn, String viewName, int startNum, int numOfRows) throws SQLException {
+        String upsert = "UPSERT INTO " + viewName + " (PK1, V1,  V2, V3, VIEW_COL1, VIEW_COL2) VALUES (?,?,?,?,?,?)";
+        PreparedStatement upsertStmt = conn.prepareStatement(upsert);
+        HashMap<String, ArrayList<String>> result = new HashMap<>();
+        for (int i=startNum; i < startNum + numOfRows; i++) {
+            ArrayList<String> row = new ArrayList<>();
+            upsertStmt.setString(1, "PK"+i);
+            row.add("PK"+i);
+            upsertStmt.setString(2, "V1"+i);
+            row.add("V1"+i);
+            upsertStmt.setInt(3, i);
+            row.add(String.valueOf(i));
+            upsertStmt.setInt(4, i+1);
+            row.add(String.valueOf(i+1));
+            upsertStmt.setString(5, "VIEW_COL1_"+i);
+            row.add("VIEW_COL1_"+i);
+            upsertStmt.setString(6, "VIEW_COL2_"+i);
+            row.add("VIEW_COL2_"+i);
+            upsertStmt.executeUpdate();
+            result.put("PK"+i, row);
+        }
+        return result;
+    }
+
+    private void createViewAndIndex(Connection conn, String schemaName, String tableName, String viewName, String viewIndexName)
+            throws SQLException {
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullViewName = SchemaUtil.getTableName(schemaName, viewName);
+        String
+                view1DDL =
+                "CREATE VIEW IF NOT EXISTS " + fullViewName + " ( VIEW_COL1 VARCHAR, VIEW_COL2 VARCHAR) AS SELECT * FROM "
+                        + fullTableName;
+        conn.createStatement().execute(view1DDL);
+        String indexDDL = "CREATE INDEX IF NOT EXISTS " + viewIndexName + " ON " + fullViewName + " (V1) include (V2, V3, VIEW_COL2) ";
+        conn.createStatement().execute(indexDDL);
+        conn.commit();
+    }
+
+    private void validateTable(Connection connection, String tableName) throws SQLException {
+        String selectTable = "SELECT PK1, V1, V2, V3 FROM " + tableName + " ORDER BY PK1 DESC";
+        ResultSet rs = connection.createStatement().executeQuery(selectTable);
+        assertTrue(rs.next());
+        assertEquals("PK3", rs.getString(1));
+        assertEquals("V13", rs.getString(2));
+        assertEquals(3, rs.getInt(3));
+        assertEquals(4, rs.getInt(4));
+        assertTrue(rs.next());
+        assertEquals("PK2", rs.getString(1));
+        assertEquals("V12", rs.getString(2));
+        assertEquals(2, rs.getInt(3));
+        assertEquals(3, rs.getInt(4));
+        assertTrue(rs.next());
+        assertEquals("PK1", rs.getString(1));
+        assertEquals("V11", rs.getString(2));
+        assertEquals(1, rs.getInt(3));
+        assertEquals(2, rs.getInt(4));
+    }
+
+    private void validateIndex(Connection connection, String tableName, boolean isViewIndex, HashMap<String, ArrayList<String>> expected) throws SQLException {
+        String selectTable = "SELECT * FROM " + tableName;
+        ResultSet rs = connection.createStatement().executeQuery(selectTable);
+        int cnt = 0;
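+        // Index layout for "CREATE INDEX ... (V1) INCLUDE (V2, V3)": SELECT * returns the
+        // indexed V1 first, then the data PK (":PK1"), then the included columns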
+        while (rs.next()) {
+            String pk = rs.getString(2);
+            assertTrue(expected.containsKey(pk));
+            ArrayList<String> row = expected.get(pk);
+            assertEquals(row.get(1), rs.getString(1));
+            assertEquals(row.get(2), rs.getString(3));
+            assertEquals(row.get(3), rs.getString(4));
+            if (isViewIndex) {
+                assertEquals(row.get(5), rs.getString(5));
+            }
+            cnt++;
+        }
+        assertEquals(expected.size(), cnt);
+    }
+
+    public static void renameAndDropPhysicalTable(Connection conn, String tenantId, String schema, String tableName, String physicalName) throws Exception {

Review comment:
       I don't have a case right now that requires separate methods. Doing the switch and the drop together is a good test, because it makes sure the old table is no longer used. In the future, I will consider separating them.
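
       For illustration, a minimal sketch of the follow-up check the combined switch-and-drop enables (it reuses names from this test and is illustrative only, not part of the patch):

       ```java
       // After renameAndDropPhysicalTable, the old HBase table should be gone,
       // so nothing can silently keep reading stale data from it.
       Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
       assertFalse(admin.tableExists(TableName.valueOf(fullTableName)));
       // The logical name still resolves, now backed by the new physical table.
       ResultSet rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM " + fullTableName);
       assertTrue(rs.next());
       ```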







[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r599944494



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.*;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.printResultSet;
+import static org.junit.Assert.*;
+
+@RunWith(Parameterized.class)
+public class LogicalTableNameIT extends ParallelStatsDisabledIT  {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3000));
+        // When we run all tests together we are using the global cluster (driver),
+        // so to make DROP work we need to re-register the driver with the DROP_METADATA_ATTRIB property
+        destroyDriver();
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        // Register the real Phoenix driver so that multiple ConnectionQueryServices are created across connections
+        // and metadata changes don't get propagated across connections
+        DriverManager.registerDriver(PhoenixDriver.INSTANCE);
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(4);
+        boolean[] booleans = new boolean[] { false, true };
+        for (boolean immutable : booleans) {
+            for (boolean createAfter : booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));

Review comment:
       Yes, but in this test we are not checking index consistency. The test is to make sure we see the extra row in the data table; if the index is not updated, that is OK.
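
       A minimal sketch of the check being made (using the names from the hunk above; illustrative only):

       ```java
       // The hand-written PK3 row must be visible through the logical table once it
       // points at the cloned HBase table, even though the index never saw that row.
       ResultSet rs = conn.createStatement().executeQuery(
               "SELECT PK1, V1 FROM " + fullTableName + " WHERE PK1 = 'PK3'");
       assertTrue(rs.next());
       assertEquals("V13", rs.getString(2));
       ```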

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());

Review comment:
       Since I have the WHERE clause, I am checking that the specific row I am interested in is there. The same applies below.
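
       In other words, the pattern is a point lookup on the index row key (a sketch with the same names as above):

       ```java
       // ":PK1" is the data-table PK as stored in the index row key, so this WHERE
       // clause asserts that exactly the row added after the rename made it into the index.
       ResultSet rs = conn2.createStatement().executeQuery(
               "SELECT * FROM " + fullIndexName + " WHERE \":PK1\" = 'PK10'");
       assertTrue(rs.next());
       ```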

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(true, rs.next());
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, NEW_TABLE_PREFIX + tableName));

Review comment:
       I will remove this debug dump later.







[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605879621



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2152,10 +2201,15 @@ public void createTable(RpcController controller, CreateTableRequest request,
         }
     }
 
-    private long getViewIndexSequenceValue(PhoenixConnection connection, String tenantIdStr, PTable parentTable, PName physicalName) throws SQLException {
+    private long getViewIndexSequenceValue(PhoenixConnection connection, String tenantIdStr, PTable parentTable) throws SQLException {
         int nSequenceSaltBuckets = connection.getQueryServices().getSequenceSaltBuckets();
-
-        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+        // parentTable is parent of the view index which is the view.
+        // Since parent is the view, the parentTable.getParentLogicalName() returns the logical full name of the base table
+        PName parentName = parentTable.getParentLogicalName();

Review comment:
       There was only one place calling this function, and it passed parentTable.getPhysicalName() as physicalName. That mapped to the full name of the base table, like SC1.TBL_1, and getParentLogicalName() now returns the same value. After a rename, however, parentTable.getPhysicalName() returns SC1.NEW_PHYSICALNAME_TBL1, which is not what we want here.
   I ran all view-related IT tests. Are there any other tests you would like me to run to check for collisions?
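
       To make the naming distinction concrete, here is a minimal sketch; the
       example names are taken from this comment rather than from actual output:

           // Stable across physical renames:
           PName logicalName = parentTable.getParentLogicalName();  // e.g. SC1.TBL_1
           // Tracks the renamed HBase table after the swap:
           PName physicalName = parentTable.getPhysicalName();      // e.g. SC1.NEW_PHYSICALNAME_TBL1
           // The view-index sequence key must be derived from logicalName;
           // deriving it from physicalName would restart the sequence after a
           // rename and risk view-index id collisions.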







[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-808130427


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   5m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m 57s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 11s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 23s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  8s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m  3s |  phoenix-core: The patch generated 284 new + 13755 unchanged - 151 fixed = 14039 total (was 13906)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 49s |  phoenix-core generated 5 new + 941 unchanged - 0 fixed = 946 total (was 941)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 268m  1s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  The patch does not generate ASF License warnings.  |
   |  |   | 319m 34s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback): String.getBytes()  At MetaDataEndpointImpl.java:[line 1929] |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1237] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2250] |
   |  |  Call to String.equals(org.apache.phoenix.schema.PName) in org.apache.phoenix.schema.MetaDataClient.evaluateStmtProperties(MetaDataClient$MetaProperties, MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:[line 5419] |
   | Failed junit tests | phoenix.end2end.QueryDatabaseMetaDataIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/11/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 248aae1f7915 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / ee4ce9f |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/11/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/11/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/11/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/11/testReport/ |
   | Max. process+thread count | 4786 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/11/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605926406



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2152,10 +2201,15 @@ public void createTable(RpcController controller, CreateTableRequest request,
         }
     }
 
-    private long getViewIndexSequenceValue(PhoenixConnection connection, String tenantIdStr, PTable parentTable, PName physicalName) throws SQLException {
+    private long getViewIndexSequenceValue(PhoenixConnection connection, String tenantIdStr, PTable parentTable) throws SQLException {
         int nSequenceSaltBuckets = connection.getQueryServices().getSequenceSaltBuckets();
-
-        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+        // parentTable is parent of the view index which is the view.
+        // Since parent is the view, the parentTable.getParentLogicalName() returns the logical full name of the base table
+        PName parentName = parentTable.getParentLogicalName();

Review comment:
       Basically, if you change the physical name of a table, this function now uses the logical name of the base table instead of its physical name, so the sequence will not go back to the beginning. In the next PR, I will create a view index, rename the table, and create another view index to verify that the ids do not collide (sketched below). Would that be enough?
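
       A rough sketch of that follow-up test, reusing helpers already present in
       LogicalTableNameIT (the view and index names here are hypothetical):

           // The first view index consumes one value from the view-index sequence.
           conn.createStatement().execute("CREATE INDEX VW1_IDX1 ON " + fullViewName + " (V1)");
           // Swap the physical table underneath the base table.
           renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
           // The second index must receive the next sequence value; if the key
           // had regressed to the physical name, the sequence would restart and
           // the two view-index ids could collide.
           conn.createStatement().execute("CREATE INDEX VW1_IDX2 ON " + fullViewName + " (V2)");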







[GitHub] [phoenix] swaroopak commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
swaroopak commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r604402727



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
##########
@@ -53,12 +55,7 @@
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
-import org.apache.phoenix.util.DateUtil;
-import org.apache.phoenix.util.EncodedColumnsUtil;
-import org.apache.phoenix.util.PhoenixRuntime;
-import org.apache.phoenix.util.ReadOnlyProps;
-import org.apache.phoenix.util.SchemaUtil;
-import org.apache.phoenix.util.TestUtil;
+import org.apache.phoenix.util.*;

Review comment:
       nit: avoid importing *
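
       i.e. list each class explicitly, for instance restoring the imports this
       hunk collapsed (plus whichever new org.apache.phoenix.util classes the
       patch actually needs):

           import org.apache.phoenix.util.DateUtil;
           import org.apache.phoenix.util.EncodedColumnsUtil;
           import org.apache.phoenix.util.PhoenixRuntime;
           import org.apache.phoenix.util.ReadOnlyProps;
           import org.apache.phoenix.util.SchemaUtil;
           import org.apache.phoenix.util.TestUtil;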

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,820 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.curator.shaded.com.google.common.base.Joiner;
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.curator.shaded.com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.regionserver.ScanInfoUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+
+@RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
+public class LogicalTableNameIT extends BaseTest {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static synchronized void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(ScanInfoUtil.PHOENIX_MAX_LOOKBACK_AGE_CONF_KEY, Integer.toString(60*60*1000)); // An hour
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(true, rs.next());
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, NEW_TABLE_PREFIX+tableName));
+                // Drop row and check
+                conn.createStatement().execute("DELETE from " + fullTableName + " WHERE PK1='PK10'");
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(false, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(false, rs.next());
+
+                conn2.createStatement().execute("DROP TABLE " + fullTableName);
+                // check that the physical data table is dropped
+                Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+                assertEquals(false, admin.tableExists(TableName.valueOf(SchemaUtil.getTableName(schemaName,NEW_TABLE_PREFIX + tableName))));
+
+                // check that index is dropped
+                assertEquals(false, admin.tableExists(TableName.valueOf(fullIndexName)));
+
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, indexName));
+                List<Job>
+                        completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, tableName, indexName, 1L,
+                                IndexScrutinyTool.SourceTable.DATA_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+                if (createChildAfterTransform) {
+                    assertEquals(3, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(0, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                } else {
+                    // Since we didn't build the index, we expect 1 missing index row
+                    assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(1, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                }
+            }
+        }
+    }
+
+    private  HashMap<String, ArrayList<String>> test_IndexTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName, byte[] verifiedBytes) throws Exception {
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        conn.setAutoCommit(true);
+        createTable(conn, fullTableName);
+        createIndexOnTable(conn, fullTableName, indexName);
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table for index and add 1 more row
+        String newTableName = "NEW_IDXTBL_" + generateUniqueName();
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices()
+                .getAdmin()) {
+            String snapshotName = new StringBuilder(indexName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullIndexName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put
+                        put =
+                        new Put(ByteUtil.concat(Bytes.toBytes("V13"), QueryConstants.SEPARATOR_BYTE_ARRAY, Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        verifiedBytes);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT * FROM " + fullIndexName;
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, indexName, newTableName);
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalIndexTableName() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = test_IndexTableChange(conn, conn2, schemaName, tableName, indexName, IndexRegionObserver.VERIFIED_BYTES);
+
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // create another index and drop the first index and validate the second one
+                String indexName2 = "IDX2_" + generateUniqueName();
+                String fullIndexName2 = SchemaUtil.getTableName(schemaName, indexName2);
+                if (createChildAfterTransform) {
+                    createIndexOnTable(conn2, fullTableName, indexName2);
+                }
+                dropIndex(conn2, fullTableName, indexName);
+                if (!createChildAfterTransform) {
+                    createIndexOnTable(conn2, fullTableName, indexName2);
+                }
+                // The new index doesn't have the new row
+                expected.remove("PK3");
+                validateIndex(conn, fullIndexName2, false, expected);
+                validateIndex(conn2, fullIndexName2, false, expected);
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalIndexTableName_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                test_IndexTableChange(conn, conn2, schemaName, tableName, indexName, IndexRegionObserver.VERIFIED_BYTES);
+                List<Job>
+                        completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, tableName, indexName, 1L,
+                                IndexScrutinyTool.SourceTable.INDEX_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+
+                // Since we didn't build the index, we expect 1 missing index row
+                assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                assertEquals(1, counters.findCounter(INVALID_ROW_COUNT).getValue());
+
+                // Try with unverified bytes
+                String tableName2 = "TBL_" + generateUniqueName();
+                String indexName2 = "IDX_" + generateUniqueName();
+                test_IndexTableChange(conn, conn2, schemaName, tableName2, indexName2, IndexRegionObserver.UNVERIFIED_BYTES);
+
+                completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, tableName2, indexName2, 1L,
+                                IndexScrutinyTool.SourceTable.INDEX_TABLE_SOURCE);
+
+                job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                counters = job.getCounters();
+
+                // Since we didn't build the index, we expect 1 missing index row
+                assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                assertEquals(0, counters.findCounter(INVALID_ROW_COUNT).getValue());
+
+            }
+        }
+    }
+
+    private HashMap<String, ArrayList<String>> testWithViewsAndIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String viewName1, String v1_indexName1, String v1_indexName2, String viewName2, String v2_indexName1) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullViewName1 = SchemaUtil.getTableName(schemaName, viewName1);
+        String fullViewName2 = SchemaUtil.getTableName(schemaName, viewName2);
+        createTable(conn, fullTableName);
+        HashMap<String, ArrayList<String>> expected = new HashMap<>();
+        if (!createChildAfterTransform) {
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName1);
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName2);
+            createViewAndIndex(conn, schemaName, tableName, viewName2, v2_indexName1);
+            expected.putAll(populateView(conn, fullViewName1, 1,2));
+            expected.putAll(populateView(conn, fullViewName2, 10,2));
+        }
+
+        // Create another hbase table and add 1 more row
+        String newTableName = "NEW_TBL_" + generateUniqueName();
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices()
+                .getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"),
+                        Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("VIEW_COL1"),
+                        Bytes.toBytes("VIEW_COL1_3"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("VIEW_COL2"),
+                        Bytes.toBytes("VIEW_COL2_3"));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4", "VIEW_COL1_3", "VIEW_COL2_3"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        if (!createChildAfterTransform) {
+            assertTrue(rs1.next());
+        }
+
+        // Rename table to point to hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        conn.unwrap(PhoenixConnection.class).getQueryServices().clearCache();
+        if (createChildAfterTransform) {
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName1);
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName2);
+            createViewAndIndex(conn, schemaName, tableName, viewName2, v2_indexName1);
+            expected.putAll(populateView(conn, fullViewName1, 1,2));
+            expected.putAll(populateView(conn, fullViewName2, 10,2));
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+
+    private PhoenixTestBuilder.SchemaBuilder createGlobalViewAndTenantView() throws Exception {
+        int numOfRows = 5;
+        PhoenixTestBuilder.SchemaBuilder.TableOptions tableOptions = PhoenixTestBuilder.SchemaBuilder.TableOptions.withDefaults();
+        tableOptions.getTableColumns().clear();
+        tableOptions.getTableColumnTypes().clear();
+        tableOptions.setTableProps(" MULTI_TENANT=true, COLUMN_ENCODED_BYTES=0 "+this.dataTableDdl);
+
+        PhoenixTestBuilder.SchemaBuilder.GlobalViewOptions globalViewOptions = PhoenixTestBuilder.SchemaBuilder.GlobalViewOptions.withDefaults();
+
+        PhoenixTestBuilder.SchemaBuilder.GlobalViewIndexOptions globalViewIndexOptions =
+                PhoenixTestBuilder.SchemaBuilder.GlobalViewIndexOptions.withDefaults();
+        globalViewIndexOptions.setLocal(false);
+
+        PhoenixTestBuilder.SchemaBuilder.TenantViewOptions tenantViewOptions = new PhoenixTestBuilder.SchemaBuilder.TenantViewOptions();
+        tenantViewOptions.setTenantViewColumns(asList("ZID", "COL7", "COL8", "COL9"));
+        tenantViewOptions.setTenantViewColumnTypes(asList("CHAR(15)", "VARCHAR", "VARCHAR", "VARCHAR"));
+
+        PhoenixTestBuilder.SchemaBuilder.OtherOptions testCaseWhenAllCFMatchAndAllDefault = new PhoenixTestBuilder.SchemaBuilder.OtherOptions();
+        testCaseWhenAllCFMatchAndAllDefault.setTestName("testCaseWhenAllCFMatchAndAllDefault");
+        testCaseWhenAllCFMatchAndAllDefault
+                .setTableCFs(Lists.newArrayList((String) null, null, null));
+        testCaseWhenAllCFMatchAndAllDefault
+                .setGlobalViewCFs(Lists.newArrayList((String) null, null, null));
+        testCaseWhenAllCFMatchAndAllDefault
+                .setTenantViewCFs(Lists.newArrayList((String) null, null, null, null));
+
+        // Define the test schema.
+        PhoenixTestBuilder.SchemaBuilder schemaBuilder = null;
+        if (!createChildAfterTransform) {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withTableOptions(tableOptions).withGlobalViewOptions(globalViewOptions)
+                    .withGlobalViewIndexOptions(globalViewIndexOptions)
+                    .withTenantViewOptions(tenantViewOptions)
+                    .withOtherOptions(testCaseWhenAllCFMatchAndAllDefault).build();
+        }  else {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withTableOptions(tableOptions).build();
+        }
+
+        PTable table = schemaBuilder.getBaseTable();
+        String schemaName = table.getSchemaName().getString();
+        String tableName = table.getTableName().getString();
+        String newBaseTableName = "NEW_TBL_" + tableName;
+        String fullNewBaseTableName = SchemaUtil.getTableName(schemaName, newBaseTableName);
+        String fullTableName = table.getName().getString();
+
+        try (Connection conn = getConnection(props)) {
+
+            try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+                String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+                admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+                admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewBaseTableName));
+            }
+
+            renameAndDropPhysicalTable(conn, null, schemaName, tableName, newBaseTableName);
+        }
+
+        // TODO: this still creates a new table.
+        if (createChildAfterTransform) {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withDataOptions(schemaBuilder.getDataOptions())
+                    .withTableOptions(tableOptions)
+                    .withGlobalViewOptions(globalViewOptions)
+                    .withGlobalViewIndexOptions(globalViewIndexOptions)
+                    .withTenantViewOptions(tenantViewOptions)
+                    .withOtherOptions(testCaseWhenAllCFMatchAndAllDefault).build();
+        }
+
+        // Define the test data.
+        PhoenixTestBuilder.DataSupplier dataSupplier = new PhoenixTestBuilder.DataSupplier() {
+
+            @Override public List<Object> getValues(int rowIndex) {
+                Random rnd = new Random();
+                String id = String.format(ViewTTLIT.ID_FMT, rowIndex);
+                String zid = String.format(ViewTTLIT.ZID_FMT, rowIndex);
+                String col4 = String.format(ViewTTLIT.COL4_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col5 = String.format(ViewTTLIT.COL5_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col6 = String.format(ViewTTLIT.COL6_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col7 = String.format(ViewTTLIT.COL7_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col8 = String.format(ViewTTLIT.COL8_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col9 = String.format(ViewTTLIT.COL9_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+
+                return Lists.newArrayList(
+                        new Object[] { id, zid, col4, col5, col6, col7, col8, col9 });
+            }
+        };
+
+        // Create a test data reader/writer for the above schema.
+        PhoenixTestBuilder.DataWriter dataWriter = new PhoenixTestBuilder.BasicDataWriter();
+        List<String> columns =
+                Lists.newArrayList("ID", "ZID", "COL4", "COL5", "COL6", "COL7", "COL8", "COL9");
+        List<String> rowKeyColumns = Lists.newArrayList("ID", "ZID");
+
+        String tenantConnectUrl =
+                getUrl() + ';' + TENANT_ID_ATTRIB + '=' + schemaBuilder.getDataOptions().getTenantId();
+
+        try (Connection tenantConnection = DriverManager.getConnection(tenantConnectUrl)) {
+            tenantConnection.setAutoCommit(true);
+            dataWriter.setConnection(tenantConnection);
+            dataWriter.setDataSupplier(dataSupplier);
+            dataWriter.setUpsertColumns(columns);
+            dataWriter.setRowKeyColumns(rowKeyColumns);
+            dataWriter.setTargetEntity(schemaBuilder.getEntityTenantViewName());
+            dataWriter.upsertRows(1, numOfRows);
+            com.google.common.collect.Table<String, String, Object> upsertedData = dataWriter.getDataTable();
+
+            PhoenixTestBuilder.DataReader dataReader = new PhoenixTestBuilder.BasicDataReader();
+            dataReader.setValidationColumns(columns);
+            dataReader.setRowKeyColumns(rowKeyColumns);
+            dataReader.setDML(String.format("SELECT %s from %s", Joiner.on(",").join(columns),
+                    schemaBuilder.getEntityTenantViewName()));
+            dataReader.setTargetEntity(schemaBuilder.getEntityTenantViewName());
+            dataReader.setConnection(tenantConnection);
+            dataReader.readRows();
+            com.google.common.collect.Table<String, String, Object> fetchedData
+                    = dataReader.getDataTable();
+            assertNotNull("Fetched data should not be null", fetchedData);
+            ViewTTLIT.verifyRowsBeforeTTLExpiration(upsertedData, fetchedData);
+
+        }
+        return schemaBuilder;
+    }
+
+    @Test
+    public void testWith2LevelViewsBaseTablePhysicalNameChange() throws Exception {
+        // TODO: use namespace in one of the cases
+        PhoenixTestBuilder.SchemaBuilder schemaBuilder = createGlobalViewAndTenantView();
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithViews() throws Exception {
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                String schemaName = "S_" + generateUniqueName();
+                String tableName = "TBL_" + generateUniqueName();
+                String view1Name = "VW1_" + generateUniqueName();
+                String view1IndexName1 = "VW1IDX1_" + generateUniqueName();
+                String view1IndexName2 = "VW1IDX2_" + generateUniqueName();
+                String fullView1IndexName1 = SchemaUtil.getTableName(schemaName, view1IndexName1);
+                String fullView1IndexName2 =  SchemaUtil.getTableName(schemaName, view1IndexName2);
+                String view2Name = "VW2_" + generateUniqueName();
+                String view2IndexName1 = "VW2IDX1_" + generateUniqueName();
+                String fullView1Name = SchemaUtil.getTableName(schemaName, view1Name);
+                String fullView2Name = SchemaUtil.getTableName(schemaName, view2Name);
+                String fullView2IndexName1 =  SchemaUtil.getTableName(schemaName, view2IndexName1);
+
+                HashMap<String, ArrayList<String>> expected = testWithViewsAndIndex_BaseTableChange(conn, conn2, schemaName, tableName, view1Name, view1IndexName1, view1IndexName2, view2Name, view2IndexName1);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, view1Name, view1IndexName1);
+                IndexToolIT.runIndexTool(true, false, schemaName, view1Name, view1IndexName2);
+                IndexToolIT.runIndexTool(true, false, schemaName, view2Name, view2IndexName1);
+
+                SingleCellIndexIT.dumpTable("_IDX_" + SchemaUtil.getTableName(schemaName, tableName));
+                validateIndex(conn, fullView1IndexName1, true, expected);
+                validateIndex(conn2, fullView1IndexName2, true, expected);
+
+                // Add row and check
+                populateView(conn, fullView2Name, 20, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullView2IndexName1 + " WHERE \":PK1\"='PK20'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2Name  + " WHERE PK1='PK20'");
+                assertEquals(true, rs.next());
+
+                // Drop row and check
+                conn.createStatement().execute("DELETE from " + fullView2Name + " WHERE PK1='PK20'");
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullView2IndexName1 + " WHERE \":PK1\"='PK20'");
+                assertEquals(false, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2Name  + " WHERE PK1='PK20'");
+                assertEquals(false, rs.next());
+
+                conn2.createStatement().execute("DROP VIEW " + fullView2Name);
+                // check that this view is dropped but the other is there
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView1Name);
+                assertEquals(true, rs.next());
+                boolean failed = true;
+                try {

Review comment:
       Let's use try-with-resources to avoid leaving result sets open
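
       For example, a minimal sketch against the query a few lines above
       (assumes java.sql.Statement is imported):

           try (Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("SELECT * FROM " + fullView1Name)) {
               assertTrue(rs.next());
           }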

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -1209,7 +1231,30 @@ private PTable getTable(RegionScanner scanner, long clientTimeStamp, long tableT
                 if (linkType == LinkType.INDEX_TABLE) {
                     addIndexToTable(tenantId, schemaName, famName, tableName, clientTimeStamp, indexes, clientVersion);
                 } else if (linkType == LinkType.PHYSICAL_TABLE) {
-                    physicalTables.add(famName);
+                    // famName contains the logical name of the parent table. We need to get the actual physical name of the table
+                    PTable parentTable = null;
+                    if (indexType != IndexType.LOCAL) {
+                        parentTable = getTable(null, SchemaUtil.getSchemaNameFromFullName(famName.getBytes()).getBytes(),
+                                SchemaUtil.getTableNameFromFullName(famName.getBytes()).getBytes(), clientTimeStamp, clientVersion);
+                        if (parentTable == null) {
+                            // parentTable is not in the cache. Since famName is only logical name, we need to find the physical table.
+                            try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+                                parentTable = PhoenixRuntime.getTableNoCache(connection, famName.getString());
+                            } catch (TableNotFoundException e) {
+                                // It is ok to swallow this exception since this could be a view index and _IDX_ table is not there.
+                            }
+                        }
+                    }
+
+                    if (parentTable == null) {
+                        physicalTables.add(famName);
+                        // If this is a view index, then one of the link is IDX_VW -> _IDX_ PhysicalTable link. Since famName is _IDX_ and we can't get this table hence it is null, we need to use actual view name
+                        parentLogicalName = (tableType == INDEX ? SchemaUtil.getTableName(parentSchemaName, parentTableName) : famName);

Review comment:
       something like INDEX.equalsIgnoreCase(tableType) instead of ==

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,820 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.curator.shaded.com.google.common.base.Joiner;
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.curator.shaded.com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.regionserver.ScanInfoUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+
+@RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
+public class LogicalTableNameIT extends BaseTest {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static synchronized void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(ScanInfoUtil.PHOENIX_MAX_LOOKBACK_AGE_CONF_KEY, Integer.toString(60*60*1000)); // An hour
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(true, rs.next());
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, NEW_TABLE_PREFIX+tableName));
+                // Drop row and check
+                conn.createStatement().execute("DELETE from " + fullTableName + " WHERE PK1='PK10'");
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(false, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(false, rs.next());
+
+                conn2.createStatement().execute("DROP TABLE " + fullTableName);
+                // check that the physical data table is dropped
+                Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+                assertEquals(false, admin.tableExists(TableName.valueOf(SchemaUtil.getTableName(schemaName,NEW_TABLE_PREFIX + tableName))));
+
+                // check that index is dropped
+                assertEquals(false, admin.tableExists(TableName.valueOf(fullIndexName)));
+
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, indexName));
+                List<Job> completedJobs = IndexScrutinyToolBaseIT.runScrutinyTool(
+                        schemaName, tableName, indexName, 1L,
+                        IndexScrutinyTool.SourceTable.DATA_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+                if (createChildAfterTransform) {
+                    assertEquals(3, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(0, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                } else {
+                    // Since we didn't build the index, we expect 1 missing index row
+                    assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(1, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                }
+            }
+        }
+    }
+
+    private HashMap<String, ArrayList<String>> test_IndexTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName, byte[] verifiedBytes) throws Exception {
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        conn.setAutoCommit(true);
+        createTable(conn, fullTableName);
+        createIndexOnTable(conn, fullTableName, indexName);
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table for index and add 1 more row
+        String newTableName = "NEW_IDXTBL_" + generateUniqueName();
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices()
+                .getAdmin()) {
+            String snapshotName = new StringBuilder(indexName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullIndexName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("V13"),
+                        QueryConstants.SEPARATOR_BYTE_ARRAY, Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        verifiedBytes);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT * FROM " + fullIndexName;
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, indexName, newTableName);
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalIndexTableName() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = test_IndexTableChange(conn, conn2, schemaName, tableName, indexName, IndexRegionObserver.VERIFIED_BYTES);
+
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // create another index and drop the first index and validate the second one
+                String indexName2 = "IDX2_" + generateUniqueName();
+                String fullIndexName2 = SchemaUtil.getTableName(schemaName, indexName2);
+                if (createChildAfterTransform) {
+                    createIndexOnTable(conn2, fullTableName, indexName2);
+                }
+                dropIndex(conn2, fullTableName, indexName);
+                if (!createChildAfterTransform) {
+                    createIndexOnTable(conn2, fullTableName, indexName2);
+                }
+                // The new index doesn't have the new row
+                expected.remove("PK3");
+                validateIndex(conn, fullIndexName2, false, expected);
+                validateIndex(conn2, fullIndexName2, false, expected);
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalIndexTableName_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                test_IndexTableChange(conn, conn2, schemaName, tableName, indexName, IndexRegionObserver.VERIFIED_BYTES);
+                List<Job> completedJobs = IndexScrutinyToolBaseIT.runScrutinyTool(
+                        schemaName, tableName, indexName, 1L,
+                        IndexScrutinyTool.SourceTable.INDEX_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+
+                // Since we didn't build the index, we expect 1 missing index row
+                assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                assertEquals(1, counters.findCounter(INVALID_ROW_COUNT).getValue());
+
+                // Try with unverified bytes
+                String tableName2 = "TBL_" + generateUniqueName();
+                String indexName2 = "IDX_" + generateUniqueName();
+                test_IndexTableChange(conn, conn2, schemaName, tableName2, indexName2, IndexRegionObserver.UNVERIFIED_BYTES);
+
+                completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, tableName2, indexName2, 1L,
+                                IndexScrutinyTool.SourceTable.INDEX_TABLE_SOURCE);
+
+                job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                counters = job.getCounters();
+
+                // The extra row was written with UNVERIFIED status, so scrutiny skips it and we expect no invalid rows
+                assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                assertEquals(0, counters.findCounter(INVALID_ROW_COUNT).getValue());
+
+            }
+        }
+    }
+
+    private HashMap<String, ArrayList<String>> testWithViewsAndIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String viewName1, String v1_indexName1, String v1_indexName2, String viewName2, String v2_indexName1) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullViewName1 = SchemaUtil.getTableName(schemaName, viewName1);
+        String fullViewName2 = SchemaUtil.getTableName(schemaName, viewName2);
+        createTable(conn, fullTableName);
+        HashMap<String, ArrayList<String>> expected = new HashMap<>();
+        if (!createChildAfterTransform) {
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName1);
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName2);
+            createViewAndIndex(conn, schemaName, tableName, viewName2, v2_indexName1);
+            expected.putAll(populateView(conn, fullViewName1, 1,2));
+            expected.putAll(populateView(conn, fullViewName2, 10,2));
+        }
+
+        // Create another hbase table and add 1 more row
+        String newTableName = "NEW_TBL_" + generateUniqueName();
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices()
+                .getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"),
+                        Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("VIEW_COL1"),
+                        Bytes.toBytes("VIEW_COL1_3"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("VIEW_COL2"),
+                        Bytes.toBytes("VIEW_COL2_3"));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4", "VIEW_COL1_3", "VIEW_COL2_3"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        if (!createChildAfterTransform) {
+            assertTrue(rs1.next());
+        }
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        conn.unwrap(PhoenixConnection.class).getQueryServices().clearCache();
+        if (createChildAfterTransform) {
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName1);
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName2);
+            createViewAndIndex(conn, schemaName, tableName, viewName2, v2_indexName1);
+            expected.putAll(populateView(conn, fullViewName1, 1,2));
+            expected.putAll(populateView(conn, fullViewName2, 10,2));
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+
+    private PhoenixTestBuilder.SchemaBuilder createGlobalViewAndTenantView() throws Exception {
+        int numOfRows = 5;
+        PhoenixTestBuilder.SchemaBuilder.TableOptions tableOptions = PhoenixTestBuilder.SchemaBuilder.TableOptions.withDefaults();
+        tableOptions.getTableColumns().clear();
+        tableOptions.getTableColumnTypes().clear();
+        tableOptions.setTableProps(" MULTI_TENANT=true, COLUMN_ENCODED_BYTES=0 "+this.dataTableDdl);
+
+        PhoenixTestBuilder.SchemaBuilder.GlobalViewOptions globalViewOptions = PhoenixTestBuilder.SchemaBuilder.GlobalViewOptions.withDefaults();
+
+        PhoenixTestBuilder.SchemaBuilder.GlobalViewIndexOptions globalViewIndexOptions =
+                PhoenixTestBuilder.SchemaBuilder.GlobalViewIndexOptions.withDefaults();
+        globalViewIndexOptions.setLocal(false);
+
+        PhoenixTestBuilder.SchemaBuilder.TenantViewOptions tenantViewOptions = new PhoenixTestBuilder.SchemaBuilder.TenantViewOptions();
+        tenantViewOptions.setTenantViewColumns(asList("ZID", "COL7", "COL8", "COL9"));
+        tenantViewOptions.setTenantViewColumnTypes(asList("CHAR(15)", "VARCHAR", "VARCHAR", "VARCHAR"));
+
+        PhoenixTestBuilder.SchemaBuilder.OtherOptions testCaseWhenAllCFMatchAndAllDefault = new PhoenixTestBuilder.SchemaBuilder.OtherOptions();
+        testCaseWhenAllCFMatchAndAllDefault.setTestName("testCaseWhenAllCFMatchAndAllDefault");
+        testCaseWhenAllCFMatchAndAllDefault
+                .setTableCFs(Lists.newArrayList((String) null, null, null));
+        testCaseWhenAllCFMatchAndAllDefault
+                .setGlobalViewCFs(Lists.newArrayList((String) null, null, null));
+        testCaseWhenAllCFMatchAndAllDefault
+                .setTenantViewCFs(Lists.newArrayList((String) null, null, null, null));
+
+        // Define the test schema.
+        PhoenixTestBuilder.SchemaBuilder schemaBuilder = null;
+        if (!createChildAfterTransform) {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withTableOptions(tableOptions).withGlobalViewOptions(globalViewOptions)
+                    .withGlobalViewIndexOptions(globalViewIndexOptions)
+                    .withTenantViewOptions(tenantViewOptions)
+                    .withOtherOptions(testCaseWhenAllCFMatchAndAllDefault).build();
+        }  else {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withTableOptions(tableOptions).build();
+        }
+
+        PTable table = schemaBuilder.getBaseTable();
+        String schemaName = table.getSchemaName().getString();
+        String tableName = table.getTableName().getString();
+        String newBaseTableName = "NEW_TBL_" + tableName;
+        String fullNewBaseTableName = SchemaUtil.getTableName(schemaName, newBaseTableName);
+        String fullTableName = table.getName().getString();
+
+        try (Connection conn = getConnection(props)) {
+
+            try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+                String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+                admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+                admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewBaseTableName));
+            }
+
+            renameAndDropPhysicalTable(conn, null, schemaName, tableName, newBaseTableName);
+        }
+
+        // TODO: this still creates a new table.
+        if (createChildAfterTransform) {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withDataOptions(schemaBuilder.getDataOptions())
+                    .withTableOptions(tableOptions)
+                    .withGlobalViewOptions(globalViewOptions)
+                    .withGlobalViewIndexOptions(globalViewIndexOptions)
+                    .withTenantViewOptions(tenantViewOptions)
+                    .withOtherOptions(testCaseWhenAllCFMatchAndAllDefault).build();
+        }
+
+        // Define the test data.
+        PhoenixTestBuilder.DataSupplier dataSupplier = new PhoenixTestBuilder.DataSupplier() {
+
+            @Override public List<Object> getValues(int rowIndex) {
+                Random rnd = new Random();
+                String id = String.format(ViewTTLIT.ID_FMT, rowIndex);
+                String zid = String.format(ViewTTLIT.ZID_FMT, rowIndex);
+                String col4 = String.format(ViewTTLIT.COL4_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col5 = String.format(ViewTTLIT.COL5_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col6 = String.format(ViewTTLIT.COL6_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col7 = String.format(ViewTTLIT.COL7_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col8 = String.format(ViewTTLIT.COL8_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col9 = String.format(ViewTTLIT.COL9_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+
+                return Lists.newArrayList(
+                        new Object[] { id, zid, col4, col5, col6, col7, col8, col9 });
+            }
+        };
+
+        // Create a test data reader/writer for the above schema.
+        PhoenixTestBuilder.DataWriter dataWriter = new PhoenixTestBuilder.BasicDataWriter();
+        List<String> columns =
+                Lists.newArrayList("ID", "ZID", "COL4", "COL5", "COL6", "COL7", "COL8", "COL9");
+        List<String> rowKeyColumns = Lists.newArrayList("ID", "ZID");
+
+        String tenantConnectUrl =
+                getUrl() + ';' + TENANT_ID_ATTRIB + '=' + schemaBuilder.getDataOptions().getTenantId();
+
+        try (Connection tenantConnection = DriverManager.getConnection(tenantConnectUrl)) {
+            tenantConnection.setAutoCommit(true);
+            dataWriter.setConnection(tenantConnection);
+            dataWriter.setDataSupplier(dataSupplier);
+            dataWriter.setUpsertColumns(columns);
+            dataWriter.setRowKeyColumns(rowKeyColumns);
+            dataWriter.setTargetEntity(schemaBuilder.getEntityTenantViewName());
+            dataWriter.upsertRows(1, numOfRows);
+            com.google.common.collect.Table<String, String, Object> upsertedData = dataWriter.getDataTable();
+
+            PhoenixTestBuilder.DataReader dataReader = new PhoenixTestBuilder.BasicDataReader();
+            dataReader.setValidationColumns(columns);
+            dataReader.setRowKeyColumns(rowKeyColumns);
+            dataReader.setDML(String.format("SELECT %s from %s", Joiner.on(",").join(columns),
+                    schemaBuilder.getEntityTenantViewName()));
+            dataReader.setTargetEntity(schemaBuilder.getEntityTenantViewName());
+            dataReader.setConnection(tenantConnection);
+            dataReader.readRows();
+            com.google.common.collect.Table<String, String, Object> fetchedData
+                    = dataReader.getDataTable();
+            assertNotNull("Fetched data should not be null", fetchedData);
+            ViewTTLIT.verifyRowsBeforeTTLExpiration(upsertedData, fetchedData);
+
+        }
+        return schemaBuilder;
+    }
+
+    @Test
+    public void testWith2LevelViewsBaseTablePhysicalNameChange() throws Exception {
+        // TODO: use namespace in one of the cases
+        PhoenixTestBuilder.SchemaBuilder schemaBuilder = createGlobalViewAndTenantView();
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithViews() throws Exception {
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                String schemaName = "S_" + generateUniqueName();
+                String tableName = "TBL_" + generateUniqueName();
+                String view1Name = "VW1_" + generateUniqueName();
+                String view1IndexName1 = "VW1IDX1_" + generateUniqueName();
+                String view1IndexName2 = "VW1IDX2_" + generateUniqueName();
+                String fullView1IndexName1 = SchemaUtil.getTableName(schemaName, view1IndexName1);
+                String fullView1IndexName2 =  SchemaUtil.getTableName(schemaName, view1IndexName2);
+                String view2Name = "VW2_" + generateUniqueName();
+                String view2IndexName1 = "VW2IDX1_" + generateUniqueName();
+                String fullView1Name = SchemaUtil.getTableName(schemaName, view1Name);
+                String fullView2Name = SchemaUtil.getTableName(schemaName, view2Name);
+                String fullView2IndexName1 =  SchemaUtil.getTableName(schemaName, view2IndexName1);
+
+                HashMap<String, ArrayList<String>> expected = testWithViewsAndIndex_BaseTableChange(conn, conn2, schemaName, tableName, view1Name, view1IndexName1, view1IndexName2, view2Name, view2IndexName1);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, view1Name, view1IndexName1);
+                IndexToolIT.runIndexTool(true, false, schemaName, view1Name, view1IndexName2);
+                IndexToolIT.runIndexTool(true, false, schemaName, view2Name, view2IndexName1);
+
+                SingleCellIndexIT.dumpTable("_IDX_" + SchemaUtil.getTableName(schemaName, tableName));
+                validateIndex(conn, fullView1IndexName1, true, expected);
+                validateIndex(conn2, fullView1IndexName2, true, expected);
+
+                // Add row and check
+                populateView(conn, fullView2Name, 20, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullView2IndexName1 + " WHERE \":PK1\"='PK20'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2Name  + " WHERE PK1='PK20'");
+                assertEquals(true, rs.next());
+
+                // Drop row and check
+                conn.createStatement().execute("DELETE from " + fullView2Name + " WHERE PK1='PK20'");
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullView2IndexName1 + " WHERE \":PK1\"='PK20'");
+                assertEquals(false, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2Name  + " WHERE PK1='PK20'");
+                assertEquals(false, rs.next());
+
+                conn2.createStatement().execute("DROP VIEW " + fullView2Name);
+                // check that this view is dropped but the other is there
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView1Name);
+                assertEquals(true, rs.next());
+                boolean failed = true;
+                try {
+                    rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2Name);
+                    rs.next();
+                    failed = false;
+                } catch (SQLException e){
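+                    // Expected: the view was dropped, so querying it should throw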
+
+                }
+                assertEquals(true, failed);
+
+                // check that first index is there but second index is dropped
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullView1IndexName1);
+                assertEquals(true, rs.next());
+                failed = true;
+                try {
+                    rs = conn.createStatement().executeQuery("SELECT * FROM " + fullView2IndexName1);
+                    rs.next();
+                    failed = false;
+                } catch (SQLException e){
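+                    // Expected: the view index was dropped together with the view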
+
+                }
+                assertEquals(true, failed);
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithViews_runScrutiny() throws Exception {
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                String schemaName = "S_" + generateUniqueName();
+                String tableName = "TBL_" + generateUniqueName();
+                String view1Name = "VW1_" + generateUniqueName();
+                String view1IndexName1 = "VW1IDX1_" + generateUniqueName();
+                String view1IndexName2 = "VW1IDX2_" + generateUniqueName();
+                String view2Name = "VW2_" + generateUniqueName();
+                String view2IndexName1 = "VW2IDX1_" + generateUniqueName();
+
+                testWithViewsAndIndex_BaseTableChange(conn, conn2,schemaName, tableName, view1Name,
+                        view1IndexName1, view1IndexName2, view2Name, view2IndexName1);
+
+                List<Job> completedJobs = IndexScrutinyToolBaseIT.runScrutinyTool(
+                        schemaName, view2Name, view2IndexName1, 1L,
+                        IndexScrutinyTool.SourceTable.DATA_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+                if (createChildAfterTransform) {
+                    assertEquals(3, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(2, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                } else {
+                    // Since we didn't build the index, we expect 1 missing index row; the other 2 invalid rows come from the other view index
+                    assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(3, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                }
+
+            }
+        }
+    }
+
+    private void createTable(Connection conn, String tableName) throws Exception {
+        String createTableSql = "CREATE TABLE " + tableName + " (PK1 VARCHAR NOT NULL, V1 VARCHAR, V2 INTEGER, V3 INTEGER "
+                + "CONSTRAINT NAME_PK PRIMARY KEY(PK1)) COLUMN_ENCODED_BYTES=0 " + dataTableDdl;
+        LOGGER.debug(createTableSql);
+        conn.createStatement().execute(createTableSql);
+    }
+
+    private void createIndexOnTable(Connection conn, String tableName, String indexName)
+            throws SQLException {
+        String createIndexSql = "CREATE INDEX " + indexName + " ON " + tableName + " (V1) INCLUDE (V2, V3) ";
+        LOGGER.debug(createIndexSql);
+        conn.createStatement().execute(createIndexSql);
+    }
+
+    private void dropIndex(Connection conn, String tableName, String indexName)
+            throws SQLException {
+        String sql = "DROP INDEX " + indexName + " ON " + tableName ;
+        conn.createStatement().execute(sql);
+    }
+
+    private HashMap<String, ArrayList<String>> populateTable(Connection conn, String tableName, int startnum, int numOfRows)
+            throws SQLException {
+        String upsert = "UPSERT INTO " + tableName + " (PK1, V1,  V2, V3) VALUES (?,?,?,?)";
+        PreparedStatement upsertStmt = conn.prepareStatement(upsert);
+        HashMap<String, ArrayList<String>> result = new HashMap<>();
+        for (int i=startnum; i < startnum + numOfRows; i++) {
+            ArrayList<String> row = new ArrayList<>();
+            upsertStmt.setString(1, "PK" + i);
+            row.add("PK" + i);
+            upsertStmt.setString(2, "V1" + i);
+            row.add("V1" + i);
+            upsertStmt.setInt(3, i);
+            row.add(String.valueOf(i));
+            upsertStmt.setInt(4, i + 1);
+            row.add(String.valueOf(i + 1));
+            upsertStmt.executeUpdate();
+            result.put("PK" + i, row);
+        }
+        return result;
+    }
+
+    private HashMap<String, ArrayList<String>> populateView(Connection conn, String viewName, int startNum, int numOfRows) throws SQLException {
+        String upsert = "UPSERT INTO " + viewName + " (PK1, V1,  V2, V3, VIEW_COL1, VIEW_COL2) VALUES (?,?,?,?,?,?)";
+        PreparedStatement upsertStmt = conn.prepareStatement(upsert);
+        HashMap<String, ArrayList<String>> result = new HashMap<>();
+        for (int i=startNum; i < startNum + numOfRows; i++) {
+            ArrayList<String> row = new ArrayList<>();
+            upsertStmt.setString(1, "PK"+i);
+            row.add("PK"+i);
+            upsertStmt.setString(2, "V1"+i);
+            row.add("V1"+i);
+            upsertStmt.setInt(3, i);
+            row.add(String.valueOf(i));
+            upsertStmt.setInt(4, i+1);
+            row.add(String.valueOf(i+1));
+            upsertStmt.setString(5, "VIEW_COL1_"+i);
+            row.add("VIEW_COL1_"+i);
+            upsertStmt.setString(6, "VIEW_COL2_"+i);
+            row.add("VIEW_COL2_"+i);
+            upsertStmt.executeUpdate();
+            result.put("PK"+i, row);
+        }
+        return result;
+    }
+
+    private void createViewAndIndex(Connection conn, String schemaName, String tableName, String viewName, String viewIndexName)
+            throws SQLException {
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullViewName = SchemaUtil.getTableName(schemaName, viewName);
+        String view1DDL = "CREATE VIEW IF NOT EXISTS " + fullViewName
+                + " ( VIEW_COL1 VARCHAR, VIEW_COL2 VARCHAR) AS SELECT * FROM " + fullTableName;
+        conn.createStatement().execute(view1DDL);
+        String indexDDL = "CREATE INDEX IF NOT EXISTS " + viewIndexName + " ON " + fullViewName + " (V1) include (V2, V3, VIEW_COL2) ";
+        conn.createStatement().execute(indexDDL);
+        conn.commit();
+    }
+
+    private void validateTable(Connection connection, String tableName) throws SQLException {
+        String selectTable = "SELECT PK1, V1, V2, V3 FROM " + tableName + " ORDER BY PK1 DESC";
+        ResultSet rs = connection.createStatement().executeQuery(selectTable);
+        assertTrue(rs.next());
+        assertEquals("PK3", rs.getString(1));
+        assertEquals("V13", rs.getString(2));
+        assertEquals(3, rs.getInt(3));
+        assertEquals(4, rs.getInt(4));
+        assertTrue(rs.next());
+        assertEquals("PK2", rs.getString(1));
+        assertEquals("V12", rs.getString(2));
+        assertEquals(2, rs.getInt(3));
+        assertEquals(3, rs.getInt(4));
+        assertTrue(rs.next());
+        assertEquals("PK1", rs.getString(1));
+        assertEquals("V11", rs.getString(2));
+        assertEquals(1, rs.getInt(3));
+        assertEquals(2, rs.getInt(4));
+    }
+
+    private void validateIndex(Connection connection, String tableName, boolean isViewIndex, HashMap<String, ArrayList<String>> expected) throws SQLException {
+        String selectTable = "SELECT * FROM " + tableName;
+        ResultSet rs = connection.createStatement().executeQuery(selectTable);
+        int cnt = 0;
+        while (rs.next()) {
+            String pk = rs.getString(2);
+            assertTrue(expected.containsKey(pk));
+            ArrayList<String> row = expected.get(pk);
+            assertEquals(row.get(1), rs.getString(1));
+            assertEquals(row.get(2), rs.getString(3));
+            assertEquals(row.get(3), rs.getString(4));
+            if (isViewIndex) {
+                assertEquals(row.get(5), rs.getString(5));
+            }
+            cnt++;
+        }
+        assertEquals(expected.size(), cnt);
+    }
+
+    public static void renameAndDropPhysicalTable(Connection conn, String tenantId, String schema, String tableName, String physicalName) throws Exception {

Review comment:
       Should this be broken into two methods, e.g. "assignNewPhysicalTable" and "dropOldPhysicalTable"?
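
   A minimal sketch of how that split might look, assuming the rename boils down to
   upserting the new PHYSICAL_TABLE_NAME into SYSTEM.CATALOG (the exact column list is
   an assumption) and that the old HBase table can be dropped once nothing points at it:

       // Hypothetical helpers; SYSTEM.CATALOG columns and null handling are assumptions.
       public static void assignNewPhysicalTable(Connection conn, String tenantId, String schema,
               String tableName, String physicalName) throws Exception {
           try (Statement stmt = conn.createStatement()) {
               // tenantId is expected to be the literal "NULL" for global tables,
               // matching how the tests call renameAndDropPhysicalTable.
               stmt.execute("UPSERT INTO SYSTEM.CATALOG (TENANT_ID, TABLE_SCHEM, TABLE_NAME,"
                       + " PHYSICAL_TABLE_NAME) VALUES (" + tenantId + ", '" + schema + "', '"
                       + tableName + "', '" + physicalName + "')");
           }
           // Invalidate cached metadata so the new physical name is picked up.
           conn.unwrap(PhoenixConnection.class).getQueryServices().clearCache();
       }

       public static void dropOldPhysicalTable(Connection conn, String oldPhysicalName)
               throws Exception {
           try (Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
               TableName tn = TableName.valueOf(oldPhysicalName);
               admin.disableTable(tn);
               admin.deleteTable(tn);
           }
       }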




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r606318514



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -583,6 +583,7 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
     PName getName();
     PName getSchemaName();
     PName getTableName();
+    PName getPhysicalTableNameColumnInSyscat();

Review comment:
       getPhysicalName already returns the physical table name column from SYSCAT when appropriate.
   
   We still need two methods: one represents the actual column in SYSCAT, while the other is inferred (as it is for views).
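
   A minimal sketch of the intended distinction on PTable (javadoc wording is mine, not
   from the patch):

       // Raw value of the PHYSICAL_TABLE_NAME column in SYSCAT; null when the table
       // has never been re-pointed at a different HBase table.
       PName getPhysicalTableNameColumnInSyscat();

       // Resolved physical name: for views it is inferred from the base table;
       // otherwise it falls back to the logical name when no override is stored.
       PName getPhysicalName();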




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605277282



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1586,7 +1635,12 @@ public synchronized boolean getIndexMaintainers(ImmutableBytesWritable ptr, Phoe
 
     @Override
     public PName getPhysicalName() {
+        // For views, physicalName is base table name. There might be a case where the Phoenix table is pointing to another physical table.

Review comment:
       It should say "physical". Will update the comment.





-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-802118175


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   6m 56s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  15m 15s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 21s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 15s |  phoenix-core in 4.x-PHOENIX-6247 has 944 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   3m 22s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 23s |  phoenix-core: The patch generated 274 new + 14431 unchanged - 167 fixed = 14705 total (was 14598)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 31s |  phoenix-core generated 1 new + 944 unchanged - 0 fixed = 945 total (was 944)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m 46s |  phoenix-core in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 10s |  The patch generated 1 ASF License warnings.  |
   |  |   |  51m 47s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   | Failed junit tests | phoenix.index.IndexScrutinyMapperTest |
   |   | phoenix.compile.TenantSpecificViewIndexCompileTest |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/1/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux eae744c58d36 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / e6d7d0b |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/1/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/1/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/1/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/1/testReport/ |
   | asflicense | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/1/artifact/yetus-general-check/output/patch-asflicense-problems.txt |
   | Max. process+thread count | 453 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/1/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r598007293



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.*;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.printResultSet;
+import static org.junit.Assert.*;
+
+@RunWith(Parameterized.class)
+public class LogicalTableNameIT extends ParallelStatsDisabledIT  {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3000));
+        //When we run all tests together we are using the global cluster (driver),
+        //so to make drop work we need to re-register the driver with the DROP_METADATA_ATTRIB property
+        destroyDriver();

Review comment:
       Rather than doing this manually, should this be a NeedsOwnCluster test?
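
   A sketch of that alternative, assuming the standard JUnit category mechanism and the
   Phoenix NeedsOwnMiniClusterTest marker apply here:

       import org.junit.experimental.categories.Category;

       // Hypothetical: let the framework give this IT its own mini cluster instead of
       // destroying and re-registering the driver by hand in doSetup().
       @Category(NeedsOwnMiniClusterTest.class)
       @RunWith(Parameterized.class)
       public class LogicalTableNameIT extends BaseTest {
           // test body unchanged
       }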

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
##########
@@ -443,7 +445,52 @@ public void testImportOneIndexTable(String tableName, boolean localIndex) throws
             checkIndexTableIsVerified(indexTableName);
         }
     }
-    
+
+    @Test
+    public void testImportWithDifferentPhysicalName() throws Exception {
+        String tableName = generateUniqueName();
+        String indexTableName = String.format("%s_IDX", tableName);
+        Statement stmt = conn.createStatement();
+        stmt.execute("CREATE TABLE " + tableName + "(ID INTEGER NOT NULL PRIMARY KEY, "
+                + "FIRST_NAME VARCHAR, LAST_NAME VARCHAR)");
+        String ddl = "CREATE  INDEX " + indexTableName + " ON " + tableName + "(FIRST_NAME ASC)";
+        stmt.execute(ddl);
+        String newTableName = LogicalTableNameIT.NEW_TABLE_PREFIX + generateUniqueName();
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(tableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(tableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(newTableName));
+        }
+        LogicalTableNameIT.renameAndDropPhysicalTable(conn, null, null, tableName, newTableName);
+
+        FileSystem fs = FileSystem.get(getUtility().getConfiguration());
+        FSDataOutputStream outputStream = fs.create(new Path("/tmp/input4.csv"));
+        PrintWriter printWriter = new PrintWriter(outputStream);
+        printWriter.println("1,FirstName 1,LastName 1");
+        printWriter.println("2,FirstName 2,LastName 2");
+        printWriter.close();
+
+        CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
+        csvBulkLoadTool.setConf(getUtility().getConfiguration());
+        int exitCode = csvBulkLoadTool.run(
+                new String[] { "--input", "/tmp/input4.csv", "--table", tableName,
+                        "--index-table", indexTableName, "--zookeeper", zkQuorum });
+        assertEquals(0, exitCode);
+
+        ResultSet rs = stmt.executeQuery("SELECT * FROM " + tableName);
+        assertFalse(rs.next());

Review comment:
       I don't understand this check -- how can SELECT * FROM FOO return no rows but SELECT * FROM FOO WHERE <some predicate> return a row? 

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.*;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.printResultSet;
+import static org.junit.Assert.*;
+
+@RunWith(Parameterized.class)
+public class LogicalTableNameIT extends ParallelStatsDisabledIT  {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3000));
+        //When we run all tests together we are using the global cluster (driver),
+        //so to make drop work we need to re-register the driver with the DROP_METADATA_ATTRIB property
+        destroyDriver();
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        //Registering the real Phoenix driver to have multiple ConnectionQueryServices created across connections
+        //so that metadata changes don't get propagated across connections
+        DriverManager.registerDriver(PhoenixDriver.INSTANCE);
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force the real driver to be used, as the test driver doesn't handle creating
+        // more than one ConnectionQueryServices
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(true, rs.next());
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, NEW_TABLE_PREFIX+tableName));

Review comment:
       Do we need the dumpTable call when we're not actively debugging this?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -1209,7 +1231,26 @@ private PTable getTable(RegionScanner scanner, long clientTimeStamp, long tableT
                 if (linkType == LinkType.INDEX_TABLE) {
                     addIndexToTable(tenantId, schemaName, famName, tableName, clientTimeStamp, indexes, clientVersion);
                 } else if (linkType == LinkType.PHYSICAL_TABLE) {
-                    physicalTables.add(famName);
+                    // famName contains the logical name of the parent table. We need to get the actual physical name of the table
+                    PTable parentTable = getTable(null, schemaName.getBytes(), famName.getBytes(), clientTimeStamp, clientVersion);
+                    if (parentTable == null && indexType != IndexType.LOCAL) {
+                        // parentTable is not in the cache. Since famName is only logical name, we need to find the physical table.
+                        try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+                            parentTable = PhoenixRuntime.getTableNoCache(connection, famName.getString());
+                        } catch (TableNotFoundException e) {
+

Review comment:
       Please include a comment explaining why it's OK to swallow the exception here. (I assume it's because of the next if clause?)
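       For example, something along these lines (just a sketch, assuming the null
       check below is indeed what makes swallowing safe here):

           } catch (TableNotFoundException e) {
               // Safe to swallow: if the parent table cannot be resolved here,
               // parentTable stays null, and the null check below falls back to
               // treating famName as the physical table name.
           }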

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;

Review comment:
       It would be good to use the shaded Guava here (required for 5.x).
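       For example (relocation prefix as used by phoenix-thirdparty; adjust if the
       shaded package name differs on this branch):

           // shaded Guava instead of com.google.common:
           import org.apache.phoenix.thirdparty.com.google.common.base.Joiner;
           import org.apache.phoenix.thirdparty.com.google.common.collect.Lists;
           import org.apache.phoenix.thirdparty.com.google.common.collect.Maps;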

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexScrutinyMapper.java
##########
@@ -343,18 +343,17 @@ private int getTableTtl() throws SQLException, IOException {
 
     @VisibleForTesting
     public static String getSourceTableName(PTable pSourceTable, boolean isNamespaceEnabled) {
-        String sourcePhysicalName = pSourceTable.getPhysicalName().getString();
+        String sourcePhysicalName = SchemaUtil.getTableNameFromFullName(pSourceTable.getPhysicalName().getString());
         String physicalTable, table, schema;
         if (pSourceTable.getType() == PTableType.VIEW
-                || MetaDataUtil.isViewIndex(sourcePhysicalName)) {
+                || MetaDataUtil.isViewIndex(pSourceTable.getPhysicalName().getString())) {
             // in case of view and view index ptable, getPhysicalName() returns hbase tables
             // i.e. without _IDX_ and with _IDX_ respectively
-            physicalTable = sourcePhysicalName;
+            physicalTable = pSourceTable.getPhysicalName().getString();

Review comment:
       nit: physicalTable can be assigned up at line 346 and then referenced, rather than calling pSourceTable.getPhysicalName().getString() several times.
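       Roughly like this (a sketch of the suggestion, reusing the names from the
       hunk above):

           String physicalTable = pSourceTable.getPhysicalName().getString();
           String sourcePhysicalName = SchemaUtil.getTableNameFromFullName(physicalTable);
           String table, schema;
           if (pSourceTable.getType() == PTableType.VIEW
                   || MetaDataUtil.isViewIndex(physicalTable)) {
               // in case of view and view index ptable, getPhysicalName() already
               // returns the hbase table name, so no reassignment is needed
           }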

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
##########
@@ -2985,7 +2989,8 @@ private void setSyncedPropertiesForTableIndexes(PTable table,
             tableAndIndexDescriptorMappings.put(origIndexDescriptor, newIndexDescriptor);
         }
         // Also keep properties for the physical view index table in sync
-        String viewIndexName = MetaDataUtil.getViewIndexPhysicalName(table.getPhysicalName().getString());
+        //String viewIndexName = MetaDataUtil.getViewIndexPhysicalName(table.getPhysicalName().getString());

Review comment:
       nit: remove commented line

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1583,10 +1622,15 @@ public synchronized boolean getIndexMaintainers(ImmutableBytesWritable ptr, Phoe
         ptr.set(indexMaintainersPtr.get(), indexMaintainersPtr.getOffset(), indexMaintainersPtr.getLength());
         return indexMaintainersPtr.getLength() > 0;
     }
-
+    private static final Logger LOGGER = LoggerFactory.getLogger(PTableImpl.class);
     @Override
     public PName getPhysicalName() {
+        // For views, physicalName is base table name. There might be a case where the Phoenix table is pointing to another physical table.
+        // In that case, pysicalTableName is not null

Review comment:
       spelling: physicalTableName

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.*;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.printResultSet;
+import static org.junit.Assert.*;
+
+@RunWith(Parameterized.class)
+public class LogicalTableNameIT extends ParallelStatsDisabledIT  {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3000));
+        // When we run all tests together we are using the global cluster (driver),
+        // so to make drop work we need to re-register the driver with the DROP_METADATA_ATTRIB property
+        destroyDriver();
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        // Registering the real Phoenix driver so that multiple ConnectionQueryServices are created
+        // across connections and metadata changes don't get propagated between them
+        DriverManager.registerDriver(PhoenixDriver.INSTANCE);
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(true, rs.next());

Review comment:
       Ditto

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java
##########
@@ -21,18 +21,24 @@
 
 import java.sql.SQLException;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
 
+import com.google.common.base.Strings;

Review comment:
       nit: use shaded version

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
##########
@@ -44,6 +44,7 @@
 
 import javax.annotation.Nullable;
 
+import org.apache.hadoop.hbase.util.Bytes;

Review comment:
       nit: unnecessary import?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.*;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.printResultSet;
+import static org.junit.Assert.*;
+
+@RunWith(Parameterized.class)
+public class LogicalTableNameIT extends ParallelStatsDisabledIT  {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3000));
+        // When we run all tests together we are using the global cluster (driver),
+        // so to make drop work we need to re-register the driver with the DROP_METADATA_ATTRIB property
+        destroyDriver();
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        // Registering the real Phoenix driver so that multiple ConnectionQueryServices are created
+        // across connections and metadata changes don't get propagated between them
+        DriverManager.registerDriver(PhoenixDriver.INSTANCE);
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));

Review comment:
       if createChildAfterTransform is false, won't doing this through the HBase client API make the index inconsistent?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -2152,10 +2196,10 @@ public void createTable(RpcController controller, CreateTableRequest request,
         }
     }
 
-    private long getViewIndexSequenceValue(PhoenixConnection connection, String tenantIdStr, PTable parentTable, PName physicalName) throws SQLException {
+    private long getViewIndexSequenceValue(PhoenixConnection connection, String tenantIdStr, PTable parentTable) throws SQLException {
         int nSequenceSaltBuckets = connection.getQueryServices().getSequenceSaltBuckets();
-
-        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+        // parentTable is parent of the view index which is the view.
+        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, SchemaUtil.getTableName(parentTable.getSchemaName(), parentTable.getTableName()) ,   //parentTable.getParentLogicalName(),

Review comment:
       nit: remove commented out code

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.*;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.printResultSet;
+import static org.junit.Assert.*;
+
+@RunWith(Parameterized.class)
+public class LogicalTableNameIT extends ParallelStatsDisabledIT  {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3000));
+        // When we run all tests together we are using the global cluster (driver),
+        // so to make drop work we need to re-register the driver with the DROP_METADATA_ATTRIB property
+        destroyDriver();
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        // Registering the real Phoenix driver so that multiple ConnectionQueryServices are created
+        // across connections and metadata changes don't get propagated between them
+        DriverManager.registerDriver(PhoenixDriver.INSTANCE);
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);

Review comment:
       Consider using TestUtil.dumpTable. Also, do we need to output to stdout in the final version of this? Can probably be cut. 
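       If it is worth keeping for debugging, it could at least be guarded, e.g.
       (a hypothetical guard, assuming the dump helper keeps its current String
       signature):

           if (LOGGER.isDebugEnabled()) {
               SingleCellIndexIT.dumpTable(fullNewTableName);
           }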

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.*;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.printResultSet;
+import static org.junit.Assert.*;
+
+@RunWith(Parameterized.class)
+public class LogicalTableNameIT extends ParallelStatsDisabledIT  {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3000));
+        // When we run all tests together we are using the global cluster (driver),
+        // so to make drop work we need to re-register the driver with the DROP_METADATA_ATTRIB property
+        destroyDriver();
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        // Registering the real Phoenix driver so that multiple ConnectionQueryServices are created
+        // across connections and metadata changes don't get propagated between them
+        DriverManager.registerDriver(PhoenixDriver.INSTANCE);
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());

Review comment:
       We're only checking that a row exists here, not validating its values, right?
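       Something like this would tighten the assertion (a sketch; the expected
       "V110" value and the quoted index column names are my assumptions based on
       the populateTable pattern above):

           ResultSet idxRs = conn2.createStatement().executeQuery(
                   "SELECT \":PK1\", \"0:V1\" FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
           assertTrue(idxRs.next());
           assertEquals("PK10", idxRs.getString(1));
           assertEquals("V110", idxRs.getString(2));
           assertFalse(idxRs.next());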




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r599254115



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
##########
@@ -1331,7 +1331,7 @@ private static void syncGlobalIndexesForTable(ConnectionQueryServices cqs, PTabl
      */
     private static void syncViewIndexTable(ConnectionQueryServices cqs, PTable baseTable, HColumnDescriptor defaultColFam,
             Map<String, Object> syncedProps, Set<HTableDescriptor> tableDescsToSync) throws SQLException {
-        String viewIndexName = MetaDataUtil.getViewIndexPhysicalName(baseTable.getPhysicalName().getString());
+        String viewIndexName = MetaDataUtil.getViewIndexPhysicalName(baseTable.getName().getString());

Review comment:
       getName() here returns schema + logical table name (the full logical name).




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-803826804


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m 58s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 27s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 18s |  phoenix-core in 4.x-PHOENIX-6247 has 944 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   3m 25s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 32s |  phoenix-core: The patch generated 282 new + 14838 unchanged - 158 fixed = 15120 total (was 14996)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  javadoc  |   0m 47s |  phoenix-core generated 1 new + 92 unchanged - 0 fixed = 93 total (was 92)  |
   | -1 :x: |  spotbugs  |   3m 29s |  phoenix-core generated 2 new + 944 unchanged - 0 fixed = 946 total (was 944)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 235m 19s |  phoenix-core in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 38s |  The patch generated 1 ASF License warnings.  |
   |  |   | 281m 15s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2249] |
   | Failed junit tests | phoenix.tx.FlappingTransactionIT |
   |   | phoenix.end2end.PermissionsCacheIT |
   |   | phoenix.end2end.ParameterizedIndexUpgradeToolIT |
   |   | phoenix.end2end.index.GlobalImmutableTxIndexIT |
   |   | phoenix.end2end.SystemCatalogRollbackEnabledIT |
   |   | phoenix.end2end.PermissionNSEnabledWithCustomAccessControllerIT |
   |   | phoenix.end2end.ConcurrentUpsertsWithoutIndexedColsIT |
   |   | phoenix.end2end.LogicalTableNameIT |
   |   | phoenix.end2end.index.LocalImmutableTxIndexIT |
   |   | phoenix.tx.TransactionIT |
   |   | phoenix.end2end.PermissionNSEnabledIT |
   |   | phoenix.end2end.BackwardCompatibilityIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/5/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 22b707d4b7b9 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / e6d7d0b |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/5/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/5/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/5/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/5/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/5/testReport/ |
   | asflicense | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/5/artifact/yetus-general-check/output/patch-asflicense-problems.txt |
   | Max. process+thread count | 4900 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/5/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-815416422


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  15m 59s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 34s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 36s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   8m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 11s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 51s |  phoenix-core: The patch generated 286 new + 13755 unchanged - 151 fixed = 14041 total (was 13906)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 47s |  phoenix-core generated 5 new + 941 unchanged - 0 fixed = 946 total (was 941)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 222m  0s |  phoenix-core in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 11s |  The patch does not generate ASF License warnings.  |
   |  |   | 270m 19s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback): String.getBytes()  At MetaDataEndpointImpl.java:[line 1929] |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1236] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 325] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2250] |
   |  |  Call to String.equals(org.apache.phoenix.schema.PName) in org.apache.phoenix.schema.MetaDataClient.evaluateStmtProperties(MetaDataClient$MetaProperties, MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:[line 5417] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/16/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 163c19ae2705 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / ee4ce9f |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/16/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/16/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/16/testReport/ |
   | Max. process+thread count | 4868 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/16/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-803229296


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 55s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  16m 20s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 56s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 32s |  phoenix-core in 4.x-PHOENIX-6247 has 944 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   3m 39s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   8m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 17s |  the patch passed  |
   | -1 :x: |  checkstyle  |   5m 23s |  phoenix-core: The patch generated 206 new + 14898 unchanged - 98 fixed = 15104 total (was 14996)  |
   | +1 :green_heart: |  prototool  |   0m  0s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 39s |  phoenix-core generated 1 new + 944 unchanged - 0 fixed = 945 total (was 944)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 283m 10s |  phoenix-core in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 38s |  The patch generated 1 ASF License warnings.  |
   |  |   | 334m 59s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   | Failed junit tests | TEST-[CastAndCoerceIT_6] |
   |   | phoenix.end2end.TenantSpecificViewIndexIT |
   |   | phoenix.end2end.index.MutableIndexSplitForwardScanIT |
   |   | phoenix.end2end.DistinctPrefixFilterIT |
   |   | phoenix.end2end.CostBasedDecisionIT |
   |   | phoenix.end2end.IndexBuildTimestampIT |
   |   | phoenix.end2end.PermissionsCacheIT |
   |   | phoenix.end2end.PermissionNSEnabledIT |
   |   | phoenix.end2end.index.IndexMetadataIT |
   |   | phoenix.end2end.index.MutableIndexExtendedIT |
   |   | TEST-[InQueryIT_6] |
   |   | phoenix.end2end.index.ViewIndexIT |
   |   | phoenix.end2end.index.LocalImmutableNonTxIndexIT |
   |   | phoenix.schema.stats.TxStatsCollectorIT |
   |   | phoenix.schema.stats.NonTxStatsCollectorIT |
   |   | phoenix.end2end.index.DropColumnIT |
   |   | phoenix.end2end.ParameterizedIndexUpgradeToolIT |
   |   | TEST-[RangeScanIT_6] |
   |   | phoenix.end2end.ExplainPlanWithStatsEnabledIT |
   |   | phoenix.end2end.index.LocalMutableNonTxIndexIT |
   |   | phoenix.end2end.OnDuplicateKeyIT |
   |   | phoenix.schema.stats.NamespaceDisabledStatsCollectorIT |
   |   | TEST-[PointInTimeScanQueryIT_13,columnEncoded=true] |
   |   | phoenix.end2end.index.GlobalMutableTxIndexIT |
   |   | phoenix.end2end.BackwardCompatibilityIT |
   |   | phoenix.end2end.LogicalTableNameIT |
   |   | phoenix.end2end.UpgradeNamespaceIT |
   |   | phoenix.end2end.index.GlobalIndexOptimizationIT |
   |   | phoenix.end2end.PhoenixTTLToolIT |
   |   | phoenix.tx.TxCheckpointIT |
   |   | phoenix.end2end.DeleteIT |
   |   | TEST-[QueryIT_6] |
   |   | phoenix.end2end.CreateTableIT |
   |   | TEST-[GroupByIT_6] |
   |   | phoenix.end2end.TenantSpecificViewIndexSaltedIT |
   |   | phoenix.end2end.CsvBulkLoadToolIT |
   |   | phoenix.end2end.UpsertWithSCNIT |
   |   | phoenix.end2end.index.LocalIndexIT |
   |   | TEST-[PointInTimeQueryIT_13,columnEncoded=true] |
   |   | TEST-[CaseStatementIT_6] |
   |   | TEST-[UngroupedIT_6] |
   |   | phoenix.end2end.index.IndexWithTableSchemaChangeIT |
   |   | phoenix.end2end.PermissionNSEnabledWithCustomAccessControllerIT |
   |   | TEST-[IntArithmeticIT_6] |
   |   | phoenix.end2end.UserDefinedFunctionsIT |
   |   | phoenix.end2end.index.ShortViewIndexIdIT |
   |   | phoenix.end2end.index.MutableIndexSplitReverseScanIT |
   |   | phoenix.end2end.index.IndexUsageIT |
   |   | TEST-[NullIT_13,columnEncoded=true] |
   |   | TEST-[AggregateQueryIT_6] |
   |   | phoenix.end2end.PropertiesInSyncIT |
   |   | phoenix.rpc.UpdateCacheIT |
   |   | phoenix.end2end.RegexBulkLoadToolIT |
   |   | phoenix.schema.stats.NamespaceEnabledStatsCollectorIT |
   |   | phoenix.end2end.SystemCatalogRollbackEnabledIT |
   |   | phoenix.end2end.index.LocalMutableTxIndexIT |
   |   | phoenix.end2end.DefaultColumnValueIT |
   |   | phoenix.end2end.index.LocalImmutableTxIndexIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/3/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 42608faefd50 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / e6d7d0b |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/3/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/3/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/3/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/3/testReport/ |
   | asflicense | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/3/artifact/yetus-general-check/output/patch-asflicense-problems.txt |
   | Max. process+thread count | 4536 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/3/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-804408410


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  15m 17s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 43s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 16s |  phoenix-core in 4.x-PHOENIX-6247 has 944 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   3m 23s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 44s |  phoenix-core: The patch generated 219 new + 15727 unchanged - 177 fixed = 15946 total (was 15904)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  javadoc  |   0m 48s |  phoenix-core generated 1 new + 92 unchanged - 0 fixed = 93 total (was 92)  |
   | -1 :x: |  spotbugs  |   3m 44s |  phoenix-core generated 2 new + 943 unchanged - 1 fixed = 945 total (was 944)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 267m 55s |  phoenix-core in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 39s |  The patch generated 1 ASF License warnings.  |
   |  |   | 315m  7s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2249] |
   | Failed junit tests | phoenix.end2end.index.txn.RollbackIT |
   |   | phoenix.end2end.PermissionsCacheIT |
   |   | phoenix.end2end.index.ImmutableIndexExtendedIT |
   |   | phoenix.end2end.ParameterizedIndexUpgradeToolIT |
   |   | phoenix.end2end.SystemCatalogRollbackEnabledIT |
   |   | phoenix.end2end.PermissionNSEnabledWithCustomAccessControllerIT |
   |   | phoenix.end2end.LogicalTableNameIT |
   |   | phoenix.end2end.PermissionNSEnabledIT |
   |   | phoenix.end2end.BackwardCompatibilityIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/6/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 360c98be0118 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / e6d7d0b |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/6/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | javadoc | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/6/artifact/yetus-general-check/output/diff-javadoc-javadoc-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/6/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/6/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/6/testReport/ |
   | asflicense | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/6/artifact/yetus-general-check/output/patch-asflicense-problems.txt |
   | Max. process+thread count | 5073 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/6/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605282359



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1559,7 +1586,29 @@ public PName getParentTableName() {
     @Override
     public PName getParentName() {
         // a view on a table will not have a parent name but will have a physical table name (which is the parent)
-        return (type!=PTableType.VIEW || parentName!=null) ? parentName : getPhysicalName();
+        // Update to above comment: we will return logical name of view parent base table

Review comment:
       This is the existing logic. If this is a child view of another view, the parent is the immediate parent view. If it is a view directly on top of a table, the parent is the logical name of the table. Removed that comment.
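       For reference, a minimal sketch of that resolution order, lifted from the pre-patch line in the diff above (the surrounding fields and PTableType are assumed to be the existing PTableImpl members):

           // Non-views, and child views that record their immediate parent,
           // return parentName directly; a view created directly on a table
           // falls back to the physical name of its base table.
           @Override
           public PName getParentName() {
               return (type != PTableType.VIEW || parentName != null)
                       ? parentName
                       : getPhysicalName();
           }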




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-806362874


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m 57s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m  6s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 16s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 14s |  phoenix-core: The patch generated 232 new + 13806 unchanged - 99 fixed = 14038 total (was 13905)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 30s |  phoenix-core generated 5 new + 941 unchanged - 0 fixed = 946 total (was 941)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 192m 18s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate ASF License warnings.  |
   |  |   | 237m 36s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback): String.getBytes()  At MetaDataEndpointImpl.java:[line 1929] |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1237] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2250] |
   |  |  Call to String.equals(org.apache.phoenix.schema.PName) in org.apache.phoenix.schema.MetaDataClient.evaluateStmtProperties(MetaDataClient$MetaProperties, MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:[line 5419] |
   | Failed junit tests | phoenix.end2end.index.txn.RollbackIT |
   |   | phoenix.end2end.UpsertSelectIT |
   |   | phoenix.end2end.LogicalTableNameIT |
   |   | phoenix.tx.TransactionIT |
   |   | phoenix.end2end.UpsertWithSCNIT |
   |   | phoenix.end2end.ViewUtilIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/9/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 7122dd8b0d65 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / 410f738 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/9/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/9/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/9/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/9/testReport/ |
   | Max. process+thread count | 4945 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/9/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-807813612


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   7m 52s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  16m 39s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 31s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 53s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   8m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 25s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 48s |  phoenix-core: The patch generated 232 new + 13807 unchanged - 99 fixed = 14039 total (was 13906)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  the patch passed  |
   | -1 :x: |  spotbugs  |   4m 16s |  phoenix-core generated 5 new + 941 unchanged - 0 fixed = 946 total (was 941)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 207m 37s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate ASF License warnings.  |
   |  |   | 266m 28s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback): String.getBytes()  At MetaDataEndpointImpl.java:[line 1929] |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1237] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2250] |
   |  |  Call to String.equals(org.apache.phoenix.schema.PName) in org.apache.phoenix.schema.MetaDataClient.evaluateStmtProperties(MetaDataClient$MetaProperties, MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:[line 5419] |
   | Failed junit tests | phoenix.end2end.index.PartialIndexRebuilderIT |
   |   | phoenix.end2end.index.GlobalMutableTxIndexIT |
   |   | phoenix.end2end.LogicalTableNameIT |
   |   | phoenix.tx.ParameterizedTransactionIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/10/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux eb20035fa452 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / ee4ce9f |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/10/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/10/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/10/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/10/testReport/ |
   | Max. process+thread count | 4898 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/10/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-813662831


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   6m 49s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m 38s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m  4s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 14s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 14s |  phoenix-core: The patch generated 285 new + 13755 unchanged - 151 fixed = 14040 total (was 13906)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 30s |  phoenix-core generated 5 new + 941 unchanged - 0 fixed = 946 total (was 941)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 199m 49s |  phoenix-core in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 10s |  The patch does not generate ASF License warnings.  |
   |  |   | 248m 11s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback): String.getBytes()  At MetaDataEndpointImpl.java:[line 1929] |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1237] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2250] |
   |  |  Call to String.equals(org.apache.phoenix.schema.PName) in org.apache.phoenix.schema.MetaDataClient.evaluateStmtProperties(MetaDataClient$MetaProperties, MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:[line 5419] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/14/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 7e7a1ac1aa72 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / ee4ce9f |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/14/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/14/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/14/testReport/ |
   | Max. process+thread count | 5139 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/14/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605821511



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -583,6 +583,7 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
     PName getName();
     PName getSchemaName();
     PName getTableName();
+    PName getPhysicalTableName();

Review comment:
       Renamed getPhysicalTableName to getPhysicalTableNameColumnInSyscat. getPhysicalName is inferred; I could have renamed it as well, but that would touch a lot of files.
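       To make the split concrete, a minimal sketch of how the two accessors sit side by side on PTable (the Javadoc wording here is illustrative, not copied from the patch):

           public interface PTable {
               PName getName();
               PName getSchemaName();
               PName getTableName();

               // Existing accessor: the inferred physical name, historically
               // what views resolve against. Left untouched to avoid churn.
               PName getPhysicalName();

               // New accessor: backed by the PHYSICAL_TABLE_NAME column in
               // SYSTEM.CATALOG, only populated when the logical and
               // physical names diverge.
               PName getPhysicalTableNameColumnInSyscat();
           }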




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r599908594



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.*;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.printResultSet;
+import static org.junit.Assert.*;
+
+@RunWith(Parameterized.class)
+public class LogicalTableNameIT extends ParallelStatsDisabledIT  {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3000));
+        //When we run all tests together we are using global cluster(driver)
+        //so to make drop work we need to re register driver with DROP_METADATA_ATTRIB property
+        destroyDriver();
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        //Registering real Phoenix driver to have multiple ConnectionQueryServices created across connections
+        //so that metadata changes doesn't get propagated across connections
+        DriverManager.registerDriver(PhoenixDriver.INSTANCE);
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);

Review comment:
       Yeah, we don't need it in the end. This one is simpler to call since it just takes the table name as a string; the other one requires an HTableInterface.
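       For context, a quick sketch of why the String variant is the convenient one at this call site (the HTableInterface-taking variant is an assumption based on this thread, not copied from SingleCellIndexIT):

           // String variant: callable with just the table name.
           SingleCellIndexIT.dumpTable(fullNewTableName);

           // The alternative would first need a table handle, e.g.:
           try (HTableInterface htable = conn.unwrap(PhoenixConnection.class)
                   .getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
               // ... pass htable to a dump variant taking HTableInterface
           }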




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605273186



##########
File path: phoenix-core/src/main/protobuf/PTable.proto
##########
@@ -111,6 +111,9 @@ message PTable {
   optional bool viewModifiedPhoenixTTL = 44;
   optional int64 lastDDLTimestamp = 45;
   optional bool changeDetectionEnabled = 46;
+  optional bytes physicalTableNameBytes = 47;
+  optional bool isModifiable = 48;

Review comment:
       It will be set later as part of another change. Since I was already changing this part of the proto, I added this field as well.
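       As an aside, a minimal sketch of how the deserializing side typically guards new optional proto2 fields so that messages from older peers stay readable (the createFromProto context is an assumption; hasXxx()/getXxx() are the standard protobuf-generated accessors):

           // Older peers never set the new fields, so treat them as absent
           // rather than assuming a value.
           PName physicalTableName = null;
           if (table.hasPhysicalTableNameBytes()) {
               physicalTableName = PNameFactory.newName(
                       table.getPhysicalTableNameBytes().toByteArray());
           }
           boolean isModifiable = table.hasIsModifiable() && table.getIsModifiable();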




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605277820



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
##########
@@ -1008,6 +1030,11 @@ public final PName getTableName() {
         return tableName;
     }
 
+    @Override
+    public final PName getPhysicalTableName() {

Review comment:
       It is a bit complicated. getPhysicalName is mostly used for views. getPhysicalTableName is mostly for non-views. Let me see what I can do.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r606319436



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -728,6 +729,13 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
      * (use @getPhysicalTableName for this case) 
      */
     PName getParentTableName();
+
+    /**
+     * @return the logical name of the parent. In case of the view index, it is the _IDX_+logical name of base table

Review comment:
       I am not sure I understand your comment. The _IDX_ prefix plus the logical name of the base table is where the view index is stored.
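       To make the naming convention concrete, an illustrative helper following the Javadoc above (Phoenix's MetaDataUtil holds the real logic; this sketch ignores namespace-mapped names):

           // All view indexes of a base table share one physical table: the
           // base table's logical name with an "_IDX_" prefix,
           // e.g. S.T -> _IDX_S.T.
           static String viewIndexPhysicalName(String baseTableLogicalName) {
               return "_IDX_" + baseTableLogicalName;
           }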




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-803537584


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  15m 35s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m 28s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 15s |  phoenix-core in 4.x-PHOENIX-6247 has 944 extant spotbugs warnings.  |
   | -0 :warning: |  patch  |   3m 22s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  6s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 30s |  phoenix-core: The patch generated 217 new + 14898 unchanged - 98 fixed = 15115 total (was 14996)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 31s |  phoenix-core generated 2 new + 944 unchanged - 0 fixed = 946 total (was 944)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 233m 36s |  phoenix-core in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 37s |  The patch generated 1 ASF License warnings.  |
   |  |   | 280m  6s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2249] |
   | Failed junit tests | phoenix.end2end.RegexBulkLoadToolIT |
   |   | phoenix.end2end.index.txn.RollbackIT |
   |   | phoenix.end2end.DistinctPrefixFilterIT |
   |   | TEST-[RangeScanIT_6] |
   |   | phoenix.end2end.PermissionsCacheIT |
   |   | phoenix.end2end.IndexExtendedIT |
   |   | TEST-[QueryIT_6] |
   |   | TEST-[IntArithmeticIT_6] |
   |   | TEST-[NullIT_13,columnEncoded=true] |
   |   | phoenix.end2end.PermissionNSDisabledWithCustomAccessControllerIT |
   |   | phoenix.end2end.index.MutableIndexFailureIT |
   |   | phoenix.util.IndexScrutinyIT |
   |   | phoenix.schema.stats.NonTxStatsCollectorIT |
   |   | phoenix.end2end.join.HashJoinLocalIndexIT |
   |   | phoenix.end2end.IndexToolForDeleteBeforeRebuildIT |
   |   | phoenix.end2end.ExplainPlanWithStatsEnabledIT |
   |   | phoenix.rpc.UpdateCacheIT |
   |   | phoenix.end2end.index.LocalMutableTxIndexIT |
   |   | phoenix.end2end.DefaultColumnValueIT |
   |   | phoenix.end2end.index.LocalIndexIT |
   |   | phoenix.coprocessor.StatisticsCollectionRunTrackerIT |
   |   | TEST-[GroupByIT_6] |
   |   | phoenix.end2end.index.MutableIndexExtendedIT |
   |   | phoenix.end2end.index.MutableIndexSplitForwardScanIT |
   |   | phoenix.schema.stats.TxStatsCollectorIT |
   |   | phoenix.end2end.ParameterizedIndexUpgradeToolIT |
   |   | phoenix.end2end.SystemCatalogRollbackEnabledIT |
   |   | phoenix.end2end.IndexToolIT |
   |   | phoenix.end2end.PermissionNSEnabledWithCustomAccessControllerIT |
   |   | phoenix.end2end.UserDefinedFunctionsIT |
   |   | phoenix.end2end.PermissionNSDisabledIT |
   |   | phoenix.end2end.IndexBuildTimestampIT |
   |   | phoenix.end2end.FlappingLocalIndexIT |
   |   | phoenix.end2end.index.IndexWithTableSchemaChangeIT |
   |   | phoenix.end2end.index.DropColumnIT |
   |   | phoenix.end2end.CsvBulkLoadToolIT |
   |   | phoenix.end2end.index.LocalMutableNonTxIndexIT |
   |   | phoenix.end2end.DeleteIT |
   |   | phoenix.end2end.join.SubqueryIT |
   |   | TEST-[AggregateQueryIT_6] |
   |   | phoenix.end2end.index.MutableIndexIT |
   |   | phoenix.end2end.LocalIndexSplitMergeIT |
   |   | phoenix.end2end.join.SortMergeJoinLocalIndexIT |
   |   | phoenix.end2end.CostBasedDecisionIT |
   |   | TEST-[UngroupedIT_6] |
   |   | phoenix.end2end.index.IndexUsageIT |
   |   | phoenix.tx.TxCheckpointIT |
   |   | phoenix.end2end.LogicalTableNameIT |
   |   | phoenix.end2end.index.txn.TxWriteFailureIT |
   |   | phoenix.end2end.index.LocalImmutableTxIndexIT |
   |   | phoenix.end2end.index.ImmutableIndexIT |
   |   | phoenix.end2end.index.ShortViewIndexIdIT |
   |   | phoenix.end2end.index.MutableIndexFailureWithNamespaceIT |
   |   | phoenix.end2end.IndexScrutinyToolIT |
   |   | phoenix.end2end.index.IndexMaintenanceIT |
   |   | phoenix.end2end.UpgradeNamespaceIT |
   |   | phoenix.schema.stats.NamespaceEnabledStatsCollectorIT |
   |   | TEST-[PointInTimeQueryIT_13,columnEncoded=true] |
   |   | phoenix.end2end.CreateTableIT |
   |   | phoenix.end2end.join.SubqueryUsingSortMergeJoinIT |
   |   | phoenix.end2end.EmptyColumnIT |
   |   | phoenix.end2end.ViewIT |
   |   | phoenix.end2end.index.MutableIndexSplitReverseScanIT |
   |   | phoenix.end2end.index.IndexMetadataIT |
   |   | phoenix.end2end.PermissionNSEnabledIT |
   |   | TEST-[InQueryIT_6] |
   |   | phoenix.end2end.PropertiesInSyncIT |
   |   | phoenix.end2end.index.txn.MutableRollbackIT |
   |   | phoenix.end2end.UpsertWithSCNIT |
   |   | TEST-[CastAndCoerceIT_6] |
   |   | phoenix.end2end.BackwardCompatibilityIT |
   |   | phoenix.schema.stats.NamespaceDisabledStatsCollectorIT |
   |   | phoenix.end2end.index.LocalImmutableNonTxIndexIT |
   |   | phoenix.end2end.index.GlobalIndexOptimizationIT |
   |   | TEST-[PointInTimeScanQueryIT_13,columnEncoded=true] |
   |   | phoenix.end2end.OnDuplicateKeyIT |
   |   | TEST-[CaseStatementIT_6] |
   |   | phoenix.end2end.AlterAddCascadeIndexIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/4/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux c038c6209dbc 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / e6d7d0b |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/4/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/4/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/4/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/4/testReport/ |
   | asflicense | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/4/artifact/yetus-general-check/output/patch-asflicense-problems.txt |
   | Max. process+thread count | 4611 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/4/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-813778620


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  14m 51s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m  2s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 17s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 14s |  phoenix-core: The patch generated 231 new + 13807 unchanged - 99 fixed = 14038 total (was 13906)  |
   | +1 :green_heart: |  prototool  |   0m  2s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 31s |  phoenix-core generated 5 new + 941 unchanged - 0 fixed = 946 total (was 941)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 197m  6s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  The patch does not generate ASF License warnings.  |
   |  |   | 242m 16s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback): String.getBytes()  At MetaDataEndpointImpl.java:[line 1928] |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1236] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 325] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2250] |
   |  |  Call to String.equals(org.apache.phoenix.schema.PName) in org.apache.phoenix.schema.MetaDataClient.evaluateStmtProperties(MetaDataClient$MetaProperties, MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:MetaDataClient$MetaPropertiesEvaluated, PTable, String, String)  At MetaDataClient.java:[line 5417] |
   | Failed junit tests | phoenix.end2end.UpsertSelectIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/15/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 0d516a1d73d3 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / ee4ce9f |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/15/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/15/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/15/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/15/testReport/ |
   | Max. process+thread count | 5123 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/15/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] gokceni commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-815337082


   Merged to 4.x-PHOENIX-6247 branch. Thanks for the review!





[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-804704242


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  15m 16s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m  9s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 18s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 13s |  phoenix-core: The patch generated 201 new + 13820 unchanged - 85 fixed = 14021 total (was 13905)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 37s |  phoenix-core generated 3 new + 941 unchanged - 0 fixed = 944 total (was 941)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 199m  2s |  phoenix-core in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 37s |  The patch generated 1 ASF License warnings.  |
   |  |   | 244m 46s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1237] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2249] |
   | Failed junit tests | phoenix.end2end.PermissionsCacheIT |
   |   | phoenix.end2end.PermissionNSEnabledWithCustomAccessControllerIT |
   |   | phoenix.end2end.CsvBulkLoadToolIT |
   |   | phoenix.end2end.LogicalTableNameIT |
   |   | phoenix.end2end.PermissionNSEnabledIT |
   |   | phoenix.end2end.BackwardCompatibilityIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/7/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 65ec3e96eabf 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / 410f738 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/7/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/7/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/7/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/7/testReport/ |
   | asflicense | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/7/artifact/yetus-general-check/output/patch-asflicense-problems.txt |
   | Max. process+thread count | 4672 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/7/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r599021967



##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
##########
@@ -443,7 +445,52 @@ public void testImportOneIndexTable(String tableName, boolean localIndex) throws
             checkIndexTableIsVerified(indexTableName);
         }
     }
-    
+
+    @Test
+    public void testImportWithDifferentPhysicalName() throws Exception {
+        String tableName = generateUniqueName();
+        String indexTableName = String.format("%s_IDX", tableName);
+        Statement stmt = conn.createStatement();
+        stmt.execute("CREATE TABLE " + tableName + "(ID INTEGER NOT NULL PRIMARY KEY, "
+                + "FIRST_NAME VARCHAR, LAST_NAME VARCHAR)");
+        String ddl = "CREATE  INDEX " + indexTableName + " ON " + tableName + "(FIRST_NAME ASC)";
+        stmt.execute(ddl);
+        String newTableName = LogicalTableNameIT.NEW_TABLE_PREFIX + generateUniqueName();
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(tableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(tableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(newTableName));
+        }
+        LogicalTableNameIT.renameAndDropPhysicalTable(conn, null, null, tableName, newTableName);
+
+        FileSystem fs = FileSystem.get(getUtility().getConfiguration());
+        FSDataOutputStream outputStream = fs.create(new Path("/tmp/input4.csv"));
+        PrintWriter printWriter = new PrintWriter(outputStream);
+        printWriter.println("1,FirstName 1,LastName 1");
+        printWriter.println("2,FirstName 2,LastName 2");
+        printWriter.close();
+
+        CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
+        csvBulkLoadTool.setConf(getUtility().getConfiguration());
+        int
+                exitCode =
+                csvBulkLoadTool
+                        .run(new String[] { "--input", "/tmp/input4.csv", "--table", tableName,
+                                "--index-table", indexTableName, "--zookeeper", zkQuorum });
+        assertEquals(0, exitCode);
+
+        ResultSet rs = stmt.executeQuery("SELECT * FROM " + tableName);
+        assertFalse(rs.next());

Review comment:
       The import tool is actually importing into the index table without touching the data table; that is why the data table is empty but the index table is not. I think it will make more sense if I remove the --index-table param, import into the data table, and check that the data table whose physical name was changed gets populated. Then, to check the index, I can change the physical index table name too and use the --index-table param.
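
       For illustration, a rough sketch of that revised flow, reusing the helpers and variables from the test quoted above (the COUNT query and the expected row count are my assumptions, not part of the patch):

           // Import into the data table only (no --index-table), then verify that
           // the table whose physical name was renamed actually received the rows.
           CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
           csvBulkLoadTool.setConf(getUtility().getConfiguration());
           int exitCode = csvBulkLoadTool.run(new String[] {
                   "--input", "/tmp/input4.csv",
                   "--table", tableName,        // logical name; physical table was renamed
                   "--zookeeper", zkQuorum });
           assertEquals(0, exitCode);

           // The data table should now be populated despite the physical rename.
           ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + tableName);
           assertTrue(rs.next());
           assertEquals(2, rs.getInt(1));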







[GitHub] [phoenix] gjacoby126 commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r609039671



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
##########
@@ -896,6 +898,9 @@ private boolean addColumnsAndIndexesFromAncestors(MetaDataMutationResult result,
             MetaDataMutationResult parentResult = updateCache(connection.getTenantId(), parentSchemaName, parentTableName,
                     false, resolvedTimestamp);
             PTable parentTable = parentResult.getTable();
+            if (LOGGER.isDebugEnabled()) {

Review comment:
       Do we need this logging? I wonder if it should be at TRACE level to avoid generating a lot of log output, since I think this is a pretty frequently used function.
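
       If the logging stays, the usual slf4j pattern is to guard it; a minimal sketch (the message text here is illustrative, not from the patch):

           if (LOGGER.isTraceEnabled()) {
               // Build the message only when TRACE is actually enabled,
               // so frequently used paths don't pay for string concatenation.
               LOGGER.trace("Resolved parent table " + parentTable.getName().getString()
                       + " for " + parentSchemaName + "." + parentTableName);
           }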

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -728,6 +728,14 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
      * (use @getPhysicalTableName for this case) 
      */
     PName getParentTableName();
+
+    /**
+     * @return the logical full name of the parent. In case of the view index, it is the _IDX_+logical name of base table

Review comment:
       Should be "the logical full name of the base table", not the parent (which may not be the base table, in the case of a child view)
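
       A possible rewording of the javadoc, for example:

           /**
            * @return the logical full name of the base table (not the immediate
            * parent, which for a child view may itself be a view). For a view
            * index, it is "_IDX_" + the logical name of the base table.
            */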

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,819 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.curator.shaded.com.google.common.base.Joiner;
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.curator.shaded.com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.regionserver.ScanInfoUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+
+@RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
+public class LogicalTableNameIT extends BaseTest {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static synchronized void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(ScanInfoUtil.PHOENIX_MAX_LOOKBACK_AGE_CONF_KEY, Integer.toString(60*60*1000)); // An hour
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {

Review comment:
       tiny nit: line length
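
       For reference, the declaration wrapped to fit the limit (purely cosmetic, no behavior change):

           private HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(
                   Connection conn, Connection conn2, String schemaName, String tableName,
                   String indexName) throws Exception {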

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,819 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.curator.shaded.com.google.common.base.Joiner;
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.curator.shaded.com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.regionserver.ScanInfoUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+
+@RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
+public class LogicalTableNameIT extends BaseTest {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static synchronized void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(ScanInfoUtil.PHOENIX_MAX_LOOKBACK_AGE_CONF_KEY, Integer.toString(60*60*1000)); // An hour
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(true, rs.next());
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, NEW_TABLE_PREFIX+tableName));
+                // Drop row and check
+                conn.createStatement().execute("DELETE from " + fullTableName + " WHERE PK1='PK10'");
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(false, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(false, rs.next());
+
+                conn2.createStatement().execute("DROP TABLE " + fullTableName);
+                // check that the physical data table is dropped
+                Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+                assertEquals(false, admin.tableExists(TableName.valueOf(SchemaUtil.getTableName(schemaName,NEW_TABLE_PREFIX + tableName))));
+
+                // check that index is dropped
+                assertEquals(false, admin.tableExists(TableName.valueOf(fullIndexName)));
+
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, indexName));
+                List<Job>
+                        completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, tableName, indexName, 1L,
+                                IndexScrutinyTool.SourceTable.DATA_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+                if (createChildAfterTransform) {
+                    assertEquals(3, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(0, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                } else {
+                    // Since we didn't build the index, we expect 1 missing index row
+                    assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(1, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                }
+            }
+        }
+    }
+
+    private  HashMap<String, ArrayList<String>> test_IndexTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName, byte[] verifiedBytes) throws Exception {
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        conn.setAutoCommit(true);
+        createTable(conn, fullTableName);
+        createIndexOnTable(conn, fullTableName, indexName);
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table for index and add 1 more row
+        String newTableName = "NEW_IDXTBL_" + generateUniqueName();
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices()
+                .getAdmin()) {
+            String snapshotName = new StringBuilder(indexName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullIndexName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put
+                        put =
+                        new Put(ByteUtil.concat(Bytes.toBytes("V13"), QueryConstants.SEPARATOR_BYTE_ARRAY, Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        verifiedBytes);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT * FROM " + fullIndexName;
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, indexName, newTableName);
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalIndexTableName() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = test_IndexTableChange(conn, conn2, schemaName, tableName, indexName, IndexRegionObserver.VERIFIED_BYTES);
+
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // create another index and drop the first index and validate the second one
+                String indexName2 = "IDX2_" + generateUniqueName();
+                String fullIndexName2 = SchemaUtil.getTableName(schemaName, indexName2);
+                if (createChildAfterTransform) {
+                    createIndexOnTable(conn2, fullTableName, indexName2);
+                }
+                dropIndex(conn2, fullTableName, indexName);
+                if (!createChildAfterTransform) {
+                    createIndexOnTable(conn2, fullTableName, indexName2);
+                }
+                // The new index doesn't have the new row
+                expected.remove("PK3");
+                validateIndex(conn, fullIndexName2, false, expected);
+                validateIndex(conn2, fullIndexName2, false, expected);
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalIndexTableName_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                test_IndexTableChange(conn, conn2, schemaName, tableName, indexName, IndexRegionObserver.VERIFIED_BYTES);
+                List<Job>
+                        completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, tableName, indexName, 1L,
+                                IndexScrutinyTool.SourceTable.INDEX_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+
+                // Since we didn't build the index, we expect 1 missing index row
+                assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                assertEquals(1, counters.findCounter(INVALID_ROW_COUNT).getValue());
+
+                // Try with unverified bytes
+                String tableName2 = "TBL_" + generateUniqueName();
+                String indexName2 = "IDX_" + generateUniqueName();
+                test_IndexTableChange(conn, conn2, schemaName, tableName2, indexName2, IndexRegionObserver.UNVERIFIED_BYTES);
+
+                completedJobs =
+                        IndexScrutinyToolBaseIT.runScrutinyTool(schemaName, tableName2, indexName2, 1L,
+                                IndexScrutinyTool.SourceTable.INDEX_TABLE_SOURCE);
+
+                job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                counters = job.getCounters();
+
+                // Since we didn't build the index, we expect 1 missing index row
+                assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                assertEquals(0, counters.findCounter(INVALID_ROW_COUNT).getValue());
+
+            }
+        }
+    }
+
+    private HashMap<String, ArrayList<String>> testWithViewsAndIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String viewName1, String v1_indexName1, String v1_indexName2, String viewName2, String v2_indexName1) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullViewName1 = SchemaUtil.getTableName(schemaName, viewName1);
+        String fullViewName2 = SchemaUtil.getTableName(schemaName, viewName2);
+        createTable(conn, fullTableName);
+        HashMap<String, ArrayList<String>> expected = new HashMap<>();
+        if (!createChildAfterTransform) {
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName1);
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName2);
+            createViewAndIndex(conn, schemaName, tableName, viewName2, v2_indexName1);
+            expected.putAll(populateView(conn, fullViewName1, 1,2));
+            expected.putAll(populateView(conn, fullViewName2, 10,2));
+        }
+
+        // Create another hbase table and add 1 more row
+        String newTableName = "NEW_TBL_" + generateUniqueName();
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices()
+                .getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"),
+                        Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("VIEW_COL1"),
+                        Bytes.toBytes("VIEW_COL1_3"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("VIEW_COL2"),
+                        Bytes.toBytes("VIEW_COL2_3"));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4", "VIEW_COL1_3", "VIEW_COL2_3"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        if (!createChildAfterTransform) {
+            assertTrue(rs1.next());
+        }
+
+        // Rename table to point to hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        conn.unwrap(PhoenixConnection.class).getQueryServices().clearCache();
+        if (createChildAfterTransform) {
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName1);
+            createViewAndIndex(conn, schemaName, tableName, viewName1, v1_indexName2);
+            createViewAndIndex(conn, schemaName, tableName, viewName2, v2_indexName1);
+            expected.putAll(populateView(conn, fullViewName1, 1,2));
+            expected.putAll(populateView(conn, fullViewName2, 10,2));
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+
+    private PhoenixTestBuilder.SchemaBuilder createGlobalViewAndTenantView() throws Exception {
+        int numOfRows = 5;
+        PhoenixTestBuilder.SchemaBuilder.TableOptions tableOptions = PhoenixTestBuilder.SchemaBuilder.TableOptions.withDefaults();
+        tableOptions.getTableColumns().clear();
+        tableOptions.getTableColumnTypes().clear();
+        tableOptions.setTableProps(" MULTI_TENANT=true, COLUMN_ENCODED_BYTES=0 "+this.dataTableDdl);
+
+        PhoenixTestBuilder.SchemaBuilder.GlobalViewOptions globalViewOptions = PhoenixTestBuilder.SchemaBuilder.GlobalViewOptions.withDefaults();
+
+        PhoenixTestBuilder.SchemaBuilder.GlobalViewIndexOptions globalViewIndexOptions =
+                PhoenixTestBuilder.SchemaBuilder.GlobalViewIndexOptions.withDefaults();
+        globalViewIndexOptions.setLocal(false);
+
+        PhoenixTestBuilder.SchemaBuilder.TenantViewOptions tenantViewOptions = new PhoenixTestBuilder.SchemaBuilder.TenantViewOptions();
+        tenantViewOptions.setTenantViewColumns(asList("ZID", "COL7", "COL8", "COL9"));
+        tenantViewOptions.setTenantViewColumnTypes(asList("CHAR(15)", "VARCHAR", "VARCHAR", "VARCHAR"));
+
+        PhoenixTestBuilder.SchemaBuilder.OtherOptions testCaseWhenAllCFMatchAndAllDefault = new PhoenixTestBuilder.SchemaBuilder.OtherOptions();
+        testCaseWhenAllCFMatchAndAllDefault.setTestName("testCaseWhenAllCFMatchAndAllDefault");
+        testCaseWhenAllCFMatchAndAllDefault
+                .setTableCFs(Lists.newArrayList((String) null, null, null));
+        testCaseWhenAllCFMatchAndAllDefault
+                .setGlobalViewCFs(Lists.newArrayList((String) null, null, null));
+        testCaseWhenAllCFMatchAndAllDefault
+                .setTenantViewCFs(Lists.newArrayList((String) null, null, null, null));
+
+        // Define the test schema.
+        PhoenixTestBuilder.SchemaBuilder schemaBuilder = null;
+        if (!createChildAfterTransform) {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withTableOptions(tableOptions).withGlobalViewOptions(globalViewOptions)
+                    .withGlobalViewIndexOptions(globalViewIndexOptions)
+                    .withTenantViewOptions(tenantViewOptions)
+                    .withOtherOptions(testCaseWhenAllCFMatchAndAllDefault).build();
+        }  else {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withTableOptions(tableOptions).build();
+        }
+
+        PTable table = schemaBuilder.getBaseTable();
+        String schemaName = table.getSchemaName().getString();
+        String tableName = table.getTableName().getString();
+        String newBaseTableName = "NEW_TBL_" + tableName;
+        String fullNewBaseTableName = SchemaUtil.getTableName(schemaName, newBaseTableName);
+        String fullTableName = table.getName().getString();
+
+        try (Connection conn = getConnection(props)) {
+
+            try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+                String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+                admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+                admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewBaseTableName));
+            }
+
+            renameAndDropPhysicalTable(conn, null, schemaName, tableName, newBaseTableName);
+        }
+
+        // TODO: this still creates a new table.
+        if (createChildAfterTransform) {
+            schemaBuilder = new PhoenixTestBuilder.SchemaBuilder(getUrl());
+            schemaBuilder.withDataOptions(schemaBuilder.getDataOptions())
+                    .withTableOptions(tableOptions)
+                    .withGlobalViewOptions(globalViewOptions)
+                    .withGlobalViewIndexOptions(globalViewIndexOptions)
+                    .withTenantViewOptions(tenantViewOptions)
+                    .withOtherOptions(testCaseWhenAllCFMatchAndAllDefault).build();
+        }
+
+        // Define the test data.
+        PhoenixTestBuilder.DataSupplier dataSupplier = new PhoenixTestBuilder.DataSupplier() {
+
+            @Override public List<Object> getValues(int rowIndex) {
+                Random rnd = new Random();
+                String id = String.format(ViewTTLIT.ID_FMT, rowIndex);
+                String zid = String.format(ViewTTLIT.ZID_FMT, rowIndex);
+                String col4 = String.format(ViewTTLIT.COL4_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col5 = String.format(ViewTTLIT.COL5_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col6 = String.format(ViewTTLIT.COL6_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col7 = String.format(ViewTTLIT.COL7_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col8 = String.format(ViewTTLIT.COL8_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+                String col9 = String.format(ViewTTLIT.COL9_FMT, rowIndex + rnd.nextInt(MAX_ROWS));
+
+                return Lists.newArrayList(
+                        new Object[] { id, zid, col4, col5, col6, col7, col8, col9 });
+            }
+        };
+
+        // Create a test data reader/writer for the above schema.
+        PhoenixTestBuilder.DataWriter dataWriter = new PhoenixTestBuilder.BasicDataWriter();
+        List<String> columns =
+                Lists.newArrayList("ID", "ZID", "COL4", "COL5", "COL6", "COL7", "COL8", "COL9");
+        List<String> rowKeyColumns = Lists.newArrayList("ID", "ZID");
+
+        String tenantConnectUrl =
+                getUrl() + ';' + TENANT_ID_ATTRIB + '=' + schemaBuilder.getDataOptions().getTenantId();
+
+        try (Connection tenantConnection = DriverManager.getConnection(tenantConnectUrl)) {
+            tenantConnection.setAutoCommit(true);
+            dataWriter.setConnection(tenantConnection);
+            dataWriter.setDataSupplier(dataSupplier);
+            dataWriter.setUpsertColumns(columns);
+            dataWriter.setRowKeyColumns(rowKeyColumns);
+            dataWriter.setTargetEntity(schemaBuilder.getEntityTenantViewName());
+            dataWriter.upsertRows(1, numOfRows);
+            com.google.common.collect.Table<String, String, Object> upsertedData = dataWriter.getDataTable();;
+
+            PhoenixTestBuilder.DataReader dataReader = new PhoenixTestBuilder.BasicDataReader();
+            dataReader.setValidationColumns(columns);
+            dataReader.setRowKeyColumns(rowKeyColumns);
+            dataReader.setDML(String.format("SELECT %s from %s", Joiner.on(",").join(columns),
+                    schemaBuilder.getEntityTenantViewName()));
+            dataReader.setTargetEntity(schemaBuilder.getEntityTenantViewName());
+            dataReader.setConnection(tenantConnection);
+            dataReader.readRows();
+            com.google.common.collect.Table<String, String, Object> fetchedData
+                    = dataReader.getDataTable();
+            assertNotNull("Fetched data should not be null", fetchedData);
+            ViewTTLIT.verifyRowsBeforeTTLExpiration(upsertedData, fetchedData);
+
+        }
+        return schemaBuilder;
+    }
+
+    @Test
+    public void testWith2LevelViewsBaseTablePhysicalNameChange() throws Exception {
+        // TODO: use namespace in one of the cases
+        PhoenixTestBuilder.SchemaBuilder schemaBuilder = createGlobalViewAndTenantView();
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithViews() throws Exception {
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                String schemaName = "S_" + generateUniqueName();
+                String tableName = "TBL_" + generateUniqueName();
+                String view1Name = "VW1_" + generateUniqueName();
+                String view1IndexName1 = "VW1IDX1_" + generateUniqueName();
+                String view1IndexName2 = "VW1IDX2_" + generateUniqueName();
+                String fullView1IndexName1 = SchemaUtil.getTableName(schemaName, view1IndexName1);
+                String fullView1IndexName2 =  SchemaUtil.getTableName(schemaName, view1IndexName2);
+                String view2Name = "VW2_" + generateUniqueName();
+                String view2IndexName1 = "VW2IDX1_" + generateUniqueName();
+                String fullView1Name = SchemaUtil.getTableName(schemaName, view1Name);
+                String fullView2Name = SchemaUtil.getTableName(schemaName, view2Name);
+                String fullView2IndexName1 =  SchemaUtil.getTableName(schemaName, view2IndexName1);
+
+                HashMap<String, ArrayList<String>> expected = testWithViewsAndIndex_BaseTableChange(conn, conn2, schemaName, tableName, view1Name, view1IndexName1, view1IndexName2, view2Name, view2IndexName1);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, view1Name, view1IndexName1);
+                IndexToolIT.runIndexTool(true, false, schemaName, view1Name, view1IndexName2);
+                IndexToolIT.runIndexTool(true, false, schemaName, view2Name, view2IndexName1);
+
+                SingleCellIndexIT.dumpTable("_IDX_" + SchemaUtil.getTableName(schemaName, tableName));

Review comment:
       please remove dumpTable

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,819 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.curator.shaded.com.google.common.base.Joiner;
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.curator.shaded.com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.regionserver.ScanInfoUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+
+@RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
+public class LogicalTableNameIT extends BaseTest {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static synchronized void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(ScanInfoUtil.PHOENIX_MAX_LOOKBACK_AGE_CONF_KEY, Integer.toString(60*60*1000)); // An hour
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(true, rs.next());
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, NEW_TABLE_PREFIX+tableName));
+                // Drop row and check
+                conn.createStatement().execute("DELETE from " + fullTableName + " WHERE PK1='PK10'");
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(false, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(false, rs.next());
+
+                conn2.createStatement().execute("DROP TABLE " + fullTableName);
+                // check that the physical data table is dropped
+                Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+                assertEquals(false, admin.tableExists(TableName.valueOf(SchemaUtil.getTableName(schemaName,NEW_TABLE_PREFIX + tableName))));
+
+                // check that index is dropped
+                assertEquals(false, admin.tableExists(TableName.valueOf(fullIndexName)));
+
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, indexName));
+                List<Job> completedJobs = IndexScrutinyToolBaseIT.runScrutinyTool(
+                        schemaName, tableName, indexName, 1L,
+                        IndexScrutinyTool.SourceTable.DATA_TABLE_SOURCE);
+
+                Job job = completedJobs.get(0);
+                assertTrue(job.isSuccessful());
+
+                Counters counters = job.getCounters();
+                if (createChildAfterTransform) {
+                    assertEquals(3, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(0, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                } else {
+                    // Since we didn't build the index, we expect 1 missing index row
+                    assertEquals(2, counters.findCounter(VALID_ROW_COUNT).getValue());
+                    assertEquals(1, counters.findCounter(INVALID_ROW_COUNT).getValue());
+                }
+            }
+        }
+    }
+
+    private  HashMap<String, ArrayList<String>> test_IndexTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName, byte[] verifiedBytes) throws Exception {
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+        conn.setAutoCommit(true);
+        createTable(conn, fullTableName);
+        createIndexOnTable(conn, fullTableName, indexName);
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table for index and add 1 more row
+        String newTableName = "NEW_IDXTBL_" + generateUniqueName();
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices()
+                .getAdmin()) {
+            String snapshotName = new StringBuilder(indexName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullIndexName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("V13"),
+                        QueryConstants.SEPARATOR_BYTE_ARRAY, Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        verifiedBytes);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("0:V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT * FROM " + fullIndexName;
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, indexName, newTableName);
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);

Review comment:
       Can you remove the dumpTable call here as well?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,793 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.*;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.*;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.printResultSet;
+import static org.junit.Assert.*;
+
+@RunWith(Parameterized.class)
+public class LogicalTableNameIT extends ParallelStatsDisabledIT  {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3000));
+        // When we run all tests together we are using the global cluster (driver),
+        // so to make drop work we need to re-register the driver with the DROP_METADATA_ATTRIB property
+        destroyDriver();
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        // Registering the real Phoenix driver so that multiple ConnectionQueryServices are created across connections
+        // and metadata changes don't get propagated across connections
+        DriverManager.registerDriver(PhoenixDriver.INSTANCE);
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);

Review comment:
       Before merging, could you please remove this dumpTable call?

##########
File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/LogicalTableNameIT.java
##########
@@ -0,0 +1,819 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import org.apache.curator.shaded.com.google.common.base.Joiner;
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.curator.shaded.com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.regionserver.ScanInfoUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.phoenix.end2end.index.SingleCellIndexIT;
+import org.apache.phoenix.hbase.index.IndexRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.mapreduce.index.IndexScrutinyTool;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.PhoenixTestBuilder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+
+import static java.util.Arrays.asList;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.INVALID_ROW_COUNT;
+import static org.apache.phoenix.mapreduce.index.PhoenixScrutinyJobCounters.VALID_ROW_COUNT;
+import static org.apache.phoenix.query.PhoenixTestBuilder.DDLDefaults.MAX_ROWS;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+
+@RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
+public class LogicalTableNameIT extends BaseTest {
+    private static final Logger LOGGER = LoggerFactory.getLogger(LogicalTableNameIT.class);
+
+    private final boolean createChildAfterTransform;
+    private final boolean immutable;
+    private String dataTableDdl;
+    public static final String NEW_TABLE_PREFIX = "NEW_TBL_";
+    private Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+
+    @BeforeClass
+    public static synchronized void doSetup() throws Exception {
+        Map<String, String> props = Maps.newConcurrentMap();
+        props.put(QueryServices.DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+        props.put(ScanInfoUtil.PHOENIX_MAX_LOOKBACK_AGE_CONF_KEY, Integer.toString(60*60*1000)); // An hour
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+    }
+
+    public LogicalTableNameIT(boolean createChildAfterTransform, boolean immutable)  {
+        this.createChildAfterTransform = createChildAfterTransform;
+        this.immutable = immutable;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (immutable) {
+            optionBuilder.append(" ,IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, IMMUTABLE_ROWS=true");
+        }
+        this.dataTableDdl = optionBuilder.toString();
+    }
+
+    @Parameterized.Parameters(
+            name = "createChildAfterTransform={0}, immutable={1}")
+    public static synchronized Collection<Object[]> data() {
+        List<Object[]> list = Lists.newArrayListWithExpectedSize(2);
+        boolean[] Booleans = new boolean[] { false, true };
+        for (boolean immutable : Booleans) {
+            for (boolean createAfter : Booleans) {
+                list.add(new Object[] { createAfter, immutable });
+            }
+        }
+
+        return list;
+    }
+
+    private Connection getConnection(Properties props) throws Exception {
+        props.setProperty(QueryServices.DROP_METADATA_ATTRIB, Boolean.toString(true));
+        // Force real driver to be used as the test one doesn't handle creating
+        // more than one ConnectionQueryService
+        props.setProperty(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, StringUtil.EMPTY_STRING);
+        // Create new ConnectionQueryServices so that we can set DROP_METADATA_ATTRIB
+        String url = QueryUtil.getConnectionUrl(props, config, "PRINCIPAL");
+        return DriverManager.getConnection(url, props);
+    }
+
+    private  HashMap<String, ArrayList<String>> testBaseTableWithIndex_BaseTableChange(Connection conn, Connection conn2, String schemaName, String tableName, String indexName) throws Exception {
+        conn.setAutoCommit(true);
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        createTable(conn, fullTableName);
+        if (!createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+        HashMap<String, ArrayList<String>> expected = populateTable(conn, fullTableName, 1, 2);
+
+        // Create another hbase table and add 1 more row
+        String newTableName =  NEW_TABLE_PREFIX + tableName;
+        String fullNewTableName = SchemaUtil.getTableName(schemaName, newTableName);
+        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+            String snapshotName = new StringBuilder(fullTableName).append("-Snapshot").toString();
+            admin.snapshot(snapshotName, TableName.valueOf(fullTableName));
+            admin.cloneSnapshot(Bytes.toBytes(snapshotName), Bytes.toBytes(fullNewTableName));
+
+            try (HTableInterface htable = conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(Bytes.toBytes(fullNewTableName))) {
+                Put put = new Put(ByteUtil.concat(Bytes.toBytes("PK3")));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
+                        QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V1"), Bytes.toBytes("V13"));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V2"),
+                        PInteger.INSTANCE.toBytes(3));
+                put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, Bytes.toBytes("V3"),
+                        PInteger.INSTANCE.toBytes(4));
+                htable.put(put);
+                expected.put("PK3", Lists.newArrayList("PK3", "V13", "3", "4"));
+            }
+        }
+
+        // Query to cache on the second connection
+        String selectTable1 = "SELECT PK1, V1, V2, V3 FROM " + fullTableName + " ORDER BY PK1 DESC";
+        ResultSet rs1 = conn2.createStatement().executeQuery(selectTable1);
+        assertTrue(rs1.next());
+
+        // Rename table to point to the new hbase table
+        renameAndDropPhysicalTable(conn, "NULL", schemaName, tableName, newTableName);
+
+        if (createChildAfterTransform) {
+            createIndexOnTable(conn, fullTableName, indexName);
+        }
+
+        SingleCellIndexIT.dumpTable(fullNewTableName);
+        return expected;
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+        String fullIndexName = SchemaUtil.getTableName(schemaName, indexName);
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                HashMap<String, ArrayList<String>> expected = testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                // We have to rebuild index for this to work
+                IndexToolIT.runIndexTool(true, false, schemaName, tableName, indexName);
+
+                validateTable(conn, fullTableName);
+                validateTable(conn2, fullTableName);
+                validateIndex(conn, fullIndexName, false, expected);
+                validateIndex(conn2, fullIndexName, false, expected);
+
+                // Add row and check
+                populateTable(conn, fullTableName, 10, 1);
+                ResultSet rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(true, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(true, rs.next());
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, NEW_TABLE_PREFIX+tableName));
+                // Drop row and check
+                conn.createStatement().execute("DELETE from " + fullTableName + " WHERE PK1='PK10'");
+                rs = conn2.createStatement().executeQuery("SELECT * FROM " + fullIndexName + " WHERE \":PK1\"='PK10'");
+                assertEquals(false, rs.next());
+                rs = conn.createStatement().executeQuery("SELECT * FROM " + fullTableName  + " WHERE PK1='PK10'");
+                assertEquals(false, rs.next());
+
+                conn2.createStatement().execute("DROP TABLE " + fullTableName);
+                // check that the physical data table is dropped
+                Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+                assertEquals(false, admin.tableExists(TableName.valueOf(SchemaUtil.getTableName(schemaName,NEW_TABLE_PREFIX + tableName))));
+
+                // check that index is dropped
+                assertEquals(false, admin.tableExists(TableName.valueOf(fullIndexName)));
+
+            }
+        }
+    }
+
+    @Test
+    public void testUpdatePhysicalTableNameWithIndex_runScrutiny() throws Exception {
+        String schemaName = "S_" + generateUniqueName();
+        String tableName = "TBL_" + generateUniqueName();
+        String indexName = "IDX_" + generateUniqueName();
+
+        try (Connection conn = getConnection(props)) {
+            try (Connection conn2 = getConnection(props)) {
+                testBaseTableWithIndex_BaseTableChange(conn, conn2, schemaName, tableName, indexName);
+
+                SingleCellIndexIT.dumpTable(SchemaUtil.getTableName(schemaName, indexName));

Review comment:
       Please remove this dumpTable call as well.

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -1877,7 +1923,9 @@ public void createTable(RpcController controller, CreateTableRequest request,
                     cPhysicalName = parentTable.getPhysicalName().getBytes();
                     cParentPhysicalName = parentTable.getPhysicalName().getBytes();
                 } else if (parentTable.getType() == PTableType.VIEW) {
-                    cPhysicalName = MetaDataUtil.getViewIndexPhysicalName(parentTable.getPhysicalName().getBytes());
+                    // Logical name of base table

Review comment:
       nit: let's specify in the comment that the physical name of the view index table is constructed from the logical name of the base table, so it's clearer what's going on here. (Similar to the good comment on line 2206.)
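
    For concreteness, the expanded comment might read something like this (a sketch only; the replacement line is elided from the hunk above, so the exact call shown is illustrative):

        } else if (parentTable.getType() == PTableType.VIEW) {
            // The physical name of the shared view index table is constructed from
            // the *logical* name of the base table (e.g. _IDX_<baseLogicalName>),
            // not from the physical table the base table currently points at.
            cPhysicalName = MetaDataUtil.getViewIndexPhysicalName(parentTable.getName().getBytes());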




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gjacoby126 commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gjacoby126 commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605997136



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -728,6 +729,13 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
      * (use @getPhysicalTableName for this case) 
      */
     PName getParentTableName();
+
+    /**
+     * @return the logical name of the parent. In case of the view index, it is the _IDX_+logical name of base table

Review comment:
       Why _IDX_ + the logical name of the base table for a view index? That's the physical table the view index is stored in, not a logical name.
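
    For reference, the relationship I'm asking about, written out with the same construction the tests above use (the "S"/"T" names are hypothetical):

        String baseLogicalName = SchemaUtil.getTableName("S", "T"); // "S.T", the base table's logical name
        String viewIndexStorage = "_IDX_" + baseLogicalName;        // "_IDX_S.T", the physical table
                                                                    // backing all view indexes on S.T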

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -583,6 +583,7 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
     PName getName();
     PName getSchemaName();
     PName getTableName();
+    PName getPhysicalTableNameColumnInSyscat();

Review comment:
       Thinking about this more, I don't think just changing the name solves the problem of having two methods that are almost-but-not-quite the same thing. I'd still be confused about which to call. 
   
   Can we not use the existing getPhysicalNames() to return the physical table name column from syscat in the situations where that's appropriate?
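
    Roughly this shape at the call sites (a sketch; the empty-list fallback is my assumption, not something this PR specifies):

        // Reuse the existing list-returning accessor instead of adding a
        // near-duplicate single-name method.
        List<PName> physicalNames = table.getPhysicalNames();
        PName physicalName = physicalNames.isEmpty()
                ? table.getName()          // assumed fallback to the logical name
                : physicalNames.get(0);    // today there is at most one entry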

##########
File path: phoenix-core/src/main/protobuf/PTable.proto
##########
@@ -111,6 +111,9 @@ message PTable {
   optional bool viewModifiedPhoenixTTL = 44;
   optional int64 lastDDLTimestamp = 45;
   optional bool changeDetectionEnabled = 46;
+  optional bytes physicalTableNameBytes = 47;
+  optional bool isModifiable = 48;

Review comment:
       Since we generate proto code on-demand now, I don't think we gain anything by clumping unrelated protobuf changes into this PR. We can save isModifiable for next time when the change can be considered as a whole

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -583,6 +583,7 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
     PName getName();
     PName getSchemaName();
     PName getTableName();
+    PName getPhysicalTableNameColumnInSyscat();

Review comment:
       getPhysicalNames makes a point of returning a list of PNames to leave the interface open for views that span multiple tables (not currently supported but long on the wishlist). Does anything in this PR, such as getPhysicalTableNameColumnInSyscat returning a single String, prevent us from having multi-table views later?

##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -728,6 +729,13 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
      * (use @getPhysicalTableName for this case) 
      */
     PName getParentTableName();
+
+    /**
+     * @return the logical name of the parent. In case of the view index, it is the _IDX_+logical name of base table
+     * Ex: For hierarchical views like tableLogicalName --> view1 --> view2, for view2, returns _IDX_+tableLogicalName
+     */
+    PName getParentLogicalName();

Review comment:
       Likewise with getParentName vs getParentLogicalName. If I'm reading the current logic right, getParentName _also_ returns only logical names. They're almost-but-not-quite the same thing. Can we consolidate? Otherwise we're going to create lots of subtle, really-hard-to-spot bugs going forward when people use the wrong one. 
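
    To make the hazard concrete, this is the pair as a caller sees it (behaviors as described in this thread, not verified against the final patch):

        PName parentName    = table.getParentName();        // already logical, if I'm reading it right
        PName parentLogical = table.getParentLogicalName(); // new in this PR; differs only in the
                                                            // view index (_IDX_) case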




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r605086281



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
##########
@@ -1209,7 +1231,30 @@ private PTable getTable(RegionScanner scanner, long clientTimeStamp, long tableT
                 if (linkType == LinkType.INDEX_TABLE) {
                     addIndexToTable(tenantId, schemaName, famName, tableName, clientTimeStamp, indexes, clientVersion);
                 } else if (linkType == LinkType.PHYSICAL_TABLE) {
-                    physicalTables.add(famName);
+                    // famName contains the logical name of the parent table. We need to get the actual physical name of the table
+                    PTable parentTable = null;
+                    if (indexType != IndexType.LOCAL) {
+                        parentTable = getTable(null, SchemaUtil.getSchemaNameFromFullName(famName.getBytes()).getBytes(),
+                                SchemaUtil.getTableNameFromFullName(famName.getBytes()).getBytes(), clientTimeStamp, clientVersion);
+                        if (parentTable == null) {
+                            // parentTable is not in the cache. Since famName is only logical name, we need to find the physical table.
+                            try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
+                                parentTable = PhoenixRuntime.getTableNoCache(connection, famName.getString());
+                            } catch (TableNotFoundException e) {
+                                // It is ok to swallow this exception since this could be a view index and _IDX_ table is not there.
+                            }
+                        }
+                    }
+
+                    if (parentTable == null) {
+                        physicalTables.add(famName);
+                        // If this is a view index, then one of the links is the view index -> _IDX_ physical table link.
+                        // Since famName is the _IDX_ name and that table could not be resolved (parentTable is null), use the actual view name instead.
+                        parentLogicalName = (tableType == INDEX ? SchemaUtil.getTableName(parentSchemaName, parentTableName) : famName);

Review comment:
       INDEX is the PTableType enum constant, not a string.
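
    In other words, with the static import in that file this is an enum identity check, not a string comparison (minimal sketch):

        import static org.apache.phoenix.schema.PTableType.INDEX;

        boolean isIndex = (tableType == INDEX); // compares PTableType constants
        // "INDEX".equals(tableType) would compile but could never be true.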




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] stoty commented on pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
stoty commented on pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#issuecomment-806279415


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ 4.x-PHOENIX-6247 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  15m 41s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  checkstyle  |   4m  9s |  4.x-PHOENIX-6247 passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  4.x-PHOENIX-6247 passed  |
   | +0 :ok: |  spotbugs  |   3m 16s |  phoenix-core in 4.x-PHOENIX-6247 has 941 extant spotbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  cc  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | -1 :x: |  checkstyle  |   4m 13s |  phoenix-core: The patch generated 200 new + 13820 unchanged - 85 fixed = 14020 total (was 13905)  |
   | +1 :green_heart: |  prototool  |   0m  1s |  There were no new prototool issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  the patch passed  |
   | -1 :x: |  spotbugs  |   3m 33s |  phoenix-core generated 4 new + 941 unchanged - 0 fixed = 945 total (was 941)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 185m 28s |  phoenix-core in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate ASF License warnings.  |
   |  |   | 231m 40s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:phoenix-core |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(RpcController, MetaDataProtos$CreateTableRequest, RpcCallback): String.getBytes()  At MetaDataEndpointImpl.java:[line 1929] |
   |  |  Found reliance on default encoding in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int):in org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(RegionScanner, long, long, int): String.getBytes()  At MetaDataEndpointImpl.java:[line 1237] |
   |  |  org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PHYSICAL_TABLE_NAME_BYTES is a mutable array  At MetaDataEndpointImpl.java: At MetaDataEndpointImpl.java:[line 326] |
   |  |  Found reliance on default encoding in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean):in org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureViewIndexTableCreated(PTable, long, boolean): String.getBytes()  At ConnectionQueryServicesImpl.java:[line 2249] |
   | Failed junit tests | phoenix.end2end.index.MutableIndexIT |
   |   | phoenix.end2end.LogicalTableNameIT |
   |   | phoenix.end2end.BackwardCompatibilityIT |
   |   | phoenix.end2end.ViewUtilIT |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/8/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/phoenix/pull/1170 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs hbaseanti checkstyle compile cc prototool |
   | uname | Linux 0ca007e04de6 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev/phoenix-personality.sh |
   | git revision | 4.x-PHOENIX-6247 / 410f738 |
   | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
   | checkstyle | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/8/artifact/yetus-general-check/output/diff-checkstyle-phoenix-core.txt |
   | spotbugs | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/8/artifact/yetus-general-check/output/new-spotbugs-phoenix-core.html |
   | unit | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/8/artifact/yetus-general-check/output/patch-unit-phoenix-core.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/8/testReport/ |
   | Max. process+thread count | 4980 (vs. ulimit of 30000) |
   | modules | C: phoenix-core U: phoenix-core |
   | Console output | https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1170/8/console |
   | versions | git=2.7.4 maven=3.3.9 spotbugs=4.1.3 prototool=1.10.0-dev |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [phoenix] gokceni commented on a change in pull request #1170: PHOENIX-6247 Separating logical and physical table names

Posted by GitBox <gi...@apache.org>.
gokceni commented on a change in pull request #1170:
URL: https://github.com/apache/phoenix/pull/1170#discussion_r606318514



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
##########
@@ -583,6 +583,7 @@ private static int getReservedQualifier(byte[] bytes, int offset, int length) {
     PName getName();
     PName getSchemaName();
     PName getTableName();
+    PName getPhysicalTableNameColumnInSyscat();

Review comment:
       getPhysicalName already returns the physical table name column from SYSCAT when appropriate.
   
   We still need two methods: one represents the actual column in SYSCAT; the other is inferred (as for views and view indexes).
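
    A sketch of the split as I see it (accessor names from the diffs in this thread; semantics as described here):

        PName stored   = table.getPhysicalTableNameColumnInSyscat(); // raw SYSCAT column; may be null
        PName inferred = table.getPhysicalName();                    // always resolvable; inferred for
                                                                     // views and view indexes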




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org