Posted to commits@phoenix.apache.org by gj...@apache.org on 2019/03/25 23:14:18 UTC

[phoenix] branch PHOENIX-5138-2 updated (c62b301 -> e357c6b)

This is an automated email from the ASF dual-hosted git repository.

gjacoby pushed a change to branch PHOENIX-5138-2
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


 discard c62b301  PHOENIX-5138 - ViewIndexId sequences created after PHOENIX-5132 shouldn't collide with ones created before it
     add 6e3cb31  PHOENIX-5196 Fix rat check in pre commit
     add c73947a  PHOENIX-5185 support Math PI function (#461)
     add 05b9901  PHOENIX-5131 Make spilling to disk for order/group by configurable
     add dd8789b  PHOENIX-5148 Improve OrderPreservingTracker to optimize OrderBy/GroupBy for ClientScanPlan and ClientAggregatePlan
     add 43609f8  PHOENIX-5062 Create a new repo for the phoenix connectors
     add 3c7b108  PHOENIX-4900 Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning autocommit on for deletes
     add f256004  PHOENIX-5184: HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and PhoenixConfigurationUtil
     add c06bb59  PHOENIX-5172: Harden the PQS canary synth test tool with retry mechanism and more logging
     add 69e5bb0  PHOENIX-1614 ALTER TABLE ADD IF NOT EXISTS doesn't work as expected
     new e357c6b  PHOENIX-5138 - ViewIndexId sequences created after PHOENIX-5132 shouldn't collide with ones created before it

This update added new revisions after undoing existing revisions;
that is, some revisions that were in the old version of the branch
are not in the new version.  This situation occurs when a user
pushes with --force, producing a history like this:

 * -- * -- B -- O -- O -- O   (c62b301)
            \
             N -- N -- N   refs/heads/PHOENIX-5138-2 (e357c6b)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.
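"Gone forever" here means unreferenced: once no ref points at a commit, git still holds the object until garbage collection prunes it. A minimal sketch in a throwaway repository (the identity and commit messages are invented for illustration):

```shell
# Create a scratch repo, make two commits, then drop the ref to the second.
set -e
cd "$(mktemp -d)"
git init -q
git -c user.email=dev@example.com -c user.name=Dev \
    commit -q --allow-empty -m "kept commit"
git -c user.email=dev@example.com -c user.name=Dev \
    commit -q --allow-empty -m "discarded commit"
dropped=$(git rev-parse HEAD)

# Rewind the branch: nothing refers to "discarded commit" anymore.
git reset -q --hard HEAD~1

# The object survives until 'git gc --prune=now' actually removes it.
git fsck --unreachable --no-reflogs | grep "$dropped"
```

The `--no-reflogs` flag matters: the local reflog would otherwise still count as a reference, which is also why a "discarded" commit is often recoverable for a while on the machine that made it.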

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
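The whole scenario — old O revisions replaced by new N revisions after the branch was rewritten from the common base B — can be reproduced in a scratch repository. A hedged sketch (names and messages are invented; a real force push would also involve a remote):

```shell
set -e
cd "$(mktemp -d)"
git init -q
commit() {
    git -c user.email=dev@example.com -c user.name=Dev \
        commit -q --allow-empty -m "$1"
}

commit "B: common base"
base=$(git rev-parse HEAD)
commit "O: old revision"     # plays the role of the discarded c62b301
old=$(git rev-parse HEAD)

# Rewrite the branch from B, as the force push did:
git reset -q --hard "$base"
commit "N: new revision"     # plays the role of the new e357c6b

# The branch now reaches only N beyond the common base B...
git log --oneline "$base..HEAD"
# ...while the O object lingers, unreferenced, until gc prunes it.
git cat-file -t "$old"
```

The range `$base..HEAD` is exactly how the notification decides what to describe: commits reachable from the new tip but not from the common base.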


Summary of changes:
 bin/omid-server-configuration.yml                  |   16 +
 phoenix-assembly/pom.xml                           |   16 -
 phoenix-client/pom.xml                             |   18 -
 .../org/apache/phoenix/end2end/AlterTableIT.java   |   43 +-
 .../org/apache/phoenix/end2end/DerivedTableIT.java |    2 -
 .../phoenix/end2end/LnLogFunctionEnd2EndIT.java    |   16 -
 .../phoenix/end2end/MathPIFunctionEnd2EndIT.java   |   61 ++
 .../apache/phoenix/end2end/MutationStateIT.java    |   50 +-
 .../java/org/apache/phoenix/end2end/OrderByIT.java |  193 ++++
 ...OrderByWithServerClientSpoolingDisabledIT.java} |   17 +-
 .../end2end/OrderByWithServerMemoryLimitIT.java    |   81 ++
 .../phoenix/end2end/OrderByWithSpillingIT.java     |    3 +-
 .../phoenix/end2end/PowerFunctionEnd2EndIT.java    |   16 -
 .../phoenix/end2end/SortMergeJoinMoreIT.java       |  126 ++-
 .../phoenix/end2end/SpooledTmpFileDeleteIT.java    |    2 +-
 .../end2end/join/SortMergeJoinGlobalIndexIT.java   |    3 +-
 .../end2end/join/SortMergeJoinLocalIndexIT.java    |    3 +-
 .../end2end/join/SortMergeJoinNoSpoolingIT.java    |   83 ++
 .../end2end/join/SubqueryUsingSortMergeJoinIT.java |   26 +-
 .../apache/phoenix/compile/ExpressionCompiler.java |    6 +-
 .../apache/phoenix/compile/GroupByCompiler.java    |   35 +-
 .../apache/phoenix/compile/ListJarsQueryPlan.java  |    5 +
 .../apache/phoenix/compile/OrderByCompiler.java    |  104 +-
 .../phoenix/compile/OrderPreservingTracker.java    |  556 ++++++++---
 .../org/apache/phoenix/compile/QueryCompiler.java  |   90 +-
 .../java/org/apache/phoenix/compile/QueryPlan.java |   20 +-
 .../compile/StatelessExpressionCompiler.java       |   57 ++
 .../org/apache/phoenix/compile/TraceQueryPlan.java |    5 +
 .../org/apache/phoenix/compile/UnionCompiler.java  |    2 +-
 .../phoenix/coprocessor/MetaDataProtocol.java      |    7 +
 .../phoenix/coprocessor/ScanRegionObserver.java    |    4 +-
 .../UngroupedAggregateRegionObserver.java          |   22 +-
 .../apache/phoenix/exception/SQLExceptionCode.java |    8 +-
 .../org/apache/phoenix/execute/AggregatePlan.java  |   61 +-
 .../phoenix/execute/ClientAggregatePlan.java       |   61 +-
 .../org/apache/phoenix/execute/ClientScanPlan.java |   38 +-
 .../apache/phoenix/execute/CursorFetchPlan.java    |    6 +-
 .../apache/phoenix/execute/DelegateQueryPlan.java  |    5 +
 .../execute/LiteralResultIterationPlan.java        |    5 +
 .../java/org/apache/phoenix/execute/ScanPlan.java  |   34 +-
 .../apache/phoenix/execute/SortMergeJoinPlan.java  |  253 ++---
 .../phoenix/execute/TupleProjectionPlan.java       |  112 ++-
 .../java/org/apache/phoenix/execute/UnionPlan.java |   13 +
 .../apache/phoenix/execute/UnnestArrayPlan.java    |    7 +
 .../apache/phoenix/expression/ExpressionType.java  |    1 +
 .../phoenix/expression/OrderByExpression.java      |   85 +-
 ...tValueBaseFunction.java => MathPIFunction.java} |   45 +-
 .../phoenix/hbase/index/util/VersionUtil.java      |   12 +
 .../hbase/index/write/RecoveryIndexWriter.java     |   30 +-
 .../org/apache/phoenix/iterate/BufferedQueue.java  |   20 +-
 .../phoenix/iterate/BufferedSortedQueue.java       |   33 +-
 .../apache/phoenix/iterate/BufferedTupleQueue.java |  134 +++
 .../iterate/NonAggregateRegionScannerFactory.java  |   45 +-
 .../iterate/OrderedAggregatingResultIterator.java  |    5 +-
 .../phoenix/iterate/OrderedResultIterator.java     |   72 +-
 .../org/apache/phoenix/iterate/PhoenixQueues.java  |   96 ++
 .../org/apache/phoenix/iterate/SizeAwareQueue.java |   11 +-
 .../org/apache/phoenix/iterate/SizeBoundQueue.java |   96 ++
 .../phoenix/iterate/SpoolingResultIterator.java    |    5 +-
 .../org/apache/phoenix/jdbc/PhoenixStatement.java  |    5 +
 .../phoenix/mapreduce/AbstractBulkLoadTool.java    |  114 ++-
 .../apache/phoenix/mapreduce/OrphanViewTool.java   |   73 +-
 .../phoenix/mapreduce/PhoenixRecordWriter.java     |   18 +-
 .../mapreduce/index/DirectHTableWriter.java        |   19 +-
 .../mapreduce/index/IndexScrutinyMapper.java       |   25 +-
 .../apache/phoenix/mapreduce/index/IndexTool.java  |   85 +-
 .../index/PhoenixIndexImportDirectMapper.java      |   26 +-
 .../mapreduce/index/PhoenixIndexImportMapper.java  |   16 +-
 .../index/PhoenixIndexPartialBuildMapper.java      |   25 +-
 .../mapreduce/util/PhoenixConfigurationUtil.java   |   45 +-
 .../org/apache/phoenix/query/QueryServices.java    |   11 +-
 .../apache/phoenix/query/QueryServicesOptions.java |   19 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |   29 +-
 .../org/apache/phoenix/tool/PhoenixCanaryTool.java |  212 ++--
 .../org/apache/phoenix/util/ExpressionUtil.java    |  294 ++++++
 .../apache/phoenix/compile/QueryCompilerTest.java  |  451 +++++++++
 .../apache/phoenix/expression/ExpFunctionTest.java |   19 +-
 .../phoenix/expression/LnLogFunctionTest.java      |   23 +-
 .../MathPIFunctionTest.java}                       |   32 +-
 .../phoenix/expression/PowerFunctionTest.java      |   22 +-
 .../phoenix/expression/SqrtFunctionTest.java       |   20 +-
 .../phoenix/iterate/OrderedResultIteratorTest.java |   55 +-
 .../java/org/apache/phoenix/query/BaseTest.java    |   17 +-
 .../phoenix/query/ParallelIteratorsSplitTest.java  |    6 +-
 .../phoenix/query/QueryServicesTestImpl.java       |    3 +-
 .../tool/ParameterizedPhoenixCanaryToolIT.java     |  280 ++++++
 .../apache/phoenix/tool/PhoenixCanaryToolTest.java |   53 +-
 .../org/apache/phoenix/util/MetaDataUtilTest.java  |   10 +-
 .../resources/phoenix-canary-file-sink.properties  |   13 +-
 .../apache/phoenix/flume/CsvEventSerializerIT.java |  416 --------
 .../phoenix/flume/JsonEventSerializerIT.java       |  541 ----------
 .../phoenix/flume/RegexEventSerializerIT.java      |  417 --------
 .../phoenix/flume/serializer/CustomSerializer.java |   43 -
 .../apache/phoenix/flume/sink/NullPhoenixSink.java |   21 -
 .../apache/phoenix/flume/DefaultKeyGenerator.java  |   69 --
 .../org/apache/phoenix/flume/FlumeConstants.java   |   94 --
 .../org/apache/phoenix/flume/SchemaHandler.java    |   47 -
 .../flume/serializer/BaseEventSerializer.java      |  245 -----
 .../flume/serializer/CsvEventSerializer.java       |  196 ----
 .../phoenix/flume/serializer/EventSerializer.java  |   42 -
 .../phoenix/flume/serializer/EventSerializers.java |   36 -
 .../flume/serializer/JsonEventSerializer.java      |  226 -----
 .../flume/serializer/RegexEventSerializer.java     |  145 ---
 .../org/apache/phoenix/flume/sink/PhoenixSink.java |  212 ----
 .../org/apache/phoenix/hive/PhoenixMetaHook.java   |  229 -----
 .../java/org/apache/phoenix/hive/PhoenixRow.java   |   64 --
 .../org/apache/phoenix/hive/PhoenixRowKey.java     |   62 --
 .../java/org/apache/phoenix/hive/PhoenixSerDe.java |  152 ---
 .../org/apache/phoenix/hive/PhoenixSerializer.java |  173 ----
 .../org/apache/phoenix/hive/PrimaryKeyData.java    |   88 --
 .../phoenix/hive/mapreduce/PhoenixInputSplit.java  |  160 ---
 .../hive/mapreduce/PhoenixOutputFormat.java        |  112 ---
 .../hive/mapreduce/PhoenixRecordReader.java        |  217 ----
 .../hive/mapreduce/PhoenixResultWritable.java      |  217 ----
 .../AbstractPhoenixObjectInspector.java            |   59 --
 .../PhoenixBinaryObjectInspector.java              |   58 --
 .../PhoenixBooleanObjectInspector.java             |   55 -
 .../PhoenixCharObjectInspector.java                |   56 --
 .../PhoenixDateObjectInspector.java                |   63 --
 .../PhoenixDecimalObjectInspector.java             |   72 --
 .../PhoenixFloatObjectInspector.java               |   60 --
 .../objectinspector/PhoenixIntObjectInspector.java |   62 --
 .../PhoenixListObjectInspector.java                |  105 --
 .../PhoenixLongObjectInspector.java                |   56 --
 .../PhoenixObjectInspectorFactory.java             |  148 ---
 .../PhoenixShortObjectInspector.java               |   56 --
 .../PhoenixStringObjectInspector.java              |   72 --
 .../PhoenixTimestampObjectInspector.java           |   61 --
 .../hive/ppd/PhoenixPredicateDecomposer.java       |   95 --
 .../hive/ql/index/IndexSearchCondition.java        |  143 ---
 .../hive/ql/index/PredicateAnalyzerFactory.java    |   40 -
 .../phoenix/hive/util/ColumnMappingUtils.java      |   76 --
 .../apache/phoenix/hive/PrimaryKeyDataTest.java    |   79 --
 .../apache/phoenix/kafka/PhoenixConsumerIT.java    |  276 -----
 phoenix-kafka/src/it/resources/consumer.props      |   32 -
 .../org/apache/phoenix/kafka/KafkaConstants.java   |   52 -
 .../phoenix/kafka/consumer/PhoenixConsumer.java    |  276 -----
 .../kafka/consumer/PhoenixConsumerTool.java        |  107 --
 .../it/java/org/apache/phoenix/pig/BasePigIT.java  |   87 --
 .../apache/phoenix/pig/PhoenixHBaseLoaderIT.java   |  838 ----------------
 .../apache/phoenix/pig/PhoenixHBaseStorerIT.java   |  292 ------
 .../phoenix/pig/udf/ReserveNSequenceTestIT.java    |  306 ------
 .../apache/phoenix/pig/PhoenixHBaseStorage.java    |  236 -----
 .../apache/phoenix/pig/udf/ReserveNSequence.java   |  129 ---
 .../pig/util/QuerySchemaParserFunction.java        |  118 ---
 .../pig/util/SqlQueryToColumnInfoFunction.java     |   82 --
 .../pig/util/TableSchemaParserFunction.java        |   52 -
 .../java/org/apache/phoenix/pig/util/TypeUtil.java |  349 -------
 .../phoenix/pig/util/PhoenixPigSchemaUtilTest.java |   92 --
 .../pig/util/QuerySchemaParserFunctionTest.java    |   97 --
 .../pig/util/SqlQueryToColumnInfoFunctionTest.java |   63 --
 .../pig/util/TableSchemaParserFunctionTest.java    |   54 -
 .../org/apache/phoenix/pig/util/TypeUtilTest.java  |   83 --
 phoenix-spark/README.md                            |  164 ---
 .../java/org/apache/phoenix/spark/AggregateIT.java |   91 --
 .../java/org/apache/phoenix/spark/OrderByIT.java   |  432 --------
 .../org/apache/phoenix/spark/SaltedTableIT.java    |   53 -
 .../java/org/apache/phoenix/spark/SparkUtil.java   |   80 --
 phoenix-spark/src/it/resources/globalSetup.sql     |   64 --
 phoenix-spark/src/it/resources/log4j.xml           |   70 --
 phoenix-spark/src/it/resources/tenantSetup.sql     |   18 -
 .../phoenix/spark/AbstractPhoenixSparkIT.scala     |  117 ---
 .../org/apache/phoenix/spark/PhoenixSparkIT.scala  |  733 --------------
 .../spark/PhoenixSparkITTenantSpecific.scala       |  135 ---
 .../org/apache/phoenix/spark/SparkResultSet.java   | 1056 --------------------
 .../spark/datasource/v2/PhoenixDataSource.java     |   82 --
 .../v2/reader/PhoenixDataSourceReadOptions.java    |   51 -
 .../v2/reader/PhoenixInputPartition.java           |   44 -
 .../v2/writer/PhoenixDataSourceWriteOptions.java   |  109 --
 .../datasource/v2/writer/PhoenixDataWriter.java    |  100 --
 .../v2/writer/PhoenixDataWriterFactory.java        |   19 -
 .../v2/writer/PhoenixDatasourceWriter.java         |   34 -
 ...org.apache.spark.sql.sources.DataSourceRegister |    1 -
 .../apache/phoenix/spark/ConfigurationUtil.scala   |  100 --
 .../apache/phoenix/spark/DataFrameFunctions.scala  |   79 --
 .../org/apache/phoenix/spark/DefaultSource.scala   |   60 --
 .../phoenix/spark/FilterExpressionCompiler.scala   |  135 ---
 .../org/apache/phoenix/spark/PhoenixRDD.scala      |  150 ---
 .../phoenix/spark/PhoenixRecordWritable.scala      |  115 ---
 .../org/apache/phoenix/spark/PhoenixRelation.scala |   69 --
 .../apache/phoenix/spark/ProductRDDFunctions.scala |   64 --
 .../phoenix/spark/SparkContextFunctions.scala      |   42 -
 .../org/apache/phoenix/spark/SparkSchemaUtil.scala |   84 --
 .../phoenix/spark/SparkSqlContextFunctions.scala   |   42 -
 .../scala/org/apache/phoenix/spark/package.scala   |   36 -
 .../datasources/jdbc/PhoenixJdbcDialect.scala      |   21 -
 .../execution/datasources/jdbc/SparkJdbcUtil.scala |  309 ------
 pom.xml                                            |    5 -
 188 files changed, 3921 insertions(+), 15105 deletions(-)
 create mode 100644 phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
 copy phoenix-core/src/it/java/org/apache/phoenix/end2end/{OrderByWithSpillingIT.java => OrderByWithServerClientSpoolingDisabledIT.java} (66%)
 create mode 100644 phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByWithServerMemoryLimitIT.java
 create mode 100644 phoenix-core/src/it/java/org/apache/phoenix/end2end/join/SortMergeJoinNoSpoolingIT.java
 create mode 100644 phoenix-core/src/main/java/org/apache/phoenix/compile/StatelessExpressionCompiler.java
 copy phoenix-core/src/main/java/org/apache/phoenix/expression/function/{FirstLastValueBaseFunction.java => MathPIFunction.java} (61%)
 create mode 100644 phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedTupleQueue.java
 create mode 100644 phoenix-core/src/main/java/org/apache/phoenix/iterate/PhoenixQueues.java
 rename phoenix-flume/src/main/java/org/apache/phoenix/flume/KeyGenerator.java => phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeAwareQueue.java (81%)
 create mode 100644 phoenix-core/src/main/java/org/apache/phoenix/iterate/SizeBoundQueue.java
 copy phoenix-core/src/test/java/org/apache/phoenix/{util/LikeExpressionTest.java => expression/MathPIFunctionTest.java} (55%)
 create mode 100644 phoenix-core/src/test/java/org/apache/phoenix/tool/ParameterizedPhoenixCanaryToolIT.java
 rename phoenix-kafka/src/it/resources/producer.props => phoenix-core/src/test/resources/phoenix-canary-file-sink.properties (64%)
 delete mode 100644 phoenix-flume/src/it/java/org/apache/phoenix/flume/CsvEventSerializerIT.java
 delete mode 100644 phoenix-flume/src/it/java/org/apache/phoenix/flume/JsonEventSerializerIT.java
 delete mode 100644 phoenix-flume/src/it/java/org/apache/phoenix/flume/RegexEventSerializerIT.java
 delete mode 100644 phoenix-flume/src/it/java/org/apache/phoenix/flume/serializer/CustomSerializer.java
 delete mode 100644 phoenix-flume/src/it/java/org/apache/phoenix/flume/sink/NullPhoenixSink.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/DefaultKeyGenerator.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/FlumeConstants.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/SchemaHandler.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/serializer/BaseEventSerializer.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/serializer/CsvEventSerializer.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/serializer/EventSerializer.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/serializer/EventSerializers.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/serializer/JsonEventSerializer.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/serializer/RegexEventSerializer.java
 delete mode 100644 phoenix-flume/src/main/java/org/apache/phoenix/flume/sink/PhoenixSink.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixMetaHook.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixRow.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixRowKey.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixSerDe.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixSerializer.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/PrimaryKeyData.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixInputSplit.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixOutputFormat.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixRecordReader.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixResultWritable.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/AbstractPhoenixObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixBinaryObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixBooleanObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixCharObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDateObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixFloatObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixIntObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixListObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixLongObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixShortObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixStringObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixTimestampObjectInspector.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/ppd/PhoenixPredicateDecomposer.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/ql/index/IndexSearchCondition.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/ql/index/PredicateAnalyzerFactory.java
 delete mode 100644 phoenix-hive/src/main/java/org/apache/phoenix/hive/util/ColumnMappingUtils.java
 delete mode 100644 phoenix-hive/src/test/java/org/apache/phoenix/hive/PrimaryKeyDataTest.java
 delete mode 100644 phoenix-kafka/src/it/java/org/apache/phoenix/kafka/PhoenixConsumerIT.java
 delete mode 100644 phoenix-kafka/src/it/resources/consumer.props
 delete mode 100644 phoenix-kafka/src/main/java/org/apache/phoenix/kafka/KafkaConstants.java
 delete mode 100644 phoenix-kafka/src/main/java/org/apache/phoenix/kafka/consumer/PhoenixConsumer.java
 delete mode 100644 phoenix-kafka/src/main/java/org/apache/phoenix/kafka/consumer/PhoenixConsumerTool.java
 delete mode 100644 phoenix-pig/src/it/java/org/apache/phoenix/pig/BasePigIT.java
 delete mode 100644 phoenix-pig/src/it/java/org/apache/phoenix/pig/PhoenixHBaseLoaderIT.java
 delete mode 100644 phoenix-pig/src/it/java/org/apache/phoenix/pig/PhoenixHBaseStorerIT.java
 delete mode 100644 phoenix-pig/src/it/java/org/apache/phoenix/pig/udf/ReserveNSequenceTestIT.java
 delete mode 100644 phoenix-pig/src/main/java/org/apache/phoenix/pig/PhoenixHBaseStorage.java
 delete mode 100644 phoenix-pig/src/main/java/org/apache/phoenix/pig/udf/ReserveNSequence.java
 delete mode 100644 phoenix-pig/src/main/java/org/apache/phoenix/pig/util/QuerySchemaParserFunction.java
 delete mode 100644 phoenix-pig/src/main/java/org/apache/phoenix/pig/util/SqlQueryToColumnInfoFunction.java
 delete mode 100644 phoenix-pig/src/main/java/org/apache/phoenix/pig/util/TableSchemaParserFunction.java
 delete mode 100644 phoenix-pig/src/main/java/org/apache/phoenix/pig/util/TypeUtil.java
 delete mode 100644 phoenix-pig/src/test/java/org/apache/phoenix/pig/util/PhoenixPigSchemaUtilTest.java
 delete mode 100644 phoenix-pig/src/test/java/org/apache/phoenix/pig/util/QuerySchemaParserFunctionTest.java
 delete mode 100644 phoenix-pig/src/test/java/org/apache/phoenix/pig/util/SqlQueryToColumnInfoFunctionTest.java
 delete mode 100644 phoenix-pig/src/test/java/org/apache/phoenix/pig/util/TableSchemaParserFunctionTest.java
 delete mode 100644 phoenix-pig/src/test/java/org/apache/phoenix/pig/util/TypeUtilTest.java
 delete mode 100644 phoenix-spark/README.md
 delete mode 100644 phoenix-spark/src/it/java/org/apache/phoenix/spark/AggregateIT.java
 delete mode 100644 phoenix-spark/src/it/java/org/apache/phoenix/spark/OrderByIT.java
 delete mode 100644 phoenix-spark/src/it/java/org/apache/phoenix/spark/SaltedTableIT.java
 delete mode 100644 phoenix-spark/src/it/java/org/apache/phoenix/spark/SparkUtil.java
 delete mode 100644 phoenix-spark/src/it/resources/globalSetup.sql
 delete mode 100644 phoenix-spark/src/it/resources/log4j.xml
 delete mode 100644 phoenix-spark/src/it/resources/tenantSetup.sql
 delete mode 100644 phoenix-spark/src/it/scala/org/apache/phoenix/spark/AbstractPhoenixSparkIT.scala
 delete mode 100644 phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
 delete mode 100644 phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkITTenantSpecific.scala
 delete mode 100644 phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
 delete mode 100644 phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/PhoenixDataSource.java
 delete mode 100644 phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/reader/PhoenixDataSourceReadOptions.java
 delete mode 100644 phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/reader/PhoenixInputPartition.java
 delete mode 100644 phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDataSourceWriteOptions.java
 delete mode 100644 phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDataWriter.java
 delete mode 100644 phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDataWriterFactory.java
 delete mode 100644 phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDatasourceWriter.java
 delete mode 100644 phoenix-spark/src/main/resources/META-INF/services/org.apache.spark.sql.sources.DataSourceRegister
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/DefaultSource.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/FilterExpressionCompiler.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRecordWritable.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRelation.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/ProductRDDFunctions.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/SparkContextFunctions.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/SparkSchemaUtil.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/SparkSqlContextFunctions.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/phoenix/spark/package.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/PhoenixJdbcDialect.scala
 delete mode 100644 phoenix-spark/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/SparkJdbcUtil.scala


[phoenix] 01/01: PHOENIX-5138 - ViewIndexId sequences created after PHOENIX-5132 shouldn't collide with ones created before it


gjacoby pushed a commit to branch PHOENIX-5138-2
in repository https://gitbox.apache.org/repos/asf/phoenix.git

commit e357c6b8117cf45b5a6474aa07cc10a0b3c423a7
Author: Geoffrey Jacoby <gj...@apache.org>
AuthorDate: Mon Mar 25 16:12:52 2019 -0700

    PHOENIX-5138 - ViewIndexId sequences created after PHOENIX-5132 shouldn't collide with ones created before it
---
 .../java/org/apache/phoenix/end2end/UpgradeIT.java | 100 ++++++++++++++++++++-
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |  47 +++++-----
 .../phoenix/query/ConnectionQueryServicesImpl.java |   6 ++
 .../java/org/apache/phoenix/util/MetaDataUtil.java |  30 ++++++-
 .../java/org/apache/phoenix/util/UpgradeUtil.java  |  98 ++++++++++++++++++++
 5 files changed, 257 insertions(+), 24 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
index 632a2bb..85d580c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
@@ -21,6 +21,7 @@ import static com.google.common.base.Preconditions.checkNotNull;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
@@ -30,6 +31,7 @@ import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.util.List;
 import java.util.Properties;
 import java.util.Set;
 import java.util.concurrent.Callable;
@@ -38,6 +40,7 @@ import java.util.concurrent.FutureTask;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import com.google.common.collect.Lists;
 import org.apache.curator.shaded.com.google.common.collect.Sets;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
@@ -60,6 +63,9 @@ import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.LinkType;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.schema.SequenceAllocation;
+import org.apache.phoenix.schema.SequenceKey;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
 import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -67,6 +73,7 @@ import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.apache.phoenix.util.UpgradeUtil;
 import org.junit.Test;
+import sun.jvm.hotspot.oops.Metadata;
 
 public class UpgradeIT extends ParallelStatsDisabledIT {
 
@@ -507,12 +514,20 @@ public class UpgradeIT extends ParallelStatsDisabledIT {
         return DriverManager.getConnection(getUrl(), props);
     }
 
-    private Connection getConnection(boolean tenantSpecific, String tenantId) throws SQLException {
+    private Connection getConnection(boolean tenantSpecific, String tenantId, boolean isNamespaceMappingEnabled)
+        throws SQLException {
         if (tenantSpecific) {
             checkNotNull(tenantId);
             return createTenantConnection(tenantId);
         }
-        return DriverManager.getConnection(getUrl());
+        Properties props = new Properties();
+        if (isNamespaceMappingEnabled){
+            props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, "true");
+        }
+        return DriverManager.getConnection(getUrl(), props);
+    }
+    private Connection getConnection(boolean tenantSpecific, String tenantId) throws SQLException {
+        return getConnection(tenantSpecific, tenantId, false);
     }
     
     @Test
@@ -588,4 +603,85 @@ public class UpgradeIT extends ParallelStatsDisabledIT {
         return childLinkSet;
     }
 
+    @Test
+    public void testMergeViewIndexSequences() throws Exception {
+        testMergeViewIndexSequencesHelper(false);
+    }
+
+    @Test
+    public void testMergeViewIndexSequencesWithNamespaces() throws Exception {
+        testMergeViewIndexSequencesHelper(true);
+    }
+
+    private void testMergeViewIndexSequencesHelper(boolean isNamespaceMappingEnabled) throws Exception {
+        PhoenixConnection conn = getConnection(false, null, isNamespaceMappingEnabled).unwrap(PhoenixConnection.class);
+        ConnectionQueryServices cqs = conn.getQueryServices();
+        //First delete any sequences that may exist from previous tests
+        conn.createStatement().execute("DELETE FROM " + PhoenixDatabaseMetaData.SYSTEM_SEQUENCE);
+        conn.commit();
+        cqs.clearCache();
+        //Now make sure that running the merge logic doesn't cause a problem when there are no
+        //sequences
+        UpgradeUtil.mergeViewIndexIdSequences(cqs, conn);
+        PName tenantOne = PNameFactory.newName("TENANT_ONE");
+        PName tenantTwo = PNameFactory.newName("TENANT_TWO");
+        String tableName =
+            SchemaUtil.getPhysicalHBaseTableName("TEST",
+                "T_" + generateUniqueName(), isNamespaceMappingEnabled).getString();
+        PName viewIndexTable = PNameFactory.newName(MetaDataUtil.getViewIndexPhysicalName(tableName));
+        SequenceKey sequenceOne =
+            createViewIndexSequenceWithOldName(cqs, tenantOne, viewIndexTable, isNamespaceMappingEnabled);
+        SequenceKey sequenceTwo =
+            createViewIndexSequenceWithOldName(cqs, tenantTwo, viewIndexTable, isNamespaceMappingEnabled);
+        SequenceKey sequenceGlobal =
+            createViewIndexSequenceWithOldName(cqs, null, viewIndexTable, isNamespaceMappingEnabled);
+
+        List<SequenceAllocation> allocations = Lists.newArrayList();
+        long val1 = 10;
+        long val2 = 100;
+        long val3 = 1000;
+        allocations.add(new SequenceAllocation(sequenceOne, val1));
+        allocations.add(new SequenceAllocation(sequenceGlobal, val2));
+        allocations.add(new SequenceAllocation(sequenceTwo, val3));
+
+        long[] incrementedValues = new long[3];
+        SQLException[] exceptions = new SQLException[3];
+        cqs.incrementSequences(allocations, EnvironmentEdgeManager.currentTimeMillis(), incrementedValues,
+            exceptions);
+        for (SQLException e : exceptions) {
+            assertNull(e);
+        }
+
+        UpgradeUtil.mergeViewIndexIdSequences(cqs, conn);
+        //now check that there exists a sequence using the new naming convention, whose value is the
+        //max of all the previous sequences for this table.
+
+        List<SequenceAllocation> afterUpgradeAllocations = Lists.newArrayList();
+        SequenceKey sequenceUpgrade = MetaDataUtil.getViewIndexSequenceKey(null, viewIndexTable, 0, isNamespaceMappingEnabled);
+        afterUpgradeAllocations.add(new SequenceAllocation(sequenceUpgrade, 1));
+        long[] afterUpgradeValues = new long[1];
+        SQLException[] afterUpgradeExceptions = new SQLException[1];
+        cqs.incrementSequences(afterUpgradeAllocations, EnvironmentEdgeManager.currentTimeMillis(), afterUpgradeValues, afterUpgradeExceptions);
+
+        assertNull(afterUpgradeExceptions[0]);
+        if (isNamespaceMappingEnabled){
+            //since one sequence (the global one) will be reused as the "new" sequence,
+            // it's already in cache and will reflect the final increment immediately
+            assertEquals(Long.MIN_VALUE + val3 + 1, afterUpgradeValues[0]);
+        } else {
+            assertEquals(Long.MIN_VALUE + val3, afterUpgradeValues[0]);
+        }
+    }
+
+    private SequenceKey createViewIndexSequenceWithOldName(ConnectionQueryServices cqs, PName tenant, PName viewIndexTable, boolean isNamespaceMapped) throws SQLException {
+        String tenantId = tenant == null ? null : tenant.getString();
+        SequenceKey key = MetaDataUtil.getOldViewIndexSequenceKey(tenantId, viewIndexTable, 0, isNamespaceMapped);
+        //Sequences are owned globally even if they contain a tenantId in the name
+        String sequenceTenantId = isNamespaceMapped ? tenantId : null;
+        cqs.createSequence(sequenceTenantId, key.getSchemaName(), key.getSequenceName(),
+            Long.MIN_VALUE, 1, 1, Long.MIN_VALUE, Long.MAX_VALUE, false, EnvironmentEdgeManager.currentTimeMillis());
+        return key;
+    }
+
 }
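The test above relies on the positional contract of `ConnectionQueryServices.incrementSequences`: new values and per-sequence `SQLException`s come back in arrays index-aligned with the input allocations, and one failed allocation does not abort the rest of the batch. A minimal standalone sketch of that pattern (the `BatchIncrementSketch` class and its in-memory map are hypothetical stand-ins, not Phoenix APIs):

```java
import java.sql.SQLException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the positional batch API used by
// ConnectionQueryServices.incrementSequences: results and per-item failures
// come back index-aligned with the input list.
public class BatchIncrementSketch {
    final Map<String, Long> sequences = new HashMap<>();

    // Increments each named sequence; fills values[i] on success, errors[i] on failure.
    public void incrementSequences(List<String> names, long[] values, SQLException[] errors) {
        for (int i = 0; i < names.size(); i++) {
            String name = names.get(i);
            if (!sequences.containsKey(name)) {
                errors[i] = new SQLException("Sequence undefined: " + name);
                continue; // record the failure and keep processing the batch
            }
            values[i] = sequences.merge(name, 1L, Long::sum);
        }
    }

    public static void main(String[] args) {
        BatchIncrementSketch cqs = new BatchIncrementSketch();
        cqs.sequences.put("SEQ_A", 0L);
        cqs.sequences.put("SEQ_B", 100L);
        List<String> names = Arrays.asList("SEQ_A", "MISSING", "SEQ_B");
        long[] values = new long[names.size()];
        SQLException[] errors = new SQLException[names.size()];
        cqs.incrementSequences(names, values, errors);
        System.out.println(Arrays.toString(values)); // [1, 0, 101]
        System.out.println(errors[1] != null);       // true
    }
}
```

This is why the test checks every slot of the `exceptions` array rather than catching a thrown exception.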
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 192d004..5e019bc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2362,26 +2362,7 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
                     String tenantIdStr = tenantIdBytes.length == 0 ? null : Bytes.toString(tenantIdBytes);
                     try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class)) {
                         PName physicalName = parentTable.getPhysicalName();
-                        int nSequenceSaltBuckets = connection.getQueryServices().getSequenceSaltBuckets();
-                        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
-                            nSequenceSaltBuckets, parentTable.isNamespaceMapped() );
-                        // TODO Review Earlier sequence was created at (SCN-1/LATEST_TIMESTAMP) and incremented at the client max(SCN,dataTable.getTimestamp), but it seems we should
-                        // use always LATEST_TIMESTAMP to avoid seeing wrong sequence values by different connection having SCN
-                        // or not.
-                        long sequenceTimestamp = HConstants.LATEST_TIMESTAMP;
-                        try {
-                            connection.getQueryServices().createSequence(key.getTenantId(), key.getSchemaName(), key.getSequenceName(),
-                                Long.MIN_VALUE, 1, 1, Long.MIN_VALUE, Long.MAX_VALUE, false, sequenceTimestamp);
-                        } catch (SequenceAlreadyExistsException e) {
-                        }
-                        long[] seqValues = new long[1];
-                        SQLException[] sqlExceptions = new SQLException[1];
-                        connection.getQueryServices().incrementSequences(Collections.singletonList(new SequenceAllocation(key, 1)),
-                            HConstants.LATEST_TIMESTAMP, seqValues, sqlExceptions);
-                        if (sqlExceptions[0] != null) {
-                            throw sqlExceptions[0];
-                        }
-                        long seqValue = seqValues[0];
+                        long seqValue = getViewIndexSequenceValue(connection, tenantIdStr, parentTable, physicalName);
                         Put tableHeaderPut = MetaDataUtil.getPutOnlyTableHeaderRow(tableMetadata);
 
                         NavigableMap<byte[], List<Cell>> familyCellMap = tableHeaderPut.getFamilyCellMap();
@@ -2501,6 +2482,32 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
         }
     }
 
+    private long getViewIndexSequenceValue(PhoenixConnection connection, String tenantIdStr, PTable parentTable, PName physicalName) throws SQLException {
+        int nSequenceSaltBuckets = connection.getQueryServices().getSequenceSaltBuckets();
+
+        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+            nSequenceSaltBuckets, parentTable.isNamespaceMapped() );
+        // Earlier the sequence was created at (SCN-1/LATEST_TIMESTAMP) and incremented at the
+        // client at max(SCN, dataTable.getTimestamp), but we should always use LATEST_TIMESTAMP
+        // to avoid connections with and without an SCN seeing inconsistent sequence values.
+        long sequenceTimestamp = HConstants.LATEST_TIMESTAMP;
+        try {
+            connection.getQueryServices().createSequence(key.getTenantId(), key.getSchemaName(), key.getSequenceName(),
+                Long.MIN_VALUE, 1, 1, Long.MIN_VALUE, Long.MAX_VALUE, false, sequenceTimestamp);
+        } catch (SequenceAlreadyExistsException e) {
+            // Ignore: the sequence already exists, e.g. created by a concurrent request.
+        }
+
+        long[] seqValues = new long[1];
+        SQLException[] sqlExceptions = new SQLException[1];
+        connection.getQueryServices().incrementSequences(Collections.singletonList(new SequenceAllocation(key, 1)),
+            HConstants.LATEST_TIMESTAMP, seqValues, sqlExceptions);
+        if (sqlExceptions[0] != null) {
+            throw sqlExceptions[0];
+        }
+        return seqValues[0];
+    }
+
     public static void dropChildViews(RegionCoprocessorEnvironment env, byte[] tenantIdBytes, byte[] schemaName, byte[] tableName)
             throws IOException, SQLException, ClassNotFoundException {
         Table hTable =
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 1f5cd48..ad9455f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -70,7 +70,9 @@ import static org.apache.phoenix.util.UpgradeUtil.syncTableAndIndexProperties;
 import java.io.IOException;
 import java.lang.management.ManagementFactory;
 import java.lang.ref.WeakReference;
+import java.sql.DatabaseMetaData;
 import java.sql.PreparedStatement;
+import java.sql.ResultSet;
 import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.sql.Types;
@@ -228,6 +230,7 @@ import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.SaltingUtil;
 import org.apache.phoenix.schema.Sequence;
 import org.apache.phoenix.schema.SequenceAllocation;
+import org.apache.phoenix.schema.SequenceAlreadyExistsException;
 import org.apache.phoenix.schema.SequenceKey;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.SystemFunctionSplitPolicy;
@@ -3439,6 +3442,9 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
                 // See PHOENIX-3955
                 if (currentServerSideTableTimeStamp < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0) {
                     syncTableAndIndexProperties(metaConnection, getAdmin());
+                    //Combine view index id sequences for the same physical view index table
+                    //to avoid collisions. See PHOENIX-5132 and PHOENIX-5138
+                    UpgradeUtil.mergeViewIndexIdSequences(this, metaConnection);
                 }
             }
 
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
index 3c92a99..a3912cf 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
@@ -618,13 +618,39 @@ public class MetaDataUtil {
         }
     }
 
-    public static String getViewIndexSequenceSchemaName(PName physicalName, boolean isNamespaceMapped) {
+    public static String getOldViewIndexSequenceSchemaName(PName physicalName, boolean isNamespaceMapped) {
         if (!isNamespaceMapped) { return VIEW_INDEX_SEQUENCE_PREFIX + physicalName.getString(); }
         return SchemaUtil.getSchemaNameFromFullName(physicalName.toString());
     }
 
+    public static String getOldViewIndexSequenceName(PName physicalName, PName tenantId, boolean isNamespaceMapped) {
+        if (!isNamespaceMapped) { return VIEW_INDEX_SEQUENCE_NAME_PREFIX + (tenantId == null ? "" : tenantId); }
+        return SchemaUtil.getTableNameFromFullName(physicalName.toString()) + VIEW_INDEX_SEQUENCE_NAME_PREFIX;
+    }
+
+    public static SequenceKey getOldViewIndexSequenceKey(String tenantId, PName physicalName, int nSaltBuckets,
+                                                      boolean isNamespaceMapped) {
+        // Create global sequence of the form: <prefixed base table name><tenant id>
+        // rather than tenant-specific sequence, as it makes it much easier
+        // to cleanup when the physical table is dropped, as we can delete
+        // all global sequences leading with <prefix> + physical name.
+        String schemaName = getOldViewIndexSequenceSchemaName(physicalName, isNamespaceMapped);
+        String tableName = getOldViewIndexSequenceName(physicalName, PNameFactory.newName(tenantId), isNamespaceMapped);
+        return new SequenceKey(isNamespaceMapped ? tenantId : null, schemaName, tableName, nSaltBuckets);
+    }
+
+    public static String getViewIndexSequenceSchemaName(PName physicalName, boolean isNamespaceMapped) {
+        if (!isNamespaceMapped) {
+            String baseTableName = SchemaUtil.getParentTableNameFromIndexTable(physicalName.getString(),
+                MetaDataUtil.VIEW_INDEX_TABLE_PREFIX);
+            return SchemaUtil.getSchemaNameFromFullName(baseTableName);
+        } else {
+            return SchemaUtil.getSchemaNameFromFullName(physicalName.toString());
+        }
+
+    }
+
     public static String getViewIndexSequenceName(PName physicalName, PName tenantId, boolean isNamespaceMapped) {
-        if (!isNamespaceMapped) { return VIEW_INDEX_SEQUENCE_NAME_PREFIX; }
         return SchemaUtil.getTableNameFromFullName(physicalName.toString()) + VIEW_INDEX_SEQUENCE_NAME_PREFIX;
     }
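The new schema-name derivation in getViewIndexSequenceSchemaName above reduces to plain string handling. A hedged sketch, assuming the view index table prefix is `_IDX_` and using `.` as the separator in both cases (real namespace-mapped names use `:`, and the real code delegates to SchemaUtil helpers):

```java
// Sketch of the new sequence schema-name derivation. VIEW_INDEX_TABLE_PREFIX is
// assumed to be "_IDX_"; this is an illustration, not the Phoenix implementation.
public class SequenceSchemaSketch {
    static final String VIEW_INDEX_TABLE_PREFIX = "_IDX_";

    // Returns the schema portion of a full name such as "S.T" ("" if none).
    static String schemaFromFullName(String fullName) {
        int dot = fullName.lastIndexOf('.');
        return dot < 0 ? "" : fullName.substring(0, dot);
    }

    static String viewIndexSequenceSchemaName(String physicalName, boolean namespaceMapped) {
        if (!namespaceMapped) {
            // Strip the view index prefix to recover the base table, then take its schema.
            String baseTable = physicalName.replaceFirst(VIEW_INDEX_TABLE_PREFIX, "");
            return schemaFromFullName(baseTable);
        }
        return schemaFromFullName(physicalName);
    }

    public static void main(String[] args) {
        System.out.println(viewIndexSequenceSchemaName("MYSCHEMA._IDX_MYTABLE", false)); // MYSCHEMA
        System.out.println(viewIndexSequenceSchemaName("MYSCHEMA.MYTABLE", true));       // MYSCHEMA
    }
}
```

Either way the sequence now lives in the base table's schema, which is what lets global and tenant view indexes of one physical table share a single sequence.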
 
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index f0ee816..e4c4a5a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -36,11 +36,13 @@ import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.LINK_TYPE;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.MAX_VALUE;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.MIN_VALUE;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SALT_BUCKETS;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SORT_ORDER;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.START_WITH;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_CAT;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_SCHEM;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_SEQ_NUM;
@@ -53,6 +55,7 @@ import static org.apache.phoenix.query.QueryConstants.DIVERGED_VIEW_BASE_COLUMN_
 
 import java.io.IOException;
 import java.sql.Connection;
+import java.sql.DatabaseMetaData;
 import java.sql.Date;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
@@ -117,6 +120,9 @@ import org.apache.phoenix.schema.PTable.IndexType;
 import org.apache.phoenix.schema.PTable.LinkType;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.SaltingUtil;
+import org.apache.phoenix.schema.SequenceAllocation;
+import org.apache.phoenix.schema.SequenceAlreadyExistsException;
+import org.apache.phoenix.schema.SequenceKey;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.TableNotFoundException;
 import org.apache.phoenix.schema.types.PBinary;
@@ -2310,6 +2316,98 @@ public class UpgradeUtil {
         }
     }
 
+    public static void mergeViewIndexIdSequences(ConnectionQueryServices cqs, PhoenixConnection metaConnection)
+        throws SQLException {
+         /* Before PHOENIX-5132, there was a per-tenant sequence to generate view index ids,
+           which could cause problems if global and tenant-owned view indexes were mixed for the
+           same physical base table. Now there's just one sequence for all view indexes of the
+           same physical table, but we have to check whether any legacy sequences exist and
+           merge them into a single sequence, seeded at max + 1 of the largest legacy
+           sequence, to avoid collisions.
+         */
+        Map<String, List<SequenceKey>> sequenceTableMap = new HashMap<>();
+        DatabaseMetaData metaData = metaConnection.getMetaData();
+
+        try (ResultSet sequenceRS = metaData.getTables(null, null,
+            "%" + MetaDataUtil.VIEW_INDEX_SEQUENCE_NAME_PREFIX + "%",
+            new String[] {PhoenixDatabaseMetaData.SEQUENCE_TABLE_TYPE})) {
+            while (sequenceRS.next()) {
+                String tenantId = sequenceRS.getString(TABLE_CAT);
+                String schemaName = sequenceRS.getString(TABLE_SCHEM);
+                String sequenceName = sequenceRS.getString(TABLE_NAME);
+                int numBuckets = sequenceRS.getInt(SALT_BUCKETS);
+                SequenceKey key = new SequenceKey(tenantId, schemaName, sequenceName, numBuckets);
+                String baseTableName;
+                //Under the old naming convention, view index sequences of
+                //non-namespace-mapped tables stored their physical table name in the
+                //sequence schema name, while namespace-mapped tables stored it in the
+                //sequence name itself. Note the difference between
+                //VIEW_INDEX_SEQUENCE_PREFIX (_SEQ_) and VIEW_INDEX_SEQUENCE_NAME_PREFIX (_ID_)
+                if (schemaName != null && schemaName.contains(MetaDataUtil.VIEW_INDEX_SEQUENCE_PREFIX)) {
+                    baseTableName = schemaName.replace(MetaDataUtil.VIEW_INDEX_SEQUENCE_PREFIX, "");
+                } else {
+                    baseTableName = SchemaUtil.getTableName(schemaName,
+                        sequenceName.replace(MetaDataUtil.VIEW_INDEX_SEQUENCE_NAME_PREFIX, ""));
+                }
+                if (!sequenceTableMap.containsKey(baseTableName)) {
+                    sequenceTableMap.put(baseTableName, new ArrayList<SequenceKey>());
+                }
+                sequenceTableMap.get(baseTableName).add(key);
+            }
+        }
+        for (String baseTableName : sequenceTableMap.keySet()){
+            Map<SequenceKey, Long> currentSequenceValues = new HashMap<SequenceKey, Long>();
+            long maxViewIndexId = Long.MIN_VALUE;
+            PName name = PNameFactory.newName(baseTableName);
+            boolean hasNamespaceMapping =
+                SchemaUtil.isNamespaceMappingEnabled(null, cqs.getConfiguration()) ||
+                    cqs.getProps().getBoolean(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, false);
+            List<SequenceKey> existingSequenceKeys = sequenceTableMap.get(baseTableName);
+            for (SequenceKey sequenceKey : existingSequenceKeys){
+                long[] currentValueArray = new long[1];
+                SQLException[] sqlExceptions = new SQLException[1];
+                cqs.incrementSequences(
+                    Lists.newArrayList(new SequenceAllocation(sequenceKey, 1L)),
+                    EnvironmentEdgeManager.currentTimeMillis(),
+                    currentValueArray, sqlExceptions);
+
+                if (sqlExceptions[0] != null) {
+                    continue;
+                }
+                if (currentValueArray[0] > maxViewIndexId){
+                    maxViewIndexId = currentValueArray[0];
+                }
+                currentSequenceValues.put(sequenceKey, currentValueArray[0]);
+            }
+            try {
+                //In one case (namespace-mapped base table, global view index), the new sequence
+                //is the same as the old sequence, so rather than create it we just increment it
+                //to the right value.
+                SequenceKey newSequenceKey = new SequenceKey(null, MetaDataUtil.getViewIndexSequenceSchemaName(name, hasNamespaceMapping),
+                    MetaDataUtil.getViewIndexSequenceName(name, null, hasNamespaceMapping), cqs.getSequenceSaltBuckets());
+                if (currentSequenceValues.containsKey(newSequenceKey)){
+                    long incrementValue = maxViewIndexId - currentSequenceValues.get(newSequenceKey);
+                    SQLException[] incrementExceptions = new SQLException[1];
+                    List<SequenceAllocation> incrementAllocations = Lists.newArrayList(new SequenceAllocation(newSequenceKey, incrementValue));
+                    cqs.incrementSequences(incrementAllocations, EnvironmentEdgeManager.currentTimeMillis(),
+                        new long[1], incrementExceptions);
+                    if (incrementExceptions[0] != null){
+                        throw incrementExceptions[0];
+                    }
+                } else {
+                    cqs.createSequence(null, newSequenceKey.getSchemaName(),
+                        newSequenceKey.getSequenceName(), maxViewIndexId, 1, 1,
+                        Long.MIN_VALUE, Long.MAX_VALUE,
+                        false, EnvironmentEdgeManager.currentTimeMillis());
+                }
+            } catch(SequenceAlreadyExistsException sae) {
+                logger.info("Tried to create view index sequence "
+                    + SchemaUtil.getTableName(sae.getSchemaName(), sae.getSequenceName()) +
+                    " during upgrade but it already existed. This is probably fine.");
+            }
+        }
+    }
+
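The legacy-name parsing in mergeViewIndexIdSequences above also reduces to string handling. A standalone sketch, assuming the prefix values noted in the code comments (`VIEW_INDEX_SEQUENCE_PREFIX = "_SEQ_"`, `VIEW_INDEX_SEQUENCE_NAME_PREFIX = "_ID_"`); the class and helper names here are illustrative, not Phoenix APIs:

```java
// Sketch of how mergeViewIndexIdSequences recovers the base physical table name
// from a legacy view index sequence's schema/name pair. Prefix values are
// assumptions taken from the code comments, not read from MetaDataUtil.
public class LegacySequenceNameSketch {
    static final String SEQ_PREFIX = "_SEQ_";  // legacy schema-name prefix (non-namespace-mapped)
    static final String NAME_PREFIX = "_ID_";  // legacy sequence-name affix (namespace-mapped)

    static String baseTableName(String schemaName, String sequenceName) {
        if (schemaName != null && schemaName.contains(SEQ_PREFIX)) {
            // Non-namespace-mapped: the full physical table name was embedded in the schema name.
            return schemaName.replace(SEQ_PREFIX, "");
        }
        // Namespace-mapped: the table name was embedded in the sequence name.
        String table = sequenceName.replace(NAME_PREFIX, "");
        return (schemaName == null || schemaName.isEmpty()) ? table : schemaName + "." + table;
    }

    public static void main(String[] args) {
        System.out.println(baseTableName("_SEQ_S.T", "_ID_TENANT1")); // S.T
        System.out.println(baseTableName("S", "T_ID_"));              // S.T
    }
}
```

Grouping legacy sequences by this recovered base table name is what lets the upgrade take the max across tenant-owned and global sequences for one physical table before creating the single replacement sequence.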
     public static final String getSysCatalogSnapshotName(long currentSystemTableTimestamp) {
         String tableString = SYSTEM_CATALOG_NAME;
         Format formatter = new SimpleDateFormat("yyyyMMddHHmmss");