Posted to user@phoenix.apache.org by Prathap Rajendran <pr...@gmail.com> on 2020/01/14 13:23:06 UTC

Query on phoenix upgrade to 5.1.0

Hello All,

We are trying to upgrade Phoenix from
"apache-phoenix-4.14.0-cdh5.14.2"
to "APACHE_PHOENIX-5.1.0-cdh6.1.0".

I couldn't find any upgrade steps for this. Could you please point me to
any available documentation?

*Note:*
I have downloaded the Phoenix parcel below and am trying to run some DML
operations. I am getting the following error:

https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel

*Error:*
20/01/13 04:22:41 WARN client.HTable: Error calling coprocessor service
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for
row \x00\x00WEB_STAT
java.util.concurrent.ExecutionException:
org.apache.hadoop.hbase.TableNotFoundException:
org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
        at
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:860)
        at
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
        at
org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
        at
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
        at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
        at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
        at
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
        at
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
        at
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
        at
org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
        at
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
        at
org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
        at
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
        at
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
        at
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
        at
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
        at
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
        at
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
        at
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
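
A quick way to confirm what the exception reports is to check from the HBase shell whether the Phoenix system tables were actually created. This is a sketch of a CLI session, assuming shell access to the cluster; the table names are the standard Phoenix 5.x system tables:

```sh
# List Phoenix system tables; after a successful 5.x metadata upgrade,
# SYSTEM.CHILD_LINK should be present alongside SYSTEM.CATALOG and friends.
hbase shell <<'EOF'
list 'SYSTEM.*'
exists 'SYSTEM.CHILD_LINK'
EOF
```

If `exists` reports false, the upgrade code never created the table, which matches the TableNotFoundException above.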

Thanks,
Prathap

Re: Query on phoenix upgrade to 5.1.0

Posted by Josh Elser <el...@apache.org>.
You may be hitting a legitimate upgrade bug, Aleksandr. There is no 
Apache Phoenix 5.1.0 release -- it is still under development.

Upgrade testing is one of the final things done for a release.

If I had to lob a guess out there (completely a guess), this is an 
ordering bug: you don't have a SYSTEM.CHILD_LINK table created yet, but 
the migration code is already trying to copy the links from their old 
location in SYSTEM.CATALOG into that new CHILD_LINK table. Tracing 
through ConnectionQueryServicesImpl and UpgradeUtil, using the logging 
you have, would be necessary to figure out what's actually going on.
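
One way to collect that trace is to raise the log level for just those two classes on the client. A sketch of a log4j.properties fragment; the category names are taken from the package paths in the logs quoted below, and the appender setup is an assumption to adjust for your deployment:

```properties
# DEBUG only for the Phoenix upgrade code paths named above
log4j.logger.org.apache.phoenix.query.ConnectionQueryServicesImpl=DEBUG
log4j.logger.org.apache.phoenix.util.UpgradeUtil=DEBUG

# Keep everything else at the default level
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p %c: %m%n
```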

On 1/30/20 8:30 AM, Aleksandr Saraseka wrote:
> Hello, Josh.
> Thank you for your response!
> I beg your pardon, our version is 5.1.0
> 
> Here is what I see in PQS logs with DEBUG enabled.
> 
> 2020-01-30 13:20:57,483 DEBUG 
> org.apache.phoenix.query.ConnectionQueryServicesImpl: System mutex table 
> already appears to exist, not creating it
> 2020-01-30 13:20:57,521 DEBUG 
> org.apache.phoenix.query.ConnectionQueryServicesImpl: Found quorum: 
> hbase-data-upgrade001-stg.foo.bar:2181,hbase-data-upgrade002-stg.foo.bar:2181,hbase-master-upgrade001-stg.foo.bar:2181:/hbase
> 2020-01-30 13:20:57,557 DEBUG 
> org.apache.phoenix.query.ConnectionQueryServicesImpl: 
> 8406@hbase-master-upgrade001-stg acquired mutex for  tenantId : null 
> schemaName : SYSTEM tableName : CATALOG columnName : null familyName : null
> 2020-01-30 13:20:57,557 DEBUG 
> org.apache.phoenix.query.ConnectionQueryServicesImpl: Acquired lock in 
> SYSMUTEX table for migrating SYSTEM tables to SYSTEM namespace and/or 
> upgrading SYSTEM:CATALOG
> 2020-01-30 13:20:57,559 DEBUG 
> org.apache.phoenix.query.ConnectionQueryServicesImpl: Migrated SYSTEM 
> tables to SYSTEM namespace
> 2020-01-30 13:20:57,559 INFO org.apache.phoenix.util.UpgradeUtil: 
> Upgrading metadata to add parent links for indexes on views
> 2020-01-30 13:20:57,560 DEBUG org.apache.phoenix.jdbc.PhoenixStatement: 
> {CurrentSCN=9223372036854775807} Execute query: SELECT TENANT_ID, 
> TABLE_SCHEM, TABLE_NAME, COLUMN_FAMILY FROM SYSTEM.CATALOG WHERE 
> LINK_TYPE = 1
> 2020-01-30 13:20:57,562 DEBUG org.apache.phoenix.execute.BaseQueryPlan: 
> {CurrentSCN=9223372036854775807} Scan ready for iteration: 
> {"loadColumnFamiliesOnDemand":true,"filter":"LINK_TYPE = 
> 1","startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":0,"maxResultSize":-1,"families":{},"caching":2147483647,"maxVersions":1,"timeRange":[0,9223372036854775807]}
> 2020-01-30 13:20:57,562 DEBUG org.apache.phoenix.execute.BaseQueryPlan: 
> {CurrentSCN=9223372036854775807} Iterator ready: 
> org.apache.phoenix.iterate.RoundRobinResultIterator@27ba13aa
> 2020-01-30 13:20:57,562 DEBUG org.apache.phoenix.jdbc.PhoenixStatement: 
> {CurrentSCN=9223372036854775807} Explain plan: CLIENT 1-CHUNK PARALLEL 
> 1-WAY ROUND ROBIN FULL SCAN OVER SYSTEM:CATALOG
>      SERVER FILTER BY LINK_TYPE = 1
> 2020-01-30 13:20:57,563 DEBUG 
> org.apache.phoenix.iterate.BaseResultIterators: 
> {CurrentSCN=9223372036854775807} Getting iterators for ResultIterators 
> [name=PARALLEL,id=3241642d-a34a-131b-c1ac-bcc1241482c0,scans=[[{"loadColumnFamiliesOnDemand":true,"filter":"LINK_TYPE 
> = 
> 1","startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"0":["LINK_TYPE"]},"caching":2147483647,"maxVersions":1,"timeRange":[0,9223372036854775807]}]]]
> 2020-01-30 13:20:57,563 DEBUG 
> org.apache.phoenix.iterate.ParallelIterators: 
> {CurrentSCN=9223372036854775807} Id: 
> 3241642d-a34a-131b-c1ac-bcc1241482c0, Time: 0ms, Scan: 
> {"loadColumnFamiliesOnDemand":true,"filter":"LINK_TYPE = 
> 1","startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"0":["LINK_TYPE"]},"caching":2147483647,"maxVersions":1,"timeRange":[0,9223372036854775807]}
> 2020-01-30 13:20:57,600 INFO org.apache.phoenix.util.UpgradeUtil: 
> Upgrading metadata to add parent to child links for views
> 2020-01-30 13:20:57,604 DEBUG org.apache.phoenix.jdbc.PhoenixStatement: 
> Reloading table CHILD_LINK data from server
> 2020-01-30 13:20:57,619 DEBUG 
> org.apache.phoenix.query.ConnectionQueryServicesImpl: 
> 8406@hbase-master-upgrade001-stg released mutex for  tenantId : null 
> schemaName : SYSTEM tableName : CATALOG columnName : null familyName : null
> 2020-01-30 13:20:57,619 INFO 
> org.apache.hadoop.hbase.client.ConnectionImplementation: Closing master 
> protocol: MasterService
> 2020-01-30 13:20:57,623 DEBUG 
> org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Closing 
> session: 0x16fece700a300fb
> 2020-01-30 13:20:57,623 DEBUG 
> org.apache.phoenix.shaded.org.apache.zookeeper.ClientCnxn: Closing 
> client for session: 0x16fece700a300fb
> 2020-01-30 13:20:57,623 DEBUG org.apache.phoenix.jdbc.PhoenixDriver: 
> Expiring 
> hbase-data-upgrade001-stg.foo.bar,hbase-data-upgrade002-stg.foo.bar,hbase-master-upgrade001-stg.foo.bar:2181:/hbase 
> because of EXPLICIT
> 2020-01-30 13:20:57,623 INFO 
> org.apache.phoenix.log.QueryLoggerDisruptor: Shutting down 
> QueryLoggerDisruptor..
> 2020-01-30 13:20:57,627 DEBUG 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server: RESPONSE / 
>   500 handled=true
> 
> As for Phoenix - I used a version built by our devs.
> To make sure that everything is right, I want to use the official 
> version, but I can't find a tarball on the 
> https://phoenix.apache.org/download.html page.
> 
> On Wed, Jan 29, 2020 at 6:55 PM Josh Elser <elserj@apache.org 
> <ma...@apache.org>> wrote:
> 
>     Aleksandr and Prathap,
> 
>     Upgrades are done in Phoenix as they always have been. You should
>     deploy
>     the new version of phoenix-server jars to HBase, and then the first
>     time
>     a client connects with the Phoenix JDBC driver, that client will
>     trigger
>     an update to any system tables schema.
> 
>     As such, you need to make sure that this client has permission to alter
>     the phoenix system tables that exist, often requiring admin-level
>     access
>     to hbase. Your first step should be collecting DEBUG log from your
>     Phoenix JDBC client on upgrade.
> 
>     Please also remember that 5.0.0 is pretty old at this point -- we're
>     overdue for a 5.1.0. There may be existing issues that have already
>     been
>     fixed around the upgrade. Doing a search on Jira if you've not done so
>     already is important.
> 
>     On 1/29/20 4:30 AM, Aleksandr Saraseka wrote:
>      > Hello.
>      > I second this.
>      > We upgraded phoenix from 4.14.0 to 5.0.0 (with all underlying things
>      > like hdfs, hbase) and have the same problem.
>      >
>      > We are using queryserver + thin-client.
>      > So on PQS side we have:
>      > 2020-01-29 09:24:21,579 INFO org.apache.phoenix.util.UpgradeUtil:
>      > Upgrading metadata to add parent links for indexes on views
>      > 2020-01-29 09:24:21,615 INFO org.apache.phoenix.util.UpgradeUtil:
>      > Upgrading metadata to add parent to child links for views
>      > 2020-01-29 09:24:21,628 INFO
>      > org.apache.hadoop.hbase.client.ConnectionImplementation: Closing
>     master
>      > protocol: MasterService
>      > 2020-01-29 09:24:21,631 INFO
>      > org.apache.phoenix.log.QueryLoggerDisruptor: Shutting down
>      > QueryLoggerDisruptor..
>      >
>      > On client side:
>      > java.lang.RuntimeException:
>      > org.apache.phoenix.schema.TableNotFoundException: ERROR 1012
>     (42M03):
>      > Table undefined. tableName=SYSTEM.CHILD_LINK
>      >
>      > Can you point me to an upgrade guide for Phoenix? I tried to
>     find it
>      > myself with no luck.
>      >
>      > On Thu, Jan 16, 2020 at 1:08 PM Prathap Rajendran
>     <prathapmdu@gmail.com <ma...@gmail.com>
>      > <mailto:prathapmdu@gmail.com <ma...@gmail.com>>> wrote:
>      >
>      >     Hi All,
>      >
>      >     Thanks for the quick update. We still need some clarification
>     about
>      >     the context.
>      >
>      >     Actually, we are upgrading between the versions below:
>      >     Source      : apache-phoenix-4.14.0-cdh5.14.2
>      >     Destination : apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
>      >     <http://csfci.ih.lucent.com/~prathapr/phoenix62/apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz>
>      >
>      >     Just FYI, we have already upgraded to HBase 2.0.
>      >
>      >     We are still facing the issue below. Once we create this table
>      >     manually, there are no issues running DML operations.
>      >        >     org.apache.hadoop.hbase.TableNotFoundException:
>      >     SYSTEM.CHILD_LINK
>      >
>      >     Please let me know if there are any steps/documents for the
>     Phoenix upgrade
>      >     from 4.14 to 5.0.
>      >
>      >     Thanks,
>      >     Prathap
>      >
>      >
>      >     On Tue, Jan 14, 2020 at 11:34 PM Josh Elser
>     <elserj@apache.org <ma...@apache.org>
>      >     <mailto:elserj@apache.org <ma...@apache.org>>> wrote:
>      >
>      >         (with VP-Phoenix hat on)
>      >
>      >         This is not an official Apache Phoenix release, nor does it
>      >         follow the
>      >         ASF trademarks/branding rules. I'll be following up with the
>      >         author to
>      >         address the trademark violations.
>      >
>      >         Please direct your questions to the author of this project.
>      >         Again, it is
>      >         *not* Apache Phoenix.
>      >
>      >         On 1/14/20 12:37 PM, Geoffrey Jacoby wrote:
>      >          > Phoenix 5.1 doesn't actually exist yet, at least not
>     at the
>      >         Apache
>      >          > level. We haven't released it yet. It's possible that a
>      >         vendor or user
>      >          > has cut an unofficial release off one of our
>      >         development branches, but
>      >          > that's not something we can give support on. You should
>      >         contact your
>      >          > vendor.
>      >          >
>      >          > Also, since I see you're upgrading from Phoenix 4.14
>     to 5.1:
>      >         The 4.x
>      >          > branch of Phoenix is for HBase 1.x systems, and the 5.x
>      >         branch is for
>      >          > HBase 2.x systems. If you're upgrading from a 4.x to a
>     5.x,
>      >         make sure
>      >          > that you also upgrade your HBase. If you're still on HBase
>      >         1.x, we
>      >          > recently released Phoenix 4.15, which does have a
>     supported
>      >         upgrade path
>      >          > from 4.14 (and a very similar set of features to what
>     5.1 will
>      >          > eventually get).
>      >          >
>      >          > Geoffrey
>      >          >
>      >          > On Tue, Jan 14, 2020 at 5:23 AM Prathap Rajendran
>      >         <prathapmdu@gmail.com <ma...@gmail.com>
>     <mailto:prathapmdu@gmail.com <ma...@gmail.com>>
>      >          > <mailto:prathapmdu@gmail.com
>     <ma...@gmail.com> <mailto:prathapmdu@gmail.com
>     <ma...@gmail.com>>>>
>      >         wrote:
>      >          >
>      >          >     Hello All,
>      >          >
>      >          >     We are trying to upgrade the phoenix version from
>      >          >     "apache-phoenix-4.14.0-cdh5.14.2" to
>      >         "APACHE_PHOENIX-5.1.0-cdh6.1.0."
>      >          >
>      >          >     I couldn't find out any upgrade steps for the same.
>      >         Please help me
>      >          >     out to get any documents available.
>      >          >     *_Note:_*
>      >          >     I have downloaded the below phoenix parcel and
>     trying to
>      >         access some
>      >          >     DML operation. I am getting the following error
>      >          >
>      >          >
>      >
>     https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
>      >          >
>      >          >     *_Error:_*
>      >          >     20/01/13 04:22:41 WARN client.HTable: Error calling
>      >         coprocessor
>      >          >     service
>      >          >
>      >         
>       org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService
>      >          >     for row \x00\x00WEB_STAT
>      >          >     java.util.concurrent.ExecutionException:
>      >          >     org.apache.hadoop.hbase.TableNotFoundException:
>      >          >     org.apache.hadoop.hbase.TableNotFoundException:
>      >         SYSTEM.CHILD_LINK
>      >          >
>      >          >     Thanks,
>      >          >     Prathap
>      >          >
>      >
>      >
>      >
>      > --
>      > Aleksandr Saraseka
>      > DBA
>      > 380997600401 | asaraseka@eztexting.com | eztexting.com
> 
> 
> 
> --
> Aleksandr Saraseka
> DBA
> 380997600401 | asaraseka@eztexting.com | eztexting.com
> 

Re: Query on phoenix upgrade to 5.1.0

Posted by Aleksandr Saraseka <as...@eztexting.com>.
Hello, Josh.
Thank you for your response!
I beg your pardon, our version is 5.1.0

Here is what I see in PQS logs with DEBUG enabled.

2020-01-30 13:20:57,483 DEBUG
org.apache.phoenix.query.ConnectionQueryServicesImpl: System mutex table
already appears to exist, not creating it
2020-01-30 13:20:57,521 DEBUG
org.apache.phoenix.query.ConnectionQueryServicesImpl: Found quorum:
hbase-data-upgrade001-stg.foo.bar:2181,hbase-data-upgrade002-stg.foo.bar:2181,hbase-master-upgrade001-stg.foo.bar:2181:/hbase
2020-01-30 13:20:57,557 DEBUG
org.apache.phoenix.query.ConnectionQueryServicesImpl:
8406@hbase-master-upgrade001-stg acquired mutex for  tenantId : null
schemaName : SYSTEM tableName : CATALOG columnName : null familyName : null
2020-01-30 13:20:57,557 DEBUG
org.apache.phoenix.query.ConnectionQueryServicesImpl: Acquired lock in
SYSMUTEX table for migrating SYSTEM tables to SYSTEM namespace and/or
upgrading SYSTEM:CATALOG
2020-01-30 13:20:57,559 DEBUG
org.apache.phoenix.query.ConnectionQueryServicesImpl: Migrated SYSTEM
tables to SYSTEM namespace
2020-01-30 13:20:57,559 INFO org.apache.phoenix.util.UpgradeUtil: Upgrading
metadata to add parent links for indexes on views
2020-01-30 13:20:57,560 DEBUG org.apache.phoenix.jdbc.PhoenixStatement:
{CurrentSCN=9223372036854775807} Execute query: SELECT TENANT_ID,
TABLE_SCHEM, TABLE_NAME, COLUMN_FAMILY FROM SYSTEM.CATALOG WHERE LINK_TYPE
= 1
2020-01-30 13:20:57,562 DEBUG org.apache.phoenix.execute.BaseQueryPlan:
{CurrentSCN=9223372036854775807} Scan ready for iteration:
{"loadColumnFamiliesOnDemand":true,"filter":"LINK_TYPE =
1","startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":0,"maxResultSize":-1,"families":{},"caching":2147483647,"maxVersions":1,"timeRange":[0,9223372036854775807]}
2020-01-30 13:20:57,562 DEBUG org.apache.phoenix.execute.BaseQueryPlan:
{CurrentSCN=9223372036854775807} Iterator ready:
org.apache.phoenix.iterate.RoundRobinResultIterator@27ba13aa
2020-01-30 13:20:57,562 DEBUG org.apache.phoenix.jdbc.PhoenixStatement:
{CurrentSCN=9223372036854775807} Explain plan: CLIENT 1-CHUNK PARALLEL
1-WAY ROUND ROBIN FULL SCAN OVER SYSTEM:CATALOG
    SERVER FILTER BY LINK_TYPE = 1
2020-01-30 13:20:57,563 DEBUG
org.apache.phoenix.iterate.BaseResultIterators:
{CurrentSCN=9223372036854775807} Getting iterators for ResultIterators
[name=PARALLEL,id=3241642d-a34a-131b-c1ac-bcc1241482c0,scans=[[{"loadColumnFamiliesOnDemand":true,"filter":"LINK_TYPE
=
1","startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"0":["LINK_TYPE"]},"caching":2147483647,"maxVersions":1,"timeRange":[0,9223372036854775807]}]]]
2020-01-30 13:20:57,563 DEBUG org.apache.phoenix.iterate.ParallelIterators:
{CurrentSCN=9223372036854775807} Id: 3241642d-a34a-131b-c1ac-bcc1241482c0,
Time: 0ms, Scan: {"loadColumnFamiliesOnDemand":true,"filter":"LINK_TYPE =
1","startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"0":["LINK_TYPE"]},"caching":2147483647,"maxVersions":1,"timeRange":[0,9223372036854775807]}
2020-01-30 13:20:57,600 INFO org.apache.phoenix.util.UpgradeUtil: Upgrading
metadata to add parent to child links for views
2020-01-30 13:20:57,604 DEBUG org.apache.phoenix.jdbc.PhoenixStatement:
Reloading table CHILD_LINK data from server
2020-01-30 13:20:57,619 DEBUG
org.apache.phoenix.query.ConnectionQueryServicesImpl:
8406@hbase-master-upgrade001-stg released mutex for  tenantId : null
schemaName : SYSTEM tableName : CATALOG columnName : null familyName : null
2020-01-30 13:20:57,619 INFO
org.apache.hadoop.hbase.client.ConnectionImplementation: Closing master
protocol: MasterService
2020-01-30 13:20:57,623 DEBUG
org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper: Closing session:
0x16fece700a300fb
2020-01-30 13:20:57,623 DEBUG
org.apache.phoenix.shaded.org.apache.zookeeper.ClientCnxn: Closing client
for session: 0x16fece700a300fb
2020-01-30 13:20:57,623 DEBUG org.apache.phoenix.jdbc.PhoenixDriver:
Expiring
hbase-data-upgrade001-stg.foo.bar,hbase-data-upgrade002-stg.foo.bar,hbase-master-upgrade001-stg.foo.bar:2181:/hbase
because of EXPLICIT
2020-01-30 13:20:57,623 INFO org.apache.phoenix.log.QueryLoggerDisruptor:
Shutting down QueryLoggerDisruptor..
2020-01-30 13:20:57,627 DEBUG
org.apache.phoenix.shaded.org.eclipse.jetty.server.Server: RESPONSE /  500
handled=true

As for Phoenix - I used a version built by our devs.
To make sure that everything is right, I want to use the official version,
but I can't find a tarball on the https://phoenix.apache.org/download.html page.
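
The upgrade flow Josh describes (quoted below) can be sketched as shell steps. This is a sketch only: the jar name, paths, and hostnames are assumptions to adapt to your layout, and the client step must run as a user with admin-level HBase access:

```sh
# 1. Deploy the new phoenix-server jar to every HBase node, then do a
#    rolling restart so the region servers load the new coprocessors.
cp phoenix-server-5.x.jar /opt/hbase/lib/           # jar name/path assumed

# 2. Connect once with a privileged Phoenix JDBC client; this first
#    connection triggers the upgrade of the SYSTEM tables' schemas.
./sqlline.py zookeeper-host:2181:/hbase             # run as an HBase admin
```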

On Wed, Jan 29, 2020 at 6:55 PM Josh Elser <el...@apache.org> wrote:

> Aleksandr and Prathap,
>
> Upgrades are done in Phoenix as they always have been. You should deploy
> the new version of phoenix-server jars to HBase, and then the first time
> a client connects with the Phoenix JDBC driver, that client will trigger
> an update to any system tables schema.
>
> As such, you need to make sure that this client has permission to alter
> the phoenix system tables that exist, often requiring admin-level access
> to hbase. Your first step should be collecting DEBUG log from your
> Phoenix JDBC client on upgrade.
>
> Please also remember that 5.0.0 is pretty old at this point -- we're
> overdue for a 5.1.0. There may be existing issues that have already been
> fixed around the upgrade. Doing a search on Jira if you've not done so
> already is important.
>
> On 1/29/20 4:30 AM, Aleksandr Saraseka wrote:
> > Hello.
> > I second this.
> > We upgraded phoenix from 4.14.0 to 5.0.0 (with all underlying things
> > like hdfs, hbase) and have the same problem.
> >
> > We are using queryserver + thin-client.
> > So on PQS side we have:
> > 2020-01-29 09:24:21,579 INFO org.apache.phoenix.util.UpgradeUtil:
> > Upgrading metadata to add parent links for indexes on views
> > 2020-01-29 09:24:21,615 INFO org.apache.phoenix.util.UpgradeUtil:
> > Upgrading metadata to add parent to child links for views
> > 2020-01-29 09:24:21,628 INFO
> > org.apache.hadoop.hbase.client.ConnectionImplementation: Closing master
> > protocol: MasterService
> > 2020-01-29 09:24:21,631 INFO
> > org.apache.phoenix.log.QueryLoggerDisruptor: Shutting down
> > QueryLoggerDisruptor..
> >
> > On client side:
> > java.lang.RuntimeException:
> > org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03):
> > Table undefined. tableName=SYSTEM.CHILD_LINK
> >
> > Can you point me to an upgrade guide for Phoenix? I tried to find it
> > myself with no luck.
> >
> > On Thu, Jan 16, 2020 at 1:08 PM Prathap Rajendran <prathapmdu@gmail.com
> > <ma...@gmail.com>> wrote:
> >
> >     Hi All,
> >
> >     Thanks for the quick update. We still need some clarification about
> >     the context.
> >
> >     Actually, we are upgrading between the versions below:
> >     Source      : apache-phoenix-4.14.0-cdh5.14.2
> >     Destination : apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
> >     <http://csfci.ih.lucent.com/~prathapr/phoenix62/apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz>
> >
> >     Just FYI, we have already upgraded to HBase 2.0.
> >
> >     We are still facing the issue below. Once we create this table
> >     manually, there are no issues running DML operations.
> >        >     org.apache.hadoop.hbase.TableNotFoundException:
> >     SYSTEM.CHILD_LINK
> >
> >     Please let me know if there are any steps/documents for the Phoenix
> >     upgrade from 4.14 to 5.0.
> >
> >     Thanks,
> >     Prathap
> >
> >
> >     On Tue, Jan 14, 2020 at 11:34 PM Josh Elser <elserj@apache.org
> >     <ma...@apache.org>> wrote:
> >
> >         (with VP-Phoenix hat on)
> >
> >         This is not an official Apache Phoenix release, nor does it
> >         follow the
> >         ASF trademarks/branding rules. I'll be following up with the
> >         author to
> >         address the trademark violations.
> >
> >         Please direct your questions to the author of this project.
> >         Again, it is
> >         *not* Apache Phoenix.
> >
> >         On 1/14/20 12:37 PM, Geoffrey Jacoby wrote:
> >          > Phoenix 5.1 doesn't actually exist yet, at least not at the
> >         Apache
> >          > level. We haven't released it yet. It's possible that a
> >         vendor or user
> >          > has cut an unofficial release off one of our
> >         development branches, but
> >          > that's not something we can give support on. You should
> >         contact your
> >          > vendor.
> >          >
> >          > Also, since I see you're upgrading from Phoenix 4.14 to 5.1:
> >         The 4.x
> >          > branch of Phoenix is for HBase 1.x systems, and the 5.x
> >         branch is for
> >          > HBase 2.x systems. If you're upgrading from a 4.x to a 5.x,
> >         make sure
> >          > that you also upgrade your HBase. If you're still on HBase
> >         1.x, we
> >          > recently released Phoenix 4.15, which does have a supported
> >         upgrade path
> >          > from 4.14 (and a very similar set of features to what 5.1 will
> >          > eventually get).
> >          >
> >          > Geoffrey
> >          >
> >          > On Tue, Jan 14, 2020 at 5:23 AM Prathap Rajendran
> >         <prathapmdu@gmail.com <ma...@gmail.com>
> >          > <mailto:prathapmdu@gmail.com <ma...@gmail.com>>>
> >         wrote:
> >          >
> >          >     Hello All,
> >          >
> >          >     We are trying to upgrade the phoenix version from
> >          >     "apache-phoenix-4.14.0-cdh5.14.2" to
> >         "APACHE_PHOENIX-5.1.0-cdh6.1.0."
> >          >
> >          >     I couldn't find out any upgrade steps for the same.
> >         Please help me
> >          >     out to get any documents available.
> >          >     *_Note:_*
> >          >     I have downloaded the below phoenix parcel and trying to
> >         access some
> >          >     DML operation. I am getting the following error
> >          >
> >          >
> >
> https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
> >         <
> https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
> >
> >          >
> >           <
> https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
> <
> https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
> >>
> >          >
> >          >     *_Error:_*
> >          >     20/01/13 04:22:41 WARN client.HTable: Error calling
> >         coprocessor
> >          >     service
> >          >
> >
>  org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService
> >          >     for row \x00\x00WEB_STAT
> >          >     java.util.concurrent.ExecutionException:
> >          >     org.apache.hadoop.hbase.TableNotFoundException:
> >          >     org.apache.hadoop.hbase.TableNotFoundException:
> >         SYSTEM.CHILD_LINK
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:860)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
> >          >              at
> >          >
> >
>  org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
> >          >              at
> >          >
> >
>  org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
> >          >              at
> >          >
> >
>  org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
> >          >              at
> >          >
> >
>  org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
> >          >              at
> >          >
> >
>  org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
> >          >
> >          >     Thanks,
> >          >     Prathap
> >          >
> >
> >
> >
> > --
> >               Aleksandr Saraseka
> > DBA
> > 380997600401
> > <tel:380997600401> *•* asaraseka@eztexting.com
> > <ma...@eztexting.com> *•* eztexting.com
>


-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaraseka@eztexting.com  *•*  eztexting.com

Re: Query on phoenix upgrade to 5.1.0

Posted by Josh Elser <el...@apache.org>.
Aleksandr and Prathap,

Upgrades are done in Phoenix as they always have been. You deploy the 
new version of the phoenix-server jars to HBase, and then the first time 
a client connects with the Phoenix JDBC driver, that client triggers an 
upgrade of the system table schemas.

As such, you need to make sure that this client has permission to alter 
the Phoenix system tables that exist, which often requires admin-level 
access to HBase. Your first step should be collecting DEBUG logs from 
your Phoenix JDBC client during the upgrade.

Please also remember that 5.0.0 is pretty old at this point -- we're 
overdue for a 5.1.0. There may be existing issues around the upgrade 
that have already been fixed. If you've not done so already, searching 
Jira is important.
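For the DEBUG logs, a client-side log4j fragment along these lines is one 
way to capture the upgrade path. This is a sketch only: it assumes the 
log4j 1.x logging that Phoenix 5.0-era clients ship with, and the appender 
wiring is illustrative, not a prescribed configuration.

```
# Sketch of a client-side log4j.properties (log4j 1.x assumed).
# Appender name and pattern are illustrative; adapt to your setup.
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c: %m%n

# Turn up the packages involved in the system-table upgrade:
log4j.logger.org.apache.phoenix=DEBUG
log4j.logger.org.apache.hadoop.hbase.client=DEBUG
```

With this in place on the JDBC client's classpath, the UpgradeUtil 
messages and the HBase client calls around SYSTEM table creation should 
show up in the client log.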

On 1/29/20 4:30 AM, Aleksandr Saraseka wrote:
> Hello.
> I'm second on this.
> We upgraded phoenix from 4.14.0 to 5.0.0 (with all underlying things 
> like hdfs, hbase) and have the same problem.
> 
> We are using queryserver + thin-client
> So on PQS side we have:
> 2020-01-29 09:24:21,579 INFO org.apache.phoenix.util.UpgradeUtil: 
> Upgrading metadata to add parent links for indexes on views
> 2020-01-29 09:24:21,615 INFO org.apache.phoenix.util.UpgradeUtil: 
> Upgrading metadata to add parent to child links for views
> 2020-01-29 09:24:21,628 INFO 
> org.apache.hadoop.hbase.client.ConnectionImplementation: Closing master 
> protocol: MasterService
> 2020-01-29 09:24:21,631 INFO 
> org.apache.phoenix.log.QueryLoggerDisruptor: Shutting down 
> QueryLoggerDisruptor..
> 
> On client side:
> java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): 
> Table undefined. tableName=SYSTEM.CHILD_LINK
> 
> Can you point me to upgrade guide for Phoenix ? I tried to find it by 
> myself and have no luck.
> 
> On Thu, Jan 16, 2020 at 1:08 PM Prathap Rajendran <prathapmdu@gmail.com 
> <ma...@gmail.com>> wrote:
> 
>     Hi All,
> 
>     Thanks for the quick update. Still we have some clarification about
>     the context.
> 
>     Actually we are upgrading from the below version
>     Source      : apache-phoenix-4.14.0-cdh5.14.2
>     Destination: apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
>     <http://csfci.ih.lucent.com/~prathapr/phoenix62/apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz>
> 
>     Just FYI, we have already upgraded to Hbase  2.0.
> 
>     Still we are facing the issue below, Once we create this table
>     manually, then there is no issues to run DML operations.
>        >     org.apache.hadoop.hbase.TableNotFoundException:
>     SYSTEM.CHILD_LINK
> 
>     Please let me know if any steps/documents for phoenix upgrade from
>     4.14 to 5.0.
> 
>     Thanks,
>     Prathap
> 
> 
>     On Tue, Jan 14, 2020 at 11:34 PM Josh Elser <elserj@apache.org
>     <ma...@apache.org>> wrote:
> 
>         (with VP-Phoenix hat on)
> 
>         This is not an official Apache Phoenix release, nor does it
>         follow the
>         ASF trademarks/branding rules. I'll be following up with the
>         author to
>         address the trademark violations.
> 
>         Please direct your questions to the author of this project.
>         Again, it is
>         *not* Apache Phoenix.
> 
>         On 1/14/20 12:37 PM, Geoffrey Jacoby wrote:
>          > Phoenix 5.1 doesn't actually exist yet, at least not at the
>         Apache
>          > level. We haven't released it yet. It's possible that a
>         vendor or user
>          > has cut an unofficial release off one of our
>         development branches, but
>          > that's not something we can give support on. You should
>         contact your
>          > vendor.
>          >
>          > Also, since I see you're upgrading from Phoenix 4.14 to 5.1:
>         The 4.x
>          > branch of Phoenix is for HBase 1.x systems, and the 5.x
>         branch is for
>          > HBase 2.x systems. If you're upgrading from a 4.x to a 5.x,
>         make sure
>          > that you also upgrade your HBase. If you're still on HBase
>         1.x, we
>          > recently released Phoenix 4.15, which does have a supported
>         upgrade path
>          > from 4.14 (and a very similar set of features to what 5.1 will
>          > eventually get).
>          >
>          > Geoffrey
>          >
>          > On Tue, Jan 14, 2020 at 5:23 AM Prathap Rajendran
>         <prathapmdu@gmail.com <ma...@gmail.com>
>          > <mailto:prathapmdu@gmail.com <ma...@gmail.com>>>
>         wrote:
>          >
>          >     Hello All,
>          >
>          >     We are trying to upgrade the phoenix version from
>          >     "apache-phoenix-4.14.0-cdh5.14.2" to
>         "APACHE_PHOENIX-5.1.0-cdh6.1.0."
>          >
>          >     I couldn't find out any upgrade steps for the same.
>         Please help me
>          >     out to get any documents available.
>          >     *_Note:_*
>          >     I have downloaded the below phoenix parcel and trying to
>         access some
>          >     DML operation. I am getting the following error
>          >
>          >
>         https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
>         <https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel>
>          >   
>           <https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel <https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel>>
>          >
>          >     *_Error:_*
>          >     20/01/13 04:22:41 WARN client.HTable: Error calling
>         coprocessor
>          >     service
>          >   
>           org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService
>          >     for row \x00\x00WEB_STAT
>          >     java.util.concurrent.ExecutionException:
>          >     org.apache.hadoop.hbase.TableNotFoundException:
>          >     org.apache.hadoop.hbase.TableNotFoundException:
>         SYSTEM.CHILD_LINK
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:860)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
>          >              at
>          >   
>           org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
>          >              at
>          >   
>           org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
>          >              at
>          >   
>           org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
>          >              at
>          >   
>           org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
>          >              at
>          >   
>           org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
>          >              at
>          >   
>           org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
>          >              at
>          >   
>           org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
>          >              at
>          >   
>           org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
>          >              at
>          >   
>           org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>          >
>          >     Thanks,
>          >     Prathap
>          >
> 
> 
> 
> -- 
> 		Aleksandr Saraseka
> DBA
> 380997600401
> <tel:380997600401> *•* asaraseka@eztexting.com 
> <ma...@eztexting.com> *•* eztexting.com 
> 

Re: Query on phoenix upgrade to 5.1.0

Posted by Aleksandr Saraseka <as...@eztexting.com>.
Hello.
I second this.
We upgraded Phoenix from 4.14.0 to 5.0.0 (along with all the underlying
pieces such as HDFS and HBase) and have the same problem.

We are using the Phoenix Query Server with the thin client.
On the PQS side we see:
2020-01-29 09:24:21,579 INFO org.apache.phoenix.util.UpgradeUtil: Upgrading
metadata to add parent links for indexes on views
2020-01-29 09:24:21,615 INFO org.apache.phoenix.util.UpgradeUtil: Upgrading
metadata to add parent to child links for views
2020-01-29 09:24:21,628 INFO
org.apache.hadoop.hbase.client.ConnectionImplementation: Closing master
protocol: MasterService
2020-01-29 09:24:21,631 INFO org.apache.phoenix.log.QueryLoggerDisruptor:
Shutting down QueryLoggerDisruptor..

On client side:
java.lang.RuntimeException:
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table
undefined. tableName=SYSTEM.CHILD_LINK

Can you point me to an upgrade guide for Phoenix? I tried to find one
myself without luck.
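One quick check is whether the upgrade actually created the new system
table in HBase. The following is a sketch of an interactive HBase shell
session; it assumes the default layout where Phoenix system tables are
plain HBase tables named like SYSTEM.CHILD_LINK (with namespace mapping
enabled they appear as SYSTEM:CHILD_LINK instead):

```
# In the HBase shell, list the Phoenix system tables:
list 'SYSTEM.*'
# The client errors above suggest the upgrade never added
# SYSTEM.CHILD_LINK alongside the existing tables such as
# SYSTEM.CATALOG and SYSTEM.SEQUENCE.
```

If SYSTEM.CHILD_LINK is missing from that list, the server-side upgrade
did not complete, which matches the TableNotFoundException on the client.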

On Thu, Jan 16, 2020 at 1:08 PM Prathap Rajendran <pr...@gmail.com>
wrote:

> Hi All,
>
> Thanks for the quick update. Still we have some clarification about the
> context.
>
> Actually we are upgrading from the below version
> Source      : apache-phoenix-4.14.0-cdh5.14.2
> Destination: apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
> <http://csfci.ih.lucent.com/~prathapr/phoenix62/apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz>
>
> Just FYI, we have already upgraded to Hbase  2.0.
>
> Still we are facing the issue below,  Once we create this table manually,
> then there is no issues to run DML operations.
>   >     org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
>
> Please let me know if any steps/documents for phoenix upgrade from 4.14 to
> 5.0.
>
> Thanks,
> Prathap
>
>
> On Tue, Jan 14, 2020 at 11:34 PM Josh Elser <el...@apache.org> wrote:
>
>> (with VP-Phoenix hat on)
>>
>> This is not an official Apache Phoenix release, nor does it follow the
>> ASF trademarks/branding rules. I'll be following up with the author to
>> address the trademark violations.
>>
>> Please direct your questions to the author of this project. Again, it is
>> *not* Apache Phoenix.
>>
>> On 1/14/20 12:37 PM, Geoffrey Jacoby wrote:
>> > Phoenix 5.1 doesn't actually exist yet, at least not at the Apache
>> > level. We haven't released it yet. It's possible that a vendor or user
>> > has cut an unofficial release off one of our development branches, but
>> > that's not something we can give support on. You should contact your
>> > vendor.
>> >
>> > Also, since I see you're upgrading from Phoenix 4.14 to 5.1: The 4.x
>> > branch of Phoenix is for HBase 1.x systems, and the 5.x branch is for
>> > HBase 2.x systems. If you're upgrading from a 4.x to a 5.x, make sure
>> > that you also upgrade your HBase. If you're still on HBase 1.x, we
>> > recently released Phoenix 4.15, which does have a supported upgrade
>> path
>> > from 4.14 (and a very similar set of features to what 5.1 will
>> > eventually get).
>> >
>> > Geoffrey
>> >
>> > On Tue, Jan 14, 2020 at 5:23 AM Prathap Rajendran <prathapmdu@gmail.com
>> > <ma...@gmail.com>> wrote:
>> >
>> >     Hello All,
>> >
>> >     We are trying to upgrade the phoenix version from
>> >     "apache-phoenix-4.14.0-cdh5.14.2" to
>> "APACHE_PHOENIX-5.1.0-cdh6.1.0."
>> >
>> >     I couldn't find out any upgrade steps for the same. Please help me
>> >     out to get any documents available.
>> >     *_Note:_*
>> >     I have downloaded the below phoenix parcel and trying to access some
>> >     DML operation. I am getting the following error
>> >
>> >
>> https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
>> >     <
>> https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
>> >
>> >
>> >     *_Error:_*
>> >     20/01/13 04:22:41 WARN client.HTable: Error calling coprocessor
>> >     service
>> >
>>  org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService
>> >     for row \x00\x00WEB_STAT
>> >     java.util.concurrent.ExecutionException:
>> >     org.apache.hadoop.hbase.TableNotFoundException:
>> >     org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:860)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
>> >              at
>> >
>>  org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
>> >              at
>> >
>>  org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
>> >              at
>> >
>>  org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
>> >              at
>> >
>>  org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
>> >              at
>> >
>>  org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
>> >              at
>> >
>>  org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
>> >              at
>> >
>>  org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
>> >              at
>> >
>>  org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
>> >              at
>> >
>>  org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>> >
>> >     Thanks,
>> >     Prathap
>> >
>>
>

-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaraseka@eztexting.com  *•*  eztexting.com

Re: Query on phoenix upgrade to 5.1.0

Posted by Prathap Rajendran <pr...@gmail.com>.
Hi All,

Thanks for the quick update. We still need some clarification about the
context.

We are upgrading between the following versions:
Source      : apache-phoenix-4.14.0-cdh5.14.2
Destination: apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
<http://csfci.ih.lucent.com/~prathapr/phoenix62/apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz>

Just FYI, we have already upgraded to HBase 2.0.

We are still facing the issue below. Once we create this table manually,
DML operations run without issues.
  >     org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
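For reference, the manual creation mentioned above would look roughly
like this in the HBase shell. This is only a sketch: the single column
family '0' mirrors Phoenix's default column family, and the attributes
are assumptions; verify them against your own SYSTEM.CATALOG descriptor
before running anything like it.

```
# HBase shell sketch: create the missing table so Phoenix metadata
# calls stop failing. Check `describe 'SYSTEM.CATALOG'` first and
# copy its family attributes rather than trusting these values.
create 'SYSTEM.CHILD_LINK', {NAME => '0', VERSIONS => 1, KEEP_DELETED_CELLS => 'true'}
```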

Please let me know if there are any steps/documents for a Phoenix
upgrade from 4.14 to 5.0.

Thanks,
Prathap


On Tue, Jan 14, 2020 at 11:34 PM Josh Elser <el...@apache.org> wrote:

> (with VP-Phoenix hat on)
>
> This is not an official Apache Phoenix release, nor does it follow the
> ASF trademarks/branding rules. I'll be following up with the author to
> address the trademark violations.
>
> Please direct your questions to the author of this project. Again, it is
> *not* Apache Phoenix.
>
> On 1/14/20 12:37 PM, Geoffrey Jacoby wrote:
> > Phoenix 5.1 doesn't actually exist yet, at least not at the Apache
> > level. We haven't released it yet. It's possible that a vendor or user
> > has cut an unofficial release off one of our development branches, but
> > that's not something we can give support on. You should contact your
> > vendor.
> >
> > Also, since I see you're upgrading from Phoenix 4.14 to 5.1: The 4.x
> > branch of Phoenix is for HBase 1.x systems, and the 5.x branch is for
> > HBase 2.x systems. If you're upgrading from a 4.x to a 5.x, make sure
> > that you also upgrade your HBase. If you're still on HBase 1.x, we
> > recently released Phoenix 4.15, which does have a supported upgrade path
> > from 4.14 (and a very similar set of features to what 5.1 will
> > eventually get).
> >
> > Geoffrey
> >
> > On Tue, Jan 14, 2020 at 5:23 AM Prathap Rajendran <prathapmdu@gmail.com
> > <ma...@gmail.com>> wrote:
> >
> >     Hello All,
> >
> >     We are trying to upgrade the phoenix version from
> >     "apache-phoenix-4.14.0-cdh5.14.2" to "APACHE_PHOENIX-5.1.0-cdh6.1.0."
> >
> >     I couldn't find out any upgrade steps for the same. Please help me
> >     out to get any documents available.
> >     *_Note:_*
> >     I have downloaded the below phoenix parcel and trying to access some
> >     DML operation. I am getting the following error
> >
> >
> https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
> >     <
> https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
> >
> >
> >     *_Error:_*
> >     20/01/13 04:22:41 WARN client.HTable: Error calling coprocessor
> >     service
> >
>  org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService
> >     for row \x00\x00WEB_STAT
> >     java.util.concurrent.ExecutionException:
> >     org.apache.hadoop.hbase.TableNotFoundException:
> >     org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
> >         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:860)
> >         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
> >         at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
> >         at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
> >         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
> >         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
> >         at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
> >         at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
> >         at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
> >         at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
> >         at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
> >         at org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
> >         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
> >         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
> >         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
> >         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
> >         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
> >         at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
> >
> >     Thanks,
> >     Prathap
> >
>

Re: Query on phoenix upgrade to 5.1.0

Posted by Josh Elser <el...@apache.org>.
(with VP-Phoenix hat on)

This is not an official Apache Phoenix release, nor does it follow the 
ASF trademarks/branding rules. I'll be following up with the author to 
address the trademark violations.

Please direct your questions to the author of this project. Again, it is 
*not* Apache Phoenix.

On 1/14/20 12:37 PM, Geoffrey Jacoby wrote:
> Phoenix 5.1 doesn't actually exist yet, at least not at the Apache 
> level. We haven't released it yet. It's possible that a vendor or user 
> has cut an unofficial release off one of our development branches, but 
> that's not something we can give support on. You should contact your 
> vendor.
> 
> Also, since I see you're upgrading from Phoenix 4.14 to 5.1: The 4.x 
> branch of Phoenix is for HBase 1.x systems, and the 5.x branch is for 
> HBase 2.x systems. If you're upgrading from a 4.x to a 5.x, make sure 
> that you also upgrade your HBase. If you're still on HBase 1.x, we 
> recently released Phoenix 4.15, which does have a supported upgrade path 
> from 4.14 (and a very similar set of features to what 5.1 will 
> eventually get).
> 
> Geoffrey
> 
> On Tue, Jan 14, 2020 at 5:23 AM Prathap Rajendran <prathapmdu@gmail.com 
> <ma...@gmail.com>> wrote:
> 
>     Hello All,
> 
>     We are trying to upgrade the phoenix version from
>     "apache-phoenix-4.14.0-cdh5.14.2" to "APACHE_PHOENIX-5.1.0-cdh6.1.0."
> 
>     I couldn't find any upgrade steps for this. Please point me to any
>     documentation that is available.
>     *Note:*
>     I have downloaded the Phoenix parcel below and am trying to run some
>     DML operations. I am getting the following error:
> 
>     https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
> 
>     *_Error:_*
>     20/01/13 04:22:41 WARN client.HTable: Error calling coprocessor
>     service
>     org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService
>     for row \x00\x00WEB_STAT
>     java.util.concurrent.ExecutionException:
>     org.apache.hadoop.hbase.TableNotFoundException:
>     org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:860)
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
>         at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
>         at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
>         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
>         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
>         at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
>         at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
>         at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
>         at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
>         at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
>         at org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
>         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
>         at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
>         at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
> 
>     Thanks,
>     Prathap
> 

Re: Query on phoenix upgrade to 5.1.0

Posted by Geoffrey Jacoby <gj...@salesforce.com>.
Phoenix 5.1 doesn't actually exist yet, at least not at the Apache level.
We haven't released it yet. It's possible that a vendor or user has cut an
unofficial release off one of our development branches, but that's not
something we can give support on. You should contact your vendor.

Also, since I see you're upgrading from Phoenix 4.14 to 5.1: The 4.x branch
of Phoenix is for HBase 1.x systems, and the 5.x branch is for HBase 2.x
systems. If you're upgrading from a 4.x to a 5.x, make sure that you also
upgrade your HBase. If you're still on HBase 1.x, we recently released
Phoenix 4.15, which does have a supported upgrade path from 4.14 (and a
very similar set of features to what 5.1 will eventually get).
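The branch pairing described above can be sketched as a small helper. This is purely illustrative (it is not part of any Phoenix API), and the version-to-HBase mapping is only the rule of thumb stated in this reply:

```python
def required_hbase_major(phoenix_version: str) -> int:
    """Map a Phoenix release line to the HBase major version it targets.

    Illustrative helper, not a Phoenix API: Phoenix 4.x releases run
    against HBase 1.x clusters, and the 5.x branch targets HBase 2.x.
    """
    major = int(phoenix_version.split(".")[0])
    if major == 4:
        return 1  # Phoenix 4.x pairs with HBase 1.x
    if major == 5:
        return 2  # Phoenix 5.x pairs with HBase 2.x
    raise ValueError(f"unknown Phoenix branch: {phoenix_version}")

# The poster's upgrade crosses branches, so HBase must be upgraded too:
print(required_hbase_major("4.14.0"))  # 1 (HBase 1.x, e.g. CDH 5.14.2)
print(required_hbase_major("5.1.0"))   # 2 (HBase 2.x, e.g. CDH 6.1.0)
```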

Geoffrey

On Tue, Jan 14, 2020 at 5:23 AM Prathap Rajendran <pr...@gmail.com>
wrote:

> Hello All,
>
> We are trying to upgrade the phoenix version from "apache-phoenix-4.14.0-cdh5.14.2"
> to "APACHE_PHOENIX-5.1.0-cdh6.1.0."
>
> I couldn't find any upgrade steps for this. Please point me to any
> documentation that is available.
>
> *Note:*
> I have downloaded the Phoenix parcel below and am trying to run some DML
> operations. I am getting the following error:
>
>
> https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
>
> *Error:*
> 20/01/13 04:22:41 WARN client.HTable: Error calling coprocessor service
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for
> row \x00\x00WEB_STAT
> java.util.concurrent.ExecutionException:
> org.apache.hadoop.hbase.TableNotFoundException:
> org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:860)
>         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
>         at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
>         at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
>         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
>         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
>         at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
>         at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
>         at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
>         at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
>         at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
>         at org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
>         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
>         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
>         at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
>         at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>
> Thanks,
> Prathap
>