Posted to dev@ambari.apache.org by Vitalyi Brodetskyi <vb...@hortonworks.com> on 2014/08/20 20:17:00 UTC

Review Request 24900: Schema upgrade failed during upgrade from BWM20 with default Postgres DB

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24900/
-----------------------------------------------------------

Review request for Ambari, Dmitro Lisnichenko and Myroslav Papirkovskyy.


Bugs: AMBARI-6952
    https://issues.apache.org/jira/browse/AMBARI-6952


Repository: ambari


Description
-------

*STR (steps to reproduce):*
1) Install the Ambari server (BWM20) and set it up with the defaults (http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.4.4.23/ambari.repo)
2) Deploy a cluster
3) Perform an Ambari-only upgrade to 1.7.0
4) Run the schema upgrade

*Result:* The schema upgrade failed. From ambari-server.log:
{noformat}
org.postgresql.util.PSQLException: ERROR: relation "clusters_cluster_id_seq" does not exist
  Position: 87
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
	at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
	at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
	at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
	at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:499)
	at org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:485)
	at org.apache.ambari.server.upgrade.UpgradeCatalog150.executeDMLUpdates(UpgradeCatalog150.java:443)
	at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:272)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:194)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:243)
03:15:30,101  INFO [main] DBAccessorImpl:496 - Executing query: INSERT INTO ambari_sequences(sequence_name, "value") VALUES('configgroup_id_seq', 1)
03:15:30,102  WARN [main] DBAccessorImpl:505 - Error executing query: INSERT INTO ambari_sequences(sequence_name, "value") VALUES('configgroup_id_seq', 1), errorCode = 0, message = ERROR: duplicate key value violates unique constraint "ambari_sequences_pkey"
03:15:30,102  INFO [main] DBAccessorImpl:496 - Executing query: INSERT INTO ambari_sequences(sequence_name, "value") VALUES('requestschedule_id_seq', 1)
03:15:30,124  INFO [main] DBAccessorImpl:496 - Executing query: INSERT INTO ambari_sequences(sequence_name, "value") VALUES('resourcefilter_id_seq', 1)
03:15:30,790  INFO [main] StackExtensionHelper:467 - No services defined for stack: HDP-1.3.3
03:15:31,662  INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:31,900  INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:32,090  INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:32,263  INFO [Stack Version Loading Thread] LatestRepoCallable:72 - Loading latest URL info from http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
03:15:32,432  INFO [main] ActionDefinitionManager:124 - Added custom action definition for nagios_update_ignore
03:15:32,433  INFO [main] ActionDefinitionManager:124 - Added custom action definition for check_host
03:15:32,433  INFO [main] ActionDefinitionManager:124 - Added custom action definition for validate_configs
03:15:32,543 ERROR [main] AbstractUpgradeCatalog:150 - Error in transaction 
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.4.0.v20120608-r11652): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO clusterconfig (config_id, config_attributes, config_data, version_tag, create_timestamp, type_name, version, cluster_id) VALUES (13, NULL, E'{"content":"\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions an
 d\n# limitations under the License.\n\n\n# Define some default values that can be overridden by system properties\nhbase.root.logger\u003dINFO,console\nhbase.security.logger\u003dINFO,console\nhbase.log.dir\u003d.\nhbase.log.file\u003dhbase.log\n\n# Define the root logger to the system property \"hbase.root.logger\".\nlog4j.rootLogger\u003d${hbase.root.logger}\n\n# Logging Threshold\nlog4j.threshold\u003dALL\n\n#\n# Daily Rolling File Appender\n#\nlog4j.appender.DRFA\u003dorg.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File\u003d${hbase.log.dir}/${hbase.log.file}\n\n# Rollver at midnight\nlog4j.appender.DRFA.DatePattern\u003d.yyyy-MM-dd\n\n# 30-day backup\n#log4j.appender.DRFA.MaxBackupIndex\u003d30\nlog4j.appender.DRFA.layout\u003dorg.apache.log4j.PatternLayout\n\n# Pattern format: Date LogLevel LoggerName LogMessage\nlog4j.appender.DRFA.layout.ConversionPattern\u003d%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n# Rolling File Appender properties\nhbase.log.maxfilesize\u003d2
 56MB\nhbase.log.maxbackupindex\u003d20\n\n# Rolling File Appender\nlog4j.appender.RFA\u003dorg.apache.log4j.RollingFileAppender\nlog4j.appender.RFA.File\u003d${hbase.log.dir}/${hbase.log.file}\n\nlog4j.appender.RFA.MaxFileSize\u003d${hbase.log.maxfilesize}\nlog4j.appender.RFA.MaxBackupIndex\u003d${hbase.log.maxbackupindex}\n\nlog4j.appender.RFA.layout\u003dorg.apache.log4j.PatternLayout\nlog4j.appender.RFA.layout.ConversionPattern\u003d%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n#\n# Security audit appender\n#\nhbase.security.log.file\u003dSecurityAuth.audit\nhbase.security.log.maxfilesize\u003d256MB\nhbase.security.log.maxbackupindex\u003d20\nlog4j.appender.RFAS\u003dorg.apache.log4j.RollingFileAppender\nlog4j.appender.RFAS.File\u003d${hbase.log.dir}/${hbase.security.log.file}\nlog4j.appender.RFAS.MaxFileSize\u003d${hbase.security.log.maxfilesize}\nlog4j.appender.RFAS.MaxBackupIndex\u003d${hbase.security.log.maxbackupindex}\nlog4j.appender.RFAS.layout\u003dorg.apache.log4j.PatternLayout\n
 log4j.appender.RFAS.layout.ConversionPattern\u003d%d{ISO8601} %p %c: %m%n\nlog4j.category.SecurityLogger\u003d${hbase.security.logger}\nlog4j.additivity.SecurityLogger\u003dfalse\n#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController\u003dTRACE\n\n#\n# Null Appender\n#\nlog4j.appender.NullAppender\u003dorg.apache.log4j.varia.NullAppender\n\n#\n# console\n# Add \"console\" to rootlogger above if you want to use this\n#\nlog4j.appender.console\u003dorg.apache.log4j.ConsoleAppender\nlog4j.appender.console.target\u003dSystem.err\nlog4j.appender.console.layout\u003dorg.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern\u003d%d{ISO8601} %-5p [%t] %c{2}: %m%n\n\n# Custom Logging levels\n\nlog4j.logger.org.apache.zookeeper\u003dINFO\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem\u003dDEBUG\nlog4j.logger.org.apache.hadoop.hbase\u003dDEBUG\n# Make these two classes INFO-level. Make them DEBUG to see more zk debug.\nlog4j.logger.org.ap
 ache.hadoop.hbase.zookeeper.ZKUtil\u003dINFO\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher\u003dINFO\n#log4j.logger.org.apache.hadoop.dfs\u003dDEBUG\n# Set this class to log INFO only otherwise its OTT\n# Enable this to get detailed connection error/retry logging.\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation\u003dTRACE\n\n\n# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output)\n#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace\u003dDEBUG\n\n# Uncomment the below if you want to remove logging of client region caching\u0027\n# and scan of .META. messages\n# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation\u003dINFO\n# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner\u003dINFO\n\n    "}', 'version1', 1408443332466, 'hbase-log4j', NULL, 2) was aborted.  Call getNextException to see the cause.
Error Code: 0
Call: INSERT INTO clusterconfig (config_id, config_attributes, config_data, version_tag, create_timestamp, type_name, version, cluster_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
	bind => [8 parameters bound]
Query: InsertObjectQuery(org.apache.ambari.server.orm.entities.ClusterConfigMappingEntity@45e881b6)
	at org.eclipse.persistence.internal.jpa.EntityManagerImpl.flush(EntityManagerImpl.java:804)
	at org.eclipse.persistence.internal.jpa.QueryImpl.performPreQueryFlush(QueryImpl.java:857)
	at org.eclipse.persistence.internal.jpa.QueryImpl.executeReadQuery(QueryImpl.java:180)
	at org.eclipse.persistence.internal.jpa.QueryImpl.getSingleResult(QueryImpl.java:442)
	at org.eclipse.persistence.internal.jpa.EJBQueryImpl.getSingleResult(EJBQueryImpl.java:382)
	at org.apache.ambari.server.orm.dao.DaoUtils.selectOne(DaoUtils.java:70)
	at org.apache.ambari.server.orm.dao.ClusterDAO.findConfig(ClusterDAO.java:96)
	at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:53)
	at org.apache.ambari.server.upgrade.UpgradeCatalog150.addMissingLog4jConfigs(UpgradeCatalog150.java:699)
	at org.apache.ambari.server.upgrade.UpgradeCatalog150$5.run(UpgradeCatalog150.java:555)
	at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.executeInTransaction(AbstractUpgradeCatalog.java:147)
	at org.apache.ambari.server.upgrade.UpgradeCatalog150.executeDMLUpdates(UpgradeCatalog150.java:552)
	at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:272)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:194)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:243)
{noformat}
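
The first trace above shows the DML step failing because it reads the native Postgres sequence "clusters_cluster_id_seq", which does not exist in databases created from the older default Postgres schema, while the later WARN shows the seed INSERT into ambari_sequences colliding with a row that is already present. The sketch below only illustrates that defensive pattern with plain JDBC and is not the actual UpgradeCatalog150.java change; the class name, helper names, connection settings, and the 'cluster_id_seq' row name are illustrative assumptions.

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * Hypothetical illustration only (not the UpgradeCatalog150 patch): guard the
 * native sequence lookup and make the ambari_sequences seeding idempotent.
 */
public class SequenceSeedSketch {

  /** True if a relation (table or sequence) with the given name exists. */
  static boolean relationExists(Connection conn, String name) throws SQLException {
    try (PreparedStatement ps =
             conn.prepareStatement("SELECT 1 FROM pg_class WHERE relname = ?")) {
      ps.setString(1, name);
      try (ResultSet rs = ps.executeQuery()) {
        return rs.next();
      }
    }
  }

  /** Next cluster id: use the native sequence if present, else max(cluster_id) + 1. */
  static long nextClusterId(Connection conn) throws SQLException {
    String sql = relationExists(conn, "clusters_cluster_id_seq")
        ? "SELECT nextval('clusters_cluster_id_seq')"
        : "SELECT COALESCE(MAX(cluster_id), 0) + 1 FROM clusters";
    try (PreparedStatement ps = conn.prepareStatement(sql);
         ResultSet rs = ps.executeQuery()) {
      rs.next();
      return rs.getLong(1);
    }
  }

  /** Insert a seed row only if it is missing, avoiding the duplicate-key WARN. */
  static void seedSequence(Connection conn, String sequenceName, long value)
      throws SQLException {
    String sql = "INSERT INTO ambari_sequences(sequence_name, \"value\") "
        + "SELECT ?, ? WHERE NOT EXISTS "
        + "(SELECT 1 FROM ambari_sequences WHERE sequence_name = ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setString(1, sequenceName);
      ps.setLong(2, value);
      ps.setString(3, sequenceName);
      ps.executeUpdate();
    }
  }

  public static void main(String[] args) throws SQLException {
    // Connection settings are placeholders for a default Ambari Postgres setup.
    try (Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost/ambari", "ambari", "bigdata")) {
      seedSequence(conn, "cluster_id_seq", nextClusterId(conn));
      seedSequence(conn, "configgroup_id_seq", 1);
    }
  }
}
{noformat}

In the upgrade catalog itself the equivalent guard would presumably go through Ambari's DBAccessor rather than raw JDBC; the sketch only shows the shape of the checks.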


Diffs
-----

  ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog150.java 620c076 

Diff: https://reviews.apache.org/r/24900/diff/


Testing
-------

Testing is in progress.


Thanks,

Vitalyi Brodetskyi


Re: Review Request 24900: Schema upgrade failed during upgrade from BWM20 with default Postgres DB

Posted by Dmitro Lisnichenko <dl...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24900/#review51120
-----------------------------------------------------------

Ship it!


Ship It!

- Dmitro Lisnichenko


Re: Review Request 24900: Schema upgrade failed during upgrade from BWM20 with default Postgres DB

Posted by Myroslav Papirkovskyy <mp...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24900/#review51119
-----------------------------------------------------------

Ship it!


Ship It!

- Myroslav Papirkovskyy

