Posted to derby-commits@db.apache.org by ma...@apache.org on 2008/05/13 21:29:13 UTC

svn commit: r655980 - in /db/derby/code/trunk/java: engine/org/apache/derby/catalog/ engine/org/apache/derby/iapi/db/ engine/org/apache/derby/impl/sql/compile/ engine/org/apache/derby/impl/sql/execute/ testing/org/apache/derbyTesting/functionTests/test...

Author: mamta
Date: Tue May 13 12:29:12 2008
New Revision: 655980

URL: http://svn.apache.org/viewvc?rev=655980&view=rev
Log:
This commit is for DERBY-1062. Currently SYSCS_INPLACE_COMPRESS_TABLE is implemented on its own in
OnlineCompress.java. It is better to share the existing ALTER TABLE code so that all the necessary
checks already done in ALTER TABLE are reused rather than repeated in OnlineCompress. A similar
procedure, SYSCS_COMPRESS_TABLE, is already written on top of the ALTER TABLE code. With this commit,
I am getting rid of OnlineCompress.java and moving the necessary code into the ALTER TABLE related
classes. One thing that SYSCS_INPLACE_COMPRESS_TABLE allows is compressing tables in the SYSTEM
schemas. The compile code currently throws an exception if a DDL operation is attempted on a system
schema, so I had to make changes to allow SYSTEM schema handling in this DDL.
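
As a rough sketch of how the rewrite works (the US/CUSTOMER names are borrowed from the javadoc
example in SystemProcedures.java below and are only illustrative), a call such as

    call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('US', 'CUSTOMER', 1, 1, 1);

is now turned inside SYSCS_INPLACE_COMPRESS_TABLE into roughly

    alter table "US"."CUSTOMER" compress inplace purge defragment truncate_end

which is prepared and executed over the default internal connection. Note that this ALTER TABLE
form is internal syntax (guarded by checkInternalFeature("COMPRESS") in the grammar) and is not
meant to be issued directly by users.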

This sharing of code also fixes the GRANT/REVOKE behavior for SYSCS_INPLACE_COMPRESS_TABLE. Earlier we
did not check permissions when letting a user issue SYSCS_INPLACE_COMPRESS_TABLE, but now, since we use
the existing ALTER TABLE code, the permission checks are handled there.
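
A hedged sketch of the authorization effect, using the SWIPER/MYTAB names and the expected
SQLSTATE from the updated GrantRevokeDDLTest below: a user who lacks the required privilege on
the table and issues

    call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('SWIPER', 'MYTAB', 1, 1, 1)

used to see the call succeed with an update count of 0; with this change the call fails, and the
test now asserts statement error 38000 for it.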


Removed:
    db/derby/code/trunk/java/engine/org/apache/derby/iapi/db/OnlineCompress.java
Modified:
    db/derby/code/trunk/java/engine/org/apache/derby/catalog/SystemProcedures.java
    db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/AlterTableNode.java
    db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/DDLStatementNode.java
    db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/sqlgrammar.jj
    db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java
    db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/GenericConstantActionFactory.java
    db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/GrantRevokeDDLTest.java
    db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/SysDiagVTIMappingTest.java

Modified: db/derby/code/trunk/java/engine/org/apache/derby/catalog/SystemProcedures.java
URL: http://svn.apache.org/viewvc/db/derby/code/trunk/java/engine/org/apache/derby/catalog/SystemProcedures.java?rev=655980&r1=655979&r2=655980&view=diff
==============================================================================
--- db/derby/code/trunk/java/engine/org/apache/derby/catalog/SystemProcedures.java (original)
+++ db/derby/code/trunk/java/engine/org/apache/derby/catalog/SystemProcedures.java Tue May 13 12:29:12 2008
@@ -930,23 +930,143 @@
         return(ret_val ? 1 : 0);
     }
 
+    /**
+
+    Implementation of SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE().
+    <p>
+    Code which implements the following system procedure:
+
+    void SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(
+        IN SCHEMANAME        VARCHAR(128),
+        IN TABLENAME         VARCHAR(128),
+        IN PURGE_ROWS        SMALLINT,
+        IN DEFRAGMENT_ROWS   SMALLINT,
+        IN TRUNCATE_END      SMALLINT)
+    <p>
+    Use the SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE system procedure to reclaim 
+    unused, allocated space in a table and its indexes. Typically, unused allocated
+    space exists when a large amount of data is deleted from a table, and there
+    have not been subsequent inserts to use the space freed by the deletes.  
+    By default, Derby does not return unused space to the operating system. For 
+    example, once a page has been allocated to a table or index, it is not 
+    automatically returned to the operating system until the table or index is 
+    destroyed. SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE allows you to return unused 
+    space to the operating system.
+    <p>
+    This system procedure can be used to force 3 levels of in place compression
+    of a SQL table: PURGE_ROWS, DEFRAGMENT_ROWS, TRUNCATE_END.  Unlike 
+    SYSCS_UTIL.SYSCS_COMPRESS_TABLE() all work is done in place in the existing
+    table/index.
+    <p>
+    Syntax:
+    SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(
+        IN SCHEMANAME        VARCHAR(128),
+        IN TABLENAME         VARCHAR(128),
+        IN PURGE_ROWS        SMALLINT,
+        IN DEFRAGMENT_ROWS   SMALLINT,
+        IN TRUNCATE_END      SMALLINT)
+    <p>
+    SCHEMANAME: 
+    An input argument of type VARCHAR(128) that specifies the schema of the 
+    table. Passing a null will result in an error.
+    <p>
+    TABLENAME:
+    An input argument of type VARCHAR(128) that specifies the name of the 
+    table. The string must exactly match the case of the table name; an 
+    argument of "Fred" will be passed to SQL as the delimited identifier 
+    'Fred'. Passing a null will result in an error.
+    <p>
+    PURGE_ROWS:
+    If PURGE_ROWS is set to non-zero, a single pass is made through the table 
+    which purges committed deleted rows from the table.  This space is then
+    available for future inserted rows, but remains allocated to the table.
+    As this option scans every page of the table, its performance is linearly 
+    related to the size of the table.
+    <p>
+    DEFRAGMENT_ROWS:
+    If DEFRAGMENT_ROWS is set to non-zero, a single defragment pass is made
+    which moves existing rows from the end of the table towards the front
+    of the table.  The goal of the defragment run is to empty a set of pages
+    at the end of the table which can then be returned to the OS by the
+    TRUNCATE_END option.  It is recommended to run DEFRAGMENT_ROWS only when
+    also specifying the TRUNCATE_END option.  This option scans the whole table
+    and needs to update index entries for every base table row that is moved,
+    so execution time is linearly related to the size of the table.
+    <p>
+    TRUNCATE_END:
+    If TRUNCATE_END is set to non-zero, all contiguous free pages at the end of
+    the table will be returned to the OS.  Running the PURGE_ROWS and/or 
+    DEFRAGMENT_ROWS passes first may increase the number of pages affected.  
+    This option itself does not scan the table, so it performs only on the 
+    order of a few system calls.
+    <p>
+    SQL example:
+    To compress a table called CUSTOMER in a schema called US, using all 
+    available compress options:
+    call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('US', 'CUSTOMER', 1, 1, 1);
+
+    To quickly return just the free space at the end of the same table 
+    (this runs much faster than running all phases, but will likely return 
+    much less space):
+    call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('US', 'CUSTOMER', 0, 0, 1);
+
+    Java example:
+    To compress a table called CUSTOMER in a schema called US, using all 
+    available compress options:
+
+    CallableStatement cs = conn.prepareCall
+    ("CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?, ?, ?)");
+    cs.setString(1, "US");
+    cs.setString(2, "CUSTOMER");
+    cs.setShort(3, (short) 1);
+    cs.setShort(4, (short) 1);
+    cs.setShort(5, (short) 1);
+    cs.execute();
+
+    To quickly return just the free space at the end of the same table 
+    (this runs much faster than running all phases, but will likely return 
+    much less space):
+
+    CallableStatement cs = conn.prepareCall
+    ("CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?, ?, ?)");
+    cs.setString(1, "US");
+    cs.setString(2, "CUSTOMER");
+    cs.setShort(3, (short) 0);
+    cs.setShort(4, (short) 0);
+    cs.setShort(5, (short) 1);
+    cs.execute();
+
+    <p>
+    It is recommended that the SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE 
+    procedure is issued in auto-commit mode.
+    Note: This procedure acquires an exclusive table lock on the table being 
+    compressed. All statement plans dependent on the table or its indexes are 
+    invalidated. For information on identifying unused space, see the Derby 
+    Server and Administration Guide.
+
+    TODO LIST:
+    o defragment requires table level lock in nested user transaction, which
+      will conflict with user lock on same table in user transaction.
+
+    **/
     public static void SYSCS_INPLACE_COMPRESS_TABLE(
     String  schema,
     String  tablename,
     int     purgeRows,
-    int     defragementRows,
+    int     defragmentRows,
     int     truncateEnd)
 		throws SQLException
     {
  
-        org.apache.derby.iapi.db.OnlineCompress.compressTable(
-            schema, 
-            tablename, 
-            (purgeRows != 0),
-            (defragementRows != 0),
-            (truncateEnd != 0));
+        String query = 
+            "alter table " + "\"" + schema + "\"" + "." + "\"" +  tablename + "\"" + 
+			" compress inplace" +  (purgeRows != 0 ? " purge" : "")
+			 +  (defragmentRows != 0 ? " defragment" : "")
+			  +  (truncateEnd != 0 ? " truncate_end" : "");
+
+		Connection conn = getDefaultConn();
+        
+        PreparedStatement ps = conn.prepareStatement(query);
+		ps.executeUpdate();
+        ps.close();
 
-        return;
+		conn.close();
     }
 
     public static String SYSCS_GET_RUNTIMESTATISTICS()

Modified: db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/AlterTableNode.java
URL: http://svn.apache.org/viewvc/db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/AlterTableNode.java?rev=655980&r1=655979&r2=655980&view=diff
==============================================================================
--- db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/AlterTableNode.java (original)
+++ db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/AlterTableNode.java Tue May 13 12:29:12 2008
@@ -53,6 +53,12 @@
 	public  char				lockGranularity;
 	public	boolean				compressTable = false;
 	public	boolean				sequential = false;
+	//The following three (purge, defragment and truncateEndOfTable) apply for 
+	//inplace compress
+	public	boolean				purge = false;
+	public	boolean				defragment = false;
+	public	boolean				truncateEndOfTable = false;
+	
 	public	int					behavior;	// currently for drop column
 
 	public	TableDescriptor		baseTable;
@@ -98,7 +104,8 @@
 	}
 
 	/**
-	 * Initializer for a AlterTableNode for COMPRESS
+	 * Initializer for a AlterTableNode for COMPRESS using temporary tables
+	 * rather than inplace compress
 	 *
 	 * @param objectName		The name of the table being altered
 	 * @param sequential		Whether or not the COMPRESS is SEQUENTIAL
@@ -120,12 +127,43 @@
 	}
 
 	/**
+	 * Initializer for a AlterTableNode for INPLACE COMPRESS
+	 *
+	 * @param objectName			The name of the table being altered
+	 * @param purge					PURGE during INPLACE COMPRESS?
+	 * @param defragment			DEFRAGMENT during INPLACE COMPRESS?
+	 * @param truncateEndOfTable	TRUNCATE END during INPLACE COMPRESS?
+	 *
+	 * @exception StandardException		Thrown on error
+	 */
+
+	public void init(Object objectName,
+			 Object purge,
+			 Object defragment,
+			 Object truncateEndOfTable)
+		throws StandardException
+	{
+		initAndCheck(objectName);
+
+		this.purge = ((Boolean) purge).booleanValue();
+		this.defragment = ((Boolean) defragment).booleanValue();
+		this.truncateEndOfTable = ((Boolean) truncateEndOfTable).booleanValue();
+		compressTable = true;
+		schemaDescriptor = getSchemaDescriptor(true, false);
+	}
+
+	/**
 	 * Initializer for a AlterTableNode
 	 *
 	 * @param objectName		The name of the table being altered
 	 * @param tableElementList	The alter table action
 	 * @param lockGranularity	The new lock granularity, if any
 	 * @param changeType		ADD_TYPE or DROP_TYPE
+	 * @param behavior			If drop column is CASCADE or RESTRICTED
+	 * @param sequential		Whether or not the COMPRESS is SEQUENTIAL
+	 * @param purge				PURGE during INPLACE COMPRESS?
+	 * @param defragment		DEFRAGMENT during INPLACE COMPRESS?
+	 * @param truncateEndOfTable	TRUNCATE END during INPLACE COMPRESS?
 	 *
 	 * @exception StandardException		Thrown on error
 	 */
@@ -136,7 +174,10 @@
 							Object lockGranularity,
 							Object changeType,
 							Object behavior,
-							Object sequential )
+							Object sequential,
+							Object purge,
+							Object defragment,
+							Object truncateEndOfTable )
 		throws StandardException
 	{
 		initAndCheck(objectName);
@@ -148,6 +189,12 @@
 		this.behavior = bh[0];
 		boolean[]	seq = (boolean[]) sequential;
 		this.sequential = seq[0];
+		boolean[]	booleanPurge = (boolean[]) purge;
+		this.purge = booleanPurge[0];
+		boolean[]	booleanDefragment = (boolean[]) defragment;
+		this.defragment = booleanDefragment[0];
+		boolean[]	booleanTruncateEndOfTable = (boolean[]) truncateEndOfTable;
+		this.truncateEndOfTable = booleanTruncateEndOfTable[0];
 		switch ( this.changeType )
 		{
 		    case ADD_TYPE:
@@ -182,7 +229,10 @@
 				"lockGranularity: " + "\n" + lockGranularity + "\n" +
 				"compressTable: " + "\n" + compressTable + "\n" +
 				"sequential: " + "\n" + sequential + "\n" +
-				"truncateTable: " + "\n" + truncateTable + "\n";
+				"truncateTable: " + "\n" + truncateTable + "\n" +
+				"purge: " + "\n" + purge + "\n" +
+				"defragment: " + "\n" + defragment + "\n" +
+				"truncateEndOfTable: " + "\n" + truncateEndOfTable + "\n";
 		}
 		else
 		{
@@ -221,7 +271,17 @@
 		** Get the table descriptor.  Checks the schema
 		** and the table.
 		*/
-		baseTable = getTableDescriptor();
+		if(compressTable && (purge || defragment || truncateEndOfTable)) {
+			//We are dealing with inplace compress here and inplace compress is 
+			//allowed on system schemas. In order to support inplace compress
+			//on user as well as system tables, we need to use special 
+			//getTableDescriptor(boolean) call to get TableDescriptor. This
+			//getTableDescriptor(boolean) allows getting TableDescriptor for
+			//system tables without throwing an exception.
+			baseTable = getTableDescriptor(false);
+		} else
+			baseTable = getTableDescriptor();
+
 		//throw an exception if user is attempting to alter a temporary table
 		if (baseTable.getTableType() == TableDescriptor.GLOBAL_TEMPORARY_TABLE_TYPE)
 		{
@@ -364,7 +424,10 @@
 											 compressTable,
 											 behavior,
         								     sequential,
- 										     truncateTable);
+ 										     truncateTable,
+ 										     purge,
+ 										     defragment,
+ 										     truncateEndOfTable );
 	}
 
 	/**

Modified: db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/DDLStatementNode.java
URL: http://svn.apache.org/viewvc/db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/DDLStatementNode.java?rev=655980&r1=655979&r2=655980&view=diff
==============================================================================
--- db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/DDLStatementNode.java (original)
+++ db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/DDLStatementNode.java Tue May 13 12:29:12 2008
@@ -203,22 +203,42 @@
 	*/
 	protected final SchemaDescriptor getSchemaDescriptor() throws StandardException
 	{
-		return getSchemaDescriptor(true);
+		return getSchemaDescriptor(true, true);
 	}
 
 	/**
 	* Get a schema descriptor for this DDL object.
 	* Uses this.objectName.  Always returns a schema,
 	* we lock in the schema name prior to execution.
+	* 
+	* The most common call to this method passes true for the
+	* 2nd parameter, which says that a SchemaDescriptor should
+	* not be requested for a system schema. The only time this
+	* method is called with the 2nd parameter set to false is
+	* when the user has requested an inplace compress through
+	* SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE, which can be
+	* invoked on system tables. A call to
+	* SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE internally gets
+	* translated into ALTER TABLE sql. When ALTER TABLE is
+	* executed for SYSCS_INPLACE_COMPRESS_TABLE, we want to allow
+	* the SchemaDescriptor request for system tables. DERBY-1062
 	*
 	* @param ownerCheck		If check for schema owner is needed
+	* @param doSystemSchemaCheck   If check for system schema is needed.
+	*    If set to true, then throw an exception if schema descriptor
+	*    is requested for a system schema. The only time this param 
+	*    will be set to false is when user is asking for inplace
+	*    compress of a system table. DERBY-1062
 	*
 	* @return Schema Descriptor
 	*
 	* @exception	StandardException	throws on schema name
 	*						that doesn't exist	
 	*/
-	protected final SchemaDescriptor getSchemaDescriptor(boolean ownerCheck)
+	protected final SchemaDescriptor getSchemaDescriptor(boolean ownerCheck,
+			boolean doSystemSchemaCheck)
 		 throws StandardException
 	{
 		String schemaName = objectName.getSchemaName();
@@ -247,9 +267,11 @@
 						Authorizer.MODIFY_SCHEMA_PRIV);
 
 		/*
-		** Catch the system schema here.
+		** Catch the system schema here if the caller wants us to.
+		** Currently, the only time we allow system schema is for inplace
+		** compress table calls.
 		*/	 
-		if (sd.isSystemSchema())
+		if (doSystemSchemaCheck && sd.isSystemSchema())
 		{
 			throw StandardException.newException(SQLState.LANG_NO_USER_DDL_IN_SYSTEM_SCHEMA,
 							statementToString(), sd);
@@ -263,17 +285,39 @@
 		return getTableDescriptor(objectName);
 	}
 
+	/**
+	 * Validate that the table is ok for DDL -- e.g.
+	 * that it exists, it is not a view. It is ok for
+	 * it to be a system table. Also check that its 
+	 * schema is ok. Currently, the only time this method
+	 * is called is when the user has asked for inplace 
+	 * compress, e.g.
+	 * call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('SYS','SYSTABLES',1,1,1);
+	 * Inplace compress is allowed on both system and
+	 * user tables.
+	 *
+	 * @return the validated table descriptor, never null
+	 *
+	 * @exception StandardException on error
+	 */
+	protected final TableDescriptor getTableDescriptor(boolean doSystemTableCheck)
+	throws StandardException
+	{
+		TableDescriptor td = justGetDescriptor(objectName);
+		td = checkTableDescriptor(td,doSystemTableCheck);
+		return td;
+	}
+
 	protected final TableDescriptor getTableDescriptor(UUID tableId)
 		throws StandardException {
 
 		TableDescriptor td = getDataDictionary().getTableDescriptor(tableId);
 
-		td = checkTableDescriptor(td);
+		td = checkTableDescriptor(td,true);
 		return td;
 
 	}
 
-
 	/**
 	 * Validate that the table is ok for DDL -- e.g.
 	 * that it exists, it is not a view, and is not
@@ -286,6 +330,28 @@
 	protected final TableDescriptor getTableDescriptor(TableName tableName)
 		throws StandardException
 	{
+		TableDescriptor td = justGetDescriptor(tableName);
+
+		/* beetle 4444, td may have changed when we obtain shared lock */
+		td = checkTableDescriptor(td, true);
+		return td;
+
+	}
+
+	/**
+	 * Just get the table descriptor. Don't worry if it belongs to a view,
+	 * system table, synonym or a real table. Let the caller decide what
+	 * to do.
+	 * 
+	 * @param tableName
+	 * 
+	 * @return TableDescriptor for the given TableName
+	 * 
+	 * @throws StandardException on error
+	 */
+	private TableDescriptor justGetDescriptor(TableName tableName)
+	throws StandardException
+	{
 		String schemaName = tableName.getSchemaName();
 		SchemaDescriptor sd = getSchemaDescriptor(schemaName);
 		
@@ -296,29 +362,33 @@
 			throw StandardException.newException(SQLState.LANG_OBJECT_DOES_NOT_EXIST, 
 						statementToString(), tableName);
 		}
-
-		/* beetle 4444, td may have changed when we obtain shared lock */
-		td = checkTableDescriptor(td);
 		return td;
-
 	}
 
-	private TableDescriptor checkTableDescriptor(TableDescriptor td)
+	private TableDescriptor checkTableDescriptor(TableDescriptor td, 
+			boolean doSystemTableCheck)
 		throws StandardException
 	{
 		String sqlState = null;
 
 		switch (td.getTableType()) {
 		case TableDescriptor.VTI_TYPE:
-		case TableDescriptor.SYSTEM_TABLE_TYPE:
-
-			/*
-			** Not on system tables (though there are no constraints on
-			** system tables as of the time this is writen
-			*/
 			sqlState = SQLState.LANG_INVALID_OPERATION_ON_SYSTEM_TABLE;
 			break;
 
+		case TableDescriptor.SYSTEM_TABLE_TYPE:
+			if (doSystemTableCheck)
+				/*
+				** Not on system tables (though there are no constraints on
+				** system tables as of the time this is writen
+				** system tables as of the time this is written
+				sqlState = SQLState.LANG_INVALID_OPERATION_ON_SYSTEM_TABLE;
+			else
+				//allow system table. The only time this happens currently is
+				//when user is requesting inplace compress on system table
+				return td;
+			break;
+
 		case TableDescriptor.BASE_TABLE_TYPE:
 			/* need to IX lock table if we are a reader in DDL datadictionary
 			 * cache mode, otherwise we may interfere with another DDL thread

Modified: db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/sqlgrammar.jj
URL: http://svn.apache.org/viewvc/db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/sqlgrammar.jj?rev=655980&r1=655979&r2=655980&view=diff
==============================================================================
--- db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/sqlgrammar.jj (original)
+++ db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/compile/sqlgrammar.jj Tue May 13 12:29:12 2008
@@ -2301,6 +2301,7 @@
 |	<CURSORS: "cursors">
 |	<DB2SQL: "db2sql">
 |	<DERBY_JDBC_RESULT_SET: "derby_jdbc_result_set">
+|	<DEFRAGMENT: "defragment">
 |       <DIRTY: "dirty">
 |	<DOCUMENT: "document">
 |	<EACH: "each">
@@ -2308,6 +2309,7 @@
 |	<EXCLUSIVE: "exclusive">
 |	<FN: "fn">
 |	<INDEX: "index">
+|	<INPLACE: "inplace">
 |	<JAVA: "java">
 |   <LCASE: "lcase">
 |   <LOCATE: "locate">
@@ -2326,6 +2328,7 @@
 |	<PARAMETER: "parameter">
 |	<PASSING: "passing">
 |	<PROPERTIES: "properties">
+|	<PURGE: "purge">
 |	<READS: "reads">
 |	<REF: "ref">
 |	<REFERENCING: "referencing">
@@ -2347,6 +2350,7 @@
 |   <STRIP: "strip">
 |   <STYLE: "style">
 |	<TRIGGER: "trigger">
+|	<TRUNCATE_END: "truncate_end">
 |   <UCASE: "ucase">
 |   <UR: "ur">
 |   <WHITESPACE: "whitespace">
@@ -12237,28 +12241,30 @@
 StatementNode
 alterTableBody(TableName tableName) throws StandardException :
 {
-	StatementNode qtn;
+	StatementNode sn;
 	char				lockGranularity = '\0';
 	String               newTableName;
 	TableElementList	tableElementList =
 									(TableElementList) nodeFactory.getNode(
 												C_NodeTypes.TABLE_ELEMENT_LIST,
 												getContextManager());
-	Token				tok = null;
 	int[]				changeType = new int[1];
 	int[]				behavior = new int[1];
 	boolean[]			sequential = new boolean[1];
+	boolean[]			purge = new boolean[1];
+	boolean[]			defragment = new boolean[1];
+	boolean[]			truncateEndOfTable = new boolean[1];
 }
 {
 //insert special key before compress so that only internal SP can know
-	<COMPRESS> [ tok = <SEQUENTIAL> ]
-	{		
-		checkInternalFeature("COMPRESS");
-		return (StatementNode) nodeFactory.getNode(
-							C_NodeTypes.ALTER_TABLE_NODE,
-							tableName,
-							new Boolean(tok != null),
-							getContextManager());
+	<COMPRESS>
+	(
+		sn = inplaceCompress(tableName)
+		|
+		sn = sequentialCompress(tableName)
+	)
+	{
+		return sn;
 	}
 |
 	lockGranularity = alterTableAction( tableElementList, changeType, behavior, sequential )
@@ -12270,11 +12276,60 @@
 							new Character(lockGranularity),
 							changeType,
 							behavior,
-							sequential,
+							sequential, 
+							purge, 
+							defragment, 
+							truncateEndOfTable,
 							getContextManager());
 	}
 }
 
+StatementNode
+inplaceCompress(TableName tableName) throws StandardException :
+{
+        Token purge = null;
+        Token defragment = null;
+        Token truncate = null;
+}
+{
+	<INPLACE>
+        (
+		[ purge = <PURGE> ]
+		[ defragment = <DEFRAGMENT> ]
+		[ truncate = <TRUNCATE_END> ]
+        )
+	{
+		checkInternalFeature("COMPRESS");
+		return (StatementNode) nodeFactory.getNode(
+							C_NodeTypes.ALTER_TABLE_NODE,
+							tableName,
+							new Boolean(purge != null),
+							new Boolean(defragment != null),
+							new Boolean(truncate != null),
+							getContextManager());
+	}
+}
+
+
+
+StatementNode
+sequentialCompress(TableName tableName) throws StandardException :
+{
+	Token				tok = null;
+}
+{
+	[ tok = <SEQUENTIAL> ]
+	{
+		checkInternalFeature("COMPRESS");
+		return (StatementNode) nodeFactory.getNode(
+							C_NodeTypes.ALTER_TABLE_NODE,
+							tableName,
+							new Boolean(tok != null),
+							getContextManager());
+	}
+}
+
+
 /*
  * <A NAME="alterTableRenameTableStatement">alterTableRenameTableStatement</A>
  */
@@ -13547,6 +13602,7 @@
 	|	tok = <DATA>
 	|	tok = <DATE>
 	|	tok = <DAY>
+	|	tok = <DEFRAGMENT>
         |	tok = <DIRTY>
 	|	tok = <DYNAMIC>
     |   tok = <DATABASE>
@@ -13563,6 +13619,7 @@
 	|	tok = <INCREMENT>
 	|	tok = <INDEX>
 	|	tok = <INITIAL>
+	|	tok = <INPLACE>
 // SQL92 says it is reserved, but we want it to be non-reserved.
 	|	tok = <INTERVAL>
 	|   tok = <JAVA>
@@ -13606,6 +13663,7 @@
 	|	tok = <PLI>
 	|	tok = <PRECISION>
 	|	tok = <PROPERTIES>
+	|	tok = <PURGE>
 	|	tok = <READS>
 	|	tok = <REF>
 // SQL92 says it is reserved, but we want it to be non-reserved.
@@ -13658,6 +13716,7 @@
 	|	tok = <TIMESTAMPDIFF>
     |   tok = <TRIGGER>
 	|	tok = <TRUNCATE>
+	|	tok = <TRUNCATE_END>
 	|	tok = <TS>
 	|	tok = <TYPE>
     |   tok = <UCASE>

Modified: db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java
URL: http://svn.apache.org/viewvc/db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java?rev=655980&r1=655979&r2=655980&view=diff
==============================================================================
--- db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java (original)
+++ db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java Tue May 13 12:29:12 2008
@@ -21,6 +21,7 @@
 
 package org.apache.derby.impl.sql.execute;
 
+import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Enumeration;
 import java.util.Iterator;
@@ -32,6 +33,7 @@
 import org.apache.derby.catalog.UUID;
 import org.apache.derby.catalog.types.ReferencedColumnsDescriptorImpl;
 import org.apache.derby.catalog.types.StatisticsImpl;
+import org.apache.derby.iapi.error.PublicAPI;
 import org.apache.derby.iapi.error.StandardException;
 import org.apache.derby.iapi.reference.SQLState;
 import org.apache.derby.iapi.services.io.FormatableBitSet;
@@ -41,6 +43,7 @@
 import org.apache.derby.iapi.sql.PreparedStatement;
 import org.apache.derby.iapi.sql.ResultSet;
 import org.apache.derby.iapi.sql.StatementType;
+import org.apache.derby.iapi.sql.conn.ConnectionUtil;
 import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;
 import org.apache.derby.iapi.sql.depend.DependencyManager;
 import org.apache.derby.iapi.sql.dictionary.CheckConstraintDescriptor;
@@ -106,8 +109,11 @@
     private     int						    behavior;
     private	    boolean					    sequential;
     private     boolean                     truncateTable;
-
-
+	//The following three (purge, defragment and truncateEndOfTable) apply for 
+	//inplace compress
+    private	    boolean					    purge;
+    private	    boolean					    defragment;
+    private	    boolean					    truncateEndOfTable;
 
     // Alter table compress and Drop column
     private     boolean					    doneScan;
@@ -161,6 +167,9 @@
 	 *	@param sequential	        If compress table/drop column, 
      *	                            whether or not sequential
 	 *  @param truncateTable	    Whether or not this is a truncate table
+	 *  @param purge				PURGE during INPLACE COMPRESS?
+	 *  @param defragment			DEFRAGMENT during INPLACE COMPRESS?
+	 *  @param truncateEndOfTable	TRUNCATE END during INPLACE COMPRESS?
 	 */
 	AlterTableConstantAction(
     SchemaDescriptor            sd,
@@ -174,7 +183,10 @@
     boolean			            compressTable,
     int				            behavior,
     boolean			            sequential,
-    boolean                     truncateTable)
+    boolean                     truncateTable,
+    boolean                     purge,
+    boolean                     defragment,
+    boolean                     truncateEndOfTable)
 	{
 		super(tableId);
 		this.sd                     = sd;
@@ -188,6 +200,9 @@
 		this.behavior               = behavior;
 		this.sequential             = sequential;
 		this.truncateTable          = truncateTable;
+		this.purge          		= purge;
+		this.defragment          	= defragment;
+		this.truncateEndOfTable     = truncateEndOfTable;
 
 		if (SanityManager.DEBUG)
 		{
@@ -232,6 +247,36 @@
 		int							numRows = 0;
         boolean						tableScanned = false;
 
+        //The following if-block handles inplace compress. Compression using
+        //temporary tables is handled later in this method.
+		if (compressTable)
+		{
+			if (purge || defragment || truncateEndOfTable)
+			{
+				td = dd.getTableDescriptor(tableId);
+				if (td == null)
+				{
+					throw StandardException.newException(
+						SQLState.LANG_TABLE_NOT_FOUND_DURING_EXECUTION, tableName);
+				}
+	            // Each of the following may give up locks allowing ddl on the
+	            // table, so each phase needs to do the data dictionary lookup.
+	            // The order is important as it makes sense to first purge
+	            // deleted rows, then defragment existing non-deleted rows, and
+	            // finally to truncate the end of the file which may have been
+	            // made larger by the previous purge/defragment pass.
+	            if (purge)
+	                purgeRows(tc);
+
+	            if (defragment)
+	                defragmentRows(tc, lcc);
+
+	            if (truncateEndOfTable)
+	                truncateEnd(tc);            
+	            return;				
+			}
+		}
+
 		/*
 		** Inform the data dictionary that we are about to write to it.
 		** There are several calls to data dictionary "get" methods here
@@ -535,6 +580,450 @@
 		}
 	}
 
+    /**
+     * Truncate end of conglomerate.
+     * <p>
+     * Returns the contiguous free space at the end of the table back to
+     * the operating system.  Takes care of space allocation bit maps, and
+     * OS call to return the actual space.
+     * <p>
+     *
+     * @param tc                transaction controller to use to do updates.
+     *
+     **/
+	private void truncateEnd(
+    TransactionController   tc)
+        throws StandardException
+	{
+        switch (td.getTableType())
+        {
+        /* Skip views and vti tables */
+        case TableDescriptor.VIEW_TYPE:
+        case TableDescriptor.VTI_TYPE:
+        	break;
+        // other types give various errors here
+        // DERBY-719,DERBY-720
+        default:
+          {
+          ConglomerateDescriptor[] conglom_descriptors = 
+                td.getConglomerateDescriptors();
+
+            for (int cd_idx = 0; cd_idx < conglom_descriptors.length; cd_idx++)
+            {
+                ConglomerateDescriptor cd = conglom_descriptors[cd_idx];
+
+                tc.compressConglomerate(cd.getConglomerateNumber());
+            }
+          }
+        }
+
+        return;
+    }
+
+    /**
+     * Defragment rows in the given table.
+     * <p>
+     * Scans the rows at the end of a table and moves them to free spots
+     * towards the beginning of the table.  In the same transaction all
+     * associated indexes are updated to reflect the new location of the
+     * base table row.
+     * <p>
+     * After a defragment pass, if one was possible, there will be a set of
+     * empty pages at the end of the table which can be returned to the
+     * operating system by calling truncateEnd().  The allocation bit
+     * maps will be set so that new inserts will tend to go to empty and
+     * half filled pages starting from the front of the conglomerate.
+     *
+     * @param tc                transaction controller to use to do updates.
+     * @param lcc               language connection context, used to build the
+     *                          template row for the base table.
+     *
+     **/
+	private void defragmentRows(
+			TransactionController tc,
+			LanguageConnectionContext lcc)
+        throws StandardException
+	{
+        GroupFetchScanController base_group_fetch_cc = null;
+        int                      num_indexes         = 0;
+
+        int[][]                  index_col_map       =  null;
+        ScanController[]         index_scan          =  null;
+        ConglomerateController[] index_cc            =  null;
+        DataValueDescriptor[][]  index_row           =  null;
+
+		TransactionController     nested_tc = null;
+
+		try {
+
+            nested_tc = 
+                tc.startNestedUserTransaction(false);
+
+            switch (td.getTableType())
+            {
+            /* Skip views and vti tables */
+            case TableDescriptor.VIEW_TYPE:
+            case TableDescriptor.VTI_TYPE:
+            	return;
+            // other types give various errors here
+            // DERBY-719,DERBY-720
+            default:
+            	break;
+            }
+
+
+			ConglomerateDescriptor heapCD = 
+                td.getConglomerateDescriptor(td.getHeapConglomerateId());
+
+			/* Get a row template for the base table */
+			ExecRow baseRow = 
+                lcc.getLanguageConnectionFactory().getExecutionFactory().getValueRow(
+                    td.getNumberOfColumns());
+
+
+			/* Fill the row with nulls of the correct type */
+			ColumnDescriptorList cdl = td.getColumnDescriptorList();
+			int					 cdlSize = cdl.size();
+
+			for (int index = 0; index < cdlSize; index++)
+			{
+				ColumnDescriptor cd = (ColumnDescriptor) cdl.elementAt(index);
+				baseRow.setColumn(cd.getPosition(), cd.getType().getNull());
+			}
+
+            DataValueDescriptor[][] row_array = new DataValueDescriptor[100][];
+            row_array[0] = baseRow.getRowArray();
+            RowLocation[] old_row_location_array = new RowLocation[100];
+            RowLocation[] new_row_location_array = new RowLocation[100];
+
+            // Create the following 3 arrays which will be used to update
+            // each index as the scan moves rows about the heap as part of
+            // the compress:
+            //     index_col_map - map location of index cols in the base row, 
+            //                     ie. index_col_map[0] is column offset of 1st
+            //                     key column in base row.  All offsets are 0 
+            //                     based.
+            //     index_scan - open ScanController used to delete old index row
+            //     index_cc   - open ConglomerateController used to insert new 
+            //                  row
+
+            ConglomerateDescriptor[] conglom_descriptors = 
+                td.getConglomerateDescriptors();
+
+            // conglom_descriptors has an entry for the conglomerate and each 
+            // one of it's indexes.
+            num_indexes = conglom_descriptors.length - 1;
+
+            // if indexes exist, set up data structures to update them
+            if (num_indexes > 0)
+            {
+                // allocate arrays
+                index_col_map   = new int[num_indexes][];
+                index_scan      = new ScanController[num_indexes];
+                index_cc        = new ConglomerateController[num_indexes];
+                index_row       = new DataValueDescriptor[num_indexes][];
+
+                setup_indexes(
+                    nested_tc,
+                    td,
+                    index_col_map,
+                    index_scan,
+                    index_cc,
+                    index_row);
+
+            }
+
+			/* Open the heap for reading */
+			base_group_fetch_cc = 
+                nested_tc.defragmentConglomerate(
+                    td.getHeapConglomerateId(), 
+                    false,
+                    true, 
+                    TransactionController.OPENMODE_FORUPDATE, 
+				    TransactionController.MODE_TABLE,
+					TransactionController.ISOLATION_SERIALIZABLE);
+
+            int num_rows_fetched = 0;
+            while ((num_rows_fetched = 
+                        base_group_fetch_cc.fetchNextGroup(
+                            row_array, 
+                            old_row_location_array, 
+                            new_row_location_array)) != 0)
+            {
+                if (num_indexes > 0)
+                {
+                    for (int row = 0; row < num_rows_fetched; row++)
+                    {
+                        for (int index = 0; index < num_indexes; index++)
+                        {
+                            fixIndex(
+                                row_array[row],
+                                index_row[index],
+                                old_row_location_array[row],
+                                new_row_location_array[row],
+                                index_cc[index],
+                                index_scan[index],
+                                index_col_map[index]);
+                        }
+                    }
+                }
+            }
+
+            // TODO - It would be better if commits happened more frequently
+            // in the nested transaction, but to do that there has to be more
+            // logic to catch a ddl that might jump in the middle of the 
+            // above loop and invalidate the various table control structures
+            // which are needed to properly update the indexes.  For example
+            // the above loop would corrupt an index added midway through
+            // the loop if not properly handled.  See DERBY-1188.  
+            nested_tc.commit();
+			
+		}
+		finally
+		{
+                /* Clean up before we leave */
+                if (base_group_fetch_cc != null)
+                {
+                    base_group_fetch_cc.close();
+                    base_group_fetch_cc = null;
+                }
+
+                if (num_indexes > 0)
+                {
+                    for (int i = 0; i < num_indexes; i++)
+                    {
+                        if (index_scan != null && index_scan[i] != null)
+                        {
+                            index_scan[i].close();
+                            index_scan[i] = null;
+                        }
+                        if (index_cc != null && index_cc[i] != null)
+                        {
+                            index_cc[i].close();
+                            index_cc[i] = null;
+                        }
+                    }
+                }
+
+                if (nested_tc != null)
+                {
+                    nested_tc.destroy();
+                }
+
+		}
+
+		return;
+	}
+
+    private static void setup_indexes(
+    TransactionController       tc,
+    TableDescriptor             td,
+    int[][]                     index_col_map,
+    ScanController[]            index_scan,
+    ConglomerateController[]    index_cc,
+    DataValueDescriptor[][]     index_row)
+		throws StandardException
+    {
+
+        // Initialize the following 3 arrays which will be used to update
+        // each index as the scan moves rows about the heap as part of
+        // the compress:
+        //     index_col_map - map location of index cols in the base row, ie.
+        //                     index_col_map[0] is column offset of 1st key
+        //                     column in base row.  All offsets are 0 based.
+        //     index_scan - open ScanController used to delete old index row
+        //     index_cc   - open ConglomerateController used to insert new row
+
+        ConglomerateDescriptor[] conglom_descriptors =
+                td.getConglomerateDescriptors();
+
+
+        int index_idx = 0;
+        for (int cd_idx = 0; cd_idx < conglom_descriptors.length; cd_idx++)
+        {
+            ConglomerateDescriptor index_cd = conglom_descriptors[cd_idx];
+
+            if (!index_cd.isIndex())
+            {
+                // skip the heap descriptor entry
+                continue;
+            }
+
+            // ScanControllers are used to delete old index row
+            index_scan[index_idx] = 
+                tc.openScan(
+                    index_cd.getConglomerateNumber(),
+                    true,	// hold
+                    TransactionController.OPENMODE_FORUPDATE,
+                    TransactionController.MODE_TABLE,
+                    TransactionController.ISOLATION_SERIALIZABLE,
+                    null,   // full row is retrieved, 
+                            // so that full row can be used for start/stop keys
+                    null,	// startKeyValue - will be reset with reopenScan()
+                    0,		// 
+                    null,	// qualifier
+                    null,	// stopKeyValue  - will be reset with reopenScan()
+                    0);		// 
+
+            // ConglomerateControllers are used to insert new index row
+            index_cc[index_idx] = 
+                tc.openConglomerate(
+                    index_cd.getConglomerateNumber(),
+                    true,  // hold
+                    TransactionController.OPENMODE_FORUPDATE,
+                    TransactionController.MODE_TABLE,
+                    TransactionController.ISOLATION_SERIALIZABLE);
+
+            // build column map to allow index row to be built from base row
+            int[] baseColumnPositions   = 
+                index_cd.getIndexDescriptor().baseColumnPositions();
+            int[] zero_based_map        = 
+                new int[baseColumnPositions.length];
+
+            for (int i = 0; i < baseColumnPositions.length; i++)
+            {
+                zero_based_map[i] = baseColumnPositions[i] - 1; 
+            }
+
+            index_col_map[index_idx] = zero_based_map;
+
+            // build row array to delete from index and insert into index
+            //     length is length of column map + 1 for RowLocation.
+            index_row[index_idx] = 
+                new DataValueDescriptor[baseColumnPositions.length + 1];
+
+            index_idx++;
+        }
+
+        return;
+    }
+
+
+    /**
+     * Delete old index row and insert new index row in input index.
+     * <p>
+     *
+     * @param base_row      all columns of base row
+     * @param index_row     an index row template, filled in by this routine
+     * @param old_row_loc   old location of base row, used to delete index
+     * @param new_row_loc   new location of base row, used to update index
+     * @param index_cc      index conglomerate to insert new row
+     * @param index_scan    index scan to delete old entry
+     * @param index_col_map description of mapping of index row to base row,
+     *                      
+     *
+	 * @exception  StandardException  Standard exception policy.
+     **/
+    private static void fixIndex(
+    DataValueDescriptor[]   base_row,
+    DataValueDescriptor[]   index_row,
+    RowLocation             old_row_loc,
+    RowLocation             new_row_loc,
+    ConglomerateController  index_cc,
+    ScanController          index_scan,
+	int[]					index_col_map)
+        throws StandardException
+    {
+        if (SanityManager.DEBUG)
+        {
+            // baseColumnPositions should describe all columns in index row
+            // except for the final column, which is the RowLocation.
+            SanityManager.ASSERT(index_col_map != null);
+            SanityManager.ASSERT(index_row != null);
+            SanityManager.ASSERT(
+                (index_col_map.length == (index_row.length - 1)));
+        }
+
+        // build the index row to delete from the base row, using the column map
+        for (int index = 0; index < index_col_map.length; index++)
+        {
+            index_row[index] = base_row[index_col_map[index]];
+        }
+        // last column in the index row is the RowLocation
+        index_row[index_row.length - 1] = old_row_loc;
+
+        // position the scan for the delete, the scan should already be open.
+        // This is done by setting start scan to full key, GE and stop scan
+        // to full key, GT.
+        index_scan.reopenScan(
+            index_row,
+            ScanController.GE,
+            (Qualifier[][]) null,
+            index_row,
+            ScanController.GT);
+
+        // position the scan, serious problem if scan does not find the row.
+        if (index_scan.next())
+        {
+            index_scan.delete();
+        }
+        else
+        {
+            // Didn't find the row we wanted to delete.
+            if (SanityManager.DEBUG)
+            {
+                SanityManager.THROWASSERT(
+                    "Did not find row to delete." +
+                    "base_row = " + RowUtil.toString(base_row) +
+                    "index_row = " + RowUtil.toString(index_row));
+            }
+        }
+
+        // insert the new index row into the conglomerate
+        index_row[index_row.length - 1] = new_row_loc;
+
+        index_cc.insert(index_row);
+
+        return;
+    }
+
+    /**
+     * Purge committed deleted rows from conglomerate.
+     * <p>
+     * Scans the table and purges any committed deleted rows from the 
+     * table.  If all rows on a page are purged then the page is also 
+     * reclaimed.
+     * <p>
+     *
+     * @param tc                transaction controller to use to do updates.
+     *
+     **/
+	private void purgeRows(TransactionController   tc)
+        throws StandardException
+	{
+        switch (td.getTableType())
+        {
+        /* Skip views and vti tables */
+        case TableDescriptor.VIEW_TYPE:
+        case TableDescriptor.VTI_TYPE:
+        	break;
+        // other types give various errors here
+        // DERBY-719,DERBY-720
+        default:
+          {
+
+            ConglomerateDescriptor[] conglom_descriptors = 
+                td.getConglomerateDescriptors();
+
+            for (int cd_idx = 0; cd_idx < conglom_descriptors.length; cd_idx++)
+            {
+                ConglomerateDescriptor cd = conglom_descriptors[cd_idx];
+
+                tc.purgeConglomerate(cd.getConglomerateNumber());
+            }
+          }
+        }
+
+        return;
+    }
+
 	/**
 	 * Workhorse for adding a new column to a table.
 	 *

Modified: db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/GenericConstantActionFactory.java
URL: http://svn.apache.org/viewvc/db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/GenericConstantActionFactory.java?rev=655980&r1=655979&r2=655980&view=diff
==============================================================================
--- db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/GenericConstantActionFactory.java (original)
+++ db/derby/code/trunk/java/engine/org/apache/derby/impl/sql/execute/GenericConstantActionFactory.java Tue May 13 12:29:12 2008
@@ -127,6 +127,10 @@
 	 *	@param compressTable	Whether or not this is a compress table
 	 *	@param behavior			drop behavior of dropping column
 	 *	@param sequential	If compress table/drop column, whether or not sequential
+	 *  @param truncateTable	    Whether or not this is a truncate table
+	 *  @param purge				PURGE during INPLACE COMPRESS?
+	 *  @param defragment			DEFRAGMENT during INPLACE COMPRESS?
+	 *  @param truncateEndOfTable	TRUNCATE END during INPLACE COMPRESS?
 	 */
 	public	ConstantAction	getAlterTableConstantAction
 	(
@@ -142,13 +146,17 @@
 		boolean						compressTable,
 		int							behavior,
 		boolean						sequential,
-		boolean                     truncateTable
+		boolean                     truncateTable,
+		boolean						purge,
+		boolean						defragment,
+		boolean						truncateEndOfTable 
     )
 	{
 		return new	AlterTableConstantAction( sd, tableName, tableId, tableConglomerateId, 
 											  tableType, columnInfo, constraintActions, 
 											  lockGranularity, compressTable,
-											  behavior, sequential, truncateTable);
+											  behavior, sequential, truncateTable,
+											  purge, defragment, truncateEndOfTable);
 	}
 
 	/**

Modified: db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/GrantRevokeDDLTest.java
URL: http://svn.apache.org/viewvc/db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/GrantRevokeDDLTest.java?rev=655980&r1=655979&r2=655980&view=diff
==============================================================================
--- db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/GrantRevokeDDLTest.java (original)
+++ db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/GrantRevokeDDLTest.java Tue May 13 12:29:12 2008
@@ -1058,7 +1058,7 @@
             " call "
             + "SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('SWIPER', "
             + "'MYTAB', 1, 1, 1)");
-        assertUpdateCount(cSt, 0);
+        assertStatementError("38000", cSt);
         cSt.close();
         
         // Try other system routines. All should fail

Modified: db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/SysDiagVTIMappingTest.java
URL: http://svn.apache.org/viewvc/db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/SysDiagVTIMappingTest.java?rev=655980&r1=655979&r2=655980&view=diff
==============================================================================
--- db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/SysDiagVTIMappingTest.java (original)
+++ db/derby/code/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/lang/SysDiagVTIMappingTest.java Tue May 13 12:29:12 2008
@@ -750,7 +750,7 @@
             "call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, 1, 1, 1)");
         cSt.setString(1, "SYSCS_DIAG");
         cSt.setString(2, vtiTableName.toUpperCase());
-        assertStatementError("42X05", cSt);
+        assertStatementError("42Y55", cSt);
 
         assertStatementError("42X08", st,
             "update new org.apache.derby.diag." + vtiMethodName + args