Posted to dev@openjpa.apache.org by Marc Prud'hommeaux <mp...@apache.org> on 2007/04/01 18:34:23 UTC

Re: Using DDL generation in a Java EE environment?

Marina-

The "sql" flag merely says that OpenJPA should write the SQL to an  
external file. It still needs to connect to the database in order to  
see which tables currently exist, so it can determine if it needs to  
create new tables or columns.

If you just want a "fresh" database view for the mapping tool, such
that the mapping tool thinks that the database has no schema defined,
then you can specify the "-SchemaFactory" flag to be
"file(my-schema.xml)", where my-schema.xml is a schema definition file
(see the docs for the format) that contains no tables or columns. This
should also prevent OpenJPA from having to connect to the database in
order to read the columns and tables.
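
For illustration, an empty schema definition file might look roughly like
this (a minimal sketch; the exact element names should be checked against
the schema XML format described in the reference guide):

    <?xml version="1.0"?>
    <schemas>
        <!-- intentionally empty: no schemas, tables, or columns defined -->
    </schemas>

If you want to set this from persistence.xml rather than the command line,
the equivalent property would presumably be the following (the property
name openjpa.jdbc.SchemaFactory and the exact plugin-string syntax here
are assumptions to verify against the reference guide):

    <property name="openjpa.jdbc.SchemaFactory" value="file(my-schema.xml)"/>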




On Mar 30, 2007, at 4:58 PM, Marina Vatkina wrote:

> Marc,
>
> I'm trying to run MappingTool to look at the -sql option, but I can't
> make it work with a PU without connecting to the database (my
> persistence.xml has <jta-data-source>), and I can't find in the
> docs how to specify the DBDictionary without persistence.xml.
>
> thanks,
> -marina
>
> Marc Prud'hommeaux wrote:
>> Marina-
>> The problem is that OpenJPA just ignores extra, unmapped columns.
>> Since we don't require that you map all of the columns of a
>> database table to an entity, tables can exist that have unmapped
>> columns. By default, we tend to err on the side of caution, so we
>> never drop tables or columns. The "deleteTableContents" flag
>> merely deletes all the rows in a table; it doesn't actually drop
>> the table.
>> We don't have any options for asserting that the table is mapped
>> completely. That might be a nice enhancement, and would allow
>> OpenJPA to warn when it sees an existing table with unmapped columns.
>> You could manually drop the tables using the mappingtool by
>> specifying the "schemaAction" argument to "drop", but there's no
>> way to do it automatically using the SynchronizeMappings property.
>> Note that there is nothing preventing you from manually invoking the
>> MappingTool class from any startup or glue code that you want.
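
A rough sketch of what such startup glue code might look like. The
MappingTool.main entry point and the way the entity and configuration
are passed to it are assumptions here, so verify them against the
MappingTool javadoc referenced later in this thread before relying on
this:

    import org.apache.openjpa.jdbc.meta.MappingTool;

    public class DropTablesOnStartup {
        public static void dropMappedTables() throws Exception {
            // Hypothetical glue code: run the mapping tool as if from the
            // command line, asking it to drop the schema for the listed
            // entity. Configuration discovery (persistence.xml location,
            // connection settings) is not shown here.
            MappingTool.main(new String[] {
                "-schemaAction", "drop",  // drop rather than add/build
                "com.example.Foo"         // hypothetical entity to process
            });
        }
    }
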
>> On Mar 29, 2007, at 4:18 PM, Marina Vatkina wrote:
>>> Marc, Patrick,
>>>
>>> I didn't look into the file story yet, but what I've seen as the   
>>> result of using
>>>
>>>           <property name="openjpa.jdbc.SynchronizeMappings"
>>>                 value="buildSchema(SchemaAction='add,deleteTableContents')"/>
>>>
>>> looks surprising: if there is an entity Foo with persistent
>>> fields 'x' and 'y', and a table FOO already exists in the database
>>> with columns A and B (there are no fields 'a' and 'b' in the
>>> entity), the table is not recreated, but the columns X and Y are
>>> added to the table FOO. The 'deleteTableContents'
>>> doesn't affect this behavior.
>>>
>>> Is this the expected behavior?
>>>
>>> What should I use to either create the table properly or get a
>>> message that such a table already exists (and, as in my case,
>>> doesn't match the entity)?
>>>
>>> thanks,
>>> -marina
>>>
>>> Marina Vatkina wrote:
>>>
>>>> Then I'll first start with an easier task - check what happens  
>>>> in  EE if entities are not explicitly listed in the  
>>>> persistence.xml  file :).
>>>> thanks,
>>>> -marina
>>>> Marc Prud'hommeaux wrote:
>>>>
>>>>> Marina-
>>>>>
>>>>>> Let me give it a try. What would the persistence.xml property
>>>>>> look like to generate a .sql file?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Actually, I just took a look at this, and it looks like it
>>>>> isn't possible to use the "SynchronizeMappings" property to
>>>>> automatically output a sql file. The reason is that the
>>>>> property takes a standard OpenJPA plugin string that
>>>>> configures an instance of MappingTool, but the MappingTool
>>>>> class doesn't have a setter for the SQL file to write out to.
>>>>>
>>>>> So I think your only recourse would be to write your own
>>>>> adapter to do this that manually creates a MappingTool
>>>>> instance and runs it with the correct flags for outputting a
>>>>> sql file. Take a look at the javadocs for the MappingTool to
>>>>> get started, and let us know if you have any questions about
>>>>> proceeding.
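
One rough sketch of such an adapter, assuming it drives the tool's
command-line entry point rather than the bean setters (since the SQL
file apparently can't be set through the plugin string). The main
method and the exact flag handling are assumptions to verify against
the MappingTool javadoc:

    import org.apache.openjpa.jdbc.meta.MappingTool;

    public class SqlFileAdapter {
        public static void generateSql() throws Exception {
            // Hypothetical: write the generated DDL to create.sql instead
            // of executing it against the database. How the persistence
            // unit and entity classes are located is omitted here.
            MappingTool.main(new String[] {
                "-sql", "create.sql"
            });
        }
    }
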
>>>>>
>>>>>
>>>>>
>>>>> On Mar 20, 2007, at 4:59 PM, Marina Vatkina wrote:
>>>>>
>>>>>> Marc,
>>>>>>
>>>>>> Marc Prud'hommeaux wrote:
>>>>>>
>>>>>>> Marina-
>>>>>>>
>>>>>>>> They do in SE, but as there is no requirement to do it in   
>>>>>>>> EE,   people try to reduce the amount of typing ;).
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Hmm ... we might not actually require it in EE, since we do    
>>>>>>> examine  the ejb jar to look for persistent classes. I'm not   
>>>>>>> sure  though.
>>>>>>> You should test with both listing them and not listing them.   
>>>>>>> I'd  be  interested to know if it works without.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Let me give it a try. What would the persistence.xml property
>>>>>> look like to generate a .sql file? Where will it be placed in
>>>>>> an EE environment? Does it use the name as-is or prepend
>>>>>> it with some path?
>>>>>>
>>>>>> thanks.
>>>>>>
>>>>>>> On Mar 20, 2007, at 4:19 PM, Marina Vatkina wrote:
>>>>>>>
>>>>>>>> Marc,
>>>>>>>>
>>>>>>>> Marc Prud'hommeaux wrote:
>>>>>>>>
>>>>>>>>> Marina-
>>>>>>>>> On Mar 20, 2007, at 4:02 PM, Marina Vatkina wrote:
>>>>>>>>>
>>>>>>>>>> Marc,
>>>>>>>>>>
>>>>>>>>>> Thanks for the pointers. Can you please answer the   
>>>>>>>>>> following  set  of  questions?
>>>>>>>>>>
>>>>>>>>>> 1. The doc requires that "In order to enable automatic
>>>>>>>>>> runtime mapping, you must first list all your persistent
>>>>>>>>>> classes". Is this true for the EE case also?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Yes. People usually list them all in the <class> tags in   
>>>>>>>>> the    persistence.xml file.
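
For example, along these lines (the unit name, data source, and entity
classes here are hypothetical):

    <persistence-unit name="my-unit">
        <jta-data-source>jdbc/MyDataSource</jta-data-source>
        <class>com.example.Foo</class>
        <class>com.example.Order</class>
    </persistence-unit>
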
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> They do in SE, but as there is no requirement to do it in   
>>>>>>>> EE,   people try to reduce the amount of typing ;).
>>>>>>>>
>>>>>>>> If OpenJPA can identify all entities in the EE world, why can't
>>>>>>>> it do the same for the schema generation?
>>>>>>>>
>>>>>>>> I'll check the rest.
>>>>>>>>
>>>>>>>> thanks,
>>>>>>>> -marina
>>>>>>>>
>>>>>>>>>> 2. Section "1.2. Generating DDL SQL" talks about .sql
>>>>>>>>>> files, but what I am looking for are "jdbc" files,
>>>>>>>>>> i.e. files with the lines that can be used directly as
>>>>>>>>>> java.sql statements to be executed against the database.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The output should be sufficient. Try it out and see if  
>>>>>>>>> the   format  is  something you can use.
>>>>>>>>>
>>>>>>>>>> 3. Is there a document that describes all possible values   
>>>>>>>>>> for   the  "openjpa.jdbc.SynchronizeMappings" property?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Unfortunately, no. Basically, the setting of the
>>>>>>>>> "SynchronizeMappings" property will be of the form
>>>>>>>>> "action(Bean1=value1,Bean2=value2)", where the "bean"
>>>>>>>>> values are those listed in
>>>>>>>>> org.apache.openjpa.jdbc.meta.MappingTool (whose javadoc
>>>>>>>>> you can see at
>>>>>>>>> http://incubator.apache.org/openjpa/docs/latest/javadoc/org/apache/openjpa/jdbc/meta/MappingTool.html ).
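
Concretely, that means a setting of the form used elsewhere in this
thread, where SchemaAction is one of the MappingTool bean properties:

    <property name="openjpa.jdbc.SynchronizeMappings"
              value="buildSchema(SchemaAction='add,deleteTableContents')"/>
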
>>>>>>>>>
>>>>>>>>>> thank you,
>>>>>>>>>> -marina
>>>>>>>>>>
>>>>>>>>>> Marc Prud'hommeaux wrote:
>>>>>>>>>>
>>>>>>>>>>> Marina-
>>>>>>>>>>> On Mar 15, 2007, at 5:01 PM, Marina Vatkina wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> I am part of the GlassFish persistence team and was
>>>>>>>>>>>> wondering how OpenJPA supports JPA auto DDL
>>>>>>>>>>>> generation (we call it "java2db") in a Java EE
>>>>>>>>>>>> application server.
>>>>>>>>>>>>
>>>>>>>>>>>> Our application server supports java2db by creating
>>>>>>>>>>>> two sets of files for each PU: a ...dropDDL.jdbc
>>>>>>>>>>>> and a ...createDDL.jdbc file on deploy (i.e. before
>>>>>>>>>>>> the application is actually loaded into the
>>>>>>>>>>>> container), and then executing the 'create' file as the
>>>>>>>>>>>> last step in deployment, and the 'drop' file on
>>>>>>>>>>>> undeploy or as the 1st step in redeploy. This allows
>>>>>>>>>>>> us to drop tables created by the previous deploy
>>>>>>>>>>>> operation.
>>>>>>>>>>>>
>>>>>>>>>>>> This approach is used for both the CMP and the
>>>>>>>>>>>> default JPA provider. It would be nice to add
>>>>>>>>>>>> java2db support for OpenJPA as well, and I'm
>>>>>>>>>>>> wondering if we need to do anything special, or
>>>>>>>>>>>> whether it'll all work just by itself?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> We do have support for runtime creation of the schema
>>>>>>>>>>> via the "openjpa.jdbc.SynchronizeMappings" property.
>>>>>>>>>>> It is described at:
>>>>>>>>>>>   http://incubator.apache.org/openjpa/docs/latest/manual/manual.html#ref_guide_mapping_synch
>>>>>>>>>>> The property can be configured to run the mappingtool
>>>>>>>>>>> (also described in the documentation) at runtime
>>>>>>>>>>> against all the registered persistent classes.
>>>>>>>>>>>
>>>>>>>>>>>> Here are my 1st set of questions:
>>>>>>>>>>>>
>>>>>>>>>>>> 1. Which API would trigger the process, assuming the    
>>>>>>>>>>>> correct   values  are specified in the persistence.xml   
>>>>>>>>>>>> file?  Is it:
>>>>>>>>>>>> a) <provider>.createContainerEntityManagerFactory(...)? or
>>>>>>>>>>>> b) the 1st call to emf.createEntityManager() in this VM?
>>>>>>>>>>>> c) something else?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> b
>>>>>>>>>>>
>>>>>>>>>>>> 2. How would a user drop the tables in such environment?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I don't think it can be used to automatically drop then
>>>>>>>>>>> create tables. The "mappingtool" can be executed
>>>>>>>>>>> manually twice, the first time to drop all the
>>>>>>>>>>> tables, and the second time to re-create them, but I
>>>>>>>>>>> don't think it can be automatically done at runtime
>>>>>>>>>>> with the "SynchronizeMappings" property.
>>>>>>>>>>>
>>>>>>>>>>>> 3. If the answer to either 1a or 1b is yes, how does  
>>>>>>>>>>>> the   code    distinguish between the server startup  
>>>>>>>>>>>> time and  the   application   being loaded for the 1st  
>>>>>>>>>>>> time?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> That is one of the reasons why we think it would be    
>>>>>>>>>>> inadvisable   to  automatically drop tables at runtime :)
>>>>>>>>>>>
>>>>>>>>>>>> 4. Is there a mode that allows creating a file with  
>>>>>>>>>>>> the   jdbc    statements to create or drop the tables  
>>>>>>>>>>>> and  constraints?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Yes. See:
>>>>>>>>>>>   http://incubator.apache.org/openjpa/docs/latest/manual/manual.html#ref_guide_ddl_examples
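
For instance, pointing the mapping tool at an output file rather than
the database might look roughly like the following. The "-sql" flag is
the one discussed elsewhere in this thread, while the class-file
argument and the exact invocation are only illustrative:

    java org.apache.openjpa.jdbc.meta.MappingTool -sql create.sql Foo.java
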
>>>>>>>>>>>
>>>>>>>>>>>> thank you,
>>>>>>>>>>>> -marina
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>
>>>
>


Re: Using DDL generation in a Java EE environment?

Posted by Marina Vatkina <Ma...@Sun.COM>.
It does and doesn't :(.

If I just use 'build', it doesn't, but if I add drop (as in 'drop,build') it
creates a table that includes the existing columns but(!) makes them non-PK
and of type OTHER (on Oracle):
CREATE TABLE ORDER_TABLE (ORDER_ID NUMBER NOT NULL, SHIPPING_ADDRESS 
VARCHAR2(255), CUSTOMER_ID NUMBER, B OTHER, PRIMARY KEY (ORDER_ID));

It also adds an index but no FK constraint.

thanks,
-marina

Abe White wrote:
>>If you just want a "fresh" database view for the mapping tool, such  
>>that the mapping tool thinks that the database has no schema  
>>defined, then you can specify the "-SchemaFactory" flag to be "file 
>>(my-schema.xml)", where my-schema.xml file is a schema definition file
> 
> 
> The "build" schema action also pretends there is no existing  
> database, though it will still connect. 
>   
> 

Re: Using DDL generation in a Java EE environment?

Posted by Abe White <aw...@bea.com>.
> If you just want a "fresh" database view for the mapping tool, such  
> that the mapping tool thinks that the database has no schema  
> defined, then you can specify the "-SchemaFactory" flag to be "file 
> (my-schema.xml)", where my-schema.xml file is a schema definition file

The "build" schema action also pretends there is no existing  
database, though it will still connect. 
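
Combining that "build" schema action with the SQL-file output discussed
elsewhere in this thread, a from-scratch DDL script might be produced
with something roughly like the following; the exact flag spellings and
the class-file argument are assumptions to check against the mapping
tool documentation:

    java org.apache.openjpa.jdbc.meta.MappingTool -schemaAction build -sql create.sql Foo.java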
  
