Posted to user@ode.apache.org by Stefan Jakoubi <sj...@securityresearch.at> on 2008/04/25 13:26:19 UTC

Hibernate mappings

Dear ODE community,

 

in this mail I (a) follow up on a topic raised by a colleague of mine
(http://ode.markmail.org/message/gyoxq5cz3uruizoc?q=tjoa) and
(b) raise a question of my own concerning configuring ODE for DB2.

 

@ (a):

When migrating ODE to DB2, there is a problem within the simple
scheduler. In particular, the scalar function MOD(x, y) within a
prepared statement throws an awful exception.

The solution is to replace MOD(x, y) within the prepared statement
with x - y * (x / y), which is how DB2 internally computes the result
of the scalar function MOD.
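
To illustrate, here is a minimal sketch only (not the actual scheduler
code; it mirrors the ODE_JOB upgrade statement shown later in this
thread, and the method name and parameters are just examples):

import java.sql.Connection;
import java.sql.PreparedStatement;

public class Db2ModWorkaround {

    // DB2-friendly form of "mod(ts, y) = r": MOD(x, y) = x - y * (x / y)
    private static final String UPGRADE_JOB_DB2 =
        "update ODE_JOB set nodeid = ? where nodeid is null and scheduled = 0 "
        + "and (BIGINT(ts - ? * BIGINT(ts / ?)) = ?) and ts < ?";

    static int assignJobs(Connection con, String nodeId, long y, long remainder, long maxTs)
            throws Exception {
        PreparedStatement ps = con.prepareStatement(UPGRADE_JOB_DB2);
        try {
            ps.setString(1, nodeId);
            ps.setLong(2, y);          // the y in x - y * (x / y)
            ps.setLong(3, y);          // the same y again, inside the division
            ps.setLong(4, remainder);  // the value MOD(ts, y) is compared against
            ps.setLong(5, maxTs);
            return ps.executeUpdate();
        } finally {
            ps.close();
        }
    }
}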

 

@ (b):

Can anybody please provide the complete table schema that ODE requires?

Does anybody know where to configure all the required Hibernate mappings so
that the ODE table names can be altered?

When switching persistence to Hibernate, is it true that OpenJPA is no
longer used (and thus that no JPA mappings have to be modified if I
alter table names)?

 

Finally, I want to mention that as soon as we have a solution for
JBoss/ODE/DB2, we will contribute it via Jira!

 

Thanks in advance & cheers,

Stefan

 

 


Re: Group activity ?

Posted by Alex Boisvert <bo...@intalio.com>.
This isn't possible in BPEL 2.0 without using a separate process to hold the
common code, which is probably too heavy-weight.

A lightweight approach might be to create C-like macros and use a
preprocessor to transform your BPEL prior to deployment.  XSLT comes to mind
for the implementation.
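
As a rough sketch (the stylesheet and file names below are placeholders,
nothing that ships with Ode), the preprocessing step could be as simple
as:

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class BpelPreprocessor {
    public static void main(String[] args) throws Exception {
        // "macros.xsl" would expand your macro elements into plain BPEL;
        // both file names are placeholders for whatever your build uses.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("macros.xsl")));
        t.transform(new StreamSource(new File("process-with-macros.bpel")),
                    new StreamResult(new File("process.bpel")));
    }
}

You would run something like this in your build, before packaging the
deployment unit.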

alex


On Tue, May 6, 2008 at 8:21 AM, Nowakowski, Mateusz <
Mateusz.Nowakowski@sabre-holdings.com> wrote:

> Is this possible to group activities and use such instruction groups
> several times? I think about something similar to procedure of function
> in any programming language.
>
> I have a piece of BPEL script and I need to insert it in to all my
> global <faultHandlers> catch elements.
> I'd like to reuse my own code.
>
> I cannot solve it by rethrowing faults, because I'm on the top of fault
> handlers.
>
> --
> Regards
> Mateusz Nowakowski
>

Group activity ?

Posted by "Nowakowski, Mateusz" <Ma...@sabre-holdings.com>.
Is it possible to group activities and use such instruction groups
several times? I am thinking of something similar to a procedure or
function in a programming language.

I have a piece of BPEL script and I need to insert it into all my
global <faultHandlers> catch elements.
I'd like to reuse my own code.

I cannot solve it by rethrowing faults, because I'm already at the top
of the fault handlers.

-- 
Regards
Mateusz Nowakowski

Re: Logging from BPEL script

Posted by Jens Goldhammer <je...@googlemail.com>.
Hello,

just my two cents.

2008/5/6 Nowakowski, Mateusz <Ma...@sabre-holdings.com>:

> Thank you
>
> I've also think about this workaround... but we didn't decide to do that
> way, because it is too complicated and resource consuming... logging should
> be as fast as possible...
> I think that log activity should be present in the standard, but the
> behavior should depend on the implementation.


Maybe you can log with an ESB. If you want a fast implementation, you can
take Synapse (synapse.apache.org) or WSO2 ESB (Synapse + admin console,
www.wso2.org). It is very easy to define mediations which log your
SOAP messages, and they have a streaming API based on AXIOM. They are
working on the integration of ODE (
http://wso2.org/wiki/display/esbjava/WSO2+Enterprise+Service+Bus+Roadmap)
and want to enhance the monitoring part of their ESB. Maybe they will
implement a mechanism to log steps from ODE? I don't know exactly; maybe
you can ask them on their mailing list...


>
>
> BPEL/ODE is annoying me slowly... It is very difficult to do very easy
> things and we need to discover very ugly workarounds to make things work


But I think doing easy things is not really the intention of BPEL. BPEL is
only valuable if you have parallel service calls, or if you use the
BPEL4People spec (not present in ODE), where generated user interfaces can
feed inputs to your process. BPEL still lacks great tool support and
faultless open-source server implementations, and it has the reuse problem
you mentioned. Declarative programming does not only have good sides :-)
Open source does not offer much variety (ODE is good, but it still has some
faults), and commercial software is in general very heavy (think of IBM
Process Server)... Maybe you can have a look at ActiveBPEL...

I think it would become easier if BPEL defined interfaces for interacting
better with the BPEL server. IBM's human-task API shows, in a small part,
how it can be done. If you have a good Java framework like jBPM or
something else, you can do a lot of the BPEL stuff in Java, just better
and faster!
Sorry, but that's my opinion for today. Maybe it will change in the future?


>
> --
> Regards
> Mateusz Nowakowski
>
> -----Original Message-----
> From: Łukasz Budnik [mailto:lukasz.budnik@gmail.com]
> Sent: Tuesday, May 06, 2008 12:39 PM
> To: user@ode.apache.org
> Subject: Re: Logging from BPEL script
>
> Hi Mateusz,
>
> I'm currently working on a system that manages BPEL processes.
>
> One of its features is enhancing original BPEL definitions by defining
> new partner link (logging web service) so that I can log almost
> everything ;)
>
> Enhancement is simply done by transforming BPEL with XSLT.
>
> My system (beta) will be available in 2 months time, I can let you
> know when it will be ready.
>
> best regards
> Łukasz
>
> 2008/4/28 Nowakowski, Mateusz <Ma...@sabre-holdings.com>:
> > Hi
> >
> >
> >
> >  Is there any possibility to log some information from BPEL script?
> >
> >
> >
> >  I think about some additional activity. Probably there isn't such
> >  activity but as I know there is possibility to implement and add own
> >  activity to BPEL.
> >
> >
> >
> >  How can I add new activity to ODE? Is there any description how to do
> >  that?
> >
> >
> >
> >
> >
> >  --
> >
> >  Regards
> >
> >  Mateusz Nowakowski
> >
> >
> >
> >
>

Re: ODE Maven artifacts

Posted by Matthieu Riou <ma...@offthelip.org>.
On Tue, May 13, 2008 at 11:16 AM, Matthieu Riou <ma...@offthelip.org>
wrote:

> On Tue, May 13, 2008 at 2:01 AM, Nowakowski, Mateusz <
> Mateusz.Nowakowski@sabre-holdings.com> wrote:
>
> > Hi all,
> >
> > I cannot find ODE Maven artifacts for version 1.1.1
> >
> > I assume it should be under
> > http://people.apache.org/repo/m2-ibiblio-rsync-repository/org/apache/ode
> > /
> > but there are only artifacts for version 1.1
> >
>
> Mmmh seems that I forgot to mirror them, sorry about that. I'll do it now,
> in the meantime you can use:
>
> http://people.apache.org/~mriou/ode-1.1.1/
>
> They should be propagated tomorrow in the various Maven repositories.
>

Actually they seem to be published live on people.apache.org so you can
already get them on:

http://people.apache.org/repo/m2-ibiblio-rsync-repository/org/apache/ode/

Cheers,
Matthieu


>
> Cheers,
> Matthieu
>
>
> >
> > --
> > Regards
> > Mateusz Nowakowski
> >
>
>

RE: ODE Maven artifacts

Posted by "Nowakowski, Mateusz" <Ma...@sabre-holdings.com>.
Thanks, I've handled the issue the same way: I've installed the artifacts into
my own Maven repository.

-- 
Regards
Mateusz Nowakowski
-----Original Message-----
From: matthieu.riou@gmail.com [mailto:matthieu.riou@gmail.com] On Behalf
Of Matthieu Riou
Sent: Tuesday, May 13, 2008 8:16 PM
To: user@ode.apache.org
Subject: Re: ODE Maven artifacts

On Tue, May 13, 2008 at 2:01 AM, Nowakowski, Mateusz <
Mateusz.Nowakowski@sabre-holdings.com> wrote:

> Hi all,
>
> I cannot find ODE Maven artifacts for version 1.1.1
>
> I assume it should be under
>
http://people.apache.org/repo/m2-ibiblio-rsync-repository/org/apache/ode
> /
> but there are only artifacts for version 1.1
>

Mmmh seems that I forgot to mirror them, sorry about that. I'll do it
now,
in the meantime you can use:

http://people.apache.org/~mriou/ode-1.1.1/

They should be propagated tomorrow in the various Maven repositories.

Cheers,
Matthieu


>
> --
> Regards
> Mateusz Nowakowski
>

Re: ODE Maven artifacts

Posted by Matthieu Riou <ma...@offthelip.org>.
On Tue, May 13, 2008 at 2:01 AM, Nowakowski, Mateusz <
Mateusz.Nowakowski@sabre-holdings.com> wrote:

> Hi all,
>
> I cannot find ODE Maven artifacts for version 1.1.1
>
> I assume it should be under
> http://people.apache.org/repo/m2-ibiblio-rsync-repository/org/apache/ode
> /
> but there are only artifacts for version 1.1
>

Mmmh seems that I forgot to mirror them, sorry about that. I'll do it now,
in the meantime you can use:

http://people.apache.org/~mriou/ode-1.1.1/

They should be propagated tomorrow in the various Maven repositories.

Cheers,
Matthieu


>
> --
> Regards
> Mateusz Nowakowski
>

ODE Maven artifacts

Posted by "Nowakowski, Mateusz" <Ma...@sabre-holdings.com>.
Hi all,

I cannot find ODE Maven artifacts for version 1.1.1

I assume it should be under
http://people.apache.org/repo/m2-ibiblio-rsync-repository/org/apache/ode
/
but there are only artifacts for version 1.1

-- 
Regards
Mateusz Nowakowski

Re: Logging from BPEL script

Posted by Alex Boisvert <bo...@intalio.com>.
Ok, I understand the issue now.  Thanks for the explanation.

This, ahem, idiosyncrasy, is actually defined by the BPEL 2.0 spec in
section 8.4.2 (Replacement Logic of Copy Operations).  When you're copying
from one element (EII) to another element (EII), you can specify the
"keepSrcElementName" attribute to define the behavior you want.

To paraphrase the spec,

   - When the keepSrcElementName attribute is set to "no" (which is the
   default), the target element name is maintained
   - When the keepSrcElementName attribute is set to "yes", the source
   element name is used as the name of the resulting destination element.

It sounds like you'd want to set "keepSrcElementName" to "yes" in your case.


I think you put your finger on it earlier, though: the real issue here is
the lack of schema validation, which lets some of these errors go by
silently.

alex

On Tue, May 13, 2008 at 1:50 AM, Nowakowski, Mateusz <
Mateusz.Nowakowski@sabre-holdings.com> wrote:

> Thank you for your answer.
> It gives me more lights on the problem.
>
> >I'm not sure about the "merge" issue that you mention.  Can you provide
> >more detail?
>
> For example I have variable defined that way:
>
> <bpel:variable name="someVariable" element="tns:SomeElement"/>
>
> And an example of tns:SomeElement element type looks like :
>
> <SomeElement xmlns="....">
>        <AnotherSomeTag>......</AnotherSomeTag>
> </SomeElement>
>
> When I tied to assign something like this:
>
> <TotallyDifferentType xmlns="...." someAttr="...">
>        <xx>lorem ipsum</xx>
>        <yyy>some content</yyy>
> </TotallyDifferentType>
>
> to variable someVariable
> the result is:
>
> <SomeElement xmlns="...." someAttr="...">
>        <xx>lorem ipsum</xx>
>        <yyy>some content</yyy>
> </SomeElement>
>
> This is done silently of course.
>
>
> --
> Regards
> Mateusz Nowakowski
> -----Original Message-----
> From: Alex Boisvert [mailto:boisvert@intalio.com]
> Sent: Monday, May 12, 2008 11:58 PM
> To: user@ode.apache.org
> Subject: Re: Logging from BPEL script
>
> Hi Mateusz,
>
> I'm coming back to this thread after a busy week at
> CommunityOne/JavaOne,
> Scala Lift-Off, and all that craziness...
>
> On Wed, May 7, 2008 at 1:51 AM, Nowakowski, Mateusz <
> Mateusz.Nowakowski@sabre-holdings.com> wrote:
>
> > >I would be interested in hearing more about these.  What are your top
> > >itches?
> >
> > - BPEL is not completely programming language. It doesn't contain
> > something like procedure or function instruction. It is also
> impossible
> > to share code between processes. Only copy-paste methodology works.
>
>
> Correct.  There's been a few proposals/experiments in the past but I
> haven't
> heard anything recently about these efforst.
>
> e.g.
> https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid
> /7d26a4e1-0601-0010-b9a4-c815157e69af
> http://www.ibm.com/developerworks/library/specification/ws-bpelsubproc/
>
> - ServiceMix integration. A last problem with deploying ODE within SM is
> > minor in my opinion. ODE slightly doesn't fit to SM. When message
> > arrives to ODE from SM, it looses all normalized messages properties
> > (there was a short discussion about it) - so if we have something in
> > properties we need to make hardcore workaround and copies properties
> > into xml payload. It breaks service schema, but it doesn't matter for
> > ODE.
>
>
> Generally speaking, BPEL wasn't designed to support low-level message
> properties or protocol details, whether they are JBI-specific, or come
> from
> WS-Addressing headers, etc.    This is the kind of details that's better
> left hidden from the business process to make the process as reusable
> and
> portable as possible.
>
> This being said, there have been a few ideas thrown around to supporting
> some of these in Apache Ode.   We've had some recent discussions within
> Intalio and Matthieu should post some proposal about this on the mailing
> list for wider discussion.
>
>
> > - Schema validation. It is written in the documentation that schema
> > validation is not implemented. It is not completely true. When you
> > assign some data to variable, ODE checks the first tag of data. A very
> > strange thing appears when you assign completely different data to
> > inappropriate variable type. ODE silently "merges" input data with
> > destination variable type that the result has the first tag the same
> as
> > destination type, but payload is from input data...
>
>
> Ode attempts to be very JBI compliant and follow the normalization rules
> defined by the JBI spec; we didn't define our own rules.  The only
> message
> converters we have (to complement the standard JBI one) were written to
> support existing ServiceMix components that didn't follow JBI rules.
>
> This being said, I think this is an area were much improvements can be
> done.  Right now Ode supports only one message converter at a time.
> This is
> inconvenient if you have both standard and non-standard components in
> your
> architecture.   We should probably support more than one at a time and
> dynamically select the most appropriate one at runtime.   Second, the
> error
> handling and error reporting could be improved to provide more
> visibility
> into how messages are matched, mapped or converted into and out of Ode.
> The current behavior is rather silent and thus doesn't provide much clue
> as
> to why messages are not processed when they don't fit the expected
> format.
>
> I'm not sure about the "merge" issue that you mention.  Can you provide
> more
> detail?
>
> - Last my JIRA entry ODE-263. onAlarm is not working. It generates alarm
> > only after completed activity, not after specified amount of time. In
> > other words it doesn't work. We want use this functionality as a
> timeout
> > guard, but because of that we need to implement this functionality
> > outside ODE.
>
>
> Ok, I'll take a look.
>
>
> > - Some useful extensions like external variables or adding own
> > activities are not in stable ODE version. We cannot rely on trunk.
> > Faster releases would be appropriate.
>
>
> Actually, external variables are on the Ode 1.1 branch and will be part
> of
> the next stable 1.2 release.
>
> Things that are on the trunk but not in 1.1 are,
> -Atomic scopes (transactions)
> -BPEL extension activities (including Javascript E4X assigns)
> -Improvement to the IL messaging interaction model
>  (future support for reliable messaging, transactional invoke, better
> async
> handling, ...)
>
> The main issue preventing us from migrating to and releasing the trunk
> is
> backward compatibility.  Maciej was working on dual support for both Ode
> 1.1-compiled processes and Ode-trunk compiled processes but it's taken
> longer than expected and he's been less available than expected.   We're
> waiting for him to clean up and commit what he's done so far so we can
> take
> it from there.
>
> alex
>

RE: Logging from BPEL script

Posted by "Nowakowski, Mateusz" <Ma...@sabre-holdings.com>.
Thank you for your answer.
It sheds more light on the problem.

>I'm not sure about the "merge" issue that you mention.  Can you provide
>more detail?

For example, I have a variable defined this way:

<bpel:variable name="someVariable" element="tns:SomeElement"/>

And an example of the tns:SomeElement element looks like this:

<SomeElement xmlns="....">
	<AnotherSomeTag>......</AnotherSomeTag>
</SomeElement>

When I tried to assign something like this:

<TotallyDifferentType xmlns="...." someAttr="...">
	<xx>lorem ipsum</xx>
	<yyy>some content</yyy>
</TotallyDifferentType>

to the variable someVariable,
the result is:

<SomeElement xmlns="...." someAttr="...">
	<xx>lorem ipsum</xx>
	<yyy>some content</yyy>
</SomeElement>

This is done silently of course.


-- 
Regards
Mateusz Nowakowski
-----Original Message-----
From: Alex Boisvert [mailto:boisvert@intalio.com] 
Sent: Monday, May 12, 2008 11:58 PM
To: user@ode.apache.org
Subject: Re: Logging from BPEL script

Hi Mateusz,

I'm coming back to this thread after a busy week at
CommunityOne/JavaOne,
Scala Lift-Off, and all that craziness...

On Wed, May 7, 2008 at 1:51 AM, Nowakowski, Mateusz <
Mateusz.Nowakowski@sabre-holdings.com> wrote:

> >I would be interested in hearing more about these.  What are your top
> >itches?
>
> - BPEL is not completely programming language. It doesn't contain
> something like procedure or function instruction. It is also
impossible
> to share code between processes. Only copy-paste methodology works.


Correct.  There have been a few proposals/experiments in the past, but I
haven't heard anything recently about these efforts.

e.g.
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid
/7d26a4e1-0601-0010-b9a4-c815157e69af
http://www.ibm.com/developerworks/library/specification/ws-bpelsubproc/

- ServiceMix integration. A last problem with deploying ODE within SM is
> minor in my opinion. ODE slightly doesn't fit to SM. When message
> arrives to ODE from SM, it looses all normalized messages properties
> (there was a short discussion about it) - so if we have something in
> properties we need to make hardcore workaround and copies properties
> into xml payload. It breaks service schema, but it doesn't matter for
> ODE.


Generally speaking, BPEL wasn't designed to support low-level message
properties or protocol details, whether they are JBI-specific, or come
from
WS-Addressing headers, etc.    This is the kind of details that's better
left hidden from the business process to make the process as reusable
and
portable as possible.

This being said, there have been a few ideas thrown around to supporting
some of these in Apache Ode.   We've had some recent discussions within
Intalio and Matthieu should post some proposal about this on the mailing
list for wider discussion.


> - Schema validation. It is written in the documentation that schema
> validation is not implemented. It is not completely true. When you
> assign some data to variable, ODE checks the first tag of data. A very
> strange thing appears when you assign completely different data to
> inappropriate variable type. ODE silently "merges" input data with
> destination variable type that the result has the first tag the same
as
> destination type, but payload is from input data...


Ode attempts to be very JBI compliant and follow the normalization rules
defined by the JBI spec; we didn't define our own rules.  The only
message
converters we have (to complement the standard JBI one) were written to
support existing ServiceMix components that didn't follow JBI rules.

This being said, I think this is an area where much improvement can be
done.  Right now Ode supports only one message converter at a time.
This is
inconvenient if you have both standard and non-standard components in
your
architecture.   We should probably support more than one at a time and
dynamically select the most appropriate one at runtime.   Second, the
error
handling and error reporting could be improved to provide more
visibility
into how messages are matched, mapped or converted into and out of Ode.
The current behavior is rather silent and thus doesn't provide much clue
as
to why messages are not processed when they don't fit the expected
format.

I'm not sure about the "merge" issue that you mention.  Can you provide
more
detail?

- Last my JIRA entry ODE-263. onAlarm is not working. It generates alarm
> only after completed activity, not after specified amount of time. In
> other words it doesn't work. We want use this functionality as a
timeout
> guard, but because of that we need to implement this functionality
> outside ODE.


Ok, I'll take a look.


> - Some useful extensions like external variables or adding own
> activities are not in stable ODE version. We cannot rely on trunk.
> Faster releases would be appropriate.


Actually, external variables are on the Ode 1.1 branch and will be part
of
the next stable 1.2 release.

Things that are on the trunk but not in 1.1 are,
-Atomic scopes (transactions)
-BPEL extension activities (including Javascript E4X assigns)
-Improvement to the IL messaging interaction model
  (future support for reliable messaging, transactional invoke, better
async
handling, ...)

The main issue preventing us from migrating to and releasing the trunk
is
backward compatibility.  Maciej was working on dual support for both Ode
1.1-compiled processes and Ode-trunk compiled processes but it's taken
longer than expected and he's been less available than expected.   We're
waiting for him to clean up and commit what he's done so far so we can
take
it from there.

alex

Re: Logging from BPEL script

Posted by Alex Boisvert <bo...@intalio.com>.
Hi Mateusz,

I'm coming back to this thread after a busy week at CommunityOne/JavaOne,
Scala Lift-Off, and all that craziness...

On Wed, May 7, 2008 at 1:51 AM, Nowakowski, Mateusz <
Mateusz.Nowakowski@sabre-holdings.com> wrote:

> >I would be interested in hearing more about these.  What are your top
> >itches?
>
> - BPEL is not completely programming language. It doesn't contain
> something like procedure or function instruction. It is also impossible
> to share code between processes. Only copy-paste methodology works.


Correct.  There have been a few proposals/experiments in the past, but I
haven't heard anything recently about these efforts.

e.g.
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/7d26a4e1-0601-0010-b9a4-c815157e69af
http://www.ibm.com/developerworks/library/specification/ws-bpelsubproc/

- ServiceMix integration. A last problem with deploying ODE within SM is
> minor in my opinion. ODE slightly doesn't fit to SM. When message
> arrives to ODE from SM, it looses all normalized messages properties
> (there was a short discussion about it) - so if we have something in
> properties we need to make hardcore workaround and copies properties
> into xml payload. It breaks service schema, but it doesn't matter for
> ODE.


Generally speaking, BPEL wasn't designed to support low-level message
properties or protocol details, whether they are JBI-specific or come from
WS-Addressing headers, etc.  These are the kinds of details that are better
left hidden from the business process to make the process as reusable and
portable as possible.

This being said, there have been a few ideas thrown around to supporting
some of these in Apache Ode.   We've had some recent discussions within
Intalio and Matthieu should post some proposal about this on the mailing
list for wider discussion.


> - Schema validation. It is written in the documentation that schema
> validation is not implemented. It is not completely true. When you
> assign some data to variable, ODE checks the first tag of data. A very
> strange thing appears when you assign completely different data to
> inappropriate variable type. ODE silently "merges" input data with
> destination variable type that the result has the first tag the same as
> destination type, but payload is from input data...


Ode attempts to be very JBI compliant and follow the normalization rules
defined by the JBI spec; we didn't define our own rules.  The only message
converters we have (to complement the standard JBI one) were written to
support existing ServiceMix components that didn't follow JBI rules.

This being said, I think this is an area where much improvement can be
done.  Right now Ode supports only one message converter at a time.  This is
inconvenient if you have both standard and non-standard components in your
architecture.   We should probably support more than one at a time and
dynamically select the most appropriate one at runtime.   Second, the error
handling and error reporting could be improved to provide more visibility
into how messages are matched, mapped or converted into and out of Ode.
The current behavior is rather silent and thus doesn't provide much clue as
to why messages are not processed when they don't fit the expected format.

I'm not sure about the "merge" issue that you mention.  Can you provide more
detail?

- Last my JIRA entry ODE-263. onAlarm is not working. It generates alarm
> only after completed activity, not after specified amount of time. In
> other words it doesn't work. We want use this functionality as a timeout
> guard, but because of that we need to implement this functionality
> outside ODE.


Ok, I'll take a look.


> - Some useful extensions like external variables or adding own
> activities are not in stable ODE version. We cannot rely on trunk.
> Faster releases would be appropriate.


Actually, external variables are on the Ode 1.1 branch and will be part of
the next stable 1.2 release.

Things that are on the trunk but not in 1.1 are,
-Atomic scopes (transactions)
-BPEL extension activities (including Javascript E4X assigns)
-Improvement to the IL messaging interaction model
  (future support for reliable messaging, transactional invoke, better async
handling, ...)

The main issue preventing us from migrating to and releasing the trunk is
backward compatibility.  Maciej was working on dual support for both Ode
1.1-compiled processes and Ode-trunk compiled processes but it's taken
longer than expected and he's been less available than expected.   We're
waiting for him to clean up and commit what he's done so far so we can take
it from there.

alex

Re: Logging from BPEL script

Posted by Wayne Keenan <wa...@gmail.com>.
Hi,



On Wed, May 7, 2008 at 9:51 AM, Nowakowski, Mateusz <
Mateusz.Nowakowski@sabre-holdings.com> wrote:

> >> Thank you
> >>
> >> I've also think about this workaround... but we didn't decide to do
> that
> >> way, because it is too complicated and resource consuming... logging
> >should
> >> be as fast as possible...
> >> I think that log activity should be present in the standard, but the
> >> behavior should depend on the implementation.
> >
> >
> >Have you considered using the events that Ode generates?
>
> Yes, we are using ODE event listeners, but these are not perfect. We'd
> like for example to log current state of all variables. The EventContext
> is deprecated, but potentially gives possibility to do that. It is
> deprecated, because it is not working for only in memory processes
> (there was a discussion some time ago about it)
>
> >> BPEL/ODE is annoying me slowly... It is very difficult to do very
> easy
> >> things and we need to discover very ugly workarounds to make things
> work
> >
> >
> >I would be interested in hearing more about these.  What are your top
> >itches?
>
> - BPEL is not completely programming language. It doesn't contain
> something like procedure or function instruction. It is also impossible
> to share code between processes. Only copy-paste methodology works.

 <snip>

Although I'm new to BPEL and ODE, that statement regarding sharing code is
not completely correct, as there are Abstract Processes; while that is not
reuse by composition, it does promote reuse by inheritance.

As far as procedural/function calling goes, an 'invoke' to a BPEL process
hosted/advertised by the same ODE instance as the caller would be a likely
candidate for in-[Java|BPEL]VM optimisation, not that such a thing exists
at the moment. Then again, does the declarative overhead of the partner
link make it worth implementing some kind of <invoke-local-operation>
extension?


Regards
Wayne

RE: Logging from BPEL script

Posted by "Nowakowski, Mateusz" <Ma...@sabre-holdings.com>.
>> Thank you
>>
>> I've also think about this workaround... but we didn't decide to do
that
>> way, because it is too complicated and resource consuming... logging
>should
>> be as fast as possible...
>> I think that log activity should be present in the standard, but the
>> behavior should depend on the implementation.
>
>
>Have you considered using the events that Ode generates?

Yes, we are using ODE event listeners, but these are not perfect. We'd
like, for example, to log the current state of all variables. The
EventContext is deprecated, but it would potentially make that possible.
It is deprecated because it does not work for in-memory-only processes
(there was a discussion about it some time ago).
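
For reference, such a listener is essentially just the minimal sketch
below. Treat it as a sketch only: the exact BpelEventListener interface
differs between ODE versions (later versions also require startup/shutdown
methods), and the ode-axis2.event.listeners registration property is my
assumption about the axis2 integration layer.

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.ode.bpel.evt.BpelEvent;
import org.apache.ode.bpel.iapi.BpelEventListener;

// Assumption: registered through the ode-axis2.event.listeners property
// (comma-separated class names) in ode-axis2.properties.
public class LoggingBpelEventListener implements BpelEventListener {
    private static final Log __log = LogFactory.getLog(LoggingBpelEventListener.class);

    public void onEvent(BpelEvent event) {
        // Only the event itself is available here; the variable state we
        // would like to log is not, which is exactly the limitation above.
        __log.info(event.getClass().getSimpleName() + ": " + event);
    }
}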

>> BPEL/ODE is annoying me slowly... It is very difficult to do very
easy
>> things and we need to discover very ugly workarounds to make things
work
>
>
>I would be interested in hearing more about these.  What are your top
>itches?

- BPEL is not a complete programming language. It doesn't have anything
like a procedure or function construct. It is also impossible to share
code between processes; only the copy-paste methodology works.
- ServiceMix integration. The last problem with deploying ODE within SM is
minor in my opinion: ODE doesn't quite fit into SM. When a message
arrives at ODE from SM, it loses all the normalized message properties
(there was a short discussion about it) - so if we have something in the
properties, we need a hardcore workaround that copies the properties
into the XML payload. It breaks the service schema, but that doesn't
matter for ODE. ODE has a very peculiar schema validation (next point).
- Schema validation. The documentation says that schema validation is not
implemented. That is not completely true. When you assign some data to a
variable, ODE checks the first tag of the data. A very strange thing
happens when you assign completely different data to an incompatible
variable type: ODE silently "merges" the input data with the destination
variable type, so that the result has the same first tag as the
destination type, but the payload comes from the input data...
- Last is my JIRA entry ODE-263: onAlarm is not working. It fires the
alarm only after the activity has completed, not after the specified
amount of time. In other words, it doesn't work. We want to use this
functionality as a timeout guard, but because of this we need to
implement it outside ODE.
- Some useful extensions, like external variables or adding your own
activities, are not in the stable ODE version. We cannot rely on trunk.
Faster releases would be appreciated.


Apart from that, we are almost in production with our new product. But
when we started, we thought everything would be much easier.

-- 
Mateusz

Re: Logging from BPEL script

Posted by Alex Boisvert <bo...@intalio.com>.
2008/5/6 Nowakowski, Mateusz <Ma...@sabre-holdings.com>:

> Thank you
>
> I've also think about this workaround... but we didn't decide to do that
> way, because it is too complicated and resource consuming... logging should
> be as fast as possible...
> I think that log activity should be present in the standard, but the
> behavior should depend on the implementation.


Have you considered using the events that Ode generates?


> BPEL/ODE is annoying me slowly... It is very difficult to do very easy
> things and we need to discover very ugly workarounds to make things work


I would be interested in hearing more about these.  What are your top
itches?

alex

RE: Logging from BPEL script

Posted by "Nowakowski, Mateusz" <Ma...@sabre-holdings.com>.
Thank you

I've also thought about this workaround... but we decided not to go that way, because it is too complicated and resource-consuming... logging should be as fast as possible...
I think that a log activity should be part of the standard, but the behavior should depend on the implementation.

BPEL/ODE is slowly starting to annoy me... It is very difficult to do very easy things, and we need to come up with very ugly workarounds to make things work.

-- 
Regards
Mateusz Nowakowski

-----Original Message-----
From: Łukasz Budnik [mailto:lukasz.budnik@gmail.com] 
Sent: Tuesday, May 06, 2008 12:39 PM
To: user@ode.apache.org
Subject: Re: Logging from BPEL script

Hi Mateusz,

I'm currently working on a system that manages BPEL processes.

One of its features is enhancing original BPEL definitions by defining
new partner link (logging web service) so that I can log almost
everything ;)

Enhancement is simply done by transforming BPEL with XSLT.

My system (beta) will be available in 2 months time, I can let you
know when it will be ready.

best regards
Łukasz

2008/4/28 Nowakowski, Mateusz <Ma...@sabre-holdings.com>:
> Hi
>
>
>
>  Is there any possibility to log some information from BPEL script?
>
>
>
>  I think about some additional activity. Probably there isn't such
>  activity but as I know there is possibility to implement and add own
>  activity to BPEL.
>
>
>
>  How can I add new activity to ODE? Is there any description how to do
>  that?
>
>
>
>
>
>  --
>
>  Regards
>
>  Mateusz Nowakowski
>
>
>
>

Re: Logging from BPEL script

Posted by Łukasz Budnik <lu...@gmail.com>.
Hi Mateusz,

I'm currently working on a system that manages BPEL processes.

One of its features is enhancing the original BPEL definitions by defining
a new partner link (a logging web service) so that I can log almost
everything ;)

The enhancement is done simply by transforming the BPEL with XSLT.

My system (beta) will be available in two months' time; I can let you
know when it is ready.

best regards
Łukasz

2008/4/28 Nowakowski, Mateusz <Ma...@sabre-holdings.com>:
> Hi
>
>
>
>  Is there any possibility to log some information from BPEL script?
>
>
>
>  I think about some additional activity. Probably there isn't such
>  activity but as I know there is possibility to implement and add own
>  activity to BPEL.
>
>
>
>  How can I add new activity to ODE? Is there any description how to do
>  that?
>
>
>
>
>
>  --
>
>  Regards
>
>  Mateusz Nowakowski
>
>
>
>

Re: Logging from BPEL script

Posted by Tammo van Lessen <tv...@gmail.com>.
Hi Mateusz,

when you're on the trunk, all you need is basically ready, but not yet
documented.

Starting points are 
org.apache.ode.bpel.runtime.extension.AbstractExtensionBundle and the 
*Operation classes in [1].

You could also have a look at [2] for an example.

Your custom ExtensionBundle can then be registered in Ode's config file 
(ode-{1}.properties) via the ode-{1}.extension.bundles key.
{1} expands to (axis2|jbi)

HTH,
   Tammo

[1]http://svn.apache.org/viewvc/ode/trunk/bpel-runtime/src/main/java/org/apache/ode/bpel/runtime/extension/
[2]http://svn.apache.org/viewvc/ode/trunk/extensions/e4x/src/main/java/org/apache/ode/extension/e4x/

Nowakowski, Mateusz wrote:
> Hi
> 
>  
> 
> Is there any possibility to log some information from BPEL script?
> 
>  
> 
> I think about some additional activity. Probably there isn't such
> activity but as I know there is possibility to implement and add own
> activity to BPEL.
> 
>  
> 
> How can I add new activity to ODE? Is there any description how to do
> that?
> 
>  
> 
>  
> 


Logging from BPEL script

Posted by "Nowakowski, Mateusz" <Ma...@sabre-holdings.com>.
Hi

 

Is there any possibility to log some information from BPEL script?

 

I am thinking of some additional activity. There probably isn't such an
activity, but as far as I know it is possible to implement and add your
own activity to BPEL.

 

How can I add a new activity to ODE? Is there any description of how to
do that?

 

 

-- 

Regards

Mateusz Nowakowski

 


Re: Hibernate mappings

Posted by Stefan Jakoubi <sj...@securityresearch.at>.
We are glad to give something back to the community (and hopefully something
useful ;-) ).

The Jira issue: https://issues.apache.org/jira/browse/ODE-277

We are soon approaching a test phase - thus, if something has to be updated,
we will post it!

Cheers,
Simon & Stefan




Matthieu Riou-5 wrote:
> 
> Oh, almost forgot: could you create a Jira task for this and attach
> everything as a text file? We need some sort of IP clearance and Jira
> makes
> it easier to track this down.
> 
> Thanks!
> 
> On Mon, Apr 28, 2008 at 11:13 AM, Matthieu Riou <ma...@offthelip.org>
> wrote:
> 
>> Cool, thanks a lot for contributing that back! I'll see how to add it to
>> our doc and maybe update the scheduler.
>>
>> Cheers,
>> Matthieu
>>
>>
>> On Mon, Apr 28, 2008 at 7:04 AM, Stefan Jakoubi <
>> sjakoubi@securityresearch.at> wrote:
>>
>> >
>> > Hi Mathieu,
>> >
>> > I think we have found the solution - in fact 2 major changes are
>> > required to use ODE with DB2:
>> > - Update of the simple scheduler
>> > - Update of DB script
>> >
>> > Below, you can find a little HowTo for JBoss & ODE & DB2 - I hope
>> > everything works fine ;-)
>> >
>> > Cheers and thanks for your support,
>> > Simon & Stefan
>> >
>> >
>> > +++++++++++++++++++++++++++++++
>> > +++++++++++++++++++++++++++++++
>> > ++ HOW TO: JBOSS & ODE & DB2 ++
>> > +++++++++++++++++++++++++++++++
>> > +++++++++++++++++++++++++++++++
>> >
>> > *****************
>> > * PREREQUISITES *
>> > *****************
>> >
>> >  - JBoss (tested on JBoss-4.0.3SP1)
>> >  - deployed ODE (tested with version 1.1.1)
>> >  - DB2 (tested with version 9.x)
>> >
>> >
>> > *********************************
>> > * MODIFYING SIMPLE SCHEDULER JAR *
>> > * *******************************
>> >
>> > - There is a bug when using the scalar function MOD within a prepared
>> >   statement
>> > - Modify the JdbcDelegate class (see also the end of this HowTo)
>> > - Compile the code and build the jar
>> >
>> >
>> > *****************
>> > * CONFIGURATION *
>> > *****************
>> >
>> > -------------------------
>> > ### DB2 configuration ###
>> > -------------------------
>> >
>> > (1) create database for ODE usage (e.g. testode)
>> >
>> > (2) execute the following statements
>> >
>> > -- Apache ODE - SimpleScheduler Database Schema
>> > --
>> > -- Apache Derby scripts by Maciej Szefler.
>> > --
>> > -- Scripts modified for use with DB2 by Stefan Jakoubi and Simon Tjoa
>> > -- (-) All primary key ID columns modified for auto increment purposes
>> > --     -> ID bigint GENERATED ALWAYS AS IDENTITY not null
>> > -- (-) Altered table LARGE_DATA -> BIN_DATA blob(2000M) NOT LOGGED,
>> > --     as DB2 only supports logging up to 1GB
>> > --     Furthermore, DB2 only allows blob sizes up to 2GB - 1 Byte!!!
>> >
>> > CREATE TABLE ode_job (
>> >  jobid CHAR(64)  NOT NULL DEFAULT '',
>> >  ts BIGINT  NOT NULL DEFAULT 0,
>> >  nodeid char(64),
>> >  scheduled int  NOT NULL DEFAULT 0,
>> >  transacted int  NOT NULL DEFAULT 0,
>> >  details blob(4096),
>> >  PRIMARY KEY(jobid));
>> >
>> > CREATE INDEX IDX_ODE_JOB_TS ON ode_job(ts);
>> > CREATE INDEX IDX_ODE_JOB_NODEID ON ode_job(nodeid);
>> >
>> >
>> > create table BPEL_ACTIVITY_RECOVERY (ID bigint GENERATED ALWAYS AS
>> > IDENTITY
>> > not null, PIID bigint, AID bigint, CHANNEL varchar(255), REASON
>> > varchar(255), DATE_TIME timestamp, LDATA_ID bigint, ACTIONS
>> > varchar(255),
>> > RETRIES integer, INSERT_TIME timestamp, MLOCK integer not null, primary
>> > key
>> > (ID));
>> > create table BPEL_CORRELATION_PROP (ID bigint GENERATED ALWAYS AS
>> > IDENTITY
>> > not null, NAME varchar(255), NAMESPACE varchar(255), VALUE
>> varchar(255),
>> > CORR_SET_ID bigint, INSERT_TIME timestamp, MLOCK integer not null,
>> > primary
>> > key (ID));
>> > create table BPEL_CORRELATION_SET (ID bigint GENERATED ALWAYS AS
>> > IDENTITY
>> > not null, VALUE varchar(255), CORR_SET_NAME varchar(255), SCOPE_ID
>> > bigint,
>> > PIID bigint, PROCESS_ID bigint, INSERT_TIME timestamp, MLOCK integer
>> not
>> > null, primary key (ID));
>> > create table BPEL_CORRELATOR (ID bigint GENERATED ALWAYS AS IDENTITY
>> not
>> > null, CID varchar(255), PROCESS_ID bigint, INSERT_TIME timestamp, MLOCK
>> > integer not null, primary key (ID));
>> > create table BPEL_CORRELATOR_MESSAGE_CKEY (ID bigint GENERATED ALWAYS
>> AS
>> > IDENTITY not null, CKEY varchar(255), CORRELATOR_MESSAGE_ID bigint,
>> > INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
>> > create table BPEL_EVENT (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null,
>> > IID bigint, PID bigint, TSTAMP timestamp, TYPE varchar(255), DETAIL
>> > clob(32000), LDATA_ID bigint, SID bigint, INSERT_TIME timestamp, MLOCK
>> > integer not null, primary key (ID));
>> > create table BPEL_FAULT (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null,
>> > FAULTNAME varchar(255), LDATA_ID bigint, EXPLANATION varchar(4000),
>> > LINE_NUM
>> > integer, AID integer, INSERT_TIME timestamp, MLOCK integer not null,
>> > primary
>> > key (ID));
>> > create table BPEL_INSTANCE (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null,
>> > INSTANTIATING_CORRELATOR bigint, FAULT bigint, JACOB_STATE bigint,
>> > PREVIOUS_STATE smallint, PROCESS_ID bigint, STATE smallint,
>> > LAST_ACTIVE_DT
>> > timestamp, SEQUENCE bigint, FAILURE_COUNT integer, FAILURE_DT
>> timestamp,
>> > INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
>> > create table BPEL_MESSAGE (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null,
>> > MEX bigint, TYPE varchar(255), DATA bigint, INSERT_TIME timestamp,
>> MLOCK
>> > integer not null, primary key (ID));
>> > create table BPEL_MESSAGE_EXCHANGE (ID bigint GENERATED ALWAYS AS
>> > IDENTITY
>> > not null, PORT_TYPE varchar(255), CHANNEL_NAME varchar(255), CLIENTKEY
>> > varchar(255), LDATA_EPR_ID bigint, LDATA_CEPR_ID bigint, REQUEST
>> bigint,
>> > RESPONSE bigint, INSERT_DT timestamp, OPERATION varchar(255), STATE
>> > varchar(255), PROCESS bigint, PIID bigint, DIR char(1), PLINK_MODELID
>> > integer, PATTERN varchar(255), CORR_STATUS varchar(255), FAULT_TYPE
>> > varchar(255), FAULT_EXPL varchar(255), CALLEE varchar(255), PARTNERLINK
>> > bigint, PIPED_ID varchar(255), INSERT_TIME timestamp, MLOCK integer not
>> > null, primary key (ID));
>> > create table BPEL_MEX_PROPS (MEX bigint not null, VALUE varchar(8000),
>> > NAME
>> > varchar(255) not null, primary key (MEX, NAME));
>> > create table BPEL_PLINK_VAL (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null, PARTNER_LINK varchar(100) not null, PARTNERROLE varchar(100),
>> > MYROLE_EPR bigint, PARTNERROLE_EPR bigint, PROCESS bigint, SCOPE
>> bigint,
>> > SVCNAME varchar(255), MYROLE varchar(100), MODELID integer, MYSESSIONID
>> > varchar(255), PARTNERSESSIONID varchar(255), INSERT_TIME timestamp,
>> > MLOCK
>> > integer not null, primary key (ID));
>> > create table BPEL_PROCESS (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null,
>> > PROCID varchar(255) not null unique, deployer varchar(255), deploydate
>> > timestamp, type_name varchar(255), type_ns varchar(255), version
>> bigint,
>> > ACTIVE_ smallint, guid varchar(255), INSERT_TIME timestamp, MLOCK
>> > integer
>> > not null, primary key (ID));
>> > create table BPEL_SCOPE (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null,
>> > PIID bigint, PARENT_SCOPE_ID bigint, STATE varchar(255) not null, NAME
>> > varchar(255) not null, MODELID integer, INSERT_TIME timestamp, MLOCK
>> > integer
>> > not null, primary key (ID));
>> > create table BPEL_SELECTORS (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null, PIID bigint not null, SELGRPID varchar(255) not null, IDX integer
>> > not
>> > null, CORRELATION_KEY varchar(255) not null, CORRELATOR bigint not
>> null,
>> > INSERT_TIME timestamp, MLOCK integer not null, primary key (ID), unique
>> > (CORRELATION_KEY, CORRELATOR));
>> > create table BPEL_UNMATCHED (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null, MEX bigint, CORRELATION_KEY varchar(255), CORRELATOR bigint not
>> > null,
>> > INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
>> > create table BPEL_XML_DATA (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null,
>> > LDATA_ID bigint, NAME varchar(255) not null, SCOPE_ID bigint, PIID
>> > bigint,
>> > IS_SIMPLE_TYPE smallint, INSERT_TIME timestamp, MLOCK integer not null,
>> > primary key (ID));
>> > create table LARGE_DATA (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null,
>> > BIN_DATA blob(2000M) NOT LOGGED, INSERT_TIME timestamp, MLOCK integer
>> > not
>> > null, primary key (ID));
>> > create table VAR_PROPERTY (ID bigint GENERATED ALWAYS AS IDENTITY not
>> > null,
>> > XML_DATA_ID bigint, PROP_VALUE varchar(255), PROP_NAME varchar(255) not
>> > null, INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
>> > alter table BPEL_ACTIVITY_RECOVERY add constraint FK94D24A7B14020712
>> > foreign
>> > key (LDATA_ID) references LARGE_DATA;
>> > alter table BPEL_ACTIVITY_RECOVERY add constraint FK94D24A7BA821BBE1
>> > foreign
>> > key (PIID) references BPEL_INSTANCE;
>> > alter table BPEL_CORRELATION_PROP add constraint FK4EC9DDAA4D0B7982
>> > foreign
>> > key (CORR_SET_ID) references BPEL_CORRELATION_SET;
>> > alter table BPEL_CORRELATION_SET add constraint FKB838191BA821BBE1
>> > foreign
>> > key (PIID) references BPEL_INSTANCE;
>> > alter table BPEL_CORRELATION_SET add constraint FKB838191B6D49C363
>> > foreign
>> > key (SCOPE_ID) references BPEL_SCOPE;
>> > alter table BPEL_CORRELATION_SET add constraint FKB838191BE15A2343
>> > foreign
>> > key (PROCESS_ID) references BPEL_PROCESS;
>> > create index IDX_CORRELATOR_CID on BPEL_CORRELATOR (CID);
>> > alter table BPEL_CORRELATOR add constraint FKF50EFA33E15A2343 foreign
>> > key
>> > (PROCESS_ID) references BPEL_PROCESS;
>> > create index IDX_BPEL_CORRELATOR_MESSAGE_CKEY on
>> > BPEL_CORRELATOR_MESSAGE_CKEY (CKEY);
>> > alter table BPEL_CORRELATOR_MESSAGE_CKEY add constraint
>> > FK8997F700EEFA7470
>> > foreign key (CORRELATOR_MESSAGE_ID) references BPEL_UNMATCHED;
>> > alter table BPEL_EVENT add constraint FKAA6D673014020712 foreign key
>> > (LDATA_ID) references LARGE_DATA;
>> > alter table BPEL_EVENT add constraint FKAA6D6730A7EED251 foreign key
>> > (IID)
>> > references BPEL_INSTANCE;
>> > alter table BPEL_EVENT add constraint FKAA6D6730C831CBE3 foreign key
>> > (PID)
>> > references BPEL_PROCESS;
>> > alter table BPEL_FAULT add constraint FKAA722EB814020712 foreign key
>> > (LDATA_ID) references LARGE_DATA;
>> > alter table BPEL_INSTANCE add constraint FKE1DED41FDD43DBE1 foreign key
>> > (INSTANTIATING_CORRELATOR) references BPEL_CORRELATOR;
>> > alter table BPEL_INSTANCE add constraint FKE1DED41F6B66C85F foreign key
>> > (JACOB_STATE) references LARGE_DATA;
>> > alter table BPEL_INSTANCE add constraint FKE1DED41F1F3C9CB7 foreign key
>> > (FAULT) references BPEL_FAULT;
>> > alter table BPEL_INSTANCE add constraint FKE1DED41FE15A2343 foreign key
>> > (PROCESS_ID) references BPEL_PROCESS;
>> > alter table BPEL_MESSAGE add constraint FK4FA7231DCA00A413 foreign key
>> > (MEX)
>> > references BPEL_MESSAGE_EXCHANGE;
>> > alter table BPEL_MESSAGE add constraint FK4FA7231DEB2C9ED8 foreign key
>> > (DATA) references LARGE_DATA;
>> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD0589D7A75B
>> > foreign
>> > key (LDATA_CEPR_ID) references LARGE_DATA;
>> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05BBA250D
>> > foreign
>> > key (PARTNERLINK) references BPEL_PLINK_VAL;
>> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05A821BBE1
>> > foreign
>> > key (PIID) references BPEL_INSTANCE;
>> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05C29670AA
>> > foreign
>> > key (LDATA_EPR_ID) references LARGE_DATA;
>> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05B5BD38C7
>> > foreign
>> > key (PROCESS) references BPEL_PROCESS;
>> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05FF451031
>> > foreign
>> > key (RESPONSE) references BPEL_MESSAGE;
>> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD0554DCEE7F
>> > foreign
>> > key (REQUEST) references BPEL_MESSAGE;
>> > alter table BPEL_MEX_PROPS add constraint FK203CAFC7CA00A413 foreign
>> key
>> > (MEX) references BPEL_MESSAGE_EXCHANGE;
>> > alter table BPEL_PLINK_VAL add constraint FK7D71E742F2BC5EB8 foreign
>> key
>> > (MYROLE_EPR) references LARGE_DATA;
>> > alter table BPEL_PLINK_VAL add constraint FK7D71E74290D95ED1 foreign
>> key
>> > (SCOPE) references BPEL_SCOPE;
>> > alter table BPEL_PLINK_VAL add constraint FK7D71E742B5BD38C7 foreign
>> key
>> > (PROCESS) references BPEL_PROCESS;
>> > alter table BPEL_PLINK_VAL add constraint FK7D71E742D4B651D4 foreign
>> key
>> > (PARTNERROLE_EPR) references LARGE_DATA;
>> > alter table BPEL_SCOPE add constraint FKAB2A32EAA821BBE1 foreign key
>> > (PIID)
>> > references BPEL_INSTANCE;
>> > alter table BPEL_SCOPE add constraint FKAB2A32EA42B20B58 foreign key
>> > (PARENT_SCOPE_ID) references BPEL_SCOPE;
>> > create index IDX_SELECTOR_CORRELATOR on BPEL_SELECTORS (CORRELATOR);
>> > create index IDX_SELECTOR_CKEY on BPEL_SELECTORS (CORRELATION_KEY);
>> > create index IDX_SELECTOR_SELGRPID on BPEL_SELECTORS (SELGRPID);
>> > alter table BPEL_SELECTORS add constraint FKF1F83A0AA821BBE1 foreign
>> key
>> > (PIID) references BPEL_INSTANCE;
>> > alter table BPEL_SELECTORS add constraint FKF1F83A0A875201C9 foreign
>> key
>> > (CORRELATOR) references BPEL_CORRELATOR;
>> > create index IDX_UNMATCHED_CKEY on BPEL_UNMATCHED (CORRELATION_KEY);
>> > create index IDX_UNMATCHED_CORRELATOR on BPEL_UNMATCHED (CORRELATOR);
>> > alter table BPEL_UNMATCHED add constraint FKF0663E01CA00A413 foreign
>> key
>> > (MEX) references BPEL_MESSAGE_EXCHANGE;
>> > alter table BPEL_UNMATCHED add constraint FKF0663E01875201C9 foreign
>> key
>> > (CORRELATOR) references BPEL_CORRELATOR;
>> > alter table BPEL_XML_DATA add constraint FKB7D47E7C14020712 foreign key
>> > (LDATA_ID) references LARGE_DATA;
>> > alter table BPEL_XML_DATA add constraint FKB7D47E7CA821BBE1 foreign key
>> > (PIID) references BPEL_INSTANCE;
>> > alter table BPEL_XML_DATA add constraint FKB7D47E7C6D49C363 foreign key
>> > (SCOPE_ID) references BPEL_SCOPE;
>> > alter table VAR_PROPERTY add constraint FK9C1E2C0DA48E25F2 foreign key
>> > (XML_DATA_ID) references BPEL_XML_DATA;
>> > create table hibernate_unique_key ( next_hi integer );
>> > insert into hibernate_unique_key values ( 0 );
>> > create table STORE_DU (NAME varchar(255) not null, deployer
>> > varchar(255),
>> > DEPLOYDT timestamp, DIR varchar(255), primary key (NAME));
>> > create table STORE_PROCESS (PID varchar(255) not null, DU varchar(255),
>> > TYPE
>> > varchar(255), version bigint, STATE varchar(255), primary key (PID));
>> > create table STORE_PROCESS_PROP (propId varchar(255) not null, value
>> > varchar(255), name varchar(255) not null, primary key (propId, name));
>> > create table STORE_VERSIONS (ID integer not null, VERSION bigint,
>> > primary
>> > key (ID));
>> > alter table STORE_PROCESS add constraint FKA83900D1BFFFC58C foreign key
>> > (DU)
>> > references STORE_DU;
>> > alter table STORE_PROCESS_PROP add constraint FKFD6C2E119ADDA5CB
>> foreign
>> > key
>> > (propId) references STORE_PROCESS;
>> >
>> >
>> > (3) configure a DB2 user
>> >
>> > - there should already be a user configured (e.g. username)
>> > - set the password of this user within the windows user management
>> (e.g.
>> > password)
>> >
>> >
>> > ------------------
>> > ### db2jcc.jar ###
>> > ------------------
>> >
>> > (4) copy db2jcc.jar to [JBOSS_HOME]\server\[server]\lib\db2jcc.jar
>> >
>> >
>> > ----------------------------
>> > ### ode-axis2.properties ###
>> > ----------------------------
>> >
>> > (5) create a file named ode-axis2.properties within the directory
>> > [JBOSS_HOME]\server\[server]\deploy\ode.war\WEB-INF\conf
>> >
>> > (6) edit ode-axis2.properties
>> >
>> > ode-axis2.db.mode=EXTERNAL
>> > ode-axis2.db.ext.dataSource=java:[ODE JNDI NAME, e.g. ODEDS]
>> >
>> >
>> > ------------------
>> > ### db2-ds.xml ###
>> > ------------------
>> >
>> > (7) copy [JBOSS_HOME]\docs\examples\jca\db2-ds.xml to
>> > [JBOSS_HOME]\server\[server]\deploy\db2-ds.xml
>> >
>> > (8) edit db2-ds.xml
>> >
>> > <datasources>
>> >  <local-tx-datasource>
>> >    <jndi-name>[ODE JNDI NAME, e.g. ODEDS]</jndi-name>
>> >    <connection-url>jdbc:db2://localhost:50000/[DATABASENAME, e.g.
>> > testode]</connection-url>
>> >    <driver-class>com.ibm.db2.jcc.DB2Driver</driver-class>
>> >    <user-name>[DBUSER, e.g. username - you must not use
>> > db2admin]</user-name>
>> >    <password>[DBUSERPASSWORD, e.g. password - it must not be
>> > blank]</password>
>> >    <min-pool-size>10</min-pool-size>
>> >        <!-- sql to call when connection is created
>> >        <new-connection-sql>some arbitrary sql</new-connection-sql>
>> >        -->
>> >
>> >        <!-- sql to call on an existing pooled connection when it is
>> > obtained from pool
>> >        <check-valid-connection-sql>some arbitrary
>> > sql</check-valid-connection-sql>
>> >        -->
>> >
>> >      <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml
>> > (optional) -->
>> >      <metadata>
>> >         <type-mapping>DB2</type-mapping>
>> >      </metadata>
>> >  </local-tx-datasource>
>> >
>> > </datasources>
>> >
>> >
>> > -----------------------
>> > ### simple scheduler ###
>> > -----------------------
>> >
>> > (9) copy the modified simple scheduler jar to
>> > [JBOSS_HOME]\server\[server]\deploy\ode.war\WEB-INF\lib
>> >
>> > - in our case: ode-scheduler-simple-1.1.1-SNAPSHOT.jar
>> >
>> >
>> > ----------------------------------------------
>> > ### Modification within class JdbcDelegate ###
>> > ----------------------------------------------
>> >
>> > /*
>> >  * Licensed to the Apache Software Foundation (ASF) under one
>> >  * or more contributor license agreements.  See the NOTICE file
>> >  * distributed with this work for additional information
>> >  * regarding copyright ownership.  The ASF licenses this file
>> >  * to you under the Apache License, Version 2.0 (the
>> >  * "License"); you may not use this file except in compliance
>> >  * with the License.  You may obtain a copy of the License at
>> >  *
>> >  *    http://www.apache.org/licenses/LICENSE-2.0
>> >  *
>> >  * Unless required by applicable law or agreed to in writing,
>> >  * software distributed under the License is distributed on an
>> >  * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
>> >  * KIND, either express or implied.  See the License for the
>> >  * specific language governing permissions and limitations
>> >  * under the License.
>> >  *
>> >  * UPDATED by Stefan Jakoubi and Simon Tjoa
>> >  * PERFORMED CHANGES:
>> >  * This class has been modified by Stefan Jakoubi and Simon Tjoa
>> >  *  (Secure Business Austria) to enable the use of the scalar function
>> >  *  MOD within prepared statements. Thus, the scalar function MOD has
>> >  *  been replaced with the formula DB2 uses internally to determine MOD
>> >  *  results: MOD(x, y) = x - y * (x / y)
>> >  */
>> >
>> > package org.apache.ode.scheduler.simple;
>> >
>> > import java.io.ByteArrayOutputStream;
>> > import java.io.ObjectInputStream;
>> > import java.io.Serializable;
>> > import java.sql.Connection;
>> > import java.sql.DatabaseMetaData;
>> > import java.sql.PreparedStatement;
>> > import java.sql.ResultSet;
>> > import java.sql.SQLException;
>> > import java.util.ArrayList;
>> > import java.util.List;
>> > import java.util.Map;
>> >
>> > import javax.sql.DataSource;
>> >
>> > import org.apache.commons.logging.Log;
>> > import org.apache.commons.logging.LogFactory;
>> > import org.apache.ode.utils.StreamUtils;
>> >
>> > /**
>> >  * JDBC-based implementation of the {@link DatabaseDelegate} interface.
>> >  * Should work with most reasonably behaved databases.
>> >  *
>> >  * @author Maciej Szefler ( m s z e f l e r @ g m a i l . c o m )
>> >  *         modified by Stefan Jakoubi and Simon Tjoa
>> >  */
>> > public class JdbcDelegate implements DatabaseDelegate {
>> >
>> >    private static final Log __log =
>> > LogFactory.getLog(JdbcDelegate.class);
>> >
>> >    private static final String DELETE_JOB = "delete from ODE_JOB where
>> > jobid = ? and nodeid = ?";
>> >
>> >    private static final String UPDATE_REASSIGN = "update ODE_JOB set
>> > nodeid
>> > = ?, scheduled = 0 where nodeid = ?";
>> >
>> >    private static final String UPGRADE_JOB_DEFAULT = "update ODE_JOB set nodeid = ? where nodeid is null and scheduled = 0 "
>> >            + "and mod(ts,?) = ? and ts < ?";
>> >
>> >    private static final String UPGRADE_JOB_SQLSERVER = "update ODE_JOB set nodeid = ? where nodeid is null and scheduled = 0 "
>> >        + "and (ts % ?) = ? and ts < ?";
>> >
>> >    // UPDATED by Stefan Jakoubi and Simon Tjoa
>> >    private static final String UPGRADE_JOB_DB2 = "update ODE_JOB set nodeid = ? where nodeid is null and scheduled = 0 " +
>> >        " and (BIGINT(ts - ? * BIGINT(ts/?)) = ?) and ts < ? ";
>> >
>> >    private static final String SAVE_JOB = "insert into ODE_JOB "
>> >            + " (jobid, nodeid, ts, scheduled, transacted, details)
>> > values(?, ?, ?, ?, ?, ?)";
>> >
>> >    private static final String GET_NODEIDS = "select distinct nodeid
>> > from
>> > ODE_JOB";
>> >
>> >    private static final String SCHEDULE_IMMEDIATE = "select jobid, ts,
>> > transacted, scheduled, details from ODE_JOB "
>> >            + "where nodeid = ? and scheduled = 0 and ts < ? order by
>> > ts";
>> >
>> >    private static final String UPDATE_SCHEDULED = "update ODE_JOB set
>> > scheduled = 1 where jobid in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
>> >
>> >    private static final int UPDATE_SCHEDULED_SLOTS = 10;
>> >
>> >    private DataSource _ds;
>> >
>> >    private Dialect _dialect;
>> >
>> >    public JdbcDelegate(DataSource ds) {
>> >        _ds = ds;
>> >        _dialect = guessDialect();
>> >    }
>> >
>> >    public boolean deleteJob(String jobid, String nodeId) throws
>> > DatabaseException {
>> >        if (__log.isDebugEnabled())
>> >            __log.debug("deleteJob " + jobid + " on node " + nodeId);
>> >
>> >        Connection con = null;
>> >        PreparedStatement ps = null;
>> >        try {
>> >            con = getConnection();
>> >            ps = con.prepareStatement(DELETE_JOB);
>> >            ps.setString(1, jobid);
>> >            ps.setString(2, nodeId);
>> >            return ps.executeUpdate() == 1;
>> >        } catch (SQLException se) {
>> >            throw new DatabaseException(se);
>> >        } finally {
>> >            close(ps);
>> >            close(con);
>> >        }
>> >    }
>> >
>> >    public List<String> getNodeIds() throws DatabaseException {
>> >        Connection con = null;
>> >        PreparedStatement ps = null;
>> >        try {
>> >            con = getConnection();
>> >            ps = con.prepareStatement(GET_NODEIDS,
>> > ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
>> >            ResultSet rs = ps.executeQuery();
>> >            ArrayList<String> nodes = new ArrayList<String>();
>> >            while (rs.next()) {
>> >                String nodeId = rs.getString(1);
>> >                if (nodeId != null)
>> >                    nodes.add(rs.getString(1));
>> >            }
>> >            if (__log.isDebugEnabled())
>> >                __log.debug("getNodeIds: " + nodes);
>> >            return nodes;
>> >        } catch (SQLException se) {
>> >            throw new DatabaseException(se);
>> >        } finally {
>> >            close(ps);
>> >            close(con);
>> >        }
>> >    }
>> >
>> >    public boolean insertJob(Job job, String nodeId, boolean loaded)
>> > throws
>> > DatabaseException {
>> >        if (__log.isDebugEnabled())
>> >            __log.debug("insertJob " + job.jobId + " on node " + nodeId
>> +
>> > "
>> > loaded=" + loaded);
>> >
>> >        Connection con = null;
>> >        PreparedStatement ps = null;
>> >        try {
>> >            con = getConnection();
>> >            ps = con.prepareStatement(SAVE_JOB);
>> >            ps.setString(1, job.jobId);
>> >            ps.setString(2, nodeId);
>> >            ps.setLong(3, job.schedDate);
>> >            ps.setInt(4, asInteger(loaded));
>> >            ps.setInt(5, asInteger(job.transacted));
>> >            ByteArrayOutputStream bos = new ByteArrayOutputStream();
>> >            try {
>> >                StreamUtils.write(bos, (Serializable) job.detail);
>> >            } catch (Exception ex) {
>> >                __log.error("Error serializing job detail: " +
>> > job.detail);
>> >                throw new DatabaseException(ex);
>> >            }
>> >            ps.setBytes(6, bos.toByteArray());
>> >            return ps.executeUpdate() == 1;
>> >        } catch (SQLException se) {
>> >            throw new DatabaseException(se);
>> >        } finally {
>> >            close(ps);
>> >            close(con);
>> >        }
>> >    }
>> >
>> >    @SuppressWarnings("unchecked")
>> >    public List<Job> dequeueImmediate(String nodeId, long maxtime, int
>> > maxjobs) throws DatabaseException {
>> >        ArrayList<Job> ret = new ArrayList<Job>(maxjobs);
>> >        Connection con = null;
>> >        PreparedStatement ps = null;
>> >        try {
>> >            con = getConnection();
>> >            ps = con.prepareStatement(SCHEDULE_IMMEDIATE);
>> >            ps.setString(1, nodeId);
>> >            ps.setLong(2, maxtime);
>> >            ps.setMaxRows(maxjobs);
>> >            ResultSet rs = ps.executeQuery();
>> >            while (rs.next()) {
>> >                Map<String, Object> details;
>> >                try {
>> >                    ObjectInputStream is = new
>> > ObjectInputStream(rs.getBinaryStream(5));
>> >                    details = (Map<String, Object>) is.readObject();
>> >                    is.close();
>> >                } catch (Exception e) {
>> >                    throw new DatabaseException("Error deserializing job
>> > details", e);
>> >                }
>> >                Job job = new Job(rs.getLong(2), rs.getString(1),
>> > asBoolean(rs.getInt(3)), details);
>> >                ret.add(job);
>> >            }
>> >            rs.close();
>> >            ps.close();
>> >
>> >            // mark jobs as scheduled, UPDATE_SCHEDULED_SLOTS at a time
>> >            int j = 0;
>> >            int updateCount = 0;
>> >            ps = con.prepareStatement(UPDATE_SCHEDULED);
>> >            for (int updates = 1; updates <= (ret.size() /
>> > UPDATE_SCHEDULED_SLOTS) + 1; updates++) {
>> >                for (int i = 1; i <= UPDATE_SCHEDULED_SLOTS; i++) {
>> >                    ps.setString(i, j < ret.size() ? ret.get(j).jobId :
>> > "");
>> >                    j++;
>> >                }
>> >                ps.execute();
>> >                updateCount += ps.getUpdateCount();
>> >            }
>> >            if (updateCount != ret.size()) {
>> >                throw new DatabaseException(
>> >                        "Updating scheduled jobs failed to update all
>> > jobs;
>> > expected=" + ret.size()
>> >                                + " actual=" + updateCount);
>> >            }
>> >        } catch (SQLException se) {
>> >            throw new DatabaseException(se);
>> >        } finally {
>> >            close(ps);
>> >            close(con);
>> >        }
>> >        return ret;
>> >    }
>> >
>> >    public int updateReassign(String oldnode, String newnode) throws
>> > DatabaseException {
>> >        if (__log.isDebugEnabled())
>> >            __log.debug("updateReassign from " + oldnode + " ---> " +
>> > newnode);
>> >        Connection con = null;
>> >        PreparedStatement ps = null;
>> >        try {
>> >            con = getConnection();
>> >            ps = con.prepareStatement(UPDATE_REASSIGN);
>> >            ps.setString(1, newnode);
>> >            ps.setString(2, oldnode);
>> >            return ps.executeUpdate();
>> >        } catch (SQLException se) {
>> >            throw new DatabaseException(se);
>> >        } finally {
>> >            close(ps);
>> >            close(con);
>> >        }
>> >    }
>> >
>> >    public int updateAssignToNode(String node, int i, int numNodes, long
>> > maxtime) throws DatabaseException {
>> >        if (__log.isDebugEnabled())
>> >            __log.debug("updateAsssignToNode node=" + node + " " + i +
>> > "/" +
>> > numNodes + " maxtime=" + maxtime);
>> >        Connection con = null;
>> >        PreparedStatement ps = null;
>> >        try {
>> >            con = getConnection();
>> >            // UPDATED by Stefan Jakoubi and Simon Tjoa
>> >            if (_dialect == Dialect.DB2) {
>> >               ps = con.prepareStatement(UPGRADE_JOB_DB2);
>> >               ps.setString(1, node);
>> >               ps.setInt(2, numNodes);
>> >               ps.setInt(3, numNodes);
>> >               ps.setInt(4, i);
>> >               ps.setLong(5, maxtime);
>> >            } else
>> >            if (_dialect == Dialect.SQLSERVER) {
>> >                // Pulled out so that a new prepared statement for DB2 could be introduced
>> >                ps = con.prepareStatement(UPGRADE_JOB_SQLSERVER);
>> >                ps.setString(1, node);
>> >                ps.setInt(2, numNodes);
>> >                ps.setInt(3, i);
>> >                ps.setLong(4, maxtime);
>> >            } else {
>> >                // Pulled out so that a new prepared statement for DB2 could be introduced
>> >                ps = con.prepareStatement(UPGRADE_JOB_DEFAULT);
>> >                ps.setString(1, node);
>> >                ps.setInt(2, numNodes);
>> >                ps.setInt(3, i);
>> >                ps.setLong(4, maxtime);
>> >            }
>> >
>> >             return ps.executeUpdate();
>> >        } catch (SQLException se) {
>> >            throw new DatabaseException(se);
>> >        } finally {
>> >            close(ps);
>> >            close(con);
>> >        }
>> >    }
>> >
>> >    private Connection getConnection() throws SQLException {
>> >        return _ds.getConnection();
>> >    }
>> >
>> >    private int asInteger(boolean value) {
>> >        return (value ? 1 : 0);
>> >    }
>> >
>> >    private boolean asBoolean(int value) {
>> >        return (value != 0);
>> >    }
>> >
>> >    private void close(PreparedStatement ps) {
>> >        if (ps != null) {
>> >            try {
>> >                ps.close();
>> >            } catch (Exception e) {
>> >                __log.warn("Exception while closing prepared statement",
>> > e);
>> >            }
>> >        }
>> >    }
>> >
>> >    private void close(Connection con) {
>> >        if (con != null) {
>> >            try {
>> >                con.close();
>> >            } catch (Exception e) {
>> >                __log.warn("Exception while closing connection", e);
>> >            }
>> >        }
>> >    }
>> >
>> >    private Dialect guessDialect() {
>> >        Dialect d = Dialect.UNKNOWN;
>> >        Connection con = null;
>> >        try {
>> >            con = getConnection();
>> >            DatabaseMetaData metaData = con.getMetaData();
>> >            if (metaData != null) {
>> >                String dbProductName =
>> metaData.getDatabaseProductName();
>> >                int dbMajorVer = metaData.getDatabaseMajorVersion();
>> >                __log.debug("Using database " + dbProductName + " major
>> > version " + dbMajorVer);
>> >                if (dbProductName.indexOf("DB2") >= 0) {
>> >                    d = Dialect.DB2;
>> >                } else if (dbProductName.indexOf("Derby") >= 0) {
>> >                    d = Dialect.DERBY;
>> >                } else if (dbProductName.indexOf("Firebird") >= 0) {
>> >                    d = Dialect.FIREBIRD;
>> >                } else if (dbProductName.indexOf("HSQL") >= 0) {
>> >                    d = Dialect.HSQL;
>> >                } else if (dbProductName.indexOf("Microsoft SQL") >= 0)
>> {
>> >                    d = Dialect.SQLSERVER;
>> >                } else if (dbProductName.indexOf("MySQL") >= 0) {
>> >                    d = Dialect.MYSQL;
>> >                } else if (dbProductName.indexOf("Sybase") >= 0) {
>> >                    d = Dialect.SYBASE;
>> >                }
>> >            }
>> >        } catch (SQLException e) {
>> >            __log.warn("Unable to determine database dialect", e);
>> >        } finally {
>> >            close(con);
>> >        }
>> >        __log.info("Using database dialect: " + d);
>> >        return d;
>> >    }
>> >
>> >    enum Dialect {
>> >        DB2, DERBY, FIREBIRD, HSQL, MYSQL, ORACLE, SQLSERVER, SYBASE,
>> > UNKNOWN
>> >     }
>> >
>> > }
>> >
>> >
>> >
>> >
>> >
>> >
>> > Stefan Jakoubi wrote:
>> > >
>> > > Hi Mathieu,
>> > >
>> > > thanks for your fast reply!
>> > >
>> > >> We use XDoclet annotations in the source code to generate the
>> > mapping. So
>> > >> you'll have to change those if you want to alter it. Alternatively
>> > you
>> > >> can
>> > >> run on the generated mappings but that's probably a pain (updates).
>> > But
>> > >> why
>> > >> do you want to change the mapping?
>> > >
>> > > Furthermore, thank you for outlining the "where to find" - this helps
>> > us a
>> > > lot!!!
>> > >
>> > > We unfortunately have to alter the tables to conform to our partner's
>> > > requirements:
>> > > (1) project specific tablename prefix
>> > > (2) "ID" column not allowed -> "tablename_ID"
>> > > (3) Reserved words (DB2) such as "TYPE" are not allowed as column
>> name
>> > >
>> > >
>> > > Question: shall I post/upload the changes we have made within the
>> > simple
>> > > scheduler?
>> > >
>> > > Thanks,
>> > > Stefan
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > Matthieu Riou-5 wrote:
>> > >>
>> > >> On Fri, Apr 25, 2008 at 4:26 AM, Stefan Jakoubi <
>> > >> sjakoubi@securityresearch.at> wrote:
>> > >>
>> > >>> Dear ODE community,
>> > >>>
>> > >>>
>> > >>>
>> > >>> in this mail I (a) follow up a topic raised from a colleague
>> > >>> (http://ode.markmail.org/message/gyoxq5cz3uruizoc?q=tjoa) of mine
>> > and
>> > >>> (b) raise myself a question concerning configuring ODE for DB2.
>> > >>>
>> > >>>
>> > >>>
>> > >>> @ (a):
>> > >>>
>> > >>> When migrating ODE to DB2 there is a problem within the simple
>> > >>> scheduler. In particular, the scalar function MOD(x, y) within a
>> > >>> prepared statement throws an awful exception.
>> > >>>
>> > >>> The solution is to replace the MOD(x, y) within the prepared
>> > statement
>> > >>> with: x - y * (x / y). This is according to how DB2 internally
>> > >>> determines the result for the scalar function MOD.
>> > >>>
>> > >>>
>> > >> Thanks for letting us know.
>> > >>
>> > >>
>> > >>>
>> > >>>
>> > >>> @ (b):
>> > >>>
>> > >>> Can anybody please provide a complete table schema ODE requires?
>> > >>>
>> > >>
>> > >> You'll get several ones for different databases in
>> > >> dao-hibernate-db/target
>> > >> after a build.
>> > >>
>> > >>
>> > >>>
>> > >>> Does anybody know where to configure all required Hibernate
>> mappings
>> > so
>> > >>> that ODE table names may be altered?
>> > >>>
>> > >>
>> > >> We use XDoclet annotations in the source code to generate the
>> > mapping. So
>> > >> you'll have to change those if you want to alter it. Alternatively
>> > you
>> > >> can
>> > >> run on the generated mappings but that's probably a pain (updates).
>> > But
>> > >> why
>> > >> do you want to change the mapping?
>> > >>
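As background, XDoclet's Hibernate doclet reads Javadoc tags such as @hibernate.class and @hibernate.property on the DAO classes and regenerates the .hbm.xml mapping files at build time, so renaming a table or column means editing tags of roughly the following shape. This is only an illustrative sketch, not ODE's actual DAO source; the table and column names are borrowed from the schema listed elsewhere in this thread.

  /**
   * @hibernate.class table="BPEL_PROCESS"
   */
  public class HProcessExample {
      private Long _id;
      private String _processId;

      /**
       * @hibernate.id generator-class="native" column="ID"
       */
      public Long getId() { return _id; }
      public void setId(Long id) { _id = id; }

      /**
       * @hibernate.property column="PROCID"
       */
      public String getProcessId() { return _processId; }
      public void setProcessId(String processId) { _processId = processId; }
  }

Changing the table attribute (or a column attribute) here and rebuilding regenerates the corresponding mapping file, which is usually less error-prone than hand-editing the generated .hbm.xml.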
>> > >>
>> > >>>
>> > >>> When switching persistence to Hibernate, is it true that openjpa is
>> > not
>> > >>> used any more (and thus, no jpa mappings have to be modified in
>> case
>> > >>> that I alter table names?)
>> > >>>
>> > >>>
>> > >> Right, it's either Hibernate or OpenJPA; they don't coexist.
>> > >>
>> > >>
>> > >>>
>> > >>>
>> > >>> Finally I want to tell that as soon as we get a solution for
>> > >>> JBOSS/ODE/DB2 we will contribute at Jira!
>> > >>>
>> > >>>
>> > >> Thanks!
>> > >>
>> > >> Matthieu
>> > >>
>> > >>
>> > >>>
>> > >>>
>> > >>> Thanks in advance & cheers,
>> > >>>
>> > >>> Stefan
>> > >>>
>> > >>>
>> > >>>
>> > >>>
>> > >>>
>> > >>>
>> > >>
>> > >>
>> > >
>> > >
>> >
>> > --
>> > View this message in context:
>> > http://www.nabble.com/Hibernate-mappings-tp16899339p16939634.html
>> > Sent from the Apache Ode User mailing list archive at Nabble.com.
>> >
>> >
>>
> 
> 

-- 
View this message in context: http://www.nabble.com/Hibernate-mappings-tp16899339p16953726.html
Sent from the Apache Ode User mailing list archive at Nabble.com.


Re: Hibernate mappings

Posted by Matthieu Riou <ma...@offthelip.org>.
Oh, almost forgot: could you create a Jira task for this and attach
everything as a text file? We need some sort of IP clearance and Jira makes
it easier to track this down.

Thanks!

On Mon, Apr 28, 2008 at 11:13 AM, Matthieu Riou <ma...@offthelip.org>
wrote:

> Cool, thanks a lot for contributing that back! I'll see how to add it to
> our doc and maybe update the scheduler.
>
> Cheers,
> Matthieu
>
>
> On Mon, Apr 28, 2008 at 7:04 AM, Stefan Jakoubi <
> sjakoubi@securityresearch.at> wrote:
>
> >
> > Hi Mathieu,
> >
> > I think we have found the solution - in fact 2 major changes are required to use
> > ODE
> > with DB2:
> > - Update of the simple scheduler
> > - Update of DB script
> >
> > Below, you can find a little HowTo for JBoss & ODE & DB2 - I hope
> > everything
> > works fine ;-)
> >
> > Cheers and thanks for your support,
> > Simon & Stefan
> >
> >
> > +++++++++++++++++++++++++++++++
> > +++++++++++++++++++++++++++++++
> > ++ HOW TO: JBOSS & ODE & DB2 ++
> > +++++++++++++++++++++++++++++++
> > +++++++++++++++++++++++++++++++
> >
> > *****************
> > * PREREQUISITES *
> > *****************
> >
> >  - JBoss (tested on JBoss-4.0.3SP1)
> >  - deployed ODE (tested with version 1.1.1)
> >  - DB2 (tested with version 9.x)
> >
> >
> > *********************************
> > * MODIFYING SIMPLE SCHEDULER JAR *
> > *********************************
> >
> > - There is a bug using the scalar function MOD within a prepared
> > statement
> > - Modify the JdbcDelegate class (see also end of this HowTo)
> > - Compile the code and make the jar
> >
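For clarity, the two node-assignment statements differ only in how the timestamp bucket is computed; both lines below are assembled verbatim from the UPGRADE_JOB_DEFAULT and UPGRADE_JOB_DB2 constants in the JdbcDelegate listing at the end of this HowTo:

  -- portable form, which DB2 rejects inside a prepared statement:
  update ODE_JOB set nodeid = ? where nodeid is null and scheduled = 0 and mod(ts,?) = ? and ts < ?

  -- DB2-friendly form, applying MOD(x, y) = x - y * (x / y) with integer division:
  update ODE_JOB set nodeid = ? where nodeid is null and scheduled = 0 and (BIGINT(ts - ? * BIGINT(ts/?)) = ?) and ts < ?

With integer division the two predicates select the same rows, e.g. MOD(17, 5) = 17 - 5 * (17 / 5) = 17 - 5 * 3 = 2. Note that the rewritten form binds the node count twice, which is why the DB2 branch of updateAssignToNode sets one more parameter than the other branches.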
> >
> > *****************
> > * CONFIGURATION *
> > *****************
> >
> > -------------------------
> > ### DB2 configuration ###
> > -------------------------
> >
> > (1) create database for ODE usage (e.g. testode)
> >
> > (2) execute the following statements
> >
> > -- Apache ODE - SimpleScheduler Database Schema
> > --
> > -- Apache Derby scripts by Maciej Szefler.
> > --
> > -- Scripts modified for use with DB2 by Stefan Jakoubi and Simon Tjoa
> > -- (-) All primary key ID columns modified for auto increment purposes
> > -> ID
> > bigint GENERATED ALWAYS AS IDENTITY not null
> > -- (-) Altered table LARGE_DATA -> BIN_DATA blob(2000M) NOT LOGGED -> as
> > DB2
> > only supports logging up to 1GB
> > --     Furthermore, DB2 only allows blob sizes up to 2GB - 1 Byte!!!
> >
> > CREATE TABLE ode_job (
> >  jobid CHAR(64)  NOT NULL DEFAULT '',
> >  ts BIGINT  NOT NULL DEFAULT 0,
> >  nodeid char(64),
> >  scheduled int  NOT NULL DEFAULT 0,
> >  transacted int  NOT NULL DEFAULT 0,
> >  details blob(4096),
> >  PRIMARY KEY(jobid));
> >
> > CREATE INDEX IDX_ODE_JOB_TS ON ode_job(ts);
> > CREATE INDEX IDX_ODE_JOB_NODEID ON ode_job(nodeid);
> >
> >
> > create table BPEL_ACTIVITY_RECOVERY (ID bigint GENERATED ALWAYS AS
> > IDENTITY
> > not null, PIID bigint, AID bigint, CHANNEL varchar(255), REASON
> > varchar(255), DATE_TIME timestamp, LDATA_ID bigint, ACTIONS
> > varchar(255),
> > RETRIES integer, INSERT_TIME timestamp, MLOCK integer not null, primary
> > key
> > (ID));
> > create table BPEL_CORRELATION_PROP (ID bigint GENERATED ALWAYS AS
> > IDENTITY
> > not null, NAME varchar(255), NAMESPACE varchar(255), VALUE varchar(255),
> > CORR_SET_ID bigint, INSERT_TIME timestamp, MLOCK integer not null,
> > primary
> > key (ID));
> > create table BPEL_CORRELATION_SET (ID bigint GENERATED ALWAYS AS
> > IDENTITY
> > not null, VALUE varchar(255), CORR_SET_NAME varchar(255), SCOPE_ID
> > bigint,
> > PIID bigint, PROCESS_ID bigint, INSERT_TIME timestamp, MLOCK integer not
> > null, primary key (ID));
> > create table BPEL_CORRELATOR (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null, CID varchar(255), PROCESS_ID bigint, INSERT_TIME timestamp, MLOCK
> > integer not null, primary key (ID));
> > create table BPEL_CORRELATOR_MESSAGE_CKEY (ID bigint GENERATED ALWAYS AS
> > IDENTITY not null, CKEY varchar(255), CORRELATOR_MESSAGE_ID bigint,
> > INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
> > create table BPEL_EVENT (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null,
> > IID bigint, PID bigint, TSTAMP timestamp, TYPE varchar(255), DETAIL
> > clob(32000), LDATA_ID bigint, SID bigint, INSERT_TIME timestamp, MLOCK
> > integer not null, primary key (ID));
> > create table BPEL_FAULT (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null,
> > FAULTNAME varchar(255), LDATA_ID bigint, EXPLANATION varchar(4000),
> > LINE_NUM
> > integer, AID integer, INSERT_TIME timestamp, MLOCK integer not null,
> > primary
> > key (ID));
> > create table BPEL_INSTANCE (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null,
> > INSTANTIATING_CORRELATOR bigint, FAULT bigint, JACOB_STATE bigint,
> > PREVIOUS_STATE smallint, PROCESS_ID bigint, STATE smallint,
> > LAST_ACTIVE_DT
> > timestamp, SEQUENCE bigint, FAILURE_COUNT integer, FAILURE_DT timestamp,
> > INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
> > create table BPEL_MESSAGE (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null,
> > MEX bigint, TYPE varchar(255), DATA bigint, INSERT_TIME timestamp, MLOCK
> > integer not null, primary key (ID));
> > create table BPEL_MESSAGE_EXCHANGE (ID bigint GENERATED ALWAYS AS
> > IDENTITY
> > not null, PORT_TYPE varchar(255), CHANNEL_NAME varchar(255), CLIENTKEY
> > varchar(255), LDATA_EPR_ID bigint, LDATA_CEPR_ID bigint, REQUEST bigint,
> > RESPONSE bigint, INSERT_DT timestamp, OPERATION varchar(255), STATE
> > varchar(255), PROCESS bigint, PIID bigint, DIR char(1), PLINK_MODELID
> > integer, PATTERN varchar(255), CORR_STATUS varchar(255), FAULT_TYPE
> > varchar(255), FAULT_EXPL varchar(255), CALLEE varchar(255), PARTNERLINK
> > bigint, PIPED_ID varchar(255), INSERT_TIME timestamp, MLOCK integer not
> > null, primary key (ID));
> > create table BPEL_MEX_PROPS (MEX bigint not null, VALUE varchar(8000),
> > NAME
> > varchar(255) not null, primary key (MEX, NAME));
> > create table BPEL_PLINK_VAL (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null, PARTNER_LINK varchar(100) not null, PARTNERROLE varchar(100),
> > MYROLE_EPR bigint, PARTNERROLE_EPR bigint, PROCESS bigint, SCOPE bigint,
> > SVCNAME varchar(255), MYROLE varchar(100), MODELID integer, MYSESSIONID
> > varchar(255), PARTNERSESSIONID varchar(255), INSERT_TIME timestamp,
> > MLOCK
> > integer not null, primary key (ID));
> > create table BPEL_PROCESS (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null,
> > PROCID varchar(255) not null unique, deployer varchar(255), deploydate
> > timestamp, type_name varchar(255), type_ns varchar(255), version bigint,
> > ACTIVE_ smallint, guid varchar(255), INSERT_TIME timestamp, MLOCK
> > integer
> > not null, primary key (ID));
> > create table BPEL_SCOPE (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null,
> > PIID bigint, PARENT_SCOPE_ID bigint, STATE varchar(255) not null, NAME
> > varchar(255) not null, MODELID integer, INSERT_TIME timestamp, MLOCK
> > integer
> > not null, primary key (ID));
> > create table BPEL_SELECTORS (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null, PIID bigint not null, SELGRPID varchar(255) not null, IDX integer
> > not
> > null, CORRELATION_KEY varchar(255) not null, CORRELATOR bigint not null,
> > INSERT_TIME timestamp, MLOCK integer not null, primary key (ID), unique
> > (CORRELATION_KEY, CORRELATOR));
> > create table BPEL_UNMATCHED (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null, MEX bigint, CORRELATION_KEY varchar(255), CORRELATOR bigint not
> > null,
> > INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
> > create table BPEL_XML_DATA (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null,
> > LDATA_ID bigint, NAME varchar(255) not null, SCOPE_ID bigint, PIID
> > bigint,
> > IS_SIMPLE_TYPE smallint, INSERT_TIME timestamp, MLOCK integer not null,
> > primary key (ID));
> > create table LARGE_DATA (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null,
> > BIN_DATA blob(2000M) NOT LOGGED, INSERT_TIME timestamp, MLOCK integer
> > not
> > null, primary key (ID));
> > create table VAR_PROPERTY (ID bigint GENERATED ALWAYS AS IDENTITY not
> > null,
> > XML_DATA_ID bigint, PROP_VALUE varchar(255), PROP_NAME varchar(255) not
> > null, INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
> > alter table BPEL_ACTIVITY_RECOVERY add constraint FK94D24A7B14020712
> > foreign
> > key (LDATA_ID) references LARGE_DATA;
> > alter table BPEL_ACTIVITY_RECOVERY add constraint FK94D24A7BA821BBE1
> > foreign
> > key (PIID) references BPEL_INSTANCE;
> > alter table BPEL_CORRELATION_PROP add constraint FK4EC9DDAA4D0B7982
> > foreign
> > key (CORR_SET_ID) references BPEL_CORRELATION_SET;
> > alter table BPEL_CORRELATION_SET add constraint FKB838191BA821BBE1
> > foreign
> > key (PIID) references BPEL_INSTANCE;
> > alter table BPEL_CORRELATION_SET add constraint FKB838191B6D49C363
> > foreign
> > key (SCOPE_ID) references BPEL_SCOPE;
> > alter table BPEL_CORRELATION_SET add constraint FKB838191BE15A2343
> > foreign
> > key (PROCESS_ID) references BPEL_PROCESS;
> > create index IDX_CORRELATOR_CID on BPEL_CORRELATOR (CID);
> > alter table BPEL_CORRELATOR add constraint FKF50EFA33E15A2343 foreign
> > key
> > (PROCESS_ID) references BPEL_PROCESS;
> > create index IDX_BPEL_CORRELATOR_MESSAGE_CKEY on
> > BPEL_CORRELATOR_MESSAGE_CKEY (CKEY);
> > alter table BPEL_CORRELATOR_MESSAGE_CKEY add constraint
> > FK8997F700EEFA7470
> > foreign key (CORRELATOR_MESSAGE_ID) references BPEL_UNMATCHED;
> > alter table BPEL_EVENT add constraint FKAA6D673014020712 foreign key
> > (LDATA_ID) references LARGE_DATA;
> > alter table BPEL_EVENT add constraint FKAA6D6730A7EED251 foreign key
> > (IID)
> > references BPEL_INSTANCE;
> > alter table BPEL_EVENT add constraint FKAA6D6730C831CBE3 foreign key
> > (PID)
> > references BPEL_PROCESS;
> > alter table BPEL_FAULT add constraint FKAA722EB814020712 foreign key
> > (LDATA_ID) references LARGE_DATA;
> > alter table BPEL_INSTANCE add constraint FKE1DED41FDD43DBE1 foreign key
> > (INSTANTIATING_CORRELATOR) references BPEL_CORRELATOR;
> > alter table BPEL_INSTANCE add constraint FKE1DED41F6B66C85F foreign key
> > (JACOB_STATE) references LARGE_DATA;
> > alter table BPEL_INSTANCE add constraint FKE1DED41F1F3C9CB7 foreign key
> > (FAULT) references BPEL_FAULT;
> > alter table BPEL_INSTANCE add constraint FKE1DED41FE15A2343 foreign key
> > (PROCESS_ID) references BPEL_PROCESS;
> > alter table BPEL_MESSAGE add constraint FK4FA7231DCA00A413 foreign key
> > (MEX)
> > references BPEL_MESSAGE_EXCHANGE;
> > alter table BPEL_MESSAGE add constraint FK4FA7231DEB2C9ED8 foreign key
> > (DATA) references LARGE_DATA;
> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD0589D7A75B
> > foreign
> > key (LDATA_CEPR_ID) references LARGE_DATA;
> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05BBA250D
> > foreign
> > key (PARTNERLINK) references BPEL_PLINK_VAL;
> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05A821BBE1
> > foreign
> > key (PIID) references BPEL_INSTANCE;
> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05C29670AA
> > foreign
> > key (LDATA_EPR_ID) references LARGE_DATA;
> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05B5BD38C7
> > foreign
> > key (PROCESS) references BPEL_PROCESS;
> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05FF451031
> > foreign
> > key (RESPONSE) references BPEL_MESSAGE;
> > alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD0554DCEE7F
> > foreign
> > key (REQUEST) references BPEL_MESSAGE;
> > alter table BPEL_MEX_PROPS add constraint FK203CAFC7CA00A413 foreign key
> > (MEX) references BPEL_MESSAGE_EXCHANGE;
> > alter table BPEL_PLINK_VAL add constraint FK7D71E742F2BC5EB8 foreign key
> > (MYROLE_EPR) references LARGE_DATA;
> > alter table BPEL_PLINK_VAL add constraint FK7D71E74290D95ED1 foreign key
> > (SCOPE) references BPEL_SCOPE;
> > alter table BPEL_PLINK_VAL add constraint FK7D71E742B5BD38C7 foreign key
> > (PROCESS) references BPEL_PROCESS;
> > alter table BPEL_PLINK_VAL add constraint FK7D71E742D4B651D4 foreign key
> > (PARTNERROLE_EPR) references LARGE_DATA;
> > alter table BPEL_SCOPE add constraint FKAB2A32EAA821BBE1 foreign key
> > (PIID)
> > references BPEL_INSTANCE;
> > alter table BPEL_SCOPE add constraint FKAB2A32EA42B20B58 foreign key
> > (PARENT_SCOPE_ID) references BPEL_SCOPE;
> > create index IDX_SELECTOR_CORRELATOR on BPEL_SELECTORS (CORRELATOR);
> > create index IDX_SELECTOR_CKEY on BPEL_SELECTORS (CORRELATION_KEY);
> > create index IDX_SELECTOR_SELGRPID on BPEL_SELECTORS (SELGRPID);
> > alter table BPEL_SELECTORS add constraint FKF1F83A0AA821BBE1 foreign key
> > (PIID) references BPEL_INSTANCE;
> > alter table BPEL_SELECTORS add constraint FKF1F83A0A875201C9 foreign key
> > (CORRELATOR) references BPEL_CORRELATOR;
> > create index IDX_UNMATCHED_CKEY on BPEL_UNMATCHED (CORRELATION_KEY);
> > create index IDX_UNMATCHED_CORRELATOR on BPEL_UNMATCHED (CORRELATOR);
> > alter table BPEL_UNMATCHED add constraint FKF0663E01CA00A413 foreign key
> > (MEX) references BPEL_MESSAGE_EXCHANGE;
> > alter table BPEL_UNMATCHED add constraint FKF0663E01875201C9 foreign key
> > (CORRELATOR) references BPEL_CORRELATOR;
> > alter table BPEL_XML_DATA add constraint FKB7D47E7C14020712 foreign key
> > (LDATA_ID) references LARGE_DATA;
> > alter table BPEL_XML_DATA add constraint FKB7D47E7CA821BBE1 foreign key
> > (PIID) references BPEL_INSTANCE;
> > alter table BPEL_XML_DATA add constraint FKB7D47E7C6D49C363 foreign key
> > (SCOPE_ID) references BPEL_SCOPE;
> > alter table VAR_PROPERTY add constraint FK9C1E2C0DA48E25F2 foreign key
> > (XML_DATA_ID) references BPEL_XML_DATA;
> > create table hibernate_unique_key ( next_hi integer );
> > insert into hibernate_unique_key values ( 0 );
> > create table STORE_DU (NAME varchar(255) not null, deployer
> > varchar(255),
> > DEPLOYDT timestamp, DIR varchar(255), primary key (NAME));
> > create table STORE_PROCESS (PID varchar(255) not null, DU varchar(255),
> > TYPE
> > varchar(255), version bigint, STATE varchar(255), primary key (PID));
> > create table STORE_PROCESS_PROP (propId varchar(255) not null, value
> > varchar(255), name varchar(255) not null, primary key (propId, name));
> > create table STORE_VERSIONS (ID integer not null, VERSION bigint,
> > primary
> > key (ID));
> > alter table STORE_PROCESS add constraint FKA83900D1BFFFC58C foreign key
> > (DU)
> > references STORE_DU;
> > alter table STORE_PROCESS_PROP add constraint FKFD6C2E119ADDA5CB foreign
> > key
> > (propId) references STORE_PROCESS;
> >
> >
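Assuming the statements above are saved to a script file (the file name below is only an example), steps (1) and (2) can be run from the DB2 command line processor along these lines:

  db2 CREATE DATABASE testode
  db2 CONNECT TO testode
  db2 -tvf ode-db2-schema.sql
  db2 CONNECT RESET

The -t option makes the CLP treat ';' as the statement terminator, which matches the script above; -v echoes each statement and -f reads them from the file.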
> > (3) configure a DB2 user
> >
> > - there should already be a user configured (e.g. username)
> > - set the password of this user within the windows user management (e.g.
> > password)
> >
> >
> > ------------------
> > ### db2jcc.jar ###
> > ------------------
> >
> > (4) copy db2jcc.jar to [JBOSS_HOME]\server\[server]\lib\db2jcc.jar
> >
> >
> > ----------------------------
> > ### ode-axis2.properties ###
> > ----------------------------
> >
> > (5) create a file named ode-axis2.properties within the directory
> > [JBOSS_HOME]\server\[server]\deploy\ode.war\WEB-INF\conf
> >
> > (6) edit ode-axis2.properties
> >
> > ode-axis2.db.mode=EXTERNAL
> > ode-axis2.db.ext.dataSource=java:[ODE JNDI NAME, e.g. ODEDS]
> >
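Filled in with the example JNDI name used in the db2-ds.xml step below (ODEDS), and following the java:[ODE JNDI NAME] pattern given above, the file would read:

  ode-axis2.db.mode=EXTERNAL
  ode-axis2.db.ext.dataSource=java:ODEDS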
> >
> > ------------------
> > ### db2-ds.xml ###
> > ------------------
> >
> > (7) copy [JBOSS_HOME]\docs\examples\jca\db2-ds.xml to
> > [JBOSS_HOME]\server\[server]\deploy\db2-ds.xml
> >
> > (8) edit db2-ds.xml
> >
> > <datasources>
> >  <local-tx-datasource>
> >    <jndi-name>[ODE JNDI NAME, e.g. ODEDS]</jndi-name>
> >    <connection-url>jdbc:db2://localhost:50000/[DATABASENAME, e.g.
> > testode]</connection-url>
> >    <driver-class>com.ibm.db2.jcc.DB2Driver</driver-class>
> >    <user-name>[DBUSER, e.g. username - you must not use
> > db2admin]</user-name>
> >    <password>[DBUSERPASSWORD, e.g. password - it must not be
> > blank]</password>
> >    <min-pool-size>10</min-pool-size>
> >        <!-- sql to call when connection is created
> >        <new-connection-sql>some arbitrary sql</new-connection-sql>
> >        -->
> >
> >        <!-- sql to call on an existing pooled connection when it is
> > obtained from pool
> >        <check-valid-connection-sql>some arbitrary
> > sql</check-valid-connection-sql>
> >        -->
> >
> >      <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml
> > (optional) -->
> >      <metadata>
> >         <type-mapping>DB2</type-mapping>
> >      </metadata>
> >  </local-tx-datasource>
> >
> > </datasources>
> >
> >
> > -----------------------
> > ### simple scheduler ###
> > -----------------------
> >
> > (9) copy the modified simple scheduler jar to
> > [JBOSS_HOME]\server\[server]\deploy\ode.war\WEB-INF\lib
> >
> > - in our case: ode-scheduler-simple-1.1.1-SNAPSHOT.jar
> >
> >
> > ----------------------------------------------
> > ### Modification within class JdbcDelegate ###
> > ----------------------------------------------
> >
> > /*
> >  * Licensed to the Apache Software Foundation (ASF) under one
> >  * or more contributor license agreements.  See the NOTICE file
> >  * distributed with this work for additional information
> >  * regarding copyright ownership.  The ASF licenses this file
> >  * to you under the Apache License, Version 2.0 (the
> >  * "License"); you may not use this file except in compliance
> >  * with the License.  You may obtain a copy of the License at
> >  *
> >  *    http://www.apache.org/licenses/LICENSE-2.0
> >  *
> >  * Unless required by applicable law or agreed to in writing,
> >  * software distributed under the License is distributed on an
> >  * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
> >  * KIND, either express or implied.  See the License for the
> >  * specific language governing permissions and limitations
> >  * under the License.
> >  *
> >  * UPDATED by Stefan Jakoubi and Simon Tjoa
> >  * PERFORMED CHANGES:
> >  * This class has been modified by Stefan Jakoubi and Simon Tjoa
> >  *  (Secure Business Austria) to enable the use of the scalar function
> > MOD
> >  *  within prepared statements. Thus, the scalar function MOD has been
> >  *  replaced with the formula DB2 uses internally to determine MOD results:
> >  *  MOD(x, y) = x - y * (x / y)
> >  */
> >
> > package org.apache.ode.scheduler.simple;
> >
> > import java.io.ByteArrayOutputStream;
> > import java.io.ObjectInputStream;
> > import java.io.Serializable;
> > import java.sql.Connection;
> > import java.sql.DatabaseMetaData;
> > import java.sql.PreparedStatement;
> > import java.sql.ResultSet;
> > import java.sql.SQLException;
> > import java.util.ArrayList;
> > import java.util.List;
> > import java.util.Map;
> >
> > import javax.sql.DataSource;
> >
> > import org.apache.commons.logging.Log;
> > import org.apache.commons.logging.LogFactory;
> > import org.apache.ode.utils.StreamUtils;
> >
> > /**
> >  * JDBC-based implementation of the {@link DatabaseDelegate} interface.
> > Should work with most
> >  * reasonably behaved databases.
> >  *
> >  * @author Maciej Szefler ( m s z e f l e r @ g m a i l . c o m )
> > modified
> > by Stefan Jakoubi and Simon Tjoa
> >  */
> > public class JdbcDelegate implements DatabaseDelegate {
> >
> >    private static final Log __log =
> > LogFactory.getLog(JdbcDelegate.class);
> >
> >    private static final String DELETE_JOB = "delete from ODE_JOB where
> > jobid = ? and nodeid = ?";
> >
> >    private static final String UPDATE_REASSIGN = "update ODE_JOB set
> > nodeid
> > = ?, scheduled = 0 where nodeid = ?";
> >
> >    private static final String UPGRADE_JOB_DEFAULT = "update ODE_JOB set
> > nodeid = ? where nodeid is null and scheduled = 0 "
> >            + "and mod(ts,?) = ? and ts < ?";
> >
> >    private static final String UPGRADE_JOB_SQLSERVER = "update ODE_JOB
> > set
> > nodeid = ? where nodeid is null and scheduled = 0 "
> >        + "and (ts % ?) = ? and ts < ?";
> >
> >    // UPDATED by Stefan Jakoubi and Simon Tjoa
> >    private static final String UPGRADE_JOB_DB2 = "update ODE_JOB set
> > nodeid
> > = ? where nodeid is null and scheduled = 0 " +
> >        " and (BIGINT(ts - ? * BIGINT(ts/?)) = ?) and ts < ? ";
> >
> >    private static final String SAVE_JOB = "insert into ODE_JOB "
> >            + " (jobid, nodeid, ts, scheduled, transacted, details)
> > values(?, ?, ?, ?, ?, ?)";
> >
> >    private static final String GET_NODEIDS = "select distinct nodeid
> > from
> > ODE_JOB";
> >
> >    private static final String SCHEDULE_IMMEDIATE = "select jobid, ts,
> > transacted, scheduled, details from ODE_JOB "
> >            + "where nodeid = ? and scheduled = 0 and ts < ? order by
> > ts";
> >
> >    private static final String UPDATE_SCHEDULED = "update ODE_JOB set
> > scheduled = 1 where jobid in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
> >
> >    private static final int UPDATE_SCHEDULED_SLOTS = 10;
> >
> >    private DataSource _ds;
> >
> >    private Dialect _dialect;
> >
> >    public JdbcDelegate(DataSource ds) {
> >        _ds = ds;
> >        _dialect = guessDialect();
> >    }
> >
> >    public boolean deleteJob(String jobid, String nodeId) throws
> > DatabaseException {
> >        if (__log.isDebugEnabled())
> >            __log.debug("deleteJob " + jobid + " on node " + nodeId);
> >
> >        Connection con = null;
> >        PreparedStatement ps = null;
> >        try {
> >            con = getConnection();
> >            ps = con.prepareStatement(DELETE_JOB);
> >            ps.setString(1, jobid);
> >            ps.setString(2, nodeId);
> >            return ps.executeUpdate() == 1;
> >        } catch (SQLException se) {
> >            throw new DatabaseException(se);
> >        } finally {
> >            close(ps);
> >            close(con);
> >        }
> >    }
> >
> >    public List<String> getNodeIds() throws DatabaseException {
> >        Connection con = null;
> >        PreparedStatement ps = null;
> >        try {
> >            con = getConnection();
> >            ps = con.prepareStatement(GET_NODEIDS,
> > ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
> >            ResultSet rs = ps.executeQuery();
> >            ArrayList<String> nodes = new ArrayList<String>();
> >            while (rs.next()) {
> >                String nodeId = rs.getString(1);
> >                if (nodeId != null)
> >                    nodes.add(rs.getString(1));
> >            }
> >            if (__log.isDebugEnabled())
> >                __log.debug("getNodeIds: " + nodes);
> >            return nodes;
> >        } catch (SQLException se) {
> >            throw new DatabaseException(se);
> >        } finally {
> >            close(ps);
> >            close(con);
> >        }
> >    }
> >
> >    public boolean insertJob(Job job, String nodeId, boolean loaded)
> > throws
> > DatabaseException {
> >        if (__log.isDebugEnabled())
> >            __log.debug("insertJob " + job.jobId + " on node " + nodeId +
> > "
> > loaded=" + loaded);
> >
> >        Connection con = null;
> >        PreparedStatement ps = null;
> >        try {
> >            con = getConnection();
> >            ps = con.prepareStatement(SAVE_JOB);
> >            ps.setString(1, job.jobId);
> >            ps.setString(2, nodeId);
> >            ps.setLong(3, job.schedDate);
> >            ps.setInt(4, asInteger(loaded));
> >            ps.setInt(5, asInteger(job.transacted));
> >            ByteArrayOutputStream bos = new ByteArrayOutputStream();
> >            try {
> >                StreamUtils.write(bos, (Serializable) job.detail);
> >            } catch (Exception ex) {
> >                __log.error("Error serializing job detail: " +
> > job.detail);
> >                throw new DatabaseException(ex);
> >            }
> >            ps.setBytes(6, bos.toByteArray());
> >            return ps.executeUpdate() == 1;
> >        } catch (SQLException se) {
> >            throw new DatabaseException(se);
> >        } finally {
> >            close(ps);
> >            close(con);
> >        }
> >    }
> >
> >    @SuppressWarnings("unchecked")
> >    public List<Job> dequeueImmediate(String nodeId, long maxtime, int
> > maxjobs) throws DatabaseException {
> >        ArrayList<Job> ret = new ArrayList<Job>(maxjobs);
> >        Connection con = null;
> >        PreparedStatement ps = null;
> >        try {
> >            con = getConnection();
> >            ps = con.prepareStatement(SCHEDULE_IMMEDIATE);
> >            ps.setString(1, nodeId);
> >            ps.setLong(2, maxtime);
> >            ps.setMaxRows(maxjobs);
> >            ResultSet rs = ps.executeQuery();
> >            while (rs.next()) {
> >                Map<String, Object> details;
> >                try {
> >                    ObjectInputStream is = new
> > ObjectInputStream(rs.getBinaryStream(5));
> >                    details = (Map<String, Object>) is.readObject();
> >                    is.close();
> >                } catch (Exception e) {
> >                    throw new DatabaseException("Error deserializing job
> > details", e);
> >                }
> >                Job job = new Job(rs.getLong(2), rs.getString(1),
> > asBoolean(rs.getInt(3)), details);
> >                ret.add(job);
> >            }
> >            rs.close();
> >            ps.close();
> >
> >            // mark jobs as scheduled, UPDATE_SCHEDULED_SLOTS at a time
> >            int j = 0;
> >            int updateCount = 0;
> >            ps = con.prepareStatement(UPDATE_SCHEDULED);
> >            for (int updates = 1; updates <= (ret.size() /
> > UPDATE_SCHEDULED_SLOTS) + 1; updates++) {
> >                for (int i = 1; i <= UPDATE_SCHEDULED_SLOTS; i++) {
> >                    ps.setString(i, j < ret.size() ? ret.get(j).jobId :
> > "");
> >                    j++;
> >                }
> >                ps.execute();
> >                updateCount += ps.getUpdateCount();
> >            }
> >            if (updateCount != ret.size()) {
> >                throw new DatabaseException(
> >                        "Updating scheduled jobs failed to update all
> > jobs;
> > expected=" + ret.size()
> >                                + " actual=" + updateCount);
> >            }
> >        } catch (SQLException se) {
> >            throw new DatabaseException(se);
> >        } finally {
> >            close(ps);
> >            close(con);
> >        }
> >        return ret;
> >    }
> >
> >    public int updateReassign(String oldnode, String newnode) throws
> > DatabaseException {
> >        if (__log.isDebugEnabled())
> >            __log.debug("updateReassign from " + oldnode + " ---> " +
> > newnode);
> >        Connection con = null;
> >        PreparedStatement ps = null;
> >        try {
> >            con = getConnection();
> >            ps = con.prepareStatement(UPDATE_REASSIGN);
> >            ps.setString(1, newnode);
> >            ps.setString(2, oldnode);
> >            return ps.executeUpdate();
> >        } catch (SQLException se) {
> >            throw new DatabaseException(se);
> >        } finally {
> >            close(ps);
> >            close(con);
> >        }
> >    }
> >
> >    public int updateAssignToNode(String node, int i, int numNodes, long
> > maxtime) throws DatabaseException {
> >        if (__log.isDebugEnabled())
> >            __log.debug("updateAsssignToNode node=" + node + " " + i +
> > "/" +
> > numNodes + " maxtime=" + maxtime);
> >        Connection con = null;
> >        PreparedStatement ps = null;
> >        try {
> >            con = getConnection();
> >            // UPDATED by Stefan Jakoubi and Simon Tjoa
> >            if (_dialect == Dialect.DB2) {
> >               ps = con.prepareStatement(UPGRADE_JOB_DB2);
> >               ps.setString(1, node);
> >               ps.setInt(2, numNodes);
> >               ps.setInt(3, numNodes);
> >               ps.setInt(4, i);
> >               ps.setLong(5, maxtime);
> >            } else
> >            if (_dialect == Dialect.SQLSERVER) {
> >                // Pulled out so that a new prepared statement for DB2 could be introduced
> >                ps = con.prepareStatement(UPGRADE_JOB_SQLSERVER);
> >                ps.setString(1, node);
> >                ps.setInt(2, numNodes);
> >                ps.setInt(3, i);
> >                ps.setLong(4, maxtime);
> >            } else {
> >                // Pulled out so that a new prepared statement for DB2 could be introduced
> >                ps = con.prepareStatement(UPGRADE_JOB_DEFAULT);
> >                ps.setString(1, node);
> >                ps.setInt(2, numNodes);
> >                ps.setInt(3, i);
> >                ps.setLong(4, maxtime);
> >            }
> >
> >             return ps.executeUpdate();
> >        } catch (SQLException se) {
> >            throw new DatabaseException(se);
> >        } finally {
> >            close(ps);
> >            close(con);
> >        }
> >    }
> >
> >    private Connection getConnection() throws SQLException {
> >        return _ds.getConnection();
> >    }
> >
> >    private int asInteger(boolean value) {
> >        return (value ? 1 : 0);
> >    }
> >
> >    private boolean asBoolean(int value) {
> >        return (value != 0);
> >    }
> >
> >    private void close(PreparedStatement ps) {
> >        if (ps != null) {
> >            try {
> >                ps.close();
> >            } catch (Exception e) {
> >                __log.warn("Exception while closing prepared statement",
> > e);
> >            }
> >        }
> >    }
> >
> >    private void close(Connection con) {
> >        if (con != null) {
> >            try {
> >                con.close();
> >            } catch (Exception e) {
> >                __log.warn("Exception while closing connection", e);
> >            }
> >        }
> >    }
> >
> >    private Dialect guessDialect() {
> >        Dialect d = Dialect.UNKNOWN;
> >        Connection con = null;
> >        try {
> >            con = getConnection();
> >            DatabaseMetaData metaData = con.getMetaData();
> >            if (metaData != null) {
> >                String dbProductName = metaData.getDatabaseProductName();
> >                int dbMajorVer = metaData.getDatabaseMajorVersion();
> >                __log.debug("Using database " + dbProductName + " major
> > version " + dbMajorVer);
> >                if (dbProductName.indexOf("DB2") >= 0) {
> >                    d = Dialect.DB2;
> >                } else if (dbProductName.indexOf("Derby") >= 0) {
> >                    d = Dialect.DERBY;
> >                } else if (dbProductName.indexOf("Firebird") >= 0) {
> >                    d = Dialect.FIREBIRD;
> >                } else if (dbProductName.indexOf("HSQL") >= 0) {
> >                    d = Dialect.HSQL;
> >                } else if (dbProductName.indexOf("Microsoft SQL") >= 0) {
> >                    d = Dialect.SQLSERVER;
> >                } else if (dbProductName.indexOf("MySQL") >= 0) {
> >                    d = Dialect.MYSQL;
> >                } else if (dbProductName.indexOf("Sybase") >= 0) {
> >                    d = Dialect.SYBASE;
> >                }
> >            }
> >        } catch (SQLException e) {
> >            __log.warn("Unable to determine database dialect", e);
> >        } finally {
> >            close(con);
> >        }
> >        __log.info("Using database dialect: " + d);
> >        return d;
> >    }
> >
> >    enum Dialect {
> >        DB2, DERBY, FIREBIRD, HSQL, MYSQL, ORACLE, SQLSERVER, SYBASE,
> > UNKNOWN
> >     }
> >
> > }
> >
> >
> >
> >
> >
> >
> > Stefan Jakoubi wrote:
> > >
> > > Hi Mathieu,
> > >
> > > thanks for your fast reply!
> > >
> > >> We use XDoclet annotations in the source code to generate the
> > mapping. So
> > >> you'll have to change those if you want to alter it. Alternatively
> > you
> > >> can
> > >> run on the generated mappings but that's probably a pain (updates).
> > But
> > >> why
> > >> do you want to change the mapping?
> > >
> > > Furthermore, thank you for outlining the "where to find" - this helps
> > us a
> > > lot!!!
> > >
> > > We unfortunately have to alter the tables to conform to our partner's
> > > requirements:
> > > (1) project specific tablename prefix
> > > (2) "ID" column not allowed -> "tablename_ID"
> > > (3) Reserved words (DB2) such as "TYPE" are not allowed as column name
> > >
> > >
> > > Question: shall I post/upload the changes we have made within the
> > simple
> > > scheduler?
> > >
> > > Thanks,
> > > Stefan
> > >
> > >
> > >
> > >
> > >
> > >
> > > Matthieu Riou-5 wrote:
> > >>
> > >> On Fri, Apr 25, 2008 at 4:26 AM, Stefan Jakoubi <
> > >> sjakoubi@securityresearch.at> wrote:
> > >>
> > >>> Dear ODE community,
> > >>>
> > >>>
> > >>>
> > >>> in this mail I (a) follow up a topic raised from a colleague
> > >>> (http://ode.markmail.org/message/gyoxq5cz3uruizoc?q=tjoa) of mine
> > and
> > >>> (b) raise myself a question concerning configuring ODE for DB2.
> > >>>
> > >>>
> > >>>
> > >>> @ (a):
> > >>>
> > >>> When migrating ODE to DB2 there is a problem within the simple
> > >>> scheduler. In particular, the scalar function MOD(x, y) within a
> > >>> prepared statement throws an awful exception.
> > >>>
> > >>> The solution is to replace the MOD(x, y) within the prepared
> > statement
> > >>> with: x - y * (x / y). This is according to how DB2 internally
> > >>> determines the result for the scalar function MOD.
> > >>>
> > >>>
> > >> Thanks for letting us know.
> > >>
> > >>
> > >>>
> > >>>
> > >>> @ (b):
> > >>>
> > >>> Can anybody please provide a complete table schema ODE requires?
> > >>>
> > >>
> > >> You'll get several ones for different databases in
> > >> dao-hibernate-db/target
> > >> after a build.
> > >>
> > >>
> > >>>
> > >>> Does anybody know where to configure all required Hibernate mappings
> > so
> > >>> that ODE table names may be altered?
> > >>>
> > >>
> > >> We use XDoclet annotations in the source code to generate the
> > mapping. So
> > >> you'll have to change those if you want to alter it. Alternatively
> > you
> > >> can
> > >> run on the generated mappings but that's probably a pain (updates).
> > But
> > >> why
> > >> do you want to change the mapping?
> > >>
> > >>
> > >>>
> > >>> When switching persistence to Hibernate, is it true that openjpa is
> > not
> > >>> used any more (and thus, no jpa mappings have to be modified in case
> > >>> that I alter table names?)
> > >>>
> > >>>
> > >> Right, it's either Hibernate or OpenJPA; they don't coexist.
> > >>
> > >>
> > >>>
> > >>>
> > >>> Finally I want to tell that as soon as we get a solution for
> > >>> JBOSS/ODE/DB2 we will contribute at Jira!
> > >>>
> > >>>
> > >> Thanks!
> > >>
> > >> Matthieu
> > >>
> > >>
> > >>>
> > >>>
> > >>> Thanks in advance & cheers,
> > >>>
> > >>> Stefan
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>
> > >>
> > >
> > >
> >
> > --
> > View this message in context:
> > http://www.nabble.com/Hibernate-mappings-tp16899339p16939634.html
> > Sent from the Apache Ode User mailing list archive at Nabble.com.
> >
> >
>

Re: Hibernate mappings

Posted by Matthieu Riou <ma...@offthelip.org>.
Cool, thanks a lot for contributing that back! I'll see how to add it to our
doc and maybe update the scheduler.

Cheers,
Matthieu

On Mon, Apr 28, 2008 at 7:04 AM, Stefan Jakoubi <
sjakoubi@securityresearch.at> wrote:

>
> Hi Mathieu,
>
> I think we have found the solution - in fact 2 major changes are required to use
> ODE
> with DB2:
> - Update of the simple scheduler
> - Update of DB script
>
> Below, you can find a little HowTo for JBoss & ODE & DB2 - I hope
> everything
> works fine ;-)
>
> Cheers and thanks for your support,
> Simon & Stefan
>
>
> +++++++++++++++++++++++++++++++
> +++++++++++++++++++++++++++++++
> ++ HOW TO: JBOSS & ODE & DB2 ++
> +++++++++++++++++++++++++++++++
> +++++++++++++++++++++++++++++++
>
> *****************
> * PREREQUISITES *
> *****************
>
>  - JBoss (tested on JBoss-4.0.3SP1)
>  - deployed ODE (tested with version 1.1.1)
>  - DB2 (tested with version 9.x)
>
>
> *********************************
> * MODIFYING SIMPLE SCHEDULER JAR *
> *********************************
>
> - There is a bug using the scalar function MOD within a prepared statement
> - Modify the JdbcDelegate class (see also end of this HowTo)
> - Compile the code and make the jar
>
>
> *****************
> * CONFIGURATION *
> *****************
>
> -------------------------
> ### DB2 configuration ###
> -------------------------
>
> (1) create database for ODE usage (e.g. testode)
>
> (2) execute the following statements
>
> -- Apache ODE - SimpleScheduler Database Schema
> --
> -- Apache Derby scripts by Maciej Szefler.
> --
> -- Scripts modified for use with DB2 by Stefan Jakoubi and Simon Tjoa
> -- (-) All primary key ID columns modified for auto increment purposes ->
> ID
> bigint GENERATED ALWAYS AS IDENTITY not null
> -- (-) Altered table LARGE_DATA -> BIN_DATA blob(2000M) NOT LOGGED -> as
> DB2
> only supports logging up to 1GB
> --     Furthermore, DB2 only allows blob sizes up to 2GB - 1 Byte!!!
>
> CREATE TABLE ode_job (
>  jobid CHAR(64)  NOT NULL DEFAULT '',
>  ts BIGINT  NOT NULL DEFAULT 0,
>  nodeid char(64),
>  scheduled int  NOT NULL DEFAULT 0,
>  transacted int  NOT NULL DEFAULT 0,
>  details blob(4096),
>  PRIMARY KEY(jobid));
>
> CREATE INDEX IDX_ODE_JOB_TS ON ode_job(ts);
> CREATE INDEX IDX_ODE_JOB_NODEID ON ode_job(nodeid);
>
>
> create table BPEL_ACTIVITY_RECOVERY (ID bigint GENERATED ALWAYS AS
> IDENTITY
> not null, PIID bigint, AID bigint, CHANNEL varchar(255), REASON
> varchar(255), DATE_TIME timestamp, LDATA_ID bigint, ACTIONS varchar(255),
> RETRIES integer, INSERT_TIME timestamp, MLOCK integer not null, primary
> key
> (ID));
> create table BPEL_CORRELATION_PROP (ID bigint GENERATED ALWAYS AS IDENTITY
> not null, NAME varchar(255), NAMESPACE varchar(255), VALUE varchar(255),
> CORR_SET_ID bigint, INSERT_TIME timestamp, MLOCK integer not null, primary
> key (ID));
> create table BPEL_CORRELATION_SET (ID bigint GENERATED ALWAYS AS IDENTITY
> not null, VALUE varchar(255), CORR_SET_NAME varchar(255), SCOPE_ID bigint,
> PIID bigint, PROCESS_ID bigint, INSERT_TIME timestamp, MLOCK integer not
> null, primary key (ID));
> create table BPEL_CORRELATOR (ID bigint GENERATED ALWAYS AS IDENTITY not
> null, CID varchar(255), PROCESS_ID bigint, INSERT_TIME timestamp, MLOCK
> integer not null, primary key (ID));
> create table BPEL_CORRELATOR_MESSAGE_CKEY (ID bigint GENERATED ALWAYS AS
> IDENTITY not null, CKEY varchar(255), CORRELATOR_MESSAGE_ID bigint,
> INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
> create table BPEL_EVENT (ID bigint GENERATED ALWAYS AS IDENTITY not null,
> IID bigint, PID bigint, TSTAMP timestamp, TYPE varchar(255), DETAIL
> clob(32000), LDATA_ID bigint, SID bigint, INSERT_TIME timestamp, MLOCK
> integer not null, primary key (ID));
> create table BPEL_FAULT (ID bigint GENERATED ALWAYS AS IDENTITY not null,
> FAULTNAME varchar(255), LDATA_ID bigint, EXPLANATION varchar(4000),
> LINE_NUM
> integer, AID integer, INSERT_TIME timestamp, MLOCK integer not null,
> primary
> key (ID));
> create table BPEL_INSTANCE (ID bigint GENERATED ALWAYS AS IDENTITY not
> null,
> INSTANTIATING_CORRELATOR bigint, FAULT bigint, JACOB_STATE bigint,
> PREVIOUS_STATE smallint, PROCESS_ID bigint, STATE smallint, LAST_ACTIVE_DT
> timestamp, SEQUENCE bigint, FAILURE_COUNT integer, FAILURE_DT timestamp,
> INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
> create table BPEL_MESSAGE (ID bigint GENERATED ALWAYS AS IDENTITY not
> null,
> MEX bigint, TYPE varchar(255), DATA bigint, INSERT_TIME timestamp, MLOCK
> integer not null, primary key (ID));
> create table BPEL_MESSAGE_EXCHANGE (ID bigint GENERATED ALWAYS AS IDENTITY
> not null, PORT_TYPE varchar(255), CHANNEL_NAME varchar(255), CLIENTKEY
> varchar(255), LDATA_EPR_ID bigint, LDATA_CEPR_ID bigint, REQUEST bigint,
> RESPONSE bigint, INSERT_DT timestamp, OPERATION varchar(255), STATE
> varchar(255), PROCESS bigint, PIID bigint, DIR char(1), PLINK_MODELID
> integer, PATTERN varchar(255), CORR_STATUS varchar(255), FAULT_TYPE
> varchar(255), FAULT_EXPL varchar(255), CALLEE varchar(255), PARTNERLINK
> bigint, PIPED_ID varchar(255), INSERT_TIME timestamp, MLOCK integer not
> null, primary key (ID));
> create table BPEL_MEX_PROPS (MEX bigint not null, VALUE varchar(8000),
> NAME
> varchar(255) not null, primary key (MEX, NAME));
> create table BPEL_PLINK_VAL (ID bigint GENERATED ALWAYS AS IDENTITY not
> null, PARTNER_LINK varchar(100) not null, PARTNERROLE varchar(100),
> MYROLE_EPR bigint, PARTNERROLE_EPR bigint, PROCESS bigint, SCOPE bigint,
> SVCNAME varchar(255), MYROLE varchar(100), MODELID integer, MYSESSIONID
> varchar(255), PARTNERSESSIONID varchar(255), INSERT_TIME timestamp, MLOCK
> integer not null, primary key (ID));
> create table BPEL_PROCESS (ID bigint GENERATED ALWAYS AS IDENTITY not
> null,
> PROCID varchar(255) not null unique, deployer varchar(255), deploydate
> timestamp, type_name varchar(255), type_ns varchar(255), version bigint,
> ACTIVE_ smallint, guid varchar(255), INSERT_TIME timestamp, MLOCK integer
> not null, primary key (ID));
> create table BPEL_SCOPE (ID bigint GENERATED ALWAYS AS IDENTITY not null,
> PIID bigint, PARENT_SCOPE_ID bigint, STATE varchar(255) not null, NAME
> varchar(255) not null, MODELID integer, INSERT_TIME timestamp, MLOCK
> integer
> not null, primary key (ID));
> create table BPEL_SELECTORS (ID bigint GENERATED ALWAYS AS IDENTITY not
> null, PIID bigint not null, SELGRPID varchar(255) not null, IDX integer
> not
> null, CORRELATION_KEY varchar(255) not null, CORRELATOR bigint not null,
> INSERT_TIME timestamp, MLOCK integer not null, primary key (ID), unique
> (CORRELATION_KEY, CORRELATOR));
> create table BPEL_UNMATCHED (ID bigint GENERATED ALWAYS AS IDENTITY not
> null, MEX bigint, CORRELATION_KEY varchar(255), CORRELATOR bigint not
> null,
> INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
> create table BPEL_XML_DATA (ID bigint GENERATED ALWAYS AS IDENTITY not
> null,
> LDATA_ID bigint, NAME varchar(255) not null, SCOPE_ID bigint, PIID bigint,
> IS_SIMPLE_TYPE smallint, INSERT_TIME timestamp, MLOCK integer not null,
> primary key (ID));
> create table LARGE_DATA (ID bigint GENERATED ALWAYS AS IDENTITY not null,
> BIN_DATA blob(2000M) NOT LOGGED, INSERT_TIME timestamp, MLOCK integer not
> null, primary key (ID));
> create table VAR_PROPERTY (ID bigint GENERATED ALWAYS AS IDENTITY not
> null,
> XML_DATA_ID bigint, PROP_VALUE varchar(255), PROP_NAME varchar(255) not
> null, INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
> alter table BPEL_ACTIVITY_RECOVERY add constraint FK94D24A7B14020712
> foreign
> key (LDATA_ID) references LARGE_DATA;
> alter table BPEL_ACTIVITY_RECOVERY add constraint FK94D24A7BA821BBE1
> foreign
> key (PIID) references BPEL_INSTANCE;
> alter table BPEL_CORRELATION_PROP add constraint FK4EC9DDAA4D0B7982
> foreign
> key (CORR_SET_ID) references BPEL_CORRELATION_SET;
> alter table BPEL_CORRELATION_SET add constraint FKB838191BA821BBE1 foreign
> key (PIID) references BPEL_INSTANCE;
> alter table BPEL_CORRELATION_SET add constraint FKB838191B6D49C363 foreign
> key (SCOPE_ID) references BPEL_SCOPE;
> alter table BPEL_CORRELATION_SET add constraint FKB838191BE15A2343 foreign
> key (PROCESS_ID) references BPEL_PROCESS;
> create index IDX_CORRELATOR_CID on BPEL_CORRELATOR (CID);
> alter table BPEL_CORRELATOR add constraint FKF50EFA33E15A2343 foreign key
> (PROCESS_ID) references BPEL_PROCESS;
> create index IDX_BPEL_CORRELATOR_MESSAGE_CKEY on
> BPEL_CORRELATOR_MESSAGE_CKEY (CKEY);
> alter table BPEL_CORRELATOR_MESSAGE_CKEY add constraint FK8997F700EEFA7470
> foreign key (CORRELATOR_MESSAGE_ID) references BPEL_UNMATCHED;
> alter table BPEL_EVENT add constraint FKAA6D673014020712 foreign key
> (LDATA_ID) references LARGE_DATA;
> alter table BPEL_EVENT add constraint FKAA6D6730A7EED251 foreign key (IID)
> references BPEL_INSTANCE;
> alter table BPEL_EVENT add constraint FKAA6D6730C831CBE3 foreign key (PID)
> references BPEL_PROCESS;
> alter table BPEL_FAULT add constraint FKAA722EB814020712 foreign key
> (LDATA_ID) references LARGE_DATA;
> alter table BPEL_INSTANCE add constraint FKE1DED41FDD43DBE1 foreign key
> (INSTANTIATING_CORRELATOR) references BPEL_CORRELATOR;
> alter table BPEL_INSTANCE add constraint FKE1DED41F6B66C85F foreign key
> (JACOB_STATE) references LARGE_DATA;
> alter table BPEL_INSTANCE add constraint FKE1DED41F1F3C9CB7 foreign key
> (FAULT) references BPEL_FAULT;
> alter table BPEL_INSTANCE add constraint FKE1DED41FE15A2343 foreign key
> (PROCESS_ID) references BPEL_PROCESS;
> alter table BPEL_MESSAGE add constraint FK4FA7231DCA00A413 foreign key
> (MEX)
> references BPEL_MESSAGE_EXCHANGE;
> alter table BPEL_MESSAGE add constraint FK4FA7231DEB2C9ED8 foreign key
> (DATA) references LARGE_DATA;
> alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD0589D7A75B
> foreign
> key (LDATA_CEPR_ID) references LARGE_DATA;
> alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05BBA250D foreign
> key (PARTNERLINK) references BPEL_PLINK_VAL;
> alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05A821BBE1
> foreign
> key (PIID) references BPEL_INSTANCE;
> alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05C29670AA
> foreign
> key (LDATA_EPR_ID) references LARGE_DATA;
> alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05B5BD38C7
> foreign
> key (PROCESS) references BPEL_PROCESS;
> alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05FF451031
> foreign
> key (RESPONSE) references BPEL_MESSAGE;
> alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD0554DCEE7F
> foreign
> key (REQUEST) references BPEL_MESSAGE;
> alter table BPEL_MEX_PROPS add constraint FK203CAFC7CA00A413 foreign key
> (MEX) references BPEL_MESSAGE_EXCHANGE;
> alter table BPEL_PLINK_VAL add constraint FK7D71E742F2BC5EB8 foreign key
> (MYROLE_EPR) references LARGE_DATA;
> alter table BPEL_PLINK_VAL add constraint FK7D71E74290D95ED1 foreign key
> (SCOPE) references BPEL_SCOPE;
> alter table BPEL_PLINK_VAL add constraint FK7D71E742B5BD38C7 foreign key
> (PROCESS) references BPEL_PROCESS;
> alter table BPEL_PLINK_VAL add constraint FK7D71E742D4B651D4 foreign key
> (PARTNERROLE_EPR) references LARGE_DATA;
> alter table BPEL_SCOPE add constraint FKAB2A32EAA821BBE1 foreign key
> (PIID)
> references BPEL_INSTANCE;
> alter table BPEL_SCOPE add constraint FKAB2A32EA42B20B58 foreign key
> (PARENT_SCOPE_ID) references BPEL_SCOPE;
> create index IDX_SELECTOR_CORRELATOR on BPEL_SELECTORS (CORRELATOR);
> create index IDX_SELECTOR_CKEY on BPEL_SELECTORS (CORRELATION_KEY);
> create index IDX_SELECTOR_SELGRPID on BPEL_SELECTORS (SELGRPID);
> alter table BPEL_SELECTORS add constraint FKF1F83A0AA821BBE1 foreign key
> (PIID) references BPEL_INSTANCE;
> alter table BPEL_SELECTORS add constraint FKF1F83A0A875201C9 foreign key
> (CORRELATOR) references BPEL_CORRELATOR;
> create index IDX_UNMATCHED_CKEY on BPEL_UNMATCHED (CORRELATION_KEY);
> create index IDX_UNMATCHED_CORRELATOR on BPEL_UNMATCHED (CORRELATOR);
> alter table BPEL_UNMATCHED add constraint FKF0663E01CA00A413 foreign key
> (MEX) references BPEL_MESSAGE_EXCHANGE;
> alter table BPEL_UNMATCHED add constraint FKF0663E01875201C9 foreign key
> (CORRELATOR) references BPEL_CORRELATOR;
> alter table BPEL_XML_DATA add constraint FKB7D47E7C14020712 foreign key
> (LDATA_ID) references LARGE_DATA;
> alter table BPEL_XML_DATA add constraint FKB7D47E7CA821BBE1 foreign key
> (PIID) references BPEL_INSTANCE;
> alter table BPEL_XML_DATA add constraint FKB7D47E7C6D49C363 foreign key
> (SCOPE_ID) references BPEL_SCOPE;
> alter table VAR_PROPERTY add constraint FK9C1E2C0DA48E25F2 foreign key
> (XML_DATA_ID) references BPEL_XML_DATA;
> create table hibernate_unique_key ( next_hi integer );
> insert into hibernate_unique_key values ( 0 );
> create table STORE_DU (NAME varchar(255) not null, deployer varchar(255),
> DEPLOYDT timestamp, DIR varchar(255), primary key (NAME));
> create table STORE_PROCESS (PID varchar(255) not null, DU varchar(255),
> TYPE
> varchar(255), version bigint, STATE varchar(255), primary key (PID));
> create table STORE_PROCESS_PROP (propId varchar(255) not null, value
> varchar(255), name varchar(255) not null, primary key (propId, name));
> create table STORE_VERSIONS (ID integer not null, VERSION bigint, primary
> key (ID));
> alter table STORE_PROCESS add constraint FKA83900D1BFFFC58C foreign key
> (DU)
> references STORE_DU;
> alter table STORE_PROCESS_PROP add constraint FKFD6C2E119ADDA5CB foreign
> key
> (propId) references STORE_PROCESS;
>
>
> (3) configure a DB2 user
>
> - there should already be a user configured (e.g. username)
> - set the password of this user within the windows user management (e.g.
> password)
>
>
> ------------------
> ### db2jcc.jar ###
> ------------------
>
> (4) copy db2jcc.jar to [JBOSS_HOME]\server\[server]\lib\db2jcc.jar
>
>
> ----------------------------
> ### ode-axis2.properties ###
> ----------------------------
>
> (5) create a file named ode-axis2.properties within the directory
> [JBOSS_HOME]\server\[server]\deploy\ode.war\WEB-INF\conf
>
> (6) edit ode-axis2.properties
>
> ode-axis2.db.mode=EXTERNAL
> ode-axis2.db.ext.dataSource=java:[ODE JNDI NAME, e.g. ODEDS]
>
>
> ------------------
> ### db2-ds.xml ###
> ------------------
>
> (7) copy [JBOSS_HOME]\docs\examples\jca\db2-ds.xml to
> [JBOSS_HOME]\server\[server]\deploy\db2-ds.xml
>
> (8) edit db2-ds.xml
>
> <datasources>
>  <local-tx-datasource>
>    <jndi-name>[ODE JNDI NAME, e.g. ODEDS]</jndi-name>
>    <connection-url>jdbc:db2://localhost:50000/[DATABASENAME, e.g.
> testode]</connection-url>
>    <driver-class>com.ibm.db2.jcc.DB2Driver</driver-class>
>    <user-name>[DBUSER, e.g. username - you must not use
> db2admin]</user-name>
>    <password>[DBUSERPASSWORD, e.g. password - it must not be
> blank]</password>
>    <min-pool-size>10</min-pool-size>
>        <!-- sql to call when connection is created
>        <new-connection-sql>some arbitrary sql</new-connection-sql>
>        -->
>
>        <!-- sql to call on an existing pooled connection when it is
> obtained from pool
>        <check-valid-connection-sql>some arbitrary
> sql</check-valid-connection-sql>
>        -->
>
>      <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml
> (optional) -->
>      <metadata>
>         <type-mapping>DB2</type-mapping>
>      </metadata>
>  </local-tx-datasource>
>
> </datasources>
>
>
> -----------------------
> ### simple scheduler ###
> -----------------------
>
> (9) copy the modified simple scheduler jar to
> [JBOSS_HOME]\server\[server]\deploy\ode.war\WEB-INF\lib
>
> - in our case: ode-scheduler-simple-1.1.1-SNAPSHOT.jar
>
>
> ----------------------------------------------
> ### Modification within class JdbcDelegate ###
> ----------------------------------------------
>
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one
>  * or more contributor license agreements.  See the NOTICE file
>  * distributed with this work for additional information
>  * regarding copyright ownership.  The ASF licenses this file
>  * to you under the Apache License, Version 2.0 (the
>  * "License"); you may not use this file except in compliance
>  * with the License.  You may obtain a copy of the License at
>  *
>  *    http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing,
>  * software distributed under the License is distributed on an
>  * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
>  * KIND, either express or implied.  See the License for the
>  * specific language governing permissions and limitations
>  * under the License.
>  *
>  * UPDATED by Stefan Jakoubi and Simon Tjoa
>  * PERFORMED CHANGES:
>  * This class has been modified by Stefan Jakoubi and Simon Tjoa
>  *  (Secure Business Austria) to enable the use of the scalar function MOD
>  *  within prepared statements. Thus, the scalar function MOD has been
>  *  replaced with the internally DB2 formula used for determining MOD
> results:
>  *  MOD(x, y) = x - y * (x / y)
>  */
>
> package org.apache.ode.scheduler.simple;
>
> import java.io.ByteArrayOutputStream;
> import java.io.ObjectInputStream;
> import java.io.Serializable;
> import java.sql.Connection;
> import java.sql.DatabaseMetaData;
> import java.sql.PreparedStatement;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.util.ArrayList;
> import java.util.List;
> import java.util.Map;
>
> import javax.sql.DataSource;
>
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
> import org.apache.ode.utils.StreamUtils;
>
> /**
>  * JDBC-based implementation of the {@link DatabaseDelegate} interface.
> Should work with most
>  * reasonably behaved databases.
>  *
>  * @author Maciej Szefler ( m s z e f l e r @ g m a i l . c o m ) modified
> by by Stefan Jakoubi and Simon Tjoa
>  */
> public class JdbcDelegate implements DatabaseDelegate {
>
>    private static final Log __log = LogFactory.getLog(JdbcDelegate.class);
>
>    private static final String DELETE_JOB = "delete from ODE_JOB where
> jobid = ? and nodeid = ?";
>
>    private static final String UPDATE_REASSIGN = "update ODE_JOB set
> nodeid
> = ?, scheduled = 0 where nodeid = ?";
>
>    private static final String UPGRADE_JOB_DEFAULT = "update ODE_JOB set
> nodeid = ? where nodeid is null and scheduled = 0 "
>            + "and mod(ts,?) = ? and ts < ?";
>
>    private static final String UPGRADE_JOB_SQLSERVER = "update ODE_JOB set
> nodeid = ? where nodeid is null and scheduled = 0 "
>        + "and (ts % ?) = ? and ts < ?";
>
>    // UPDATED by Stefan Jakoubi and Simon Tjoa
>    private static final String UPGRADE_JOB_DB2 = "update ODE_JOB set
> nodeid
> = ? where nodeid is null and scheduled = 0 " +
>        " and (BIGINT(ts - ? * BIGINT(ts/?)) = ?) and ts < ? ";
>
>    private static final String SAVE_JOB = "insert into ODE_JOB "
>            + " (jobid, nodeid, ts, scheduled, transacted, details)
> values(?, ?, ?, ?, ?, ?)";
>
>    private static final String GET_NODEIDS = "select distinct nodeid from
> ODE_JOB";
>
>    private static final String SCHEDULE_IMMEDIATE = "select jobid, ts,
> transacted, scheduled, details from ODE_JOB "
>            + "where nodeid = ? and scheduled = 0 and ts < ? order by ts";
>
>    private static final String UPDATE_SCHEDULED = "update ODE_JOB set
> scheduled = 1 where jobid in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
>
>    private static final int UPDATE_SCHEDULED_SLOTS = 10;
>
>    private DataSource _ds;
>
>    private Dialect _dialect;
>
>    public JdbcDelegate(DataSource ds) {
>        _ds = ds;
>        _dialect = guessDialect();
>    }
>
>    public boolean deleteJob(String jobid, String nodeId) throws
> DatabaseException {
>        if (__log.isDebugEnabled())
>            __log.debug("deleteJob " + jobid + " on node " + nodeId);
>
>        Connection con = null;
>        PreparedStatement ps = null;
>        try {
>            con = getConnection();
>            ps = con.prepareStatement(DELETE_JOB);
>            ps.setString(1, jobid);
>            ps.setString(2, nodeId);
>            return ps.executeUpdate() == 1;
>        } catch (SQLException se) {
>            throw new DatabaseException(se);
>        } finally {
>            close(ps);
>            close(con);
>        }
>    }
>
>    public List<String> getNodeIds() throws DatabaseException {
>        Connection con = null;
>        PreparedStatement ps = null;
>        try {
>            con = getConnection();
>            ps = con.prepareStatement(GET_NODEIDS,
> ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
>            ResultSet rs = ps.executeQuery();
>            ArrayList<String> nodes = new ArrayList<String>();
>            while (rs.next()) {
>                String nodeId = rs.getString(1);
>                if (nodeId != null)
>                    nodes.add(rs.getString(1));
>            }
>            if (__log.isDebugEnabled())
>                __log.debug("getNodeIds: " + nodes);
>            return nodes;
>        } catch (SQLException se) {
>            throw new DatabaseException(se);
>        } finally {
>            close(ps);
>            close(con);
>        }
>    }
>
>    public boolean insertJob(Job job, String nodeId, boolean loaded) throws
> DatabaseException {
>        if (__log.isDebugEnabled())
>            __log.debug("insertJob " + job.jobId + " on node " + nodeId + "
> loaded=" + loaded);
>
>        Connection con = null;
>        PreparedStatement ps = null;
>        try {
>            con = getConnection();
>            ps = con.prepareStatement(SAVE_JOB);
>            ps.setString(1, job.jobId);
>            ps.setString(2, nodeId);
>            ps.setLong(3, job.schedDate);
>            ps.setInt(4, asInteger(loaded));
>            ps.setInt(5, asInteger(job.transacted));
>            ByteArrayOutputStream bos = new ByteArrayOutputStream();
>            try {
>                StreamUtils.write(bos, (Serializable) job.detail);
>            } catch (Exception ex) {
>                __log.error("Error serializing job detail: " + job.detail);
>                throw new DatabaseException(ex);
>            }
>            ps.setBytes(6, bos.toByteArray());
>            return ps.executeUpdate() == 1;
>        } catch (SQLException se) {
>            throw new DatabaseException(se);
>        } finally {
>            close(ps);
>            close(con);
>        }
>    }
>
>    @SuppressWarnings("unchecked")
>    public List<Job> dequeueImmediate(String nodeId, long maxtime, int
> maxjobs) throws DatabaseException {
>        ArrayList<Job> ret = new ArrayList<Job>(maxjobs);
>        Connection con = null;
>        PreparedStatement ps = null;
>        try {
>            con = getConnection();
>            ps = con.prepareStatement(SCHEDULE_IMMEDIATE);
>            ps.setString(1, nodeId);
>            ps.setLong(2, maxtime);
>            ps.setMaxRows(maxjobs);
>            ResultSet rs = ps.executeQuery();
>            while (rs.next()) {
>                Map<String, Object> details;
>                try {
>                    ObjectInputStream is = new
> ObjectInputStream(rs.getBinaryStream(5));
>                    details = (Map<String, Object>) is.readObject();
>                    is.close();
>                } catch (Exception e) {
>                    throw new DatabaseException("Error deserializing job
> details", e);
>                }
>                Job job = new Job(rs.getLong(2), rs.getString(1),
> asBoolean(rs.getInt(3)), details);
>                ret.add(job);
>            }
>            rs.close();
>            ps.close();
>
>            // mark jobs as scheduled, UPDATE_SCHEDULED_SLOTS at a time
>            int j = 0;
>            int updateCount = 0;
>            ps = con.prepareStatement(UPDATE_SCHEDULED);
>            for (int updates = 1; updates <= (ret.size() /
> UPDATE_SCHEDULED_SLOTS) + 1; updates++) {
>                for (int i = 1; i <= UPDATE_SCHEDULED_SLOTS; i++) {
>                    ps.setString(i, j < ret.size() ? ret.get(j).jobId :
> "");
>                    j++;
>                }
>                ps.execute();
>                updateCount += ps.getUpdateCount();
>            }
>            if (updateCount != ret.size()) {
>                throw new DatabaseException(
>                        "Updating scheduled jobs failed to update all jobs;
> expected=" + ret.size()
>                                + " actual=" + updateCount);
>            }
>        } catch (SQLException se) {
>            throw new DatabaseException(se);
>        } finally {
>            close(ps);
>            close(con);
>        }
>        return ret;
>    }
>
>    public int updateReassign(String oldnode, String newnode) throws
> DatabaseException {
>        if (__log.isDebugEnabled())
>            __log.debug("updateReassign from " + oldnode + " ---> " +
> newnode);
>        Connection con = null;
>        PreparedStatement ps = null;
>        try {
>            con = getConnection();
>            ps = con.prepareStatement(UPDATE_REASSIGN);
>            ps.setString(1, newnode);
>            ps.setString(2, oldnode);
>            return ps.executeUpdate();
>        } catch (SQLException se) {
>            throw new DatabaseException(se);
>        } finally {
>            close(ps);
>            close(con);
>        }
>    }
>
>    public int updateAssignToNode(String node, int i, int numNodes, long
> maxtime) throws DatabaseException {
>        if (__log.isDebugEnabled())
>            __log.debug("updateAsssignToNode node=" + node + " " + i + "/"
> +
> numNodes + " maxtime=" + maxtime);
>        Connection con = null;
>        PreparedStatement ps = null;
>        try {
>            con = getConnection();
>            // UPDATED by Stefan Jakoubi and Simon Tjoa
>            if (_dialect == Dialect.DB2) {
>               ps = con.prepareStatement(UPGRADE_JOB_DB2);
>               ps.setString(1, node);
>               ps.setInt(2, numNodes);
>               ps.setInt(3, numNodes);
>               ps.setInt(4, i);
>               ps.setLong(5, maxtime);
>            } else
>            if (_dialect == Dialect.SQLSERVER) {
>                 // Pulled out so that a new prepared statement for DB2 could be introduced
>                ps = con.prepareStatement(UPGRADE_JOB_SQLSERVER);
>                ps.setString(1, node);
>                ps.setInt(2, numNodes);
>                ps.setInt(3, i);
>                ps.setLong(4, maxtime);
>            } else {
>                 // Pulled out so that a new prepared statement for DB2 could be introduced
>                ps = con.prepareStatement(UPGRADE_JOB_DEFAULT);
>                ps.setString(1, node);
>                ps.setInt(2, numNodes);
>                ps.setInt(3, i);
>                ps.setLong(4, maxtime);
>            }
>
>             return ps.executeUpdate();
>        } catch (SQLException se) {
>            throw new DatabaseException(se);
>        } finally {
>            close(ps);
>            close(con);
>        }
>    }
>
>    private Connection getConnection() throws SQLException {
>        return _ds.getConnection();
>    }
>
>    private int asInteger(boolean value) {
>        return (value ? 1 : 0);
>    }
>
>    private boolean asBoolean(int value) {
>        return (value != 0);
>    }
>
>    private void close(PreparedStatement ps) {
>        if (ps != null) {
>            try {
>                ps.close();
>            } catch (Exception e) {
>                __log.warn("Exception while closing prepared statement",
> e);
>            }
>        }
>    }
>
>    private void close(Connection con) {
>        if (con != null) {
>            try {
>                con.close();
>            } catch (Exception e) {
>                __log.warn("Exception while closing connection", e);
>            }
>        }
>    }
>
>    private Dialect guessDialect() {
>        Dialect d = Dialect.UNKNOWN;
>        Connection con = null;
>        try {
>            con = getConnection();
>            DatabaseMetaData metaData = con.getMetaData();
>            if (metaData != null) {
>                String dbProductName = metaData.getDatabaseProductName();
>                int dbMajorVer = metaData.getDatabaseMajorVersion();
>                __log.debug("Using database " + dbProductName + " major
> version " + dbMajorVer);
>                if (dbProductName.indexOf("DB2") >= 0) {
>                    d = Dialect.DB2;
>                } else if (dbProductName.indexOf("Derby") >= 0) {
>                    d = Dialect.DERBY;
>                } else if (dbProductName.indexOf("Firebird") >= 0) {
>                    d = Dialect.FIREBIRD;
>                } else if (dbProductName.indexOf("HSQL") >= 0) {
>                    d = Dialect.HSQL;
>                } else if (dbProductName.indexOf("Microsoft SQL") >= 0) {
>                    d = Dialect.SQLSERVER;
>                } else if (dbProductName.indexOf("MySQL") >= 0) {
>                    d = Dialect.MYSQL;
>                } else if (dbProductName.indexOf("Sybase") >= 0) {
>                    d = Dialect.SYBASE;
>                }
>            }
>        } catch (SQLException e) {
>            __log.warn("Unable to determine database dialect", e);
>        } finally {
>            close(con);
>        }
>        __log.info("Using database dialect: " + d);
>        return d;
>    }
>
>    enum Dialect {
>        DB2, DERBY, FIREBIRD, HSQL, MYSQL, ORACLE, SQLSERVER, SYBASE,
> UNKNOWN
>     }
>
> }
>
>
>
>
>
>
> Stefan Jakoubi wrote:
> >
> > Hi Mathieu,
> >
> > thanks for your fast reply!
> >
> >> We use XDoclet annotations in the source code to generate the mapping.
> So
> >> you'll have to change those if you want to alter it. Alternatively you
> >> can
> >> run on the generated mappings but that's probably a pain (updates). But
> >> why
> >> do you want to change the mapping?
> >
> > Furthermore, thank you for outlining the "where to find" - this helps us
> a
> > lot!!!
> >
> > We unfortunately have to alter the tables to conform our partner's
> > requirements:
> > (1) project specific tablename prefix
> > (2) "ID" column not allowed -> "tablename_ID"
> > (3) Reserved words (DB2) such as "TYPE" are not allowed as column name
> >
> >
> > Question: shall I post/upload the changes we have made within the simple
> > scheduler?
> >
> > Thanks,
> > Stefan
> >
> >
> >
> >
> >
> >
> > Matthieu Riou-5 wrote:
> >>
> >> On Fri, Apr 25, 2008 at 4:26 AM, Stefan Jakoubi <
> >> sjakoubi@securityresearch.at> wrote:
> >>
> >>> Dear ODE community,
> >>>
> >>>
> >>>
> >>> in this mail I (a) follow up a topic raised from a colleague
> >>> (http://ode.markmail.org/message/gyoxq5cz3uruizoc?q=tjoa) of mine and
> >>> (b) raise myself a question concerning configuring ODE for DB2.
> >>>
> >>>
> >>>
> >>> @ (a):
> >>>
> >>> When migrating ODE to DB2 there is a problem within the simple
> >>> scheduler. In particular, the scalar function MOD(x, y) within a
> >>> prepared statement throws an awful exception.
> >>>
> >>> The solution is to replace the MOD(x, y) within the prepared statement
> >>> with: x - y * (x / y). This is according to how DB2 internally
> >>> determines the result for the scalar function MOD.
> >>>
> >>>
> >> Thanks for letting us know.
> >>
> >>
> >>>
> >>>
> >>> @ (b):
> >>>
> >>> Can anybody please provide a complete table schema ODE requires?
> >>>
> >>
> >> You'll get several ones for different databases in
> >> dao-hibernate-db/target
> >> after a build.
> >>
> >>
> >>>
> >>> Does anybody know where to configure all required Hibernate mappings
> so
> >>> that ODE table names may be altered?
> >>>
> >>
> >> We use XDoclet annotations in the source code to generate the mapping.
> So
> >> you'll have to change those if you want to alter it. Alternatively you
> >> can
> >> run on the generated mappings but that's probably a pain (updates). But
> >> why
> >> do you want to change the mapping?
> >>
> >>
> >>>
> >>> When switching persistence to Hibernate, is it true that openjpa is
> not
> >>> used any more (and thus, no jpa mappings have to be modified in case
> >>> that I alter table names?)
> >>>
> >>>
> >> Right, it's either Hibernate of OpenJPA, they don't coexist.
> >>
> >>
> >>>
> >>>
> >>> Finally I want to tell that as soon as we get a solution for
> >>> JBOSS/ODE/DB2 we will contribute at Jira!
> >>>
> >>>
> >> Thanks!
> >>
> >> Matthieu
> >>
> >>
> >>>
> >>>
> >>> Thanks in advance & cheers,
> >>>
> >>> Stefan
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >
> >
>
> --
> View this message in context:
> http://www.nabble.com/Hibernate-mappings-tp16899339p16939634.html
> Sent from the Apache Ode User mailing list archive at Nabble.com.
>
>

Re: Hibernate mappings

Posted by Stefan Jakoubi <sj...@securityresearch.at>.
Hi Matthieu,

I think we have found the solution - in fact two major changes are required
to use ODE with DB2:
- Update of the simple scheduler
- Update of the DB script

Below you can find a little HowTo for JBoss & ODE & DB2 - I hope everything
works fine ;-)

Cheers and thanks for your support,
Simon & Stefan


+++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++
++ HOW TO: JBOSS & ODE & DB2 ++
+++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++

*****************
* PREREQUISITES *
*****************

 - JBoss (tested on JBoss-4.0.3SP1)
 - deployed ODE (tested with version 1.1.1)
 - DB2 (tested with version 9.x)
 
 
**********************************
* MODIFYING SIMPLE SCHEDULER JAR *
**********************************

- There is a bug when using the scalar function MOD within a prepared
statement on DB2
- Modify the JdbcDelegate class (see also the end of this HowTo)
- Compile the code and build the jar


*****************
* CONFIGURATION *
*****************

-------------------------
### DB2 configuration ###
-------------------------

(1) create database for ODE usage (e.g. testode)

(2) execute the following statements

-- Apache ODE - SimpleScheduler Database Schema
-- 
-- Apache Derby scripts by Maciej Szefler.
-- 
-- Scripts modified for use with DB2 by Stefan Jakoubi and Simon Tjoa
-- (-) All primary key ID columns modified for auto increment purposes ->
--     ID bigint GENERATED ALWAYS AS IDENTITY not null
-- (-) Altered table LARGE_DATA -> BIN_DATA blob(2000M) NOT LOGGED -> as DB2
--     only supports logging up to 1GB
--     Furthermore, DB2 only allows blob sizes up to 2GB - 1 Byte!!!

CREATE TABLE ode_job (
  jobid CHAR(64)  NOT NULL DEFAULT '',
  ts BIGINT  NOT NULL DEFAULT 0,
  nodeid char(64),
  scheduled int  NOT NULL DEFAULT 0,
  transacted int  NOT NULL DEFAULT 0,
  details blob(4096),
  PRIMARY KEY(jobid));

CREATE INDEX IDX_ODE_JOB_TS ON ode_job(ts);
CREATE INDEX IDX_ODE_JOB_NODEID ON ode_job(nodeid);


create table BPEL_ACTIVITY_RECOVERY (ID bigint GENERATED ALWAYS AS IDENTITY
not null, PIID bigint, AID bigint, CHANNEL varchar(255), REASON
varchar(255), DATE_TIME timestamp, LDATA_ID bigint, ACTIONS varchar(255),
RETRIES integer, INSERT_TIME timestamp, MLOCK integer not null, primary key
(ID));
create table BPEL_CORRELATION_PROP (ID bigint GENERATED ALWAYS AS IDENTITY
not null, NAME varchar(255), NAMESPACE varchar(255), VALUE varchar(255),
CORR_SET_ID bigint, INSERT_TIME timestamp, MLOCK integer not null, primary
key (ID));
create table BPEL_CORRELATION_SET (ID bigint GENERATED ALWAYS AS IDENTITY
not null, VALUE varchar(255), CORR_SET_NAME varchar(255), SCOPE_ID bigint,
PIID bigint, PROCESS_ID bigint, INSERT_TIME timestamp, MLOCK integer not
null, primary key (ID));
create table BPEL_CORRELATOR (ID bigint GENERATED ALWAYS AS IDENTITY not
null, CID varchar(255), PROCESS_ID bigint, INSERT_TIME timestamp, MLOCK
integer not null, primary key (ID));
create table BPEL_CORRELATOR_MESSAGE_CKEY (ID bigint GENERATED ALWAYS AS
IDENTITY not null, CKEY varchar(255), CORRELATOR_MESSAGE_ID bigint,
INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
create table BPEL_EVENT (ID bigint GENERATED ALWAYS AS IDENTITY not null,
IID bigint, PID bigint, TSTAMP timestamp, TYPE varchar(255), DETAIL
clob(32000), LDATA_ID bigint, SID bigint, INSERT_TIME timestamp, MLOCK
integer not null, primary key (ID));
create table BPEL_FAULT (ID bigint GENERATED ALWAYS AS IDENTITY not null,
FAULTNAME varchar(255), LDATA_ID bigint, EXPLANATION varchar(4000), LINE_NUM
integer, AID integer, INSERT_TIME timestamp, MLOCK integer not null, primary
key (ID));
create table BPEL_INSTANCE (ID bigint GENERATED ALWAYS AS IDENTITY not null,
INSTANTIATING_CORRELATOR bigint, FAULT bigint, JACOB_STATE bigint,
PREVIOUS_STATE smallint, PROCESS_ID bigint, STATE smallint, LAST_ACTIVE_DT
timestamp, SEQUENCE bigint, FAILURE_COUNT integer, FAILURE_DT timestamp,
INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
create table BPEL_MESSAGE (ID bigint GENERATED ALWAYS AS IDENTITY not null,
MEX bigint, TYPE varchar(255), DATA bigint, INSERT_TIME timestamp, MLOCK
integer not null, primary key (ID));
create table BPEL_MESSAGE_EXCHANGE (ID bigint GENERATED ALWAYS AS IDENTITY
not null, PORT_TYPE varchar(255), CHANNEL_NAME varchar(255), CLIENTKEY
varchar(255), LDATA_EPR_ID bigint, LDATA_CEPR_ID bigint, REQUEST bigint,
RESPONSE bigint, INSERT_DT timestamp, OPERATION varchar(255), STATE
varchar(255), PROCESS bigint, PIID bigint, DIR char(1), PLINK_MODELID
integer, PATTERN varchar(255), CORR_STATUS varchar(255), FAULT_TYPE
varchar(255), FAULT_EXPL varchar(255), CALLEE varchar(255), PARTNERLINK
bigint, PIPED_ID varchar(255), INSERT_TIME timestamp, MLOCK integer not
null, primary key (ID));
create table BPEL_MEX_PROPS (MEX bigint not null, VALUE varchar(8000), NAME
varchar(255) not null, primary key (MEX, NAME));
create table BPEL_PLINK_VAL (ID bigint GENERATED ALWAYS AS IDENTITY not
null, PARTNER_LINK varchar(100) not null, PARTNERROLE varchar(100),
MYROLE_EPR bigint, PARTNERROLE_EPR bigint, PROCESS bigint, SCOPE bigint,
SVCNAME varchar(255), MYROLE varchar(100), MODELID integer, MYSESSIONID
varchar(255), PARTNERSESSIONID varchar(255), INSERT_TIME timestamp, MLOCK
integer not null, primary key (ID));
create table BPEL_PROCESS (ID bigint GENERATED ALWAYS AS IDENTITY not null,
PROCID varchar(255) not null unique, deployer varchar(255), deploydate
timestamp, type_name varchar(255), type_ns varchar(255), version bigint,
ACTIVE_ smallint, guid varchar(255), INSERT_TIME timestamp, MLOCK integer
not null, primary key (ID));
create table BPEL_SCOPE (ID bigint GENERATED ALWAYS AS IDENTITY not null,
PIID bigint, PARENT_SCOPE_ID bigint, STATE varchar(255) not null, NAME
varchar(255) not null, MODELID integer, INSERT_TIME timestamp, MLOCK integer
not null, primary key (ID));
create table BPEL_SELECTORS (ID bigint GENERATED ALWAYS AS IDENTITY not
null, PIID bigint not null, SELGRPID varchar(255) not null, IDX integer not
null, CORRELATION_KEY varchar(255) not null, CORRELATOR bigint not null,
INSERT_TIME timestamp, MLOCK integer not null, primary key (ID), unique
(CORRELATION_KEY, CORRELATOR));
create table BPEL_UNMATCHED (ID bigint GENERATED ALWAYS AS IDENTITY not
null, MEX bigint, CORRELATION_KEY varchar(255), CORRELATOR bigint not null,
INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
create table BPEL_XML_DATA (ID bigint GENERATED ALWAYS AS IDENTITY not null,
LDATA_ID bigint, NAME varchar(255) not null, SCOPE_ID bigint, PIID bigint,
IS_SIMPLE_TYPE smallint, INSERT_TIME timestamp, MLOCK integer not null,
primary key (ID));
create table LARGE_DATA (ID bigint GENERATED ALWAYS AS IDENTITY not null,
BIN_DATA blob(2000M) NOT LOGGED, INSERT_TIME timestamp, MLOCK integer not
null, primary key (ID));
create table VAR_PROPERTY (ID bigint GENERATED ALWAYS AS IDENTITY not null,
XML_DATA_ID bigint, PROP_VALUE varchar(255), PROP_NAME varchar(255) not
null, INSERT_TIME timestamp, MLOCK integer not null, primary key (ID));
alter table BPEL_ACTIVITY_RECOVERY add constraint FK94D24A7B14020712 foreign
key (LDATA_ID) references LARGE_DATA;
alter table BPEL_ACTIVITY_RECOVERY add constraint FK94D24A7BA821BBE1 foreign
key (PIID) references BPEL_INSTANCE;
alter table BPEL_CORRELATION_PROP add constraint FK4EC9DDAA4D0B7982 foreign
key (CORR_SET_ID) references BPEL_CORRELATION_SET;
alter table BPEL_CORRELATION_SET add constraint FKB838191BA821BBE1 foreign
key (PIID) references BPEL_INSTANCE;
alter table BPEL_CORRELATION_SET add constraint FKB838191B6D49C363 foreign
key (SCOPE_ID) references BPEL_SCOPE;
alter table BPEL_CORRELATION_SET add constraint FKB838191BE15A2343 foreign
key (PROCESS_ID) references BPEL_PROCESS;
create index IDX_CORRELATOR_CID on BPEL_CORRELATOR (CID);
alter table BPEL_CORRELATOR add constraint FKF50EFA33E15A2343 foreign key
(PROCESS_ID) references BPEL_PROCESS;
create index IDX_BPEL_CORRELATOR_MESSAGE_CKEY on
BPEL_CORRELATOR_MESSAGE_CKEY (CKEY);
alter table BPEL_CORRELATOR_MESSAGE_CKEY add constraint FK8997F700EEFA7470
foreign key (CORRELATOR_MESSAGE_ID) references BPEL_UNMATCHED;
alter table BPEL_EVENT add constraint FKAA6D673014020712 foreign key
(LDATA_ID) references LARGE_DATA;
alter table BPEL_EVENT add constraint FKAA6D6730A7EED251 foreign key (IID)
references BPEL_INSTANCE;
alter table BPEL_EVENT add constraint FKAA6D6730C831CBE3 foreign key (PID)
references BPEL_PROCESS;
alter table BPEL_FAULT add constraint FKAA722EB814020712 foreign key
(LDATA_ID) references LARGE_DATA;
alter table BPEL_INSTANCE add constraint FKE1DED41FDD43DBE1 foreign key
(INSTANTIATING_CORRELATOR) references BPEL_CORRELATOR;
alter table BPEL_INSTANCE add constraint FKE1DED41F6B66C85F foreign key
(JACOB_STATE) references LARGE_DATA;
alter table BPEL_INSTANCE add constraint FKE1DED41F1F3C9CB7 foreign key
(FAULT) references BPEL_FAULT;
alter table BPEL_INSTANCE add constraint FKE1DED41FE15A2343 foreign key
(PROCESS_ID) references BPEL_PROCESS;
alter table BPEL_MESSAGE add constraint FK4FA7231DCA00A413 foreign key (MEX)
references BPEL_MESSAGE_EXCHANGE;
alter table BPEL_MESSAGE add constraint FK4FA7231DEB2C9ED8 foreign key
(DATA) references LARGE_DATA;
alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD0589D7A75B foreign
key (LDATA_CEPR_ID) references LARGE_DATA;
alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05BBA250D foreign
key (PARTNERLINK) references BPEL_PLINK_VAL;
alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05A821BBE1 foreign
key (PIID) references BPEL_INSTANCE;
alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05C29670AA foreign
key (LDATA_EPR_ID) references LARGE_DATA;
alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05B5BD38C7 foreign
key (PROCESS) references BPEL_PROCESS;
alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD05FF451031 foreign
key (RESPONSE) references BPEL_MESSAGE;
alter table BPEL_MESSAGE_EXCHANGE add constraint FKBDA6BD0554DCEE7F foreign
key (REQUEST) references BPEL_MESSAGE;
alter table BPEL_MEX_PROPS add constraint FK203CAFC7CA00A413 foreign key
(MEX) references BPEL_MESSAGE_EXCHANGE;
alter table BPEL_PLINK_VAL add constraint FK7D71E742F2BC5EB8 foreign key
(MYROLE_EPR) references LARGE_DATA;
alter table BPEL_PLINK_VAL add constraint FK7D71E74290D95ED1 foreign key
(SCOPE) references BPEL_SCOPE;
alter table BPEL_PLINK_VAL add constraint FK7D71E742B5BD38C7 foreign key
(PROCESS) references BPEL_PROCESS;
alter table BPEL_PLINK_VAL add constraint FK7D71E742D4B651D4 foreign key
(PARTNERROLE_EPR) references LARGE_DATA;
alter table BPEL_SCOPE add constraint FKAB2A32EAA821BBE1 foreign key (PIID)
references BPEL_INSTANCE;
alter table BPEL_SCOPE add constraint FKAB2A32EA42B20B58 foreign key
(PARENT_SCOPE_ID) references BPEL_SCOPE;
create index IDX_SELECTOR_CORRELATOR on BPEL_SELECTORS (CORRELATOR);
create index IDX_SELECTOR_CKEY on BPEL_SELECTORS (CORRELATION_KEY);
create index IDX_SELECTOR_SELGRPID on BPEL_SELECTORS (SELGRPID);
alter table BPEL_SELECTORS add constraint FKF1F83A0AA821BBE1 foreign key
(PIID) references BPEL_INSTANCE;
alter table BPEL_SELECTORS add constraint FKF1F83A0A875201C9 foreign key
(CORRELATOR) references BPEL_CORRELATOR;
create index IDX_UNMATCHED_CKEY on BPEL_UNMATCHED (CORRELATION_KEY);
create index IDX_UNMATCHED_CORRELATOR on BPEL_UNMATCHED (CORRELATOR);
alter table BPEL_UNMATCHED add constraint FKF0663E01CA00A413 foreign key
(MEX) references BPEL_MESSAGE_EXCHANGE;
alter table BPEL_UNMATCHED add constraint FKF0663E01875201C9 foreign key
(CORRELATOR) references BPEL_CORRELATOR;
alter table BPEL_XML_DATA add constraint FKB7D47E7C14020712 foreign key
(LDATA_ID) references LARGE_DATA;
alter table BPEL_XML_DATA add constraint FKB7D47E7CA821BBE1 foreign key
(PIID) references BPEL_INSTANCE;
alter table BPEL_XML_DATA add constraint FKB7D47E7C6D49C363 foreign key
(SCOPE_ID) references BPEL_SCOPE;
alter table VAR_PROPERTY add constraint FK9C1E2C0DA48E25F2 foreign key
(XML_DATA_ID) references BPEL_XML_DATA;
create table hibernate_unique_key ( next_hi integer );
insert into hibernate_unique_key values ( 0 );
create table STORE_DU (NAME varchar(255) not null, deployer varchar(255),
DEPLOYDT timestamp, DIR varchar(255), primary key (NAME));
create table STORE_PROCESS (PID varchar(255) not null, DU varchar(255), TYPE
varchar(255), version bigint, STATE varchar(255), primary key (PID));
create table STORE_PROCESS_PROP (propId varchar(255) not null, value
varchar(255), name varchar(255) not null, primary key (propId, name));
create table STORE_VERSIONS (ID integer not null, VERSION bigint, primary
key (ID));
alter table STORE_PROCESS add constraint FKA83900D1BFFFC58C foreign key (DU)
references STORE_DU;
alter table STORE_PROCESS_PROP add constraint FKFD6C2E119ADDA5CB foreign key
(propId) references STORE_PROCESS;
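
A quick sanity check of the driver and the freshly created schema can save
time before wiring ODE in. The following standalone class is only a sketch:
the driver class and connection URL match the db2-ds.xml example further
below, and the database name, user and password (testode / username /
password) are just the placeholders used throughout this HowTo - adjust them
to your environment.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OdeDb2SchemaCheck {
    public static void main(String[] args) throws Exception {
        // Same driver class and URL style as in db2-ds.xml below.
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/testode", "username", "password");
        try {
            Statement st = con.createStatement();
            // If the schema scripts ran successfully, this returns a (probably zero) count.
            ResultSet rs = st.executeQuery("select count(*) from ODE_JOB");
            while (rs.next()) {
                System.out.println("ODE_JOB rows: " + rs.getLong(1));
            }
            rs.close();
            st.close();
        } finally {
            con.close();
        }
    }
}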


(3) configure a DB2 user

- there should already be a user configured (e.g. username)
- set the password of this user within the Windows user management (e.g.
password)


------------------
### db2jcc.jar ###
------------------

(4) copy db2jcc.jar to [JBOSS_HOME]\server\[server]\lib\db2jcc.jar


----------------------------
### ode-axis2.properties ###
----------------------------

(5) create a file named ode-axis2.properties within the directory
[JBOSS_HOME]\server\[server]\deploy\ode.war\WEB-INF\conf

(6) edit ode-axis2.properties

ode-axis2.db.mode=EXTERNAL
ode-axis2.db.ext.dataSource=java:[ODE JNDI NAME, e.g. ODEDS]


------------------
### db2-ds.xml ###
------------------

(7) copy [JBOSS_HOME]\docs\examples\jca\db2-ds.xml to
[JBOSS_HOME]\server\[server]\deploy\db2-ds.xml

(8) edit db2-ds.xml

<datasources>
  <local-tx-datasource>
    <jndi-name>[ODE JNDI NAME, e.g. ODEDS]</jndi-name>
    <connection-url>jdbc:db2://localhost:50000/[DATABASENAME, e.g.
testode]</connection-url>
    <driver-class>com.ibm.db2.jcc.DB2Driver</driver-class>
    <user-name>[DBUSER, e.g. username - you must not use
db2admin]</user-name>
    <password>[DBUSERPASSWORD, e.g. password - it must not be
blank]</password>
    <min-pool-size>10</min-pool-size>
        <!-- sql to call when connection is created
        <new-connection-sql>some arbitrary sql</new-connection-sql>
        -->

        <!-- sql to call on an existing pooled connection when it is
obtained from pool 
        <check-valid-connection-sql>some arbitrary
sql</check-valid-connection-sql>
        -->

      <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml
(optional) -->
      <metadata>
         <type-mapping>DB2</type-mapping>
      </metadata>
  </local-tx-datasource>

</datasources>
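
The jndi-name above is what the ode-axis2.db.ext.dataSource property points
at. Purely to illustrate that wiring (this is not ODE code, and whether the
lookup string has to be java:ODEDS or java:/ODEDS may depend on the JBoss
version, so treat it as an assumption), code running inside JBoss would
resolve the datasource roughly like this:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OdeDataSourceLookup {
    // Illustrative only: resolves the JNDI name bound by db2-ds.xml,
    // i.e. the same value configured in ode-axis2.properties.
    public static Connection openOdeConnection() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:ODEDS");
        return ds.getConnection();
    }
}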


-----------------------
### simple scheduler ###
-----------------------

(9) copy the modified simple scheduler jar to
[JBOSS_HOME]\server\[server]\deploy\ode.war\WEB-INF\lib

- in our case: ode-scheduler-simple-1.1.1-SNAPSHOT.jar


----------------------------------------------
### Modification within class JdbcDelegate ###
----------------------------------------------

/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 * 
 * UPDATED by Stefan Jakoubi and Simon Tjoa 
 * PERFORMED CHANGES: 
 * This class has been modified by Stefan Jakoubi and Simon Tjoa 
 *  (Secure Business Austria) to enable the use of the scalar function MOD
 *  within prepared statements. Thus, the scalar function MOD has been
 *  replaced with the formula DB2 uses internally to determine MOD results:
 *  MOD(x, y) = x - y * (x / y)
 */

package org.apache.ode.scheduler.simple;

import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.Serializable;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import javax.sql.DataSource;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.ode.utils.StreamUtils;

/**
 * JDBC-based implementation of the {@link DatabaseDelegate} interface.
 * Should work with most reasonably behaved databases.
 *
 * @author Maciej Szefler ( m s z e f l e r @ g m a i l . c o m ),
 *         modified by Stefan Jakoubi and Simon Tjoa
 */
public class JdbcDelegate implements DatabaseDelegate {

    private static final Log __log = LogFactory.getLog(JdbcDelegate.class);

    private static final String DELETE_JOB = "delete from ODE_JOB where
jobid = ? and nodeid = ?";

    private static final String UPDATE_REASSIGN = "update ODE_JOB set nodeid
= ?, scheduled = 0 where nodeid = ?";

    private static final String UPGRADE_JOB_DEFAULT = "update ODE_JOB set
nodeid = ? where nodeid is null and scheduled = 0 "
            + "and mod(ts,?) = ? and ts < ?";

    private static final String UPGRADE_JOB_SQLSERVER = "update ODE_JOB set
nodeid = ? where nodeid is null and scheduled = 0 "
        + "and (ts % ?) = ? and ts < ?";

    // UPDATED by Stefan Jakoubi and Simon Tjoa
    private static final String UPGRADE_JOB_DB2 = "update ODE_JOB set nodeid
= ? where nodeid is null and scheduled = 0 " +  
        " and (BIGINT(ts - ? * BIGINT(ts/?)) = ?) and ts < ? ";

    private static final String SAVE_JOB = "insert into ODE_JOB "
            + " (jobid, nodeid, ts, scheduled, transacted, details)
values(?, ?, ?, ?, ?, ?)";

    private static final String GET_NODEIDS = "select distinct nodeid from
ODE_JOB";

    private static final String SCHEDULE_IMMEDIATE = "select jobid, ts,
transacted, scheduled, details from ODE_JOB "
            + "where nodeid = ? and scheduled = 0 and ts < ? order by ts";

    private static final String UPDATE_SCHEDULED = "update ODE_JOB set
scheduled = 1 where jobid in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";

    private static final int UPDATE_SCHEDULED_SLOTS = 10;

    private DataSource _ds;

    private Dialect _dialect;
    
    public JdbcDelegate(DataSource ds) {
        _ds = ds;
        _dialect = guessDialect();
    }

    public boolean deleteJob(String jobid, String nodeId) throws
DatabaseException {
        if (__log.isDebugEnabled())
            __log.debug("deleteJob " + jobid + " on node " + nodeId);

        Connection con = null;
        PreparedStatement ps = null;
        try {
            con = getConnection();
            ps = con.prepareStatement(DELETE_JOB);
            ps.setString(1, jobid);
            ps.setString(2, nodeId);
            return ps.executeUpdate() == 1;
        } catch (SQLException se) {
            throw new DatabaseException(se);
        } finally {
            close(ps);
            close(con);
        }
    }

    public List<String> getNodeIds() throws DatabaseException {
        Connection con = null;
        PreparedStatement ps = null;
        try {
            con = getConnection();
            ps = con.prepareStatement(GET_NODEIDS,
ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            ResultSet rs = ps.executeQuery();
            ArrayList<String> nodes = new ArrayList<String>();
            while (rs.next()) {
                String nodeId = rs.getString(1);
                if (nodeId != null)
                    nodes.add(rs.getString(1));
            }
            if (__log.isDebugEnabled())
                __log.debug("getNodeIds: " + nodes);
            return nodes;
        } catch (SQLException se) {
            throw new DatabaseException(se);
        } finally {
            close(ps);
            close(con);
        }
    }

    public boolean insertJob(Job job, String nodeId, boolean loaded) throws
DatabaseException {
        if (__log.isDebugEnabled())
            __log.debug("insertJob " + job.jobId + " on node " + nodeId + "
loaded=" + loaded);

        Connection con = null;
        PreparedStatement ps = null;
        try {
            con = getConnection();
            ps = con.prepareStatement(SAVE_JOB);
            ps.setString(1, job.jobId);
            ps.setString(2, nodeId);
            ps.setLong(3, job.schedDate);
            ps.setInt(4, asInteger(loaded));
            ps.setInt(5, asInteger(job.transacted));
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try {
                StreamUtils.write(bos, (Serializable) job.detail);
            } catch (Exception ex) {
                __log.error("Error serializing job detail: " + job.detail);
                throw new DatabaseException(ex);
            }
            ps.setBytes(6, bos.toByteArray());
            return ps.executeUpdate() == 1;
        } catch (SQLException se) {
            throw new DatabaseException(se);
        } finally {
            close(ps);
            close(con);
        }
    }

    @SuppressWarnings("unchecked")
    public List<Job> dequeueImmediate(String nodeId, long maxtime, int
maxjobs) throws DatabaseException {
        ArrayList<Job> ret = new ArrayList<Job>(maxjobs);
        Connection con = null;
        PreparedStatement ps = null;
        try {
            con = getConnection();
            ps = con.prepareStatement(SCHEDULE_IMMEDIATE);
            ps.setString(1, nodeId);
            ps.setLong(2, maxtime);
            ps.setMaxRows(maxjobs);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                Map<String, Object> details;
                try {
                    ObjectInputStream is = new
ObjectInputStream(rs.getBinaryStream(5));
                    details = (Map<String, Object>) is.readObject();
                    is.close();
                } catch (Exception e) {
                    throw new DatabaseException("Error deserializing job
details", e);
                }
                Job job = new Job(rs.getLong(2), rs.getString(1),
asBoolean(rs.getInt(3)), details);
                ret.add(job);
            }
            rs.close();
            ps.close();
            
            // mark jobs as scheduled, UPDATE_SCHEDULED_SLOTS at a time
            int j = 0;
            int updateCount = 0;
            ps = con.prepareStatement(UPDATE_SCHEDULED);
            for (int updates = 1; updates <= (ret.size() /
UPDATE_SCHEDULED_SLOTS) + 1; updates++) {
                for (int i = 1; i <= UPDATE_SCHEDULED_SLOTS; i++) {
                    ps.setString(i, j < ret.size() ? ret.get(j).jobId : "");
                    j++;
                }
                ps.execute();
                updateCount += ps.getUpdateCount();
            }
            if (updateCount != ret.size()) {
                throw new DatabaseException(
                        "Updating scheduled jobs failed to update all jobs;
expected=" + ret.size()
                                + " actual=" + updateCount);
            }
        } catch (SQLException se) {
            throw new DatabaseException(se);
        } finally {
            close(ps);
            close(con);
        }
        return ret;
    }

    public int updateReassign(String oldnode, String newnode) throws
DatabaseException {
        if (__log.isDebugEnabled())
            __log.debug("updateReassign from " + oldnode + " ---> " +
newnode);
        Connection con = null;
        PreparedStatement ps = null;
        try {
            con = getConnection();
            ps = con.prepareStatement(UPDATE_REASSIGN);
            ps.setString(1, newnode);
            ps.setString(2, oldnode);
            return ps.executeUpdate();
        } catch (SQLException se) {
            throw new DatabaseException(se);
        } finally {
            close(ps);
            close(con);
        }
    }

    public int updateAssignToNode(String node, int i, int numNodes, long
maxtime) throws DatabaseException {
        if (__log.isDebugEnabled())
            __log.debug("updateAssignToNode node=" + node + " " + i + "/" +
numNodes + " maxtime=" + maxtime);
        Connection con = null;
        PreparedStatement ps = null;
        try {
            con = getConnection();
            // UPDATED by Stefan Jakoubi and Simon Tjoa
            if (_dialect == Dialect.DB2) {
               ps = con.prepareStatement(UPGRADE_JOB_DB2);
               ps.setString(1, node);
               ps.setInt(2, numNodes);
               ps.setInt(3, numNodes);
               ps.setInt(4, i);
               ps.setLong(5, maxtime);
            } else
            if (_dialect == Dialect.SQLSERVER) {
                // Pulled out so that a new prepared statement for DB2 could be introduced
                ps = con.prepareStatement(UPGRADE_JOB_SQLSERVER);
                ps.setString(1, node);
                ps.setInt(2, numNodes);
                ps.setInt(3, i);
                ps.setLong(4, maxtime);
            } else {
                // Pulled out so that a new prepared statement for DB2 could be introduced
                ps = con.prepareStatement(UPGRADE_JOB_DEFAULT);
                ps.setString(1, node);
                ps.setInt(2, numNodes);
                ps.setInt(3, i);
                ps.setLong(4, maxtime);
            }
                                         
             return ps.executeUpdate();
        } catch (SQLException se) {
            throw new DatabaseException(se);
        } finally {
            close(ps);
            close(con);
        }
    }

    private Connection getConnection() throws SQLException {
        return _ds.getConnection();
    }

    private int asInteger(boolean value) {
        return (value ? 1 : 0);
    }

    private boolean asBoolean(int value) {
        return (value != 0);
    }

    private void close(PreparedStatement ps) {
        if (ps != null) {
            try {
                ps.close();
            } catch (Exception e) {
                __log.warn("Exception while closing prepared statement", e);
            }
        }
    }

    private void close(Connection con) {
        if (con != null) {
            try {
                con.close();
            } catch (Exception e) {
                __log.warn("Exception while closing connection", e);
            }
        }
    }

    private Dialect guessDialect() {
        Dialect d = Dialect.UNKNOWN;
        Connection con = null;
        try {
            con = getConnection();
            DatabaseMetaData metaData = con.getMetaData();
            if (metaData != null) {
                String dbProductName = metaData.getDatabaseProductName();
                int dbMajorVer = metaData.getDatabaseMajorVersion();
                __log.debug("Using database " + dbProductName + " major
version " + dbMajorVer);
                if (dbProductName.indexOf("DB2") >= 0) {
                    d = Dialect.DB2;
                } else if (dbProductName.indexOf("Derby") >= 0) {
                    d = Dialect.DERBY;
                } else if (dbProductName.indexOf("Firebird") >= 0) {
                    d = Dialect.FIREBIRD;
                } else if (dbProductName.indexOf("HSQL") >= 0) {
                    d = Dialect.HSQL;
                } else if (dbProductName.indexOf("Microsoft SQL") >= 0) {
                    d = Dialect.SQLSERVER;
                } else if (dbProductName.indexOf("MySQL") >= 0) {
                    d = Dialect.MYSQL;
                } else if (dbProductName.indexOf("Sybase") >= 0) {
                    d = Dialect.SYBASE;
                }
            }
        } catch (SQLException e) {
            __log.warn("Unable to determine database dialect", e);
        } finally {
            close(con);
        }
        __log.info("Using database dialect: " + d);
        return d;
    }

    enum Dialect {
        DB2, DERBY, FIREBIRD, HSQL, MYSQL, ORACLE, SQLSERVER, SYBASE,
UNKNOWN 
    }
    
}
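
As a worked illustration of the rewrite used in UPGRADE_JOB_DB2 above -
MOD(x, y) = x - y * (x / y) under integer division - the throwaway snippet
below shows that both forms agree, and why numNodes ends up bound twice
(parameters 2 and 3) in the DB2 branch of updateAssignToNode. The sample
values are made up.

public class ModRewriteCheck {
    public static void main(String[] args) {
        long ts = 1209984000123L;   // sample job timestamp (ODE stores ts as a bigint)
        int numNodes = 3;           // sample cluster size

        long viaMod = ts % numNodes;                        // what mod(ts, numNodes) computes
        long viaRewrite = ts - numNodes * (ts / numNodes);  // DB2-friendly rewrite

        // Prints the same value twice, confirming the identity.
        System.out.println("mod     = " + viaMod);
        System.out.println("rewrite = " + viaRewrite);
    }
}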






Stefan Jakoubi wrote:
> 
> Hi Mathieu,
> 
> thanks for your fast reply!
> 
>> We use XDoclet annotations in the source code to generate the mapping. So
>> you'll have to change those if you want to alter it. Alternatively you
>> can
>> run on the generated mappings but that's probably a pain (updates). But
>> why
>> do you want to change the mapping?
> 
> Furthermore, thank you for outlining the "where to find" - this helps us a
> lot!!!
> 
> We unfortunately have to alter the tables to conform our partner's
> requirements:
> (1) project specific tablename prefix
> (2) "ID" column not allowed -> "tablename_ID"
> (3) Reserved words (DB2) such as "TYPE" are not allowed as column name
> 
> 
> Question: shall I post/upload the changes we have made within the simple
> scheduler?
> 
> Thanks,
> Stefan
> 
> 
> 
> 
> 
> 
> Matthieu Riou-5 wrote:
>> 
>> On Fri, Apr 25, 2008 at 4:26 AM, Stefan Jakoubi <
>> sjakoubi@securityresearch.at> wrote:
>> 
>>> Dear ODE community,
>>>
>>>
>>>
>>> in this mail I (a) follow up a topic raised from a colleague
>>> (http://ode.markmail.org/message/gyoxq5cz3uruizoc?q=tjoa) of mine and
>>> (b) raise myself a question concerning configuring ODE for DB2.
>>>
>>>
>>>
>>> @ (a):
>>>
>>> When migrating ODE to DB2 there is a problem within the simple
>>> scheduler. In particular, the scalar function MOD(x, y) within a
>>> prepared statement throws an awful exception.
>>>
>>> The solution is to replace the MOD(x, y) within the prepared statement
>>> with: x - y * (x / y). This is according to how DB2 internally
>>> determines the result for the scalar function MOD.
>>>
>>>
>> Thanks for letting us know.
>> 
>> 
>>>
>>>
>>> @ (b):
>>>
>>> Can anybody please provide a complete table schema ODE requires?
>>>
>> 
>> You'll get several ones for different databases in
>> dao-hibernate-db/target
>> after a build.
>> 
>> 
>>>
>>> Does anybody know where to configure all required Hibernate mappings so
>>> that ODE table names may be altered?
>>>
>> 
>> We use XDoclet annotations in the source code to generate the mapping. So
>> you'll have to change those if you want to alter it. Alternatively you
>> can
>> run on the generated mappings but that's probably a pain (updates). But
>> why
>> do you want to change the mapping?
>> 
>> 
>>>
>>> When switching persistence to Hibernate, is it true that openjpa is not
>>> used any more (and thus, no jpa mappings have to be modified in case
>>> that I alter table names?)
>>>
>>>
>> Right, it's either Hibernate or OpenJPA; they don't coexist.
>> 
>> 
>>>
>>>
>>> Finally I want to tell that as soon as we get a solution for
>>> JBOSS/ODE/DB2 we will contribute at Jira!
>>>
>>>
>> Thanks!
>> 
>> Matthieu
>> 
>> 
>>>
>>>
>>> Thanks in advance & cheers,
>>>
>>> Stefan
>>>
>>>
>>>
>>>
>>>
>>>
>> 
>> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Hibernate-mappings-tp16899339p16939634.html
Sent from the Apache Ode User mailing list archive at Nabble.com.


Logging from BPEL script

Posted by "Nowakowski, Mateusz" <Ma...@sabre-holdings.com>.
Hi

 

Is there any possibility to log some information from a BPEL script?

 

I am thinking about some additional activity. There probably isn't such an
activity, but as far as I know it is possible to implement and add a custom
activity to BPEL.

 

How can I add a new activity to ODE? Is there any description of how to do
that?
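
In the meantime, one workaround I could imagine (just a rough sketch - the
class and method names are made up, this is not an ODE feature) is a plain
logging service that the process simply invokes like any other partner:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Rough sketch of a POJO that could be exposed as a web service and
// called from the process via <invoke> to write log entries.
public class BpelLogService {

    private static final Log __log = LogFactory.getLog(BpelLogService.class);

    // Single operation the process would invoke with a message string.
    public void log(String message) {
        __log.info(message);
    }
}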

 

 

-- 

Regards

Mateusz Nowakowski

 


Re: Hibernate mappings

Posted by Stefan Jakoubi <sj...@securityresearch.at>.
Hi Mathieu,

thanks for your fast reply!

> We use XDoclet annotations in the source code to generate the mapping. So
> you'll have to change those if you want to alter it. Alternatively you can
> run on the generated mappings but that's probably a pain (updates). But
> why
> do you want to change the mapping?

Furthermore, thank you for outlining the "where to find" - this helps us a
lot!!!

We unfortunately have to alter the tables to conform to our partner's
requirements:
(1) project-specific table name prefix
(2) "ID" column not allowed -> "tablename_ID"
(3) Reserved words (DB2) such as "TYPE" are not allowed as column names


Question: shall I post/upload the changes we have made within the simple
scheduler?

Thanks,
Stefan






Matthieu Riou-5 wrote:
> 
> On Fri, Apr 25, 2008 at 4:26 AM, Stefan Jakoubi <
> sjakoubi@securityresearch.at> wrote:
> 
>> Dear ODE community,
>>
>>
>>
>> in this mail I (a) follow up a topic raised from a colleague
>> (http://ode.markmail.org/message/gyoxq5cz3uruizoc?q=tjoa) of mine and
>> (b) raise myself a question concerning configuring ODE for DB2.
>>
>>
>>
>> @ (a):
>>
>> When migrating ODE to DB2 there is a problem within the simple
>> scheduler. In particular, the scalar function MOD(x, y) within a
>> prepared statement throws an awful exception.
>>
>> The solution is to replace the MOD(x, y) within the prepared statement
>> with: x - y * (x / y). This is according to how DB2 internally
>> determines the result for the scalar function MOD.
>>
>>
> Thanks for letting us know.
> 
> 
>>
>>
>> @ (b):
>>
>> Can anybody please provide a complete table schema ODE requires?
>>
> 
> You'll get several ones for different databases in dao-hibernate-db/target
> after a build.
> 
> 
>>
>> Does anybody know where to configure all required Hibernate mappings so
>> that ODE table names may be altered?
>>
> 
> We use XDoclet annotations in the source code to generate the mapping. So
> you'll have to change those if you want to alter it. Alternatively you can
> run on the generated mappings but that's probably a pain (updates). But
> why
> do you want to change the mapping?
> 
> 
>>
>> When switching persistence to Hibernate, is it true that openjpa is not
>> used any more (and thus, no jpa mappings have to be modified in case
>> that I alter table names?)
>>
>>
> Right, it's either Hibernate or OpenJPA; they don't coexist.
> 
> 
>>
>>
>> Finally I want to tell that as soon as we get a solution for
>> JBOSS/ODE/DB2 we will contribute at Jira!
>>
>>
> Thanks!
> 
> Matthieu
> 
> 
>>
>>
>> Thanks in advance & cheers,
>>
>> Stefan
>>
>>
>>
>>
>>
>>
> 
> 

-- 
View this message in context: http://www.nabble.com/Hibernate-mappings-tp16899339p16930965.html
Sent from the Apache Ode User mailing list archive at Nabble.com.


Re: Hibernate mappings

Posted by Matthieu Riou <ma...@offthelip.org>.
On Fri, Apr 25, 2008 at 4:26 AM, Stefan Jakoubi <
sjakoubi@securityresearch.at> wrote:

> Dear ODE community,
>
>
>
> in this mail I (a) follow up a topic raised from a colleague
> (http://ode.markmail.org/message/gyoxq5cz3uruizoc?q=tjoa) of mine and
> (b) raise myself a question concerning configuring ODE for DB2.
>
>
>
> @ (a):
>
> When migrating ODE to DB2 there is a problem within the simple
> scheduler. In particular, the scalar function MOD(x, y) within a
> prepared statement throws an awful exception.
>
> The solution is to replace the MOD(x, y) within the prepared statement
> with: x - y * (x / y). This is according to how DB2 internally
> determines the result for the scalar function MOD.
>
>
Thanks for letting us know.


>
>
> @ (b):
>
> Can anybody please provide a complete table schema ODE requires?
>

You'll get several ones for different databases in dao-hibernate-db/target
after a build.


>
> Does anybody know where to configure all required Hibernate mappings so
> that ODE table names may be altered?
>

We use XDoclet annotations in the source code to generate the mapping. So
you'll have to change those if you want to alter it. Alternatively you can
run on the generated mappings but that's probably a pain (updates). But why
do you want to change the mapping?
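
For illustration only (the class, table and column names below are invented,
not ODE's actual DAO classes), renaming a table and avoiding a reserved column
name would look roughly like this in the XDoclet tags that the Hibernate
mappings are generated from:

/**
 * Sketch only - invented names, just to show where the tags go.
 *
 * @hibernate.class table="MYPRJ_ODE_JOB"
 */
public class ExampleJobDao {

    private String jobId;
    private String jobType;

    /**
     * Renamed from "ID" to follow a "tablename_ID" convention.
     *
     * @hibernate.id column="ODE_JOB_ID" generator-class="assigned"
     */
    public String getJobId() { return jobId; }
    public void setJobId(String jobId) { this.jobId = jobId; }

    /**
     * "TYPE" is a DB2 reserved word, so map the property to another column.
     *
     * @hibernate.property column="JOB_TYPE"
     */
    public String getJobType() { return jobType; }
    public void setJobType(String jobType) { this.jobType = jobType; }
}

Regenerating the mappings after changing the tags would then produce the
altered table and column names in the generated mapping files.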


>
> When switching persistence to Hibernate, is it true that openjpa is not
> used any more (and thus, no jpa mappings have to be modified in case
> that I alter table names?)
>
>
Right, it's either Hibernate or OpenJPA; they don't coexist.


>
>
> Finally I want to tell that as soon as we get a solution for
> JBOSS/ODE/DB2 we will contribute at Jira!
>
>
Thanks!

Matthieu


>
>
> Thanks in advance & cheers,
>
> Stefan
>
>
>
>
>
>