Posted to dev@jackrabbit.apache.org by Randall Hauch <rh...@gmail.com> on 2012/08/03 17:21:26 UTC

Re: JSR-283 official TCK and the 'jackrabbit-jcr-tests'

Hi, Julian:

I agree that the TCK unit tests are in really good shape, especially given that all of our known issues with them will be addressed in the 2.5.1 release (see [1]). Separating the TCK unit tests into their own independent release cycle will definitely require some up-front work, and whether that work will be worth it may depend on several things:

* Relationship to the official TCK

The Jackrabbit TCK page [2] states that Jackrabbit's JCR Tests module is used within the official JSR-283 TCK [3] but is not itself the official TCK. Obviously the former has been maintained and continues to improve, but the official JSR-283 TCK tests appear to have last been released on August 19, 2009. Is it possible that the official JSR-283 TCK tests can be updated, and would having a separate release schedule help with this in any way? (If the official TCK tests cannot be updated, then I presume any project wanting to claim JSR-283 compliance would have to run the official TCK and appeal each of the older, incorrect tests by stating the issue and perhaps using the updated/corrected tests in the Jackrabbit JCR Tests module as "corrections". Is there anything that makes this easier with a non-updated TCK? How does Jackrabbit show spec and TCK compliance?)

* New tests

The official TCK and Jackrabbit's JCR tests unsurprisingly do not completely cover the specification, and doing so would require a significant amount of effort. However, there may be opportunities over time for projects to propose adding new tests. If the JCR Tests' release cycle were independent from Jackrabbit's release cycle, then the process of adding new tests and releasing updated JCR tests might be easier. On the other hand, verification that the tests are valid may become a bit harder. (We'd probably want to define a process to govern such additions and to verify that they are valid.)

For example, the ModeShape project has around 70 additional tests [4] that we'd be willing to donate. Most of these are around verifying administration privileges (e.g., registering namespaces, node types, etc., especially for anonymous users), versioning, and locking. Obviously we'd do this by submitting a proposed patch for the appropriate JCR unit test cases, and we'd want others to verify that the expectations are indeed mandated in the specification. (Any Jackrabbit failures on the proposed tests might be a signal that the tests themselves need closer scrutiny.)
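
To make that concrete, here's a rough sketch of what one of these privilege tests might look like once rewritten against the jackrabbit-jcr-tests conventions. It is not our actual code: the class name is made up and I'm recalling the AbstractJCRTest/RepositoryHelper API (getHelper().getReadOnlySession()) from memory, so please treat it purely as an illustration.

import javax.jcr.AccessDeniedException;
import javax.jcr.NamespaceRegistry;
import javax.jcr.Session;

import org.apache.jackrabbit.test.AbstractJCRTest;

/**
 * Hypothetical example of a donated privilege test: a non-administrative
 * session must not be able to register a new namespace.
 */
public class AnonymousNamespaceRegistrationTest extends AbstractJCRTest {

    public void testReadOnlySessionCannotRegisterNamespace() throws Exception {
        // Assumes the standard jackrabbit-jcr-tests helper for obtaining a
        // session with read-only credentials.
        Session readOnly = getHelper().getReadOnlySession();
        try {
            NamespaceRegistry registry =
                    readOnly.getWorkspace().getNamespaceRegistry();
            registry.registerNamespace("tcktest", "http://example.com/tcktest");
            fail("A read-only session must not be able to register a namespace");
        } catch (AccessDeniedException expected) {
            // Expected: only administrative sessions may modify the registry.
        } finally {
            readOnly.logout();
        }
    }
}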

Additionally, the Oak effort is already running the JCR tests and has recently found/fixed issues, too. Can those participating in Oak say how much they might expect to simply reuse the existing tests vs. add new ones?


Best regards,

Randall Hauch
Project lead, ModeShape

[1] http://markmail.org/thread/isihzvirvrc4idpp
[2] http://jackrabbit.apache.org/jackrabbit-jcr-tests.html
[3] http://www.day.com/day/en/products/jcr/jsr-283.html
[4] https://github.com/ModeShape/modeshape/blob/master/modeshape-jcr/src/test/java/org/modeshape/jcr/ModeShapeTckTest.java


On Thursday, June 7, 2012 at 11:58 AM, Julian Reschke wrote:

> On 2012-06-07 18:19, Michael Dürig wrote:
> > Hi Randall,
> >  
> > I think forking of the jcr-test module into its own release cycle
> > definitely makes sense. The easiest thing would be to just make it a
> > sub-project of Jackrabbit and move it from trunk/jackrabbit-jcr-tests to
> > jcr-tests/trunk. In addition we'd probably also need a separate issue
> > tracker and to migrate existing issues.
> >  
> > However, this comes with some additional effort and complexity. Apart
> > from the initial move and fixing of existing dependencies we'd need a
> > release manager who takes care of the new sub-project. Not sure whether
> > Alex has enough cycles left to take this over.
> >  
> > Let's see what others think of this and then open an issue from the
> > conclusion.
> >  
> > Michael
> > ...
> >  
>  
>  
> I agree that decoupling is the right thing to do. On the other hand,  
> decoupling also requires (a) some initial work, and (b) ongoing work to  
> actually cut the separate releases.
>  
> It seems that the original problem that triggered this discussion was  
> the amount of changes in trunk that were not present in a non-SNAPSHOT  
> release. In the meantime, 2.5.0 *has* been released, and looking at the  
> Jackrabbit release history, we've made releases every few months.
>  
> So is the trouble really worth it? Who volunteers to take care of this?
>  
> Best regards, Julian  


Re: JSR-283 official TCK and the 'jackrabbit-jcr-tests'

Posted by Randall Hauch <rh...@gmail.com>.
Thanks, Jukka. See below for a few responses/questions. As always, thanks again!


On Friday, August 3, 2012 at 11:51 AM, Jukka Zitting wrote:

> Hi,
> 
> On Fri, Aug 3, 2012 at 5:21 PM, Randall Hauch <rhauch@gmail.com> wrote:
> > The Jackrabbit TCK page [2] states that Jackrabbit's JCR Tests module is
> > used within the official JSR-283 TCK [3] but is not itself the official TCK.
> > Obviously the former has been maintained and continues to improve, but the
> > official JSR-283 TCK tests appear to have last been released on August 19,
> > 2009. Is it possible that the official JSR-283 TCK tests can be updated, and
> > would having a separate release schedule help with this in any way?
> > 
> 
> 
> The only way to update the official JSR-283 TCK is through the JCP
> maintenance process [1].
> 
> > (If the official TCK tests cannot be updated, then I presume any project wanting
> > to claim JSR-283 compliance would have to run the official TCK and appeal each
> > of the older, incorrect tests by stating the issue and perhaps using the
> > updated/corrected tests in the Jackrabbit JCR Tests module as "corrections".
> > 
> 
> 
> Yes, the appeal process is documented in [2] and relies on an exclude
> list of buggy tests maintained by the spec lead. In practice, if you
> have pointers to relevant jackrabbit-jcr-tests bug reports and can
> demonstrate that your implementation passes the test after that issue
> is fixed, the spec lead will be happy to put the test case on the
> exclude list for you.
> 
> 

What is the preferred way to demonstrate that our implementation passes all of the JCR Tests? We obviously run them in our builds, but we could also set up a separate Maven project to do that. I presume there's no easy way to run the official TCK tests while forcibly substituting the latest unofficial JCR Tests. Is there a preferred point of contact, other than jcr@day.com?
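
For what it's worth, my current understanding is that an implementation is wired into the jackrabbit-jcr-tests harness through a RepositoryStub subclass (named, if I remember correctly, in the repositoryStubImpl.properties configuration). The sketch below is hypothetical and simply uses Jackrabbit's TransientRepository as a stand-in for whatever repository is actually under test:

import java.util.Properties;

import javax.jcr.Repository;

import org.apache.jackrabbit.core.TransientRepository;
import org.apache.jackrabbit.test.RepositoryStub;
import org.apache.jackrabbit.test.RepositoryStubException;

/**
 * Hypothetical stub that hands the repository under test to the
 * jackrabbit-jcr-tests harness. A real implementation would bootstrap its
 * own repository here instead of Jackrabbit's TransientRepository.
 */
public class ExampleRepositoryStub extends RepositoryStub {

    private Repository repository;

    public ExampleRepositoryStub(Properties env) {
        super(env);
    }

    @Override
    public synchronized Repository getRepository() throws RepositoryStubException {
        if (repository == null) {
            try {
                // Start (or connect to) the repository under test.
                repository = new TransientRepository();
            } catch (Exception e) {
                throw new RepositoryStubException(e.getMessage());
            }
        }
        return repository;
    }
}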
 
> 
> > Is there anything that makes this easier with a non-updated TCK?
> 
> 
> It's a more light-weight process than doing a maintenance release through JCP.
> 
> > How does Jackrabbit show spec and TCK compliance?
> 
> We rely on the appeals process and exclude list as described above. In
> practice that means that we're good as long as Jackrabbit passes the
> latest version of the tests in jackrabbit-jcr-tests.
> 
> > The official TCK and Jackrabbit's JCR tests unsurprisingly do not completely
> > cover the specification, and doing so would require a significant amount of
> > effort. However, there may be opportunity over time for projects to propose
> > adding new tests.
> > 
> 
> 
> Agreed. Especially with multiple active JCR implementations I think
> there's a shared incentive to maintain compatibility beyond what the
> official TCK tests cover. There's still plenty of room for healthy
> competition on performance, scalability, maintainability and various
> other features and "-ilities" that fall outside the scope of JCR.
> 
> Thinking further, a standalone test codebase would also be a great
> place to cooperate on things like performance benchmarks or other
> tests that go beyond the scope of the JCR spec but that would still
> add value to implementors of content repositories.
> 
> 

It would be great to collaborate on this. We also have the start of a performance test framework (as does Jackrabbit) that requires nothing but Maven and a JDK, though its set of tests is still minimal.
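
As a strawman for what such shared, implementation-neutral tests might look like, here's a minimal benchmark sketch that depends only on the javax.jcr API. The credentials and node names are placeholders; a real harness would supply the Repository instance and collect the timings properly.

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

/**
 * Minimal, implementation-neutral benchmark sketch: measures how long it
 * takes to create and save a batch of nt:unstructured nodes.
 */
public class NodeCreationBenchmark {

    public static long runMillis(Repository repository, int count) throws Exception {
        // Placeholder credentials; a real harness would obtain these from
        // its configuration.
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            Node parent = session.getRootNode().addNode("benchmark");
            long start = System.nanoTime();
            for (int i = 0; i < count; i++) {
                parent.addNode("node" + i, "nt:unstructured");
            }
            session.save();
            return (System.nanoTime() - start) / 1000000L;
        } finally {
            session.logout();
        }
    }
}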
> 
> > If the JCR Tests' release cycle were independent from Jackrabbit's
> > release cycle, then the process of adding new tests and releasing updated
> > JCR tests might be easier. On the other hand, verification that the tests are
> > valid may become a bit harder.
> > 
> 
> 
> Not necessarily. If the tests were maintained outside the main
> Jackrabbit trunk, it would be easier to set up a CI build that runs all
> updates to the TCK codebase against the official JCR RI and latest
> stable versions of Jackrabbit, ModeShape and other JCR implementations
> (with excludes for tests that are known issues for each particular
> implementation). That should make it much easier than now to catch
> problems where the TCK codebase makes assumptions based on just a
> single implementation.
> 
> 

Setting up CI to automatically test against the latest stable releases is a great idea.
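
On the per-implementation excludes: I don't believe jackrabbit-jcr-tests ships a ready-made mechanism for this (please correct me if I'm wrong), but something along the lines of the following sketch would probably be enough for such a CI job. The exclude-file format and the matching on JUnit display names are just assumptions.

import java.io.InputStream;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

import junit.framework.Test;
import junit.framework.TestSuite;

/**
 * Hypothetical helper for a CI job: removes tests that are listed in a
 * per-implementation exclude file (a plain properties file whose keys are
 * the JUnit display names of the known-failing tests).
 */
public class FilteredTckSuite {

    public static Test filter(TestSuite all, String excludeResource) throws Exception {
        Set<String> excluded = new HashSet<String>();
        InputStream in = FilteredTckSuite.class.getResourceAsStream(excludeResource);
        if (in != null) {
            Properties props = new Properties();
            props.load(in);
            excluded.addAll(props.stringPropertyNames());
            in.close();
        }
        TestSuite filtered = new TestSuite(all.getName());
        Enumeration<?> tests = all.tests();
        while (tests.hasMoreElements()) {
            Test test = (Test) tests.nextElement();
            // Match on the test's display name, e.g. "testFoo(SomeTestClass)".
            if (!excluded.contains(test.toString())) {
                filtered.addTest(test);
            }
        }
        return filtered;
    }
}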
 
> 
> > For example, the ModeShape project has around 70 additional tests [4] that
> > we'd be willing to donate. Most of these are around verifying administration
> > privileges (e.g., registering namespaces, node types, etc., especially for
> > anonymous users), versioning, and locking.
> > 
> 
> 
> That would be awesome!
> 
> We also have some extra generic JCR tests, written for the jcr2spi
> component, that could and ideally should be used as a part of the TCK.
> 
We have a lot of query-related tests, too. But those are more basic JUnit tests that don't extend the AbstractJCRTest base class. It'd take a bit of work, but we could probably convert them to something compatible with the JCR Tests.
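
As a rough idea of what such a conversion might produce (purely illustrative: the property value and query are made up, and I'm assuming the usual AbstractJCRTest fixtures like testRootNode, nodeName1, propertyName1 and superuser):

import javax.jcr.Node;
import javax.jcr.query.Query;
import javax.jcr.query.QueryResult;

import org.apache.jackrabbit.test.AbstractJCRTest;

/**
 * Hypothetical example of a plain JUnit query test rewritten against the
 * jackrabbit-jcr-tests base class and its configured test fixtures.
 */
public class SimpleSql2QueryTest extends AbstractJCRTest {

    public void testQueryFindsNodeByProperty() throws Exception {
        // Create test content under the test root provided by the base class.
        Node node = testRootNode.addNode(nodeName1, testNodeType);
        node.setProperty(propertyName1, "tck-query-value");
        superuser.save();

        String stmt = "SELECT * FROM [nt:base] WHERE [" + propertyName1
                + "] = 'tck-query-value'";
        Query query = superuser.getWorkspace().getQueryManager()
                .createQuery(stmt, Query.JCR_SQL2);
        QueryResult result = query.execute();
        assertTrue("Expected at least one result node",
                result.getNodes().hasNext());
    }
}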
 
> 
> > Additionally, the Oak effort is already running the JCR tests and has
> > recently found/fixed issues, too. Can those participating in Oak say how
> > much they might expect to simply reuse the existing tests vs. add new ones?
> > 
> 
> 
> 

> For now our goal is just to pass the already existing TCK tests, but
> I'm sure that over time we'll encounter cases where existing clients
> that assume only standard JCR functionality fail on Oak because of
> some problems in our code. Capturing such cases as TCK tests would be
> quite useful.
> 
> [1] http://jcp.org/en/procedures/jcp2#5
> [2] http://www.day.com/content/day/en/products/jcr/jsr-283/_jcr_content/par/download_1/file.res/jsr-283-tck-appeal.pdf
> 
> BR,
> 
> Jukka Zitting 


Re: JSR-283 official TCK and the 'jackrabbit-jcr-tests'

Posted by Jukka Zitting <ju...@gmail.com>.
Hi,

On Fri, Aug 3, 2012 at 5:21 PM, Randall Hauch <rh...@gmail.com> wrote:
> The Jackrabbit TCK page [2] states that Jackrabbit's JCR Tests module is
> used within the official JSR-283 TCK [3] but is not itself the official TCK.
> Obviously the former has been maintained and continues to improve, but the
> official JSR-283 TCK tests appear to have last been released on August 19,
> 2009. Is it possible that the official JSR-283 TCK tests can be updated, and
> would having a separate release schedule help with this in any way?

The only way to update the official JSR-283 TCK is through the JCP
maintenance process [1].

> (If the official TCK tests cannot be updated, then I presume any project wanting
> to claim JSR-283 compliance would have to run the official TCK and appeal each
> of the older, incorrect tests by stating the issue and perhaps using the
> updated/corrected tests in the Jackrabbit JCR Tests module as "corrections".

Yes, the appeal process is documented in [2] and relies on an exclude
list of buggy tests maintained by the spec lead. In practice, if you
have pointers to relevant jackrabbit-jcr-tests bug reports and can
demonstrate that your implementation passes the test after that issue
is fixed, the spec lead will be happy to put the test case on the
exclude list for you.

> Is there anything that makes this easier with a non-updated TCK?

It's a more light-weight process than doing a maintenance release through JCP.

> How does Jackrabbit show spec and TCK compliance?

We rely on the appeals process and exclude list as described above. In
practice that means that we're good as long as Jackrabbit passes the
latest version of the tests in jackrabbit-jcr-tests.

> The official TCK and Jackrabbit's JCR tests unsurprisingly do not completely
> cover the specification, and doing so would require a significant amount of
> effort. However, there may be opportunity over time for projects to propose
> adding new tests.

Agreed. Especially with multiple active JCR implementations I think
there's a shared incentive to maintain compatibility beyond what the
official TCK tests cover. There's still plenty of room for healthy
competition on performance, scalability, maintainability and various
other features and "-ilities" that fall outside the scope of JCR.

Thinking further, a standalone test codebase would also be a great
place to cooperate on things like performance benchmarks or other
tests that go beyond the scope of the JCR spec but that would still
add value to implementors of content repositories.

> If the JCR Tests' release cycle were independent from Jackrabbit's
> release cycle, then the process of adding new tests and releasing updated
> JCR tests might be easier. On the other hand, verification that the tests are
> valid may become a bit harder.

Not necessarily. If the tests were maintained outside the main
Jackrabbit trunk, it would be easier to set up a CI build that runs all
updates to the TCK codebase against the official JCR RI and latest
stable versions of Jackrabbit, ModeShape and other JCR implementations
(with excludes for tests that are known issues for each particular
implementation). That should make it much easier than now to catch
problems where the TCK codebase makes assumptions based on just a
single implementation.

> For example, the ModeShape project has around 70 additional tests [4] that
> we'd be willing to donate. Most of these are around verifying administration
> privileges (e.g., registering namespaces, node types, etc., especially for
> anonymous users), versioning, and locking.

That would be awesome!

We also have some extra generic JCR tests, written for the jcr2spi
component, that could and ideally should be used as a part of the TCK.

> Additionally, the Oak effort is already running the JCR tests and has
> recently found/fixed issues, too. Can those participating in Oak say how
> much they might expect to simply reuse the existing tests vs. add new ones?

For now our goal is just to pass the already existing TCK tests, but
I'm sure that over time we'll encounter cases where existing clients
that assume only standard JCR functionality fail on Oak because of
some problems in our code. Capturing such cases as TCK tests would be
quite useful.

[1] http://jcp.org/en/procedures/jcp2#5
[2] http://www.day.com/content/day/en/products/jcr/jsr-283/_jcr_content/par/download_1/file.res/jsr-283-tck-appeal.pdf

BR,

Jukka Zitting