Posted to dev@jena.apache.org by Andy Seaborne <an...@apache.org> on 2013/09/21 19:55:50 UTC

Modularization

>> Digression:
>>
>> Is this the right time to split the Model-API (APIs?) from the core
>> graph-level machinery into a separate module?


> (I don't understand this
> question).  Are you saying you would like to see all the interfaces and
> helper classes in a module and the memory implementation in another?  Do we
> want to do this?  If not, what do you mean?
>
> Claude

I was wondering about modularizing along the lines of a (new) core with 
the graph-level code, including the GraphMem implementation.  More 
modules, with a pure interface module, is also possible, but you can't 
test much without an implementation, and complete mocking is a lot of 
work - so why not use one memory implementation as the functional 
reference impl?

The split then might be (and I haven't tried):

c.h.h.j.graph
c.h.h.j.mem
c.h.h.j.datatypes

and

c.h.h.j.rdf
c.h.h.j.ontology
c.h.h.j.enhanced

(and maybe ARP+xmloutput in their own module)

I'm sure there is entanglement and I'm guessing it's not trivial in 
places - I know there is around AnonIds, which I think should be kept in 
the RDF API (compatibility) but de-emphasised/deprecated from the Graph 
SPI.

The RDF API does not seem to be an extension point.  The 
API/SPI design allows multiple APIs in different styles.  I'd love to 
see an idiomatic scala API over graph/triple/node.  Or clojure.  Or a 
new Java one (for example, targeting Java 8).

So, if that is desirable, how do we make it clean, clear and easy to do 
that?  One step is being clear-cut about the current RDF API.
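To make the idea concrete, here is a rough, self-contained sketch of what a 
Java 8 style API layered over a minimal graph/triple/node SPI could look 
like.  The Node/Triple/Graph types below are invented stand-ins, not the 
Jena classes - the point is only the shape: a small SPI, with the richer 
API added on top as default methods over streams.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Stream;

/**
 * Hypothetical sketch of an idiomatic Java 8 API over a graph/triple/node
 * SPI.  These are stand-in types, not the Jena classes; the shape is the
 * point: the SPI stays minimal, the API adds stream-based sugar.
 */
public class StreamApiSketch {

    /** Minimal node, identified by its lexical form. */
    static final class Node {
        final String value;
        Node(String value) { this.value = value; }
        @Override public boolean equals(Object o) {
            return o instanceof Node && ((Node) o).value.equals(value);
        }
        @Override public int hashCode() { return value.hashCode(); }
        @Override public String toString() { return value; }
    }

    /** Minimal subject/predicate/object triple. */
    static final class Triple {
        final Node s, p, o;
        Triple(Node s, Node p, Node o) { this.s = s; this.p = p; this.o = o; }
    }

    /** The SPI: just add() plus a stream of all triples. */
    interface Graph {
        void add(Triple t);
        Stream<Triple> stream();

        /** API sugar layered on the SPI via a default method. */
        default Stream<Node> subjectsOf(Node predicate) {
            return stream().filter(t -> t.p.equals(predicate)).map(t -> t.s);
        }
    }

    /** An in-memory implementation, doubling as the reference impl. */
    static final class MemGraph implements Graph {
        private final Set<Triple> triples = new HashSet<>();
        @Override public void add(Triple t) { triples.add(t); }
        @Override public Stream<Triple> stream() { return triples.stream(); }
    }

    public static void main(String[] args) {
        Graph g = new MemGraph();
        Node type = new Node("rdf:type");
        g.add(new Triple(new Node("ex:a"), type, new Node("ex:Thing")));
        g.add(new Triple(new Node("ex:b"), type, new Node("ex:Thing")));
        long n = g.subjectsOf(type).distinct().count();
        System.out.println("subjects with rdf:type: " + n);
    }
}
```

Note that everything above uses only Java 8 features (default methods, 
lambdas, streams), so a module exposing an API like this would not force 
a language upgrade on the SPI itself.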

	Andy


Re: Modularization

Posted by Andy Seaborne <an...@apache.org>.
On 21/09/13 18:55, Andy Seaborne wrote:
>
>>> Digression:
>>>
>>> Is this the right time to split the Model-API (APIs?) from the core
>>> graph-level machinery into a separate module?
>
>
>> (I don't understand this
>> question).  Are you saying you would like to see all the interfaces and
>> helper classes in a module and the memory implementation in another?
>> Do we
>> want to do this?  If not, what do you mean?
>>
>> Claude
>
> I was wondering about modularizing along the lines of a (new) core with
> the graph-level code, including the GraphMem implementation.  More modules, with
> a pure interface module is also possible but you can't test much without
> an implementation and complete mocking is a lot of work, so why not use
> one memory implementation as the functional reference impl?
>
> The split then might be (and I haven't tried):
>
> c.h.h.j.graph
> c.h.h.j.mem
> c.h.h.j.datatypes
>
> and
>
> c.h.h.j.rdf
> c.h.h.j.ontology
> c.h.h.j.enhanced
>
> (and maybe ARP+xmloutput in their own module)
>
> I'm sure there is entanglement and I'm guessing it's not trivial in
> places - I know there is around AnonIds, which I think should be kept in
> the RDF API (compatibility) but de-emphasised/deprecated from the Graph
> SPI.
>
> The RDF API is not something that seems to be an extension point.  The
> API/SPI design allows multiple APIs in different styles.  I'd love to
> see an idiomatic scala API over graph/triple/node.  Or clojure.  Or a
> new Java one (for example, targeting Java 8).
>
> So, if that is desirable, how do we make it clean, clear and easy to do
> that?  One step is being clear-cut about the current RDF API.

Looks a lot cleaner than I feared - the graph-level code seems to have 
only a few non-trivial connections to the Model level:

1/ AnonId in a few places
2/ (datatype) XMLLiteral checking uses ARP
3/ Some util code in the wrong packages for a split (e.g. class Util)

and c.h.h.j.reasoner goes into jena-rdf-api

	Andy



Re: Modularization

Posted by Claude Warren <cl...@xenei.com>.
I think that if we build a complete set of contract tests we will have the
ability to test the correctness of implementations.  I agree that many of
the tests (as currently written) are difficult to write without an
implementation.  I also agree that the memory-based implementation makes a
good reference implementation.  That is what I have been using while
building the contract tests (I'll make another checkin shortly).
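As an illustration of the pattern (with invented names - this is not the 
actual test code), a contract test can be written once against the 
interface and parameterized by a supplier of fresh instances, so the 
memory implementation and any later implementation run the same checks:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

/**
 * Illustrative sketch of the contract-test pattern: one suite written
 * against an interface, parameterized by a Supplier of fresh instances.
 * Counter stands in for Graph; the names are invented for this sketch.
 */
public class ContractTestSketch {

    /** A trivial interface standing in for the real SPI interface. */
    interface Counter {
        void inc(String key);
        int get(String key);
    }

    /** The memory implementation, used as the functional reference. */
    static final class MemCounter implements Counter {
        private final Map<String, Integer> counts = new HashMap<>();
        @Override public void inc(String key) { counts.merge(key, 1, Integer::sum); }
        @Override public int get(String key) { return counts.getOrDefault(key, 0); }
    }

    /** The contract: properties every implementation must satisfy. */
    static void runContract(Supplier<Counter> fresh) {
        Counter c = fresh.get();
        check(c.get("x") == 0, "unseen key counts as zero");
        c.inc("x");
        c.inc("x");
        check(c.get("x") == 2, "two increments are observed");
        check(c.get("y") == 0, "other keys are unaffected");
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError("contract violated: " + what);
        System.out.println("ok: " + what);
    }

    public static void main(String[] args) {
        // Plug in the reference implementation; a second implementation
        // would reuse runContract() unchanged.
        runContract(MemCounter::new);
    }
}
```

In a real build this would be an abstract JUnit class in the interface 
module, with each implementation module supplying a concrete subclass.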



On Sat, Sep 21, 2013 at 6:55 PM, Andy Seaborne <an...@apache.org> wrote:

>
>>> Digression:
>>>
>>> Is this the right time to split the Model-API (APIs?) from the core
>>> graph-level machinery into a separate module?
>
>> (I don't understand this
>> question).  Are you saying you would like to see all the interfaces and
>> helper classes in a module and the memory implementation in another?  Do
>> we
>> want to do this?  If not, what do you mean?
>>
>> Claude
>>
>
> I was wondering about modularizing along the lines of a (new) core with
> the graph-level code, including the GraphMem implementation.  More modules, with a
> pure interface module is also possible but you can't test much without an
> implementation and complete mocking is a lot of work, so why not use one
> memory implementation as the functional reference impl?
>
> The split then might be (and I haven't tried):
>
> c.h.h.j.graph
> c.h.h.j.mem
> c.h.h.j.datatypes
>
> and
>
> c.h.h.j.rdf
> c.h.h.j.ontology
> c.h.h.j.enhanced
>
> (and maybe ARP+xmloutput in their own module)
>
> I'm sure there is entanglement and I'm guessing it's not trivial in places
> - I know there is around AnonIds, which I think should be kept in the RDF
> API (compatibility) but de-emphasised/deprecated from the Graph SPI.
>
> The RDF API is not something that seems to be an extension point.  The
> API/SPI design allows multiple APIs in different styles.  I'd love to see
> an idiomatic scala API over graph/triple/node.  Or clojure.  Or a new Java
> one (for example, targeting Java 8).
>
> So, if that is desirable, how do we make it clean, clear and easy to do
> that?  One step is being clear-cut about the current RDF API.
>
>         Andy
>
>


-- 
I like: Like Like - The likeliest place on the web <http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren