Posted to commits@marmotta.apache.org by ja...@apache.org on 2013/11/18 15:41:55 UTC

svn commit: r1543041 - in /incubator/marmotta/site/trunk/content: markdown/ markdown/kiwi/ resources/

Author: jakob
Date: Mon Nov 18 14:41:54 2013
New Revision: 1543041

URL: http://svn.apache.org/r1543041
Log:
Added meta-keywords to the kiwi pages

Removed:
    incubator/marmotta/site/trunk/content/markdown/kiwi-triplestore.md.vm
Modified:
    incubator/marmotta/site/trunk/content/markdown/kiwi/index.md.vm
    incubator/marmotta/site/trunk/content/markdown/kiwi/reasoner.md.vm
    incubator/marmotta/site/trunk/content/markdown/kiwi/sparql.md.vm
    incubator/marmotta/site/trunk/content/markdown/kiwi/transactions.md.vm
    incubator/marmotta/site/trunk/content/markdown/kiwi/triplestore.md.vm
    incubator/marmotta/site/trunk/content/markdown/kiwi/tripletable.md.vm
    incubator/marmotta/site/trunk/content/markdown/kiwi/versioning.md.vm
    incubator/marmotta/site/trunk/content/resources/.htaccess

Modified: incubator/marmotta/site/trunk/content/markdown/kiwi/index.md.vm
URL: http://svn.apache.org/viewvc/incubator/marmotta/site/trunk/content/markdown/kiwi/index.md.vm?rev=1543041&r1=1543040&r2=1543041&view=diff
==============================================================================
--- incubator/marmotta/site/trunk/content/markdown/kiwi/index.md.vm (original)
+++ incubator/marmotta/site/trunk/content/markdown/kiwi/index.md.vm Mon Nov 18 14:41:54 2013
@@ -1,14 +1,15 @@
 <head>
   <title>KiWi Triple Store</title> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
-  <meta name="keywords" content="KiWi, Triple Store, Quadrupel Store, RDF, Sesame, Sesame API, Sail, OpenRDF, Graph, Context, Database, relational Database, RDB, Transactions, SPARQL, Reasoning, Versioning, PostgreSQL, MySQL, H2" />
+  <meta name="keywords" content="KiWi, Triple Store, Quadruple Store, KiWi Triple Store, Introduction, RDF, Sesame, Sesame API, Sail, OpenRDF, Graph, Context, Database, relational Database, RDB, Transactions, SPARQL, Reasoning, Versioning, PostgreSQL, MySQL, H2" />
 </head>
 
-# KiWi Triple Store #
+# KiWi Triplestore #
 
 The KiWi triple store is a high performance transactional triple store backend
-for OpenRDF Sesame building on top of a relational database (currently H2,
-PostgreSQL, or MySQL). It has optional support for rule-based reasoning (sKWRL)
-and versioning of updates. The KiWi triple store is also the default backend for
+for [OpenRDF Sesame][sesame] building on top of a relational database (currently
+[H2][h2], [PostgreSQL][pgsql], or [MySQL][mysql]). It has optional support for
+[rule-based reasoning (sKWRL)][reasoner] and [versioning of
+updates][versioning]. The KiWi triple store is also the default backend for
 Apache Marmotta. It originated in the EU-funded research project "KiWi -
 Knowledge in a Wiki" (hence the name).
 
@@ -16,21 +17,35 @@ The KiWi triple store is composed of a n
 combined and stacked as needed. Currently (Apache Marmotta v3.0), the KiWi
 triple store offers the following modules:
 
-* kiwi-triplestore: implements triple persistence in a relational database
-* kiwi-transactions: adds extended transaction support to a notifying sail
-  (notifies on transaction commit)
-* kiwi-tripletable: support for in-memory indexed Java Collections for OpenRDF
-  statements
-* kiwi-sparql: implements native SPARQL support by translating some critical
-  constructs directly into SQL
-* kiwi-contextaware: support for context-aware OpenRDF Sails (allow overriding
-  contexts in statements)
-* kiwi-reasoner: adds a rule-based forward chaining reasoner with truth
-  maintenance for the KiWi triple store
-* kiwi-versioning: adds versioning of updates to a KiWi triple store
-
-Versioning and Reasoner only work with a KiWi store as parent Sail, i.e. they
-cannot be used with other Sesame backends. Transactions, Context-Aware Sails,
-and the Triple Table support classes are in principle independent and can also
+* [kiwi-triplestore][ts]: implements triple persistence in a relational database
+* [kiwi-transactions][trans]: adds extended transaction support to a notifying
+  sail (notifies on transaction commit)
+* [kiwi-tripletable][tt]: support for in-memory indexed Java Collections for
+  OpenRDF statements
+* [kiwi-sparql][sparql]: implements native SPARQL support by translating some
+  critical constructs directly into SQL
+* [kiwi-contextaware][ca]: support for context-aware OpenRDF Sails (allow
+  overriding contexts in statements)
+* [kiwi-reasoner][reasoner]: adds a rule-based forward chaining reasoner with
+  truth maintenance for the KiWi triple store
+* [kiwi-versioning][versioning]: adds versioning of updates to a KiWi triple
+  store
+
+[Versioning][versioning] and [Reasoner][reasoner] only work with a KiWi store as
+parent Sail, i.e. they cannot be used with other Sesame backends.
+[Transactions][trans], [Context-Aware Sails][ca], and the [Triple Table][tt]
+support classes are in principle independent and can also
 be stacked with other storage backends.
 
+[sesame]: http://www.openrdf.org/
+[h2]: http://www.h2database.com/
+[pgsql]: http://www.postgresql.org/
+[mysql]: http://www.mysql.com/
+[reasoner]: reasoner.html
+[versioning]: versioning.html
+[ts]: triplestore.html
+[trans]: transactions.html
+[tt]: tripletable.html
+[sparql]: sparql.html
+[ca]: contextaware.html
+
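The stacking described above follows the standard Sesame Sail pattern. As a hedged sketch of how two of the listed modules might be combined (the KiWiStore constructor arguments follow the example on the reasoner page; the JDBC parameters are placeholders, and the versioning sail class name is an assumption based on the module name):

    // Sketch only: wrap a KiWi store in transaction and versioning sails.
    // jdbcUrl, jdbcUser, jdbcPass and dialect are placeholders for your setup.
    KiWiStore store = new KiWiStore("test", jdbcUrl, jdbcUser, jdbcPass, dialect,
            "http://localhost/context/default", "http://localhost/context/inferred");
    KiWiTransactionalSail tsail = new KiWiTransactionalSail(store); // kiwi-transactions
    KiWiVersioningSail vsail = new KiWiVersioningSail(tsail);       // kiwi-versioning (name assumed)
    Repository repository = new SailRepository(vsail);
    repository.initialize();

As stated above, the versioning sail requires a KiWi store at the root of the stack, while the transaction module can wrap any NotifyingSail.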

Modified: incubator/marmotta/site/trunk/content/markdown/kiwi/reasoner.md.vm
URL: http://svn.apache.org/viewvc/incubator/marmotta/site/trunk/content/markdown/kiwi/reasoner.md.vm?rev=1543041&r1=1543040&r2=1543041&view=diff
==============================================================================
--- incubator/marmotta/site/trunk/content/markdown/kiwi/reasoner.md.vm (original)
+++ incubator/marmotta/site/trunk/content/markdown/kiwi/reasoner.md.vm Mon Nov 18 14:41:54 2013
@@ -1,15 +1,17 @@
 <head>
   <title>KiWi Reasoner</title> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
-  <meta name="keywords" content="KiWi, Reasoner, sKWRL, RDFS, OWL" />
+  <meta name="keywords" content="Reasoning, Reasoner, sKWRL, RDFS, OWL, KiWi, KiWi Triple Store, Sesame API, Sail, Context, Database, relational Database, RDB, PostgreSQL, MySQL, H2" />
 </head>
 
 # KiWi Reasoner #
 
-The KiWi reasoner is a powerful and flexible rule-based reasoner that can be used on top of a KiWi Triple Store. Its
-expressivity is more or less the same as Datalog, i.e. it will always terminate and can be evaluated in polynomial
-time (data complexity not taking into account the number of rules). In the context of triple stores, the KiWi
-reasoner can be used to easily implement the implicit semantics of different domain vocabularies. For example, the
-following rule program expresses SKOS semantics:
+The KiWi reasoner is a powerful and flexible rule-based reasoner that can be
+used on top of a KiWi Triple Store. Its expressivity is more or less the same as
+Datalog, i.e. it will always terminate and can be evaluated in polynomial time
+(data complexity, i.e. not counting the number of rules). In the context of
+triple stores, the KiWi reasoner can be used to easily implement the implicit
+semantics of different domain vocabularies. For example, the following rule
+program expresses SKOS semantics:
 
     @prefix skos: <http://www.w3.org/2004/02/skos/core#>
 
@@ -26,28 +28,34 @@ following rule program expresses SKOS se
     ($1 skos:narrower $2) -> ($1 skos:related $2)
     ($1 skos:related $2) -> ($2 skos:related $1)
 
-Similarly, the reasoner can be used for expressing RDFS subclass and domain inference, as well as a subset of OWL
-semantics (the one that is most interesting :-P ). Beyond RDFS and OWL, it also allows implementing domain-specific
-rule semantics. Additional examples for programs can be found in the source code.
-
-The reasoner is implemented as a incremental forward-chaining reasoner with truth maintenance. In practice, this
-means that:
-
-* incremental reasoning is triggered after a transaction commits successfully; the reasoner will then apply those
-  rules that match with at least one of the newly added triples
-* inferred triples are then materialized in the triple store in the inferred context (see the configuration of
-  the triple store above) and are thus available in the same way as base triples
-* truth maintenance keeps track of the reasons (i.e. rules and triples) why an inferred triple exists; this helps
-  making updates (especially removals of rules and triples) very efficient without requiring to completely
-  recompute all inferred triples
+Similarly, the reasoner can be used for expressing RDFS subclass and domain
+inference, as well as a subset of OWL semantics (the one that is most
+interesting :-P ). Beyond RDFS and OWL, it also allows implementing
+domain-specific rule semantics. Additional examples for programs can be found in
+the source code.
+
+The reasoner is implemented as an incremental forward-chaining reasoner with
+truth maintenance. In practice, this means that:
+
+* incremental reasoning is triggered after a transaction commits successfully;
+  the reasoner will then apply those rules that match with at least one of the
+  newly added triples
+* inferred triples are then materialized in the triple store in the inferred
+  context (see the configuration of the triple store above) and are thus available
+  in the same way as base triples
+* truth maintenance keeps track of the reasons (i.e. rules and triples) why an
+  inferred triple exists; this helps make updates (especially removals of rules
+  and triples) very efficient without requiring a complete recomputation of all
+  inferred triples
 
 Maven Artifact
 --------------
 
-The KiWi Reasoner can only be used in conjunction with the KiWi Triple Store, because it maintains most of its
-information in the relational database (e.g. the data structures for truth maintenance) and directly translates
-rule body query patterns into SQL. To include it in a project that uses the KiWi Triple Store, add the following
-dependency to your Maven project:
+The KiWi Reasoner can only be used in conjunction with the KiWi Triple Store,
+because it maintains most of its information in the relational database (e.g.
+the data structures for truth maintenance) and directly translates rule body
+query patterns into SQL. To include it in a project that uses the KiWi Triple
+Store, add the following dependency to your Maven project:
 
      <dependency>
          <groupId>org.apache.marmotta</groupId>
@@ -59,9 +67,10 @@ dependency to your Maven project:
 Code Usage
 ----------
 
-The KiWi Reasoner can be stacked into any [sail stack](http://openrdf.callimachus.net/sesame/2.7/docs/users.docbook?view#chapter-repository-api) 
-with a transactional sail (see kiwi-transactions) and a KiWi Store at its root. The relevant database tables are 
-created automatically when the repository is initialised.A simple repository with reasoner is initialized as follows:
+The KiWi Reasoner can be stacked into any [sail stack][1] with a transactional
+sail (see kiwi-transactions) and a KiWi Store at its root. The relevant database
+tables are created automatically when the repository is initialised. A simple
+repository with reasoner is initialized as follows:
 
     KiWiStore store = new KiWiStore("test",jdbcUrl,jdbcUser,jdbcPass,dialect, "http://localhost/context/default", "http://localhost/context/inferred");
     KiWiTransactionalSail tsail = new KiWiTransactionalSail(store);
@@ -78,26 +87,35 @@ created automatically when the repositor
     // run full reasoning (delete all existing inferred triples and re-create them)
     rsail.reRunPrograms();
 
-The reasoner can have any number of reasoning programs. The concept of a program is merely introduced to group
-different tasks. Internally, all reasoning rules are considered as an unordered collection, regardless which
-program they belong to.
+The reasoner can have any number of reasoning programs. The concept of a program
+is merely introduced to group different tasks. Internally, all reasoning rules
+are treated as a single unordered collection, regardless of which program they
+belong to.
+
+[1]: http://openrdf.callimachus.net/sesame/2.7/docs/users.docbook?view#chapter-repository-api
 
 Performance Considerations
 --------------------------
 
-Even though the reasoner is efficient compared with many other reasoners, there are a number of things to take into
-account, because reasoning is always a potentially expensive operation:
-
-* reasoning will always terminate, but the upper bound for inferred triples is in theory the set of all combinations
-  of nodes occurring in base triples in the database used as subject, predicate, or object, i.e. n^3
-* specific query patterns with many ground values are more efficient than patterns with many variables, as fixed
-  values can considerably reduce the candidate results in the SQL queries while variables are translated into SQL
+Even though the reasoner is efficient compared with many other reasoners, there
+are a number of things to take into account, because reasoning is always a
+potentially expensive operation:
+
+* reasoning will always terminate, but the upper bound for inferred triples is
+  in theory the set of all combinations of nodes occurring in base triples in
+  the database used as subject, predicate, or object, i.e. n^3 for n such nodes
+* specific query patterns with many ground values are more efficient than
+  patterns with many variables, as fixed values can considerably reduce the
+  candidate results in the SQL queries while variables are translated into SQL
   joins
-* re-running a full reasoning can be extremely costly on large databases, so it is better configuring the reasoning
-  programs before importing large datasets (large being in the range of millions of triples)
-* updating a program is more efficient than first deleting the old version and then adding the new version,
-  because the reasoner compares old and new program and only updates the changed rules
-
-In addition, the reasoner is currently executed in a single worker thread. The main reason is that otherwise there
-are potentially many transaction conflicts. We are working on an improved version that could benefit more from
-multi-core processors.
+* re-running a full reasoning can be extremely costly on large databases, so it
+  is better to configure the reasoning programs before importing large datasets
+  (large being in the range of millions of triples)
+* updating a program is more efficient than first deleting the old version and
+  then adding the new version, because the reasoner compares the old and new
+  programs and only updates the changed rules
+
+In addition, the reasoner is currently executed in a single worker thread. The
+main reason is that otherwise there are potentially many transaction conflicts.
+We are working on an improved version that could benefit more from multi-core
+processors.
+
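The RDFS subclass and domain inference mentioned above can be written in the same rule syntax as the SKOS example. A sketch of the standard RDFS entailment rules (illustrative, not taken verbatim from the Marmotta sources):

    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    ($1 rdf:type $2), ($2 rdfs:subClassOf $3) -> ($1 rdf:type $3)
    ($1 rdfs:subClassOf $2), ($2 rdfs:subClassOf $3) -> ($1 rdfs:subClassOf $3)
    ($1 $2 $3), ($2 rdfs:domain $4) -> ($1 rdf:type $4)
    ($1 $2 $3), ($2 rdfs:range $4) -> ($3 rdf:type $4)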

Modified: incubator/marmotta/site/trunk/content/markdown/kiwi/sparql.md.vm
URL: http://svn.apache.org/viewvc/incubator/marmotta/site/trunk/content/markdown/kiwi/sparql.md.vm?rev=1543041&r1=1543040&r2=1543041&view=diff
==============================================================================
--- incubator/marmotta/site/trunk/content/markdown/kiwi/sparql.md.vm (original)
+++ incubator/marmotta/site/trunk/content/markdown/kiwi/sparql.md.vm Mon Nov 18 14:41:54 2013
@@ -1,38 +1,54 @@
-<head><title>KiWi Native SPARQL</title></head> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
+<head>
+  <title>KiWi Native SPARQL</title> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
+  <meta name="keywords" content="KiWi, Triple Store, Quadruple Store, KiWi Triple Store, RDF, Sesame API, Sail, Database, relational Database, RDB, SPARQL, PostgreSQL, MySQL, H2" />
+</head>
 
 # KiWi Native SPARQL #
 
-The KiWi SPARQL module offers optimized [SPARQL 1.1](http://www.w3.org/TR/sparql11-query/) query support for typical
-cases by translating parts of a SPARQL query directly into SQL. Currently, the following SPARQL constructs are translated:
-
-* JOIN between statement patterns: in case a part of a SPARQL query is a join between statement patterns, this part
-  will be optimized by translating the whole JOIN into a single SQL query involving all patterns
-* FILTER for a statement pattern or join of statement patterns: in this case, the filter conditions are translated
-  into SQL WHERE conditions on the nodes occurring in the patterns; most SPARQL constructs are supported (including
-  regular expressions), including (starting with Marmotta 3.2) most XPath functions defined in the SPARQL specification
-* full-text search (Marmotta 3.2 and above): adds additional full-text search functions to SPARQL that can be used in
-  the FILTER part of a query (see below)
-
-Also, result iterators of an optimized query operate directly on database cursors, so they will be very efficient in
-case only a few results will be retrieved.
-
-Note that KiWi SPARQL does not translate the complete query to SQL. Instead, it walks through the abstract syntax
-tree of a query and optimizes those parts where it can reliably do so and where it makes sense. This allows us to
-make efficient use of the performance of the underlying database while at the same time retaining the flexibility
-of full SPARQL 1.1. Specifically, the following popular constructs are currently *not* completely translated:
-
-* OPTIONAL (left join): the SPARQL OPTIONAL has a slighly different semantics than its SQL left join counterpart, so
-  OPTIONAL can at the moment not be optimized (but those parts of a query that are normal joins will still be optimized)
-* DISTINCT, ORDER BY, GROUP BY: since KiWi SPARQL currently only optimizes the query part and not the projection,
-  expressions operating on the query result like those mentioned will not be translated and instead evaluated in-memory;
-
+The KiWi SPARQL module offers optimized [SPARQL 1.1][1] query support for
+typical cases by translating parts of a SPARQL query directly into SQL.
+Currently, the following SPARQL constructs are translated:
+
+* **JOIN** between statement patterns: in case a part of a SPARQL query is a
+  join between statement patterns, this part will be optimized by translating the
+  whole JOIN into a single SQL query involving all patterns
+* **FILTER** for a statement pattern or join of statement patterns: in this
+  case, the filter conditions are translated into SQL WHERE conditions on the
+  nodes occurring in the patterns; most SPARQL constructs are supported,
+  including regular expressions and (starting with Marmotta 3.2) most XPath
+  functions defined in the SPARQL specification
+* **full-text search** (Marmotta 3.2 and above): adds additional full-text
+  search functions to SPARQL that can be used in the FILTER part of a query (see
+  below)
+
+Also, result iterators of an optimized query operate directly on database
+cursors, so they are very efficient when only a few results are actually
+retrieved.
+
+Note that KiWi SPARQL does not translate the complete query to SQL. Instead, it
+walks through the abstract syntax tree of a query and optimizes those parts
+where it can reliably do so and where it makes sense. This allows us to make
+efficient use of the performance of the underlying database while at the same
+time retaining the flexibility of full SPARQL 1.1. Specifically, the following
+popular constructs are currently *not* completely translated:
+
+* **OPTIONAL** (left join): the SPARQL OPTIONAL has slightly different
+  semantics than its SQL left join counterpart, so OPTIONAL can at the moment
+  not be optimized (but those parts of a query that are normal joins will still
+  be optimized)
+* **DISTINCT**, **ORDER BY**, **GROUP BY**: since KiWi SPARQL currently only
+  optimizes the query part and not the projection, expressions operating on the
+  query result like those mentioned will not be translated and are instead
+  evaluated in-memory.
 
+[1]: http://www.w3.org/TR/sparql11-query/
 
 Maven Artifact
 --------------
 
-The KiWi SPARQL optimizations can only be used in conjunction with the KiWi Triple Store, because it works directly on
-the internal KiWi data structures. To include it in a project that uses the KiWi Triple Store, add the following
+The KiWi SPARQL optimizations can only be used in conjunction with the KiWi
+Triple Store, because it works directly on the internal KiWi data structures. To
+include it in a project that uses the KiWi Triple Store, add the following
 dependency to your Maven project:
 
      <dependency>
@@ -45,31 +61,41 @@ dependency to your Maven project:
 Full-Text Search (3.2 and above)
 --------------------------------
 
-Starting with the development version of Apache Marmotta 3.2, there is also full-text search support in SPARQL queries.
-Full-text search works over the literal values of nodes and differs from normal literal queries or regexp filters in that
-it applies language-specific lingustic processing (e.g. stemming and stop-word elimination). The KiWi SPARQL module
-comes with its own namespace for SPARQL extensions:
+Starting with the development version of Apache Marmotta 3.2, there is also
+full-text search support in SPARQL queries. Full-text search works over the
+literal values of nodes and differs from normal literal queries or regexp
+filters in that it applies language-specific linguistic processing (e.g. stemming
+and stop-word elimination). The KiWi SPARQL module comes with its own namespace
+for SPARQL extensions:
 
     PREFIX mm: <http://marmotta.apache.org/vocabulary/sparql-functions#>
 
-Full-text search currently offers two SPARQL functions that can be used in the FILTER part of a query and return
-boolean values (found or not found):
+Full-text search currently offers two SPARQL functions that can be used in the
+FILTER part of a query and return boolean values (found or not found):
 
-* `fulltext-search(text, query, [language])`: searches "text" for the words occurring in "query", optionally applying
-  the language-specific processing for the given language; query is a simple text literal (list of words) without any
-  boolean connectors; words are AND connected, i.e. all words have to be found in the text for a successful match
-* `fulltext-query(text, query, [language])`: searches "text" using the boolean query string passed in "query", optionally
-  applying language-specific processing for the given language; query is a boolean query string following the
-  [syntax used by PostgreSQL](http://www.postgresql.org/docs/9.3/static/textsearch-controls.html#TEXTSEARCH-PARSING-QUERIES)
-
-Note that full-text search is only available when using backend databases that support this functionality (currently only
-PostgreSQL and MySQL). Only PostgreSQL has real support for language specific processing. Also note that performance
-heavily depends on the availability of an appropriate full-text index in the database. The KiWi SPARQL module will
-automatically create full-text indexes for the languages configured in the KiWiConfiguration used for creating the
-triple store.
-
-The following example searches for the word "software" occurring in the dc:description field of the resource using the
-literal language of dc:description:
+* `fulltext-search(text, query, [language])`: searches "text" for the words
+  occurring in "query", optionally applying the language-specific processing for
+  the given language; query is a simple text literal (list of words) without any
+  boolean connectors; words are AND connected, i.e. all words have to be found
+  in the text for a successful match
+* `fulltext-query(text, query, [language])`: searches "text" using the boolean
+  query string passed in "query", optionally applying language-specific
+  processing for the given language; query is a boolean query string following
+  the [syntax used by PostgreSQL][2]
+
+[2]: http://www.postgresql.org/docs/9.3/static/textsearch-controls.html#TEXTSEARCH-PARSING-QUERIES
+
+Note that full-text search is only available when using backend databases that
+support this functionality (currently only PostgreSQL and MySQL). Only
+PostgreSQL has real support for language-specific processing. Also note that
+performance heavily depends on the availability of an appropriate full-text
+index in the database. The KiWi SPARQL module will automatically create
+full-text indexes for the languages configured in the KiWiConfiguration used for
+creating the triple store.
+
+The following example searches for the word "software" occurring in the
+dc:description field of the resource using the literal language of
+dc:description:
 
     PREFIX foaf: <http://xmlns.com/foaf/0.1/>
     PREFIX dc: <http://purl.org/dc/elements/1.1/>
@@ -88,16 +114,22 @@ literal language of dc:description:
 Performance Considerations
 --------------------------
 
-In practice, the KiWi SPARQL module seriously improves the performance of most SPARQL queries (and even updates) and
-should therefore almost always be used in conjunction with the KiWi triple store. However, there is no magic, and you
-need to keep in mind that certain queries will still be problematic. To improve SPARQL performance, try to follow the
-following recommendations:
-
-* avoid DISTINCT, ORDER BY, GROUP BY: filtering out duplicates is a performance killer, as it requires to first load
-  all results into memory; if you do not strictly need it, do not use it
-* avoid OPTIONAL: optional queries are currently not optimized, as the semantics of OPTIONAL in SPARQL slightly differs
-  from the semantics of an SQL left join
-* avoid subselects: a join with a subselect currently cannot be optimized, because KiWi SPARQL does not work on the
-  results of a SPARQL query, only on the conditions
-* use FILTER: conditions in the FILTER part of a query will be translated into WHERE conditions in SQL; the more precise
-  your filter conditions are, the better your query will perform
+In practice, the KiWi SPARQL module seriously improves the performance of most
+SPARQL queries (and even updates) and should therefore almost always be used in
+conjunction with the KiWi triple store. However, there is no magic, and you need
+to keep in mind that certain queries will still be problematic. To improve
+SPARQL performance, try to follow these recommendations:
+
+* **avoid DISTINCT, ORDER BY, GROUP BY**: filtering out duplicates is a
+  performance killer, as it requires first loading all results into memory; if
+  you do not strictly need it, do not use it
+* **avoid OPTIONAL**: optional queries are currently not optimized, as the
+  semantics of OPTIONAL in SPARQL slightly differs from the semantics of an SQL
+  left join
+* **avoid subselects**: a join with a subselect currently cannot be optimized,
+  because KiWi SPARQL does not work on the results of a SPARQL query, only on the
+  conditions
+* **use FILTER**: conditions in the FILTER part of a query will be translated
+  into WHERE conditions in SQL; the more precise your filter conditions are, the
+  better your query will perform
+
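The recommendations above can be combined in a single query: keep all conditions inside FILTER and avoid OPTIONAL, DISTINCT and subselects. A hedged sketch using the full-text function described earlier (the dc: properties are just illustrative):

    PREFIX mm: <http://marmotta.apache.org/vocabulary/sparql-functions#>
    PREFIX dc: <http://purl.org/dc/elements/1.1/>

    SELECT ?r ?title WHERE {
      ?r dc:title ?title ;
         dc:description ?desc .
      FILTER(mm:fulltext-search(?desc, "software", "en"))
    }

Because the query consists only of a join of statement patterns plus a supported FILTER function, the whole pattern can be translated into one SQL query.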

Modified: incubator/marmotta/site/trunk/content/markdown/kiwi/transactions.md.vm
URL: http://svn.apache.org/viewvc/incubator/marmotta/site/trunk/content/markdown/kiwi/transactions.md.vm?rev=1543041&r1=1543040&r2=1543041&view=diff
==============================================================================
--- incubator/marmotta/site/trunk/content/markdown/kiwi/transactions.md.vm (original)
+++ incubator/marmotta/site/trunk/content/markdown/kiwi/transactions.md.vm Mon Nov 18 14:41:54 2013
@@ -1,23 +1,33 @@
-<head><title>KiWi Transaction Support</title></head> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
+<head>
+  <title>KiWi Transaction Support</title> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
+  <meta name="keywords" content="KiWi, Triple Store, Quadruple Store, Sesame, Sesame API, Sail, OpenRDF, Transactions, SPARQL" />
+</head>
 
 # KiWi Transaction Support #
 
+OpenRDF Sesame already offers basic transaction handling and update notification
+support in any OpenRDF sail stack around a NotifyingSail. This module builds on
+this functionality to provide a mechanism for keeping track of all changes
+(added and removed triples) occurring between the time a transaction began and
+the time the transaction was committed or rolled back. The transaction data is
+then handed over to all registered transaction listeners at certain event points
+(in analogy to JPA we offer "before commit", "after commit", "on rollback").
+Since the KiWi transaction support builds upon the NotifyingSail, it can be used
+in any OpenRDF sail stack, independently of the other KiWi modules.
+
+Extended transaction support is used, for example, by the versioning component
+(each transaction is considered a unit of work or version) and by the reasoner (on
+transaction commit, the transaction data is handed to the incremental reasoner).
+It is also used by some extended Apache Marmotta and [Linked Media Framework][1]
+functionalities like the LMF Semantic Search.
 
-OpenRDF Sesame already offers basic transaction handling and update notification support in any OpenRDF sail stack around
-a NotifyingSail. This module builds on this functionality to provide a mechanism for keeping track of all changes
-(added and removed triples) occurring between the time a transaction began and the time the transaction was committed
-or rolled back. The transaction data is then handed over to all registered transaction listeners at certain event points
-(in analogy to JPA we offer "before commit", "after commit", "on rollback"). Since the KiWi transaction support builds
-upon the NotifyingSail, it can be used in any OpenRDF sail stack, independently of the other KiWi modules.
-
-Extended transaction support is e.g. used by the versioning component (each transaction is considered a unit of work or
-version) and by the reasoner (on transaction commit, the transaction data is handed to the incremental reasoner). It
-is also used by some extended Apache Marmotta and Linked Media Framework functionalities like the LMF Semantic Search.
+[1]: http://code.google.com/p/lmf/
 
 Maven Artifact
 --------------
 
-To use the extended transaction support, include the following artifact in your Maven build file:
+To use the extended transaction support, include the following artifact in your
+Maven build file:
 
      <dependency>
          <groupId>org.apache.marmotta</groupId>
@@ -28,8 +38,9 @@ To use the extended transaction support,
 Code Usage
 ----------
 
-In your code, the KiWi extended transactions can easily be stacked into your sail stack around any NotifyingSail.
-Event listeners can be added/removed by calling the appropriate addTransactionListener and removeTransactionListener
+In your code, the KiWi extended transactions can easily be stacked into your
+sail stack around any NotifyingSail. Event listeners can be added/removed by
+calling the appropriate addTransactionListener and removeTransactionListener
 methods:
 
     KiWiTransactionalSail sail = new KiWiTransactionalSail(new MemoryStore());
@@ -39,8 +50,11 @@ methods:
 
 The TransactionListener interface defines three methods:
 
-* _beforeCommit(TransactionData data)_ is called just before the transaction actually carries out its commit
-  to the database;
-* _afterCommit(TransactionData data)_ is called immediately after the transaction has been committed to the database
-  (i.e. you can rely on the data being persistent)
-* _rollback(TransactionData data)_ is called when the transaction is rolled back (e.g. in case of an error)
+* _beforeCommit(TransactionData data)_ is called just before the transaction
+  actually carries out its commit to the database;
+* _afterCommit(TransactionData data)_ is called immediately after the
+  transaction has been committed to the database (i.e. you can rely on the data
+  being persistent)
+* _rollback(TransactionData data)_ is called when the transaction is rolled back
+  (e.g. in case of an error).
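To make the callback order concrete, here is a minimal, self-contained sketch of the listener lifecycle described above. `TransactionData`, `TransactionListener` and the toy `Transaction` below are simplified stand-ins, not the real Marmotta classes:

```java
import java.util.ArrayList;
import java.util.List;

public class TransactionDemo {

    // Stand-in for the transaction data handed to listeners.
    static class TransactionData {
        final List<String> addedTriples = new ArrayList<>();
    }

    // Mirrors the three callbacks documented above.
    interface TransactionListener {
        void beforeCommit(TransactionData data);
        void afterCommit(TransactionData data);
        void rollback(TransactionData data);
    }

    // Toy transaction that fires the callbacks in the documented order.
    static class Transaction {
        private final List<TransactionListener> listeners = new ArrayList<>();
        private final TransactionData data = new TransactionData();

        void addTransactionListener(TransactionListener l) { listeners.add(l); }
        void add(String triple) { data.addedTriples.add(triple); }

        void commit() {
            for (TransactionListener l : listeners) l.beforeCommit(data);
            // ... the real store would persist the data here ...
            for (TransactionListener l : listeners) l.afterCommit(data);
        }
    }

    static String run() {
        List<String> calls = new ArrayList<>();
        Transaction tx = new Transaction();
        tx.addTransactionListener(new TransactionListener() {
            public void beforeCommit(TransactionData d) { calls.add("beforeCommit"); }
            public void afterCommit(TransactionData d)  { calls.add("afterCommit"); }
            public void rollback(TransactionData d)     { calls.add("rollback"); }
        });
        tx.add("<s> <p> <o>");
        tx.commit();
        return String.join(",", calls);
    }

    public static void main(String[] args) {
        System.out.println(run()); // beforeCommit,afterCommit
    }
}
```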
+

Modified: incubator/marmotta/site/trunk/content/markdown/kiwi/triplestore.md.vm
URL: http://svn.apache.org/viewvc/incubator/marmotta/site/trunk/content/markdown/kiwi/triplestore.md.vm?rev=1543041&r1=1543040&r2=1543041&view=diff
==============================================================================
--- incubator/marmotta/site/trunk/content/markdown/kiwi/triplestore.md.vm (original)
+++ incubator/marmotta/site/trunk/content/markdown/kiwi/triplestore.md.vm Mon Nov 18 14:41:54 2013
@@ -1,14 +1,18 @@
-<head><title>KiWi Store</title></head> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
-
-# KiWi Store #
-
-
-The heart of the KiWi triple store is the [KiWiStore][1] storage backend. It implements a OpenRDF Notifying Sail on top
-of a custom relational database schema. It can be used in a similar way to the already existing (but now deprecated)
-OpenRDF RDBMS backends. The KiWi triple store operates almost directly on top of the relational database, there is
-only minimal overhead on the Java side (some caching and result transformations). Each OpenRDF repository connection
-is a database connection, and each repository result is a database cursor (so it supports lazy fetching when iterating
-over the result).
+<head>
+  <title>KiWi Triplestore</title> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
+  <meta name="keywords" content="KiWi, Triple Store, Quadruple Store, KiWi Triple Store, KiWi Triplestore, Introduction, RDF, Sesame, Sesame API, Sail, OpenRDF, Graph, Context, Database, relational Database, RDB, Transactions, SPARQL, PostgreSQL, MySQL, H2" />
+</head>
+
+# KiWi Triplestore #
+
+The heart of the KiWi triple store is the [KiWiStore][1] storage backend. It
+implements an OpenRDF Notifying Sail on top of a custom relational database
+schema. It can be used in a similar way to the already existing (but now
+deprecated) OpenRDF RDBMS backends. The KiWi triple store operates almost
+directly on top of the relational database; there is only minimal overhead on
+the Java side (some caching and result transformations). Each OpenRDF repository
+connection is a database connection, and each repository result is a database
+cursor (so it supports lazy fetching when iterating over the result).
 
 [1]: ../apidocs/org/apache/marmotta/kiwi/sail/KiWiStore.html
 
@@ -28,10 +32,11 @@ To use the KiWiStore, include the follow
 Code Usage
 ----------
 
-In your code, creating a KiWi triple store works mostly like other OpenRDF backends. The only additional data that is
-required are the JDBC connection details for accessing the database (i.e. database type, database URI, database user
-and database password) and how to store inferred triples or triples without explicit context. You can create a new
-instance of a KiWi store as follows:
+In your code, creating a KiWi triple store works mostly like other OpenRDF
+backends. The only additional data that is required are the JDBC connection
+details for accessing the database (i.e. database type, database URI, database
+user and database password) and how to store inferred triples or triples without
+explicit context. You can create a new instance of a KiWi store as follows:
 
 <pre class="prettyprint">
 String defaultContext  = "http://localhost/context/default";
@@ -58,41 +63,53 @@ try {
 }
 </pre>
 
-Note that there are some "uncommon" parameters, most notably the defaultContext and inferredContext:
+Note that there are some "uncommon" parameters, most notably the defaultContext
+and inferredContext:
 
-* **defaultContext** is the URI of the context to use in case no explicit context is specified; this changes the default
-  behaviour of OpenRDF a bit, but it is the cleaner approach (and more efficient in the relational database because it
-  avoids NULL values)
-* **inferredContext** is the URI to use for storing all triples that are inferred by some reasoner (either the KiWi reasoner
-  or the OpenRDF RDFS reasoner); this is also a different behaviour to OpenRDF; we use it because the semantics is
-  otherwise not completely clear in case an inference was made based on the information stemming from two different
-  contexts
-* **dialect** specifies the dialect to use for connecting to the database; currently supported dialects are H2Dialect,
-  PostgreSQLDialect and MySQLDialect; note that the MySQL JDBC library is licensed under LGPL and can therefore not
-  be shipped with Apache Marmotta
+* **defaultContext** is the URI of the context to use in case no explicit
+  context is specified; this changes the default behaviour of OpenRDF a bit, but
+  it is the cleaner approach (and more efficient in the relational database
+  because it avoids NULL values)
+* **inferredContext** is the URI to use for storing all triples that are
+  inferred by some reasoner (either the KiWi reasoner or the OpenRDF RDFS
+  reasoner); this is also a different behaviour to OpenRDF; we use it because the
+  semantics is otherwise not completely clear in case an inference was made based
+  on the information stemming from two different contexts
+* **dialect** specifies the dialect to use for connecting to the database;
+  currently supported dialects are H2Dialect, PostgreSQLDialect and MySQLDialect;
+  note that the MySQL JDBC library is licensed under LGPL and can therefore not be
+  shipped with Apache Marmotta
 
 We plan to add support for additional databases over time.
 
 Performance Considerations
 --------------------------
 
-Additionally, there are some things to keep in mind when using a KiWi triple store (all of them are good coding
-practice, but in KiWi they also have performance implications):
-
-* if you are interested in good performance (production environments), use a proper database (e.g. *PostgreSQL*)!
-* a RepositoryConnection has a direct correspondence to a database connection, so it always needs to be closed properly;
-  if you forget closing connections, you will have resource leakage pretty quickly
-* all operations carried out on a repository connection are directly carried out in the database (e.g. inserting
-  triples); the database connection is transactional, i.e. changes will only be available to other transactions when
-  the commit() method is called explicitly; it is therefore good practice to always commit or rollback a connection
-  before closing it
-* a RepositoryResult has a direct correspondence to a database ResultSet and therefore to a database cursor, so like
-  with connections, it needs to be closed properly or otherwise you will have resource leakage
-* the value factory of the KiWi Store maintains its own, separate database connection for creating and retrieving
-  RDF values; any newly created values are committed immediately to the database to make sure they are available to
-  other transactions
-* the database tables will only be created when repository.initialize() is called; if the tables already exist,
-  initialization will check whether a schema upgrade is required and automatically do the upgrade if needed
-* the repository must be explicitly shutdown when it is no longer needed, or otherwise it will keep open
-  the database connection of the value factory as well as the internal connection pool
+Additionally, there are some things to keep in mind when using a KiWi triple
+store (all of them are good coding practice, but in KiWi they also have
+performance implications):
+
+* if you are interested in good performance (production environments), use a
+  proper database (e.g. *PostgreSQL*)!
+* a RepositoryConnection has a direct correspondence to a database connection,
+  so it always needs to be closed properly; if you forget to close connections,
+  you will leak resources quickly
+* all operations carried out on a repository connection are directly carried out
+  in the database (e.g. inserting triples); the database connection is
+  transactional, i.e. changes will only be available to other transactions when
+  the commit() method is called explicitly; it is therefore good practice to
+  always commit or rollback a connection before closing it
+* a RepositoryResult has a direct correspondence to a database ResultSet and
+  therefore to a database cursor; like connections, it needs to be closed
+  properly, otherwise resources will leak
+* the value factory of the KiWi Store maintains its own, separate database
+  connection for creating and retrieving RDF values; any newly created values are
+  committed immediately to the database to make sure they are available to other
+  transactions
+* the database tables will only be created when repository.initialize() is
+  called; if the tables already exist, initialization will check whether a schema
+  upgrade is required and automatically do the upgrade if needed
+* the repository must be explicitly shut down when it is no longer needed, or
+  otherwise it will keep open the database connection of the value factory as well
+  as the internal connection pool
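The commit-or-rollback-then-close rule from the bullets above can be sketched as follows. `FakeConnection` is a stand-in tracking state transitions, not the real OpenRDF `RepositoryConnection`; the pattern is what matters:

```java
public class ConnectionDemo {

    // Minimal stand-in recording the state transitions we care about.
    static class FakeConnection {
        boolean committed, rolledBack, closed;
        void commit()   { committed = true; }
        void rollback() { rolledBack = true; }
        void close()    { closed = true; }
    }

    // Recommended lifecycle: commit on success, rollback on error,
    // always close in finally so the database connection is released.
    static FakeConnection doWork(boolean fail) {
        FakeConnection con = new FakeConnection();
        try {
            // ... add or query triples here ...
            if (fail) throw new RuntimeException("simulated error");
            con.commit();
        } catch (RuntimeException e) {
            con.rollback();
        } finally {
            con.close(); // never leak the underlying database connection
        }
        return con;
    }

    public static void main(String[] args) {
        FakeConnection ok = doWork(false);
        System.out.println(ok.committed + " " + ok.closed);    // true true
        FakeConnection bad = doWork(true);
        System.out.println(bad.rolledBack + " " + bad.closed); // true true
    }
}
```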
 

Modified: incubator/marmotta/site/trunk/content/markdown/kiwi/tripletable.md.vm
URL: http://svn.apache.org/viewvc/incubator/marmotta/site/trunk/content/markdown/kiwi/tripletable.md.vm?rev=1543041&r1=1543040&r2=1543041&view=diff
==============================================================================
--- incubator/marmotta/site/trunk/content/markdown/kiwi/tripletable.md.vm (original)
+++ incubator/marmotta/site/trunk/content/markdown/kiwi/tripletable.md.vm Mon Nov 18 14:41:54 2013
@@ -1,17 +1,26 @@
 <head><title>KiWi Triple Table</title></head> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
 
+## FIXME: in 3.2 this module moves to sesame-tools!
+
 # KiWi Triple Table #
 
-The KiWi Triple Table offers efficient Java Collections over OpenRDF Statements. It implements the Java Set interface,
-but offers query support (listing triples with wildcards) with in-memory SPOC and CSPO indexes. This is useful if you
-want to keep large temporary in-memory collections of triples and is e.g. used by the kiwi-transactions module for
-keeping track of added and removed triples in the transaction data. It can also be used for caching purposes.
+The KiWi Triple Table offers efficient [Java Collections][1] over [OpenRDF Statements][2].
+It implements the Java [Set][3] interface, but offers query support (listing triples
+with wildcards) with in-memory SPOC and CSPO indexes. This is useful if you want
+to keep large temporary in-memory collections of triples and is e.g. used by the
+kiwi-transactions module for keeping track of added and removed triples in the
+transaction data. It can also be used for caching purposes.
+
+[1]: http://docs.oracle.com/javase/tutorial/collections/interfaces/index.html
+[2]: http://openrdf.callimachus.net/sesame/2.7/apidocs/org/openrdf/model/Statement.html
+[3]: http://docs.oracle.com/javase/7/docs/api/java/util/Set.html
 
 Maven Artifact
 --------------
 
-The KiWi Triple Table can be used with any OpenRDF repository, it is merely a container for triples. To use the library
-in your own project, add the following Maven dependency to your project:
+The KiWi Triple Table can be used with any **OpenRDF repository**; it is merely a
+container for triples. To use the library in your own project, add the following
+Maven dependency to your project:
 
      <dependency>
          <groupId>org.apache.marmotta</groupId>
@@ -22,7 +31,8 @@ in your own project, add the following M
 Code Usage
 ----------
 
-As the triple table implements the Set interface, usage is very simple. The following code block illustrates how:
+As the triple table implements the Set interface, usage is very simple. The
+following code block illustrates how:
 
     TripleTable<Statement> triples = new TripleTable<Statement>();
 
@@ -41,6 +51,8 @@ As the triple table implements the Set i
         // do something with t
     }
 
-Note that the KiWi Triple Table does not implement a complete repository and therefore neither offers its own value
-factory nor allows persistence of statements or connection management. In case you need an in-memory repository with
-support for all these features, consider using a OpenRDF memory sail.
+Note that the KiWi Triple Table does not implement a complete repository and
+therefore neither offers its own value factory nor allows persistence of
+statements or connection management. In case you need an in-memory repository
+with support for all these features, consider using an OpenRDF memory sail.
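The Set-plus-wildcard-query idea can be illustrated with a self-contained sketch. The classes below are simplified stand-ins (the real `TripleTable` answers pattern queries from its in-memory SPOC/CSPO indexes; this version just scans the set):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Objects;
import java.util.Set;

public class TripleTableDemo {

    // Stand-in for an RDF statement (subject, predicate, object).
    static final class Stmt {
        final String s, p, o;
        Stmt(String s, String p, String o) { this.s = s; this.p = p; this.o = o; }
        @Override public boolean equals(Object x) {
            if (!(x instanceof Stmt)) return false;
            Stmt t = (Stmt) x;
            return s.equals(t.s) && p.equals(t.p) && o.equals(t.o);
        }
        @Override public int hashCode() { return Objects.hash(s, p, o); }
    }

    static class TripleTable {
        private final Set<Stmt> data = new HashSet<>();
        void add(Stmt t) { data.add(t); }

        // List all statements matching the pattern; null matches anything.
        List<Stmt> listTriples(String s, String p, String o) {
            List<Stmt> result = new ArrayList<>();
            for (Stmt t : data)
                if ((s == null || s.equals(t.s))
                        && (p == null || p.equals(t.p))
                        && (o == null || o.equals(t.o)))
                    result.add(t);
            return result;
        }
    }

    static int countByPredicate(String p) {
        TripleTable table = new TripleTable();
        table.add(new Stmt("ex:a", "rdf:type", "ex:Thing"));
        table.add(new Stmt("ex:b", "rdf:type", "ex:Thing"));
        table.add(new Stmt("ex:a", "rdfs:label", "A"));
        return table.listTriples(null, p, null).size();
    }

    public static void main(String[] args) {
        System.out.println(countByPredicate("rdf:type")); // 2
    }
}
```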
+

Modified: incubator/marmotta/site/trunk/content/markdown/kiwi/versioning.md.vm
URL: http://svn.apache.org/viewvc/incubator/marmotta/site/trunk/content/markdown/kiwi/versioning.md.vm?rev=1543041&r1=1543040&r2=1543041&view=diff
==============================================================================
--- incubator/marmotta/site/trunk/content/markdown/kiwi/versioning.md.vm (original)
+++ incubator/marmotta/site/trunk/content/markdown/kiwi/versioning.md.vm Mon Nov 18 14:41:54 2013
@@ -1,32 +1,40 @@
-<head><title>KiWi Versioning</title></head> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
+<head>
+  <title>KiWi Versioning</title> <!-- awaiting for https://jira.codehaus.org/browse/DOXIA-472 -->
+  <meta name="keywords" content="KiWi, Triple Store, Quadruple Store, KiWi Triple Store, Sesame, Sesame API, Sail, Database, relational Database, Versioning, PostgreSQL, MySQL, H2" />
+</head>
 
 # KiWi Versioning #
 
-The KiWi Versioning module allows logging of updates to the triple store as well as accessing snapshots of the
-triple store at any given time in history. In many ways, it is similar to the history functionality offered by
-Wiki systems. KiWi Versioning can be useful for many purposes:
-
-* for tracking changes to resources and the whole repository and identifying the source (provenance) of certain
-  triples
-* for creating snapshots of the repository that are "known to be good" and referring to these snapshots later
-  while still updating the repository with new data
-* for more easily reverting errorneous changes to the triple store, in a similar way to a wiki; this can e.g. be
-  used in a "data wiki"
-
-Currently, the KiWi Versioning module allows tracking changes and creating snapshots. Reverting changes has not
-yet been implemented and will be added later (together with support for pruning old versions).
-
-Versioning is tightly bound to the transaction support: a version is more or less the transaction data after
-commit time. This corresponds to the concept of "unit of work": a unit of work is finished when the user
-explicitly commits the transaction (e.g. when the entity has been completely added with all its triples or
-the ontology has been completely imported).
+The KiWi Versioning module allows logging of updates to the triple store as well
+as accessing snapshots of the triple store at any given time in history. In many
+ways, it is similar to the history functionality offered by Wiki systems. KiWi
+Versioning can be useful for many purposes:
+
+* for tracking changes to resources and the whole repository and identifying the
+  source (provenance) of certain triples
+* for creating snapshots of the repository that are "known to be good" and
+  referring to these snapshots later while still updating the repository with new
+  data
+* for more easily reverting erroneous changes to the triple store, in a similar
+  way to a wiki; this can e.g. be used in a "data wiki"
+
+Currently, the KiWi Versioning module allows tracking changes and creating
+snapshots. Reverting changes has not yet been implemented and will be added
+later (together with support for pruning old versions).
+
+Versioning is tightly bound to the transaction support: a version is more or
+less the transaction data after commit time. This corresponds to the concept of
+"unit of work": a unit of work is finished when the user explicitly commits the
+transaction (e.g. when the entity has been completely added with all its triples
+or the ontology has been completely imported).
 
 Maven Artifact
 --------------
 
-The KiWi Versioning module can only be used in conjunction with the KiWi Triple Store, because it maintains most of
-its information in the relational database (e.g. the data structures for change tracking). To include it in a
-project that uses the KiWi Triple Store, add the following dependency to your Maven project:
+The KiWi Versioning module can only be used in conjunction with the KiWi Triple
+Store, because it maintains most of its information in the relational database
+(e.g. the data structures for change tracking). To include it in a project that
+uses the KiWi Triple Store, add the following dependency to your Maven project:
 
      <dependency>
          <groupId>org.apache.marmotta</groupId>
@@ -37,8 +45,9 @@ project that uses the KiWi Triple Store,
 Code Usage
 ----------
 
-You can use the KiWi Versioning module in your own code in a sail stack with a KiWi transactional sail and a KiWi
-Store at the root. The basic usage is as follows:
+You can use the KiWi Versioning module in your own code in a sail stack with a
+KiWi transactional sail and a KiWi Store at the root. The basic usage is as
+follows:
 
     KiWiStore store = new KiWiStore("test",jdbcUrl,jdbcUser,jdbcPass,dialect, "http://localhost/context/default", "http://localhost/context/inferred");
     KiWiTransactionalSail tsail = new KiWiTransactionalSail(store);
@@ -72,20 +81,20 @@ Store at the root. The basic usage is as
         snapshotConnection.close();
     }
 
-Note that for obvious reasons (you cannot change history!), a snapshot connection is read-only. Accessing any update
-functionality of the connection will throw a RepositoryException. However, you can of course even run SPARQL queries
-over a snapshot connection (SPARQLing the past...).
+Note that for obvious reasons (you cannot change history!), a snapshot
+connection is read-only. Accessing any update functionality of the connection
+will throw a RepositoryException. However, you can of course still run SPARQL
+queries over a snapshot connection (SPARQLing the past...).
 
 
 Performance Considerations
 --------------------------
 
-When versioning is enabled, bear in mind that nothing is ever really deleted in the triple store. Triples that are
-removed in one of the updates are simply marked as "deleted" and added to the version information
-for removed triples.
-
-Otherwise there is no considerable performance impact. Accessing snapshots at any date is essentially as efficient
-as any ordinary triple access (but it does not do triple caching).
-
-
+When versioning is enabled, bear in mind that nothing is ever really deleted in
+the triple store. Triples that are removed in one of the updates are simply
+marked as "deleted" and added to the version information for removed triples.
+
+Otherwise there is no considerable performance impact. Accessing snapshots at
+any date is essentially as efficient as any ordinary triple access (but it does
+not do triple caching).
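The "nothing is ever really deleted" semantics can be modelled with a small sketch: a delete only records the version at which a triple disappeared, and a snapshot filters by version. This is an illustration of the idea only, not the real KiWi database schema:

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotDemo {

    // A triple plus the versions bracketing its lifetime.
    static final class VersionedTriple {
        final String triple;
        final int createdVersion;
        Integer deletedVersion; // null = still visible in the head version

        VersionedTriple(String triple, int createdVersion) {
            this.triple = triple;
            this.createdVersion = createdVersion;
        }
    }

    static class VersionedStore {
        private final List<VersionedTriple> log = new ArrayList<>();
        private int version = 0;

        int commit() { return ++version; } // each commit is one "unit of work"

        void add(String triple) { log.add(new VersionedTriple(triple, version + 1)); }

        // "Delete" only marks the triple; the row itself is kept.
        void delete(String triple) {
            for (VersionedTriple t : log)
                if (t.triple.equals(triple) && t.deletedVersion == null)
                    t.deletedVersion = version + 1;
        }

        // A snapshot sees triples created at or before v and not yet deleted at v.
        List<String> snapshot(int v) {
            List<String> visible = new ArrayList<>();
            for (VersionedTriple t : log)
                if (t.createdVersion <= v
                        && (t.deletedVersion == null || t.deletedVersion > v))
                    visible.add(t.triple);
            return visible;
        }
    }

    static int[] demo() {
        VersionedStore store = new VersionedStore();
        store.add("<s> <p> <o1>");
        int v1 = store.commit();       // version 1 contains o1
        store.delete("<s> <p> <o1>");
        store.add("<s> <p> <o2>");
        int v2 = store.commit();       // version 2: o1 marked deleted, o2 added
        return new int[]{ store.snapshot(v1).size(), store.snapshot(v2).size() };
    }

    public static void main(String[] args) {
        int[] sizes = demo();
        System.out.println(sizes[0] + " " + sizes[1]); // 1 1
    }
}
```

Each snapshot still sees exactly one triple, but which triple it is depends on the version asked for.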
 

Modified: incubator/marmotta/site/trunk/content/resources/.htaccess
URL: http://svn.apache.org/viewvc/incubator/marmotta/site/trunk/content/resources/.htaccess?rev=1543041&r1=1543040&r2=1543041&view=diff
==============================================================================
--- incubator/marmotta/site/trunk/content/resources/.htaccess (original)
+++ incubator/marmotta/site/trunk/content/resources/.htaccess Mon Nov 18 14:41:54 2013
@@ -16,8 +16,8 @@ RewriteRule	^(.*)/introduction.html$	$1/
 
 # KiWi
 RewriteRule	^kiwi-parent$	kiwi/	[R=301,L]
-RewriteRule	^(kiwi(-parent)?/)?kiwi-([^-/]+)$	kiwi/$3.html	[R=301,L]
-RewriteRule	^(kiwi(-parent)?/)?kiwi-([^/]+)$	kiwi/	[R=301,L]
+RewriteRule	^(kiwi(-parent)?/)?kiwi-([^-/]+)(\.html)?$	kiwi/$3.html	[R=301,L]
+RewriteRule	^(kiwi(-parent)?/)?kiwi-([^/]+)(\.html)?$	kiwi/	[R=301,L]
 
 # LDCache
 RewriteRule	^ldcache-backend-([^-/]+)$	ldcache/backends.html	[R=301,L]
@@ -39,7 +39,7 @@ RewriteRule	^ldpath-([^/]+)$	ldpath/	[R=
 RewriteRule	^marmotta-(installer|webapp)$	installation.html	[R=301,L]
 RewriteRule	^marmotta-commons$	commons.html	[R=301,L]
 RewriteRule	^marmotta-client-([^/]+)$	client-library.html	[R=301,L]
-RewriteRule	^marmotta-([^-/]+)$	platform/$1-module.html	[R=301,L]
+RewriteRule	^marmotta-([^-/]+)(\.html)?$	platform/$1-module.html	[R=301,L]
 RewriteRule	^marmotta-([^/]+)$	platform/	[R=301,L]
 
 # Sesame-Tools