Posted to commits@jena.apache.org by rv...@apache.org on 2013/08/23 21:14:59 UTC

svn commit: r1516987 - /jena/site/trunk/content/documentation/jdbc/index.mdtext

Author: rvesse
Date: Fri Aug 23 19:14:59 2013
New Revision: 1516987

URL: http://svn.apache.org/r1516987
Log:
Flesh out JDBC stub documentation

Modified:
    jena/site/trunk/content/documentation/jdbc/index.mdtext

Modified: jena/site/trunk/content/documentation/jdbc/index.mdtext
URL: http://svn.apache.org/viewvc/jena/site/trunk/content/documentation/jdbc/index.mdtext?rev=1516987&r1=1516986&r2=1516987&view=diff
==============================================================================
--- jena/site/trunk/content/documentation/jdbc/index.mdtext (original)
+++ jena/site/trunk/content/documentation/jdbc/index.mdtext Fri Aug 23 19:14:59 2013
@@ -8,9 +8,10 @@ are supported.  Otherwise it is a fully 
 
 ## Documentation
 
-A general overview of the functionality of the drivers is given later on this page, the
-following additional documentation topics are also available.
-
+- [Overview](#overview)
+- [Basic Usage](#basic-usage)
+    - [Making Queries](#making-queries)
+- [Alternatives](#alternatives)
 - [Jena JDBC Drivers](drivers.html)
 - [Implementing a custom Jena JDBC Driver](custom_driver.html)
 
@@ -27,8 +28,58 @@ As detailed on the [drivers](drivers.htm
 
 These are all built on a core library which can be used to build [custom drivers](custom_driver.html)
 if desired.  This means that all drivers share common infrastructure and thus exhibit broadly speaking
-the same behaviors.
+the same behavior around handling queries, updates and results.
+
+### Treatment of Results
+
+One important behavioral aspect to understand is how results are treated compared to a traditional
+JDBC driver.  SPARQL provides four query forms and thus four forms of results, while JDBC assumes all
+results have a simple tabular format.  Therefore one of the main jobs of the core library is to marshal
+the results of each kind of query into a tabular format.  For `SELECT` queries this is a trivial mapping;
+for `CONSTRUCT` and `DESCRIBE` the triples are mapped to columns named `Subject`, `Predicate` and `Object`
+respectively; and for `ASK` the boolean result is mapped to a single column named `ASK`.
+
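+For example, given a `java.sql.Connection` obtained from one of the Jena JDBC drivers, the results of a
+`CONSTRUCT` query might be consumed as follows.  This is a minimal sketch; the way the connection is
+obtained and the query itself are illustrative only:
+
+    // Sketch only: 'conn' is assumed to be a java.sql.Connection created from
+    // one of the Jena JDBC drivers (imports: java.sql.Statement, java.sql.ResultSet)
+    try (Statement stmt = conn.createStatement();
+         ResultSet rs = stmt.executeQuery(
+             "CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o } LIMIT 10")) {
+        while (rs.next()) {
+            // CONSTRUCT/DESCRIBE results are presented as a three column table
+            System.out.println(rs.getString("Subject") + " " +
+                rs.getString("Predicate") + " " + rs.getString("Object"));
+        }
+    }
+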
+The second issue is that JDBC expects uniform column typing throughout a result set, which is not
+something that holds true for SPARQL results.  Therefore the core library takes a pragmatic approach to column
+typing and makes the exact behavior configurable by the user.  The default behavior of the core library is
+to type all columns as `Types.NVARCHAR` with a Java type of `String`; this provides the widest compatibility
+possible with both the SPARQL results and consuming tools since we can treat everything as a string.  We
+refer to this default behavior as medium compatibility; it is sufficient to allow JDBC tools to interpret
+results for basic display but may be unsuitable for further processing.
+
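+As an illustrative sketch (assuming `rs` is a `ResultSet` returned by one of the drivers under the default
+compatibility level), the declared column types can be inspected via the standard `ResultSetMetaData` API:
+
+    // Sketch only: under the default (medium) compatibility level every column
+    // is reported as Types.NVARCHAR backed by a Java String
+    // (imports: java.sql.ResultSetMetaData, java.sql.Types)
+    ResultSetMetaData meta = rs.getMetaData();
+    for (int i = 1; i <= meta.getColumnCount(); i++) {
+        System.out.println(meta.getColumnLabel(i) + " is NVARCHAR? "
+            + (meta.getColumnType(i) == Types.NVARCHAR));
+    }
+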
+We then provide two alternatives.  The first, which we refer to as high compatibility, aims to present the
+data in a way that is more amenable to subsequent processing by JDBC tools.  In this mode the column types
+in a result set are detected by sniffing the data in the first row of the result set and assigning appropriate
+types.  For example, if the first row for a given column has the value `"1234"^^xsd:integer` then that column would
+be assigned the type `Types.BIGINT` and have the Java type `Long`.  Doing this allows JDBC tools to carry
+out subsequent calculations on the data in a type-appropriate way.  It is important to be aware that this
+sniffing may not be accurate for the entire result set and so can still result in errors when processing some rows.
+
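+As a sketch, assuming `rs` is a result set from a connection configured for high compatibility (the
+configuration mechanism is driver-specific and not shown here) and that the first row of the first column
+holds `"1234"^^xsd:integer`:
+
+    // Sketch only: with high compatibility the sniffed column reports a numeric
+    // JDBC type and values can be read without any string parsing by the caller
+    ResultSetMetaData meta = rs.getMetaData();
+    System.out.println(meta.getColumnType(1) == Types.BIGINT);  // true in this example
+    while (rs.next()) {
+        long value = rs.getLong(1);
+        System.out.println(value + 1);  // type appropriate calculations work directly
+    }
+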
+The second alternative, which we refer to as low compatibility, is designed for users who are using the driver
+directly and are fully aware that they are writing SPARQL queries and getting SPARQL results.  In this mode
+we make no effort to type columns in a friendly way, instead typing them as `Types.JAVA_OBJECT` with the Java
+type `Node` (i.e. the Jena [Node](/documentation/javadoc/jena/com/hp/hpl/jena/graph/Node.html) class).
+
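+For example (a sketch assuming low compatibility is enabled and that `s` is an illustrative column name),
+values can then be retrieved directly as Jena `Node` instances:
+
+    // Sketch only: under low compatibility getObject() returns Jena Node
+    // instances (requires import of com.hp.hpl.jena.graph.Node)
+    while (rs.next()) {
+        Node n = (Node) rs.getObject("s");
+        if (n.isURI()) {
+            System.out.println(n.getURI());
+        }
+    }
+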
+Regardless of how you configure column typing, the core library does its best to allow you to marshal values
+into strong types.  For example, even if you are using default compatibility and your columns are typed as strings
+from a JDBC perspective, you can still call `getLong("column")` and, if there is a valid conversion, the
+library will make it for you.
+
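+For instance (an illustrative sketch where `age` is a hypothetical column whose values happen to be
+integers):
+
+    // Sketch only: the column is typed as a string but the driver will
+    // attempt the numeric conversion when getLong() is called
+    while (rs.next()) {
+        long age = rs.getLong("age");
+        System.out.println(age);
+    }
+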
+Another point of interest is our support for different result set types.  The drivers support both
+`ResultSet.TYPE_FORWARD_ONLY` and `ResultSet.TYPE_SCROLL_INSENSITIVE`.  Note that regardless of the type
+chosen and the underlying query type, all result sets are `ResultSet.CONCUR_READ_ONLY` i.e. the `updateLong()`
+style methods cannot be used to update the underlying RDF data.  Users should be aware that the default
+behavior is to use forward-only result sets since this allows the drivers to stream the results and
+minimizes memory usage.  When scrollable result sets are used the drivers will cache all the results into
+memory, which can use large amounts of memory when querying big datasets.
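+
+For example, a scrollable (but still read-only) result set can be requested via the standard JDBC API.
+This is a sketch assuming `conn` is a connection obtained from one of the drivers:
+
+    // Sketch only: scrollable result sets cause the drivers to cache all
+    // results in memory, so prefer TYPE_FORWARD_ONLY for large results
+    Statement stmt = conn.createStatement(
+        ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
+    ResultSet rs = stmt.executeQuery("SELECT * WHERE { ?s ?p ?o }");
+    rs.last();                        // scrolling backwards and forwards is permitted
+    System.out.println(rs.getRow());  // e.g. report the total number of rows
+    rs.close();
+    stmt.close();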
+
+## Basic Usage
+
+### Making a Connection
+
+### Making Queries
 
+You make queries as you would with any other JDBC driver, the only difference being that the query
+text is SPARQL rather than SQL.
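+
+A minimal sketch (assuming `conn` is a `java.sql.Connection` obtained from one of the Jena JDBC drivers,
+and that `SELECT` result columns are named after the query variables):
+
+    // Sketch only: execute a SPARQL SELECT query via the standard JDBC API
+    try (Statement stmt = conn.createStatement();
+         ResultSet rs = stmt.executeQuery(
+             "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")) {
+        while (rs.next()) {
+            System.out.println(rs.getString("s"));
+        }
+    }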
 
 ## Alternatives