Posted to commits@jena.apache.org by rv...@apache.org on 2015/02/17 13:21:44 UTC

svn commit: r1660361 - in /jena/site/trunk/content/documentation/hadoop: demo.md mapred.mdtext

Author: rvesse
Date: Tue Feb 17 12:21:43 2015
New Revision: 1660361

URL: http://svn.apache.org/r1660361
Log:
Finish first pass at Elephas documentation

Added:
    jena/site/trunk/content/documentation/hadoop/demo.md
Modified:
    jena/site/trunk/content/documentation/hadoop/mapred.mdtext

Added: jena/site/trunk/content/documentation/hadoop/demo.md
URL: http://svn.apache.org/viewvc/jena/site/trunk/content/documentation/hadoop/demo.md?rev=1660361&view=auto
==============================================================================
--- jena/site/trunk/content/documentation/hadoop/demo.md (added)
+++ jena/site/trunk/content/documentation/hadoop/demo.md Tue Feb 17 12:21:43 2015
@@ -0,0 +1,106 @@
+Title: Apache Jena Elephas - RDF Stats Demo
+
+The RDF Stats Demo is a pre-built application available as a ready-to-run Hadoop Job JAR with all dependencies embedded within it.  The demo app uses the other Elephas libraries to calculate a number of basic statistics over any RDF data supported by Elephas.
+
+To use it you will first need to build it from source or download the relevant Maven artefact:
+
+    <dependency>
+      <groupId>org.apache.jena</groupId>
+      <artifactId>jena-elephas-stats</artifactId>
+      <version>x.y.z</version>
+      <classifier>hadoop-job</classifier>
+    </dependency>
+    
+Where `x.y.z` is the desired version.
+
+# Pre-requisites
+
+In order to run this demo you will need a Hadoop 2.x cluster available; for simple experimentation purposes a [single node cluster](http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html) will be sufficient.
+
+# Running
+
+Assuming your cluster is up and running and the `hadoop` command is available on your path, you can run the application without any arguments to see its help:
+
+    > hadoop jar jena-elephas-stats-VERSION-hadoop-job.jar org.apache.jena.hadoop.rdf.stats.RdfStats
+    NAME
+        hadoop jar PATH_TO_JAR org.apache.jena.hadoop.rdf.stats.RdfStats - A
+        command which computes statistics on RDF data using Hadoop
+
+    SYNOPSIS
+        hadoop jar PATH_TO_JAR org.apache.jena.hadoop.rdf.stats.RdfStats
+                [ {-a | --all} ] [ {-d | --data-types} ] [ {-g | --graph-sizes} ]
+                [ {-h | --help} ] [ --input-type <inputType> ] [ {-n | --node-count} ]
+                [ --namespaces ] {-o | --output} <OutputPath> [ {-t | --type-count} ]
+                [--] <InputPath>...
+
+    OPTIONS
+        -a, --all
+            Requests that all available statistics be calculated
+
+        -d, --data-types
+            Requests that literal data type usage counts be calculated
+
+        -g, --graph-sizes
+            Requests that the size of each named graph be counted
+
+        -h, --help
+            Display help information
+
+        --input-type <inputType>
+            Specifies whether the input data is a mixture of quads and triples,
+            just quads or just triples. Using the most specific data type will
+            yield the most accurate statistics
+
+            This option's value is restricted to the following value(s):
+                mixed
+                quads
+                triples
+
+        -n, --node-count
+            Requests that node usage counts be calculated
+
+        --namespaces
+            Requests that namespace usage counts be calculated
+
+        -o <OutputPath>, --output <OutputPath>
+            Sets the output path
+
+        -t, --type-count
+            Requests that rdf:type usage counts be calculated
+
+        --
+            This option can be used to separate command-line options from the
+            list of arguments (useful when arguments might be mistaken for
+            command-line options)
+
+        <InputPath>
+            Sets the input path(s)
+
+If we wanted to calculate the node count on some data we could do the following:
+
+    > hadoop jar jena-elephas-stats-VERSION-hadoop-job.jar org.apache.jena.hadoop.rdf.stats.RdfStats --node-count --output /example/output /example/input
+
+This calculates the node counts for the input data found in `/example/input`, placing the generated counts in `/example/output`.
+
+## Specifying Inputs and Outputs
+
+Inputs are specified by providing one or more paths to the data you wish to analyse.  You can provide directory paths, in which case all files within the directory will be processed.
+
+To specify the output location use the `-o` or `--output` option followed by the desired output path.
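+
+For example, to run a node count over two input directories at once (the paths shown are purely illustrative):
+
+    > hadoop jar jena-elephas-stats-VERSION-hadoop-job.jar org.apache.jena.hadoop.rdf.stats.RdfStats --node-count --output /example/output /example/input1 /example/input2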
+
+By default the demo application assumes a mixture of quads and triples data.  If you know your data contains only triples or only quads then you can use the `--input-type` option followed by `triples` or `quads` to indicate the type of your data.  Not doing this can skew some statistics because the default is to assume mixed data, in which case all triples are upgraded into quads when calculating the statistics.
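+
+For example, if you know your input consists solely of triples:
+
+    > hadoop jar jena-elephas-stats-VERSION-hadoop-job.jar org.apache.jena.hadoop.rdf.stats.RdfStats --node-count --input-type triples --output /example/output /example/input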
+    
+## Available Statistics
+
+The following statistics are available and are activated by the relevant command line option:
+
+<table>
+  <tr><th>Command Line Option</th><th>Statistic</th><th>Description & Notes</th></tr>
+  <tr><td>`-n` or `--node-count`</td><td>Node Count</td><td>Counts the occurrences of each unique RDF term i.e. node in Jena parlance</td></tr>
+  <tr><td>`-t` or `--type-count`</td><td>Type Count</td><td>Counts the occurrences of each declared `rdf:type` value</td></tr>
+  <tr><td>`-d` or `--data-types`</td><td>Data Type Count</td><td>Counts the occurrences of each declared literal data type</td></tr>
+  <tr><td>`--namespaces`</td><td>Namespace Counts</td><td>Counts the occurrences of namespaces within the data.<br />Namespaces are determined by splitting URIs at the `#` fragment separator if present, and otherwise at the last `/` character</td></tr>
+  <tr><td>`-g` or `--graph-sizes`</td><td>Graph Sizes</td><td>Counts the sizes of each graph declared in the data</td></tr>
+</table>
+
+You can also use the `-a` or `--all` option if you simply wish to calculate all statistics.
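+
+For example:
+
+    > hadoop jar jena-elephas-stats-VERSION-hadoop-job.jar org.apache.jena.hadoop.rdf.stats.RdfStats --all --output /example/output /example/input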
\ No newline at end of file

Modified: jena/site/trunk/content/documentation/hadoop/mapred.mdtext
URL: http://svn.apache.org/viewvc/jena/site/trunk/content/documentation/hadoop/mapred.mdtext?rev=1660361&r1=1660360&r2=1660361&view=diff
==============================================================================
--- jena/site/trunk/content/documentation/hadoop/mapred.mdtext (original)
+++ jena/site/trunk/content/documentation/hadoop/mapred.mdtext Tue Feb 17 12:21:43 2015
@@ -16,6 +16,8 @@ The following common tasks are supported
 - Splitting
 - Transforming
 
+Note that standard Map/Reduce programming rules apply as normal.  For example if a mapper/reducer transforms between data types then you need to make the appropriate `setMapOutputKeyClass()`, `setMapOutputValueClass()`, `setOutputKeyClass()` and `setOutputValueClass()` calls on your Job configuration.
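+
+For example, the node count job shown later on this page maps from triples to node/count pairs and so must declare its map output types:
+
+    job.setMapOutputKeyClass(NodeWritable.class);
+    job.setMapOutputValueClass(LongWritable.class);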
+
 ## Counting
 
 Counting is one of the classic Map/Reduce tasks and features as the official Map/Reduce example for both Hadoop itself and Elephas.  Implementations cover a number of different counting tasks that you might want to carry out upon RDF data; in most cases you will use the desired `Mapper` implementation in conjunction with the `NodeCountReducer`.
@@ -36,7 +38,9 @@ Finally you may be interested in the usa
 
 ## Filtering
 
-Filtering is another classic Map/Reduce use case, here you want to take the data and extract only the portions that you are interested in based on some criteria.  All our filter `Mapper` implementations also support a Job configuration option named `rdf.mapreduce.filter.invert` allowing their effects to be inverted if desired.
+Filtering is another classic Map/Reduce use case: here you want to take the data and extract only the portions that you are interested in based on some criteria.  All our filter `Mapper` implementations also support a Job configuration option named `rdf.mapreduce.filter.invert` allowing their effects to be inverted if desired e.g.
+
+    config.setBoolean(RdfMapReduceConstants.FILTER_INVERT, true);
 
 ### Valid Data
 
@@ -47,7 +51,7 @@ One type of filter that may be useful pa
 - Object can be a URI/Blank Node/Literal
 - Graph can only be a URI or Blank Node
 
-If you wanted to extract only the bad data e.g. for debugging then you can of course invert these filters by setting `rdf.mapreduce.filter.invert` to `true`.
+If you wanted to extract only the bad data, e.g. for debugging, then you can of course invert these filters by setting `rdf.mapreduce.filter.invert` to `true` as shown above.
 
 ### Ground Data
 
@@ -55,9 +59,13 @@ In some cases you may only be interestin
 
 ### Data with a specific URI
 
-In lots of case you may want to extract only data where a specific URI occurs in a specific position, for example if you wanted to extract all the `rdf:type` declarations then you might want to use the `TripleFilterByPredicateUriMapper` or `QuadFilterByPredicateUriMapper` as appropriate.  The job configuration option `rdf.mapreduce.filter.predicate.uris` is used to provide a comma separated list of the full URIs you want the filter to accept.
+In many cases you may want to extract only data where a specific URI occurs in a specific position; for example if you wanted to extract all the `rdf:type` declarations then you might want to use the `TripleFilterByPredicateUriMapper` or `QuadFilterByPredicateUriMapper` as appropriate.  The job configuration option `rdf.mapreduce.filter.predicate.uris` is used to provide a comma separated list of the full URIs you want the filter to accept e.g.
+
+    config.set(RdfMapReduceConstants.FILTER_PREDICATE_URIS, "http://example.org/predicate,http://another.org/predicate");
+
+As with counting node usage, you can substitute `Subject`, `Object` or `Graph` for `Predicate` as desired.  You will also need to change the job configuration option to match; for example, to filter on subject URIs in quads use the `QuadFilterBySubjectUriMapper` and the `rdf.mapreduce.filter.subject.uris` configuration option e.g.
 
-Similar to the counting of node usage you can substitute `Predicate` for `Subject`, `Object` or `Graph` as desired.  You will also need to do this in the job configuration option, for example to filter on subject URIs in quads use the `QuadFilterBySubjectUriMapper` and the `rdf.mapreduce.filter.subject.uris` configuration option.
+    config.set(RdfMapReduceConstants.FILTER_SUBJECT_URIS, "http://example.org/myInstance");
 
 ## Grouping
 
@@ -74,4 +82,47 @@ Splitting allows you to split triples/qu
 
 Transforming provides some very simple implementations that allow you to convert between triples and quads.  For the lossy case of going from quads to triples simply use the `QuadsToTriplesMapper`.
 
-If you want to go the other way - triples to quads - this requires adding a graph field to each triple and we provide two implementations that do that.  Firstly there is `TriplesToQuadsBySubjectMapper` which puts each triple into a graph based on its subject i.e. all triples with a common subject go into a graph named for the subject.  Secondly there is `TriplesToQuadsConstantGraphMapper` which simply puts all triples into the default graph, if you wish to change the target graph you should extend this class.  If you wanted to select the graph to use based on some arbitrary criteria you should look at extending the `AbstractTriplesToQuadsMapper` instead.
\ No newline at end of file
+If you want to go the other way (triples to quads) this requires adding a graph field to each triple, and we provide two implementations that do that.  Firstly there is `TriplesToQuadsBySubjectMapper` which puts each triple into a graph based on its subject, i.e. all triples with a common subject go into a graph named for the subject.  Secondly there is `TriplesToQuadsConstantGraphMapper` which simply puts all triples into the default graph; if you wish to change the target graph you should extend this class.  If you wanted to select the graph to use based on some arbitrary criteria you should look at extending the `AbstractTriplesToQuadsMapper` instead.
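+
+As a concrete illustration, here is a minimal sketch of a map-only job which down-converts quads to triples using the `QuadsToTriplesMapper`; the `QuadsInputFormat` and `NTriplesOutputFormat` classes are assumed from the Elephas IO library, and the paths are purely illustrative:
+
+    Job job = Job.getInstance(config);
+    job.setJarByClass(Example.class);
+    job.setJobName("Quads to Triples");
+
+    // Map-only job, no reducer required
+    job.setMapperClass(QuadsToTriplesMapper.class);
+    job.setNumReduceTasks(0);
+
+    // The mapper keeps the input keys and converts QuadWritable values
+    // into TripleWritable values
+    job.setOutputKeyClass(LongWritable.class);
+    job.setOutputValueClass(TripleWritable.class);
+
+    // Read any quads serialisation, write NTriples
+    job.setInputFormatClass(QuadsInputFormat.class);
+    job.setOutputFormatClass(NTriplesOutputFormat.class);
+
+    // Illustrative input and output paths
+    FileInputFormat.setInputPaths(job, new Path("/example/quads"));
+    FileOutputFormat.setOutputPath(job, new Path("/example/triples"));
+
+    job.waitForCompletion(true);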
+
+# Example Jobs
+
+## Node Count
+
+The following example shows how to configure a job which performs a node count i.e. counts the usages of RDF terms (aka nodes in Jena parlance) within the data:
+
+    // Assumes we have already created a Hadoop Configuration
+    // and stored it in the variable config
+    Job job = Job.getInstance(config);
+
+    // This is necessary as otherwise Hadoop won't ship the JAR to all
+    // nodes and you'll get ClassNotFoundException and similar errors
+    job.setJarByClass(Example.class);
+
+    // Give our job a friendly name
+    job.setJobName("RDF Triples Node Usage Count");
+
+    // Mapper class
+    // Since the map output types differ from the input types we have to
+    // declare them explicitly
+    job.setMapperClass(TripleNodeCountMapper.class);
+    job.setMapOutputKeyClass(NodeWritable.class);
+    job.setMapOutputValueClass(LongWritable.class);
+
+    // Reducer class
+    job.setReducerClass(NodeCountReducer.class);
+
+    // Declare the final output types produced by the reducer
+    job.setOutputKeyClass(NodeWritable.class);
+    job.setOutputValueClass(LongWritable.class);
+
+    // Input
+    // TriplesInputFormat accepts any RDF triples serialisation
+    job.setInputFormatClass(TriplesInputFormat.class);
+
+    // Output
+    // NTriplesNodeOutputFormat produces lines consisting of a Node formatted
+    // according to the NTriples spec and the value separated by a tab
+    job.setOutputFormatClass(NTriplesNodeOutputFormat.class);
+
+    // Set your input and output paths
+    FileInputFormat.setInputPaths(job, new Path("/example/input"));
+    FileOutputFormat.setOutputPath(job, new Path("/example/output"));
+
+    // Now run the job and wait for it to complete
+    job.waitForCompletion(true);
\ No newline at end of file