Posted to commits@rya.apache.org by pu...@apache.org on 2018/10/01 13:09:58 UTC

[3/4] incubator-rya git commit: Remove the Typical First Steps section

Remove the Typical First Steps section


Project: http://git-wip-us.apache.org/repos/asf/incubator-rya/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-rya/commit/6dd68828
Tree: http://git-wip-us.apache.org/repos/asf/incubator-rya/tree/6dd68828
Diff: http://git-wip-us.apache.org/repos/asf/incubator-rya/diff/6dd68828

Branch: refs/heads/master
Commit: 6dd6882877c817dfd2308563b6cb479c8e7e52ba
Parents: 0018afd
Author: Maxim Kolchin <ko...@gmail.com>
Authored: Thu Jul 5 12:07:42 2018 +0300
Committer: Maxim Kolchin <ko...@gmail.com>
Committed: Thu Jul 5 12:07:42 2018 +0300

----------------------------------------------------------------------
 extras/rya.manual/src/site/markdown/_index.md   |  3 +-
 extras/rya.manual/src/site/markdown/index.md    |  3 +-
 .../src/site/markdown/sm-firststeps.md          | 80 --------------------
 3 files changed, 4 insertions(+), 82 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/6dd68828/extras/rya.manual/src/site/markdown/_index.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/_index.md b/extras/rya.manual/src/site/markdown/_index.md
index 07dfe50..7a3aed9 100644
--- a/extras/rya.manual/src/site/markdown/_index.md
+++ b/extras/rya.manual/src/site/markdown/_index.md
@@ -36,7 +36,7 @@ This project contains documentation about Apache Rya, a scalable RDF triple stor
 - [Kafka Connect Integration](kafka-connect-integration.md)
 
 # Samples
-- [Typical First Steps](sm-firststeps.md)
+
 - [Simple Add/Query/Remove Statements](sm-simpleaqr.md)
 - [Sparql query](sm-sparqlquery.md)
 - [Adding Authentication](sm-addauth.md)
@@ -46,4 +46,5 @@ This project contains documentation about Apache Rya, a scalable RDF triple stor
 - [Alx](alx.md)
 
 # Development
+
 - [Building From Source](build-source.md)

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/6dd68828/extras/rya.manual/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/index.md b/extras/rya.manual/src/site/markdown/index.md
index e686736..54f30e6 100644
--- a/extras/rya.manual/src/site/markdown/index.md
+++ b/extras/rya.manual/src/site/markdown/index.md
@@ -38,7 +38,7 @@ This project contains documentation about Apache Rya, a scalable RDF triple stor
 - [Kafka Connect Integration](kafka-connect-integration.md)
 
 # Samples
-- [Typical First Steps](sm-firststeps.md)
+
 - [Simple Add/Query/Remove Statements](sm-simpleaqr.md)
 - [Sparql query](sm-sparqlquery.md)
 - [Adding Authentication](sm-addauth.md)
@@ -48,4 +48,5 @@ This project contains documentation about Apache Rya, a scalable RDF triple stor
 - [Alx](alx.md)
 
 # Development
+
 - [Building From Source](build-source.md)

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/6dd68828/extras/rya.manual/src/site/markdown/sm-firststeps.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/sm-firststeps.md b/extras/rya.manual/src/site/markdown/sm-firststeps.md
deleted file mode 100644
index 228bfb5..0000000
--- a/extras/rya.manual/src/site/markdown/sm-firststeps.md
+++ /dev/null
@@ -1,80 +0,0 @@
-
-<!--
-
-[comment]: # Licensed to the Apache Software Foundation (ASF) under one
-[comment]: # or more contributor license agreements.  See the NOTICE file
-[comment]: # distributed with this work for additional information
-[comment]: # regarding copyright ownership.  The ASF licenses this file
-[comment]: # to you under the Apache License, Version 2.0 (the
-[comment]: # "License"); you may not use this file except in compliance
-[comment]: # with the License.  You may obtain a copy of the License at
-[comment]: # 
-[comment]: #   http://www.apache.org/licenses/LICENSE-2.0
-[comment]: # 
-[comment]: # Unless required by applicable law or agreed to in writing,
-[comment]: # software distributed under the License is distributed on an
-[comment]: # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-[comment]: # KIND, either express or implied.  See the License for the
-[comment]: # specific language governing permissions and limitations
-[comment]: # under the License.
-
--->
-# Typical First Steps
-
-In this tutorial, I will give you a quick overview of some of the first steps I perform to get data loaded and ready for query.
-
-## Prerequisites
-
- We are assuming Accumulo 1.5+ usage here.
-
- * Apache Rya Source Code (`web.rya.war`)
- * Accumulo on top of Hadoop 0.20+
- * RDF Data (in N-Triples format; this format is the easiest to bulk load)
-
-## Building Source
-
-Skip this section if you already have the Map Reduce artifact and the WAR
-
-See the [Build From Source Section](build-source.md) to get the appropriate artifacts built
-
-## Load Data
-
-I find that the best way to load the data is through the Bulk Load Map Reduce job.
-
-* Save the RDF Data above onto HDFS. From now on we will refer to this location as `<RDF_HDFS_LOCATION>`
-* Move the `rya.mapreduce-<version>-job.jar` onto the hadoop cluster
-* Bulk load the data. Here is a sample command line:
-
-```
-hadoop jar ../rya.mapreduce-3.2.10-SNAPSHOT-job.jar org.apache.rya.accumulo.mr.RdfFileInputTool -Drdf.tablePrefix=lubm_ -Dcb.username=user -Dcb.pwd=cbpwd -Dcb.instance=instance -Dcb.zk=zookeeperLocation -Drdf.format=N-Triples <RDF_HDFS_LOCATION>
-```
-
-Once the data is loaded, it is good practice to compact your tables. You can do this by opening the Accumulo shell (`accumulo shell`) and running the `compact` command on the generated tables. Remember, the generated tables will be prefixed by the `rdf.tablePrefix` property you assigned above. The default table prefix is `rts`.
-
-Here is a sample accumulo shell command:
-
-```
-compact -p lubm_(.*)
-```
-
-See the [Load Data Section](loaddata.md) for more options on loading RDF data.
-
-## Run the Statistics Optimizer
-
-For the best query performance, it is recommended to run the Statistics Optimizer to create the Evaluation Statistics table. This job will read through your data and gather statistics on the distribution of the dataset. This table is then queried before query execution to reorder queries based on the data distribution.
-
-See the [Evaluation Statistics Table Section](eval.md) on how to do this.
-
-## Query data
-
-I find the easiest way to query is just to use the WAR. Load the WAR into your favorite web application container and go to the sparqlQuery.jsp page. Example:
-
-```
-http://localhost:8080/web.rya/sparqlQuery.jsp
-```
-
-This page provides a simple text box for running SPARQL queries against the store and getting data back.
-
-Remember to update the connection information in the WAR: `WEB-INF/spring/spring-accumulo.xml`
-
-See the [Query data section](querydata.md) for more information.