Posted to commits@orc.apache.org by om...@apache.org on 2018/05/17 22:51:26 UTC

orc git commit: Fix more broken links caused by jekyll update

Repository: orc
Updated Branches:
  refs/heads/master b5455cd0d -> 5b5c0d5bb


Fix more broken links caused by jekyll update

Signed-off-by: Owen O'Malley <om...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/orc/repo
Commit: http://git-wip-us.apache.org/repos/asf/orc/commit/5b5c0d5b
Tree: http://git-wip-us.apache.org/repos/asf/orc/tree/5b5c0d5b
Diff: http://git-wip-us.apache.org/repos/asf/orc/diff/5b5c0d5b

Branch: refs/heads/master
Commit: 5b5c0d5bb14469ddf8e399b4ed879cc1cca9bec6
Parents: b5455cd
Author: Owen O'Malley <om...@apache.org>
Authored: Thu May 17 15:51:00 2018 -0700
Committer: Owen O'Malley <om...@apache.org>
Committed: Thu May 17 15:51:00 2018 -0700

----------------------------------------------------------------------
 site/_docs/core-java.md            | 72 ++++++++++++++++-----------------
 site/_docs/mapred.md               | 24 +++++------
 site/_docs/mapreduce.md            | 24 +++++------
 site/_docs/types.md                |  2 +-
 site/_posts/2015-06-26-new-logo.md |  2 +-
 site/specification/ORCv0.md        |  6 +--
 site/specification/ORCv1.md        |  8 ++--
 site/specification/ORCv2.md        |  8 ++--
 8 files changed, 73 insertions(+), 73 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/orc/blob/5b5c0d5b/site/_docs/core-java.md
----------------------------------------------------------------------
diff --git a/site/_docs/core-java.md b/site/_docs/core-java.md
index 7cbdfbe..c4211a9 100644
--- a/site/_docs/core-java.md
+++ b/site/_docs/core-java.md
@@ -11,10 +11,10 @@ read and write the data.
 ## Vectorized Row Batch
 
 Data is passed to ORC as instances of
-[VectorizedRowBatch]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch.html)
+[VectorizedRowBatch](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch.html)
 that contain the data for 1024 rows. The focus is on speed and
 accessing the data fields directly. `cols` is an array of
-[ColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ColumnVector.html)
+[ColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ColumnVector.html)
 and `size` is the number of rows.
 
 ~~~ java
@@ -27,7 +27,7 @@ public class VectorizedRowBatch {
 }
 ~~~
 
-[ColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ColumnVector.html)
+[ColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ColumnVector.html)
 is the parent type of the different kinds of columns and has some
 fields that are shared across all of the column types. In particular,
 the `noNulls` flag if there are no nulls in this column for this batch
@@ -58,26 +58,26 @@ The subtypes of ColumnVector are:
 
 | ORC Type | ColumnVector |
 | -------- | ------------- |
-| array | [ListColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ListColumnVector.html) |
-| binary | [BytesColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html) |
-| bigint | [LongColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
-| boolean | [LongColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
-| char | [BytesColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html) |
-| date | [LongColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
-| decimal | [DecimalColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DecimalColumnVector.html) |
-| double | [DoubleColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html) |
-| float | [DoubleColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html) |
-| int | [LongColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
-| map | [MapColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/MapColumnVector.html) |
-| smallint | [LongColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
-| string | [BytesColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html) |
-| struct | [StructColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/StructColumnVector.html) |
-| timestamp | [TimestampColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/TimestampColumnVector.html) |
-| tinyint | [LongColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
-| uniontype | [UnionColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/UnionColumnVector.html) |
-| varchar | [BytesColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html) |
-
-[LongColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) handles all of the integer types (boolean, bigint,
+| array | [ListColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ListColumnVector.html) |
+| binary | [BytesColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html) |
+| bigint | [LongColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
+| boolean | [LongColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
+| char | [BytesColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html) |
+| date | [LongColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
+| decimal | [DecimalColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DecimalColumnVector.html) |
+| double | [DoubleColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html) |
+| float | [DoubleColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html) |
+| int | [LongColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
+| map | [MapColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/MapColumnVector.html) |
+| smallint | [LongColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
+| string | [BytesColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html) |
+| struct | [StructColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/StructColumnVector.html) |
+| timestamp | [TimestampColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/TimestampColumnVector.html) |
+| tinyint | [LongColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) |
+| uniontype | [UnionColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/UnionColumnVector.html) |
+| varchar | [BytesColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html) |
+
+[LongColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html) handles all of the integer types (boolean, bigint,
 date, int, smallint, and tinyint). The data is represented as an array of
 longs where each value is sign-extended as necessary.
 
@@ -88,7 +88,7 @@ public class LongColumnVector extends ColumnVector {
 }
 ~~~
 
-[TimestampColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/TimestampColumnVector.html)
+[TimestampColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/TimestampColumnVector.html)
 handles timestamp values. The data is represented as an array of longs
 and an array of ints.
 
@@ -104,7 +104,7 @@ public class TimestampColumnVector extends ColumnVector {
 }
 ~~~
 
-[DoubleColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html)
+[DoubleColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html)
 handles all of the floating point types (double, and float). The data
 is represented as an array of doubles.
 
@@ -115,7 +115,7 @@ public class DoubleColumnVector extends ColumnVector {
 }
 ~~~
 
-[DecimalColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DecimalColumnVector.html)
+[DecimalColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DecimalColumnVector.html)
 handles decimal columns. The data is represented as an array of
 HiveDecimalWritable. Note that this implementation is not performant
 and will likely be replaced.
@@ -127,7 +127,7 @@ public class DecimalColumnVector extends ColumnVector {
 }
 ~~~
 
-[BytesColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html)
+[BytesColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html)
 handles all of the binary types (binary, char, string, and
 varchar). The data is represented as a byte array, offset, and
 length. The byte arrays may or may not be shared between values.
@@ -141,7 +141,7 @@ public class BytesColumnVector extends ColumnVector {
 }
 ~~~
 
-[StructColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/StructColumnVector.html)
+[StructColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/StructColumnVector.html)
 handles the struct columns and represents the data as an array of
 `ColumnVector`. The value for row 5 consists of the fifth value from
 each of the `fields` values.
@@ -153,7 +153,7 @@ public class StructColumnVector extends ColumnVector {
 }
 ~~~
 
-[UnionColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/UnionColumnVector.html)
+[UnionColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/UnionColumnVector.html)
 handles the union columns and represents the data as an array of
 integers that pick the subtype and a `fields` array one per a
 subtype. Only the value of the `fields` that corresponds to
@@ -167,7 +167,7 @@ public class UnionColumnVector extends ColumnVector {
 }
 ~~~
 
-[ListColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ListColumnVector.html)
+[ListColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ListColumnVector.html)
 handles the array columns and represents the data as two arrays of
 integers for the offset and lengths and a `ColumnVector` for the
 children values.
@@ -187,7 +187,7 @@ public class ListColumnVector extends ColumnVector {
 }
 ~~~
 
-[MapColumnVector]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/MapColumnVector.html)
+[MapColumnVector](/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/MapColumnVector.html)
 handles the map columns and represents the data as two arrays of
 integers for the offset and lengths and two `ColumnVector`s for the
 keys and values.
@@ -212,9 +212,9 @@ public class MapColumnVector extends ColumnVector {
 
 ### Simple Example
 To write an ORC file, you need to define the schema and use the
-[OrcFile]({{site.url}}/api/orc-core/index.html?org/apache/orc/OrcFile.html)
+[OrcFile](/api/orc-core/index.html?org/apache/orc/OrcFile.html)
 class to create a
-[Writer]({{site.url}}/api/orc-core/index.html?org/apache/orc/Writer.html)
+[Writer](/api/orc-core/index.html?org/apache/orc/Writer.html)
 with the desired filename. This example sets the required schema
 parameter, but there are many other options to control the ORC writer.
 
@@ -319,9 +319,9 @@ writer.close();
 ## Reading ORC Files
 
 To read ORC files, use the
-[OrcFile]({{site.url}}/api/orc-core/index.html?org/apache/orc/OrcFile.html)
+[OrcFile](/api/orc-core/index.html?org/apache/orc/OrcFile.html)
 class to create a
-[Reader]({{site.url}}/api/orc-core/index.html?org/apache/orc/Reader.html)
+[Reader](/api/orc-core/index.html?org/apache/orc/Reader.html)
 that contains the metadata about the file. There are a few options to
 the ORC reader, but far fewer than the writer and none of them are
 required. The reader has methods for getting the number of rows,
@@ -333,7 +333,7 @@ Reader reader = OrcFile.createReader(new Path("my-file.orc"),
 ~~~
 
 To get the data, create a
-[RecordReader]({{site.url}}/api/orc-core/index.html?org/apache/orc/RecordReader.html)
+[RecordReader](/api/orc-core/index.html?org/apache/orc/RecordReader.html)
 object. By default, the RecordReader reads all rows and all columns,
 but there are options to control the data that is read.
 

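For context, the core-java.md page patched above walks through writing with OrcFile/Writer/VectorizedRowBatch and reading back through a Reader and RecordReader. Below is a minimal sketch of that flow; the schema, file name, and row count are illustrative and not taken from the page.

~~~ java
// A minimal sketch (not from the page) of the core Java flow: write a few rows
// with a Writer, then read them back through a RecordReader. The schema,
// file name, and row count are illustrative.
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.RecordReader;
import org.apache.orc.TypeDescription;
import org.apache.orc.Writer;

public class CoreJavaSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    TypeDescription schema = TypeDescription.fromString("struct<x:int,y:string>");

    // Writing: the schema is the one required writer option.
    Writer writer = OrcFile.createWriter(new Path("my-file.orc"),
        OrcFile.writerOptions(conf).setSchema(schema));
    VectorizedRowBatch batch = schema.createRowBatch();
    LongColumnVector x = (LongColumnVector) batch.cols[0];
    BytesColumnVector y = (BytesColumnVector) batch.cols[1];
    for (int r = 0; r < 10; ++r) {
      int row = batch.size++;
      x.vector[row] = r;
      y.setVal(row, ("row " + r).getBytes(StandardCharsets.UTF_8));
      if (batch.size == batch.getMaxSize()) {   // flush full batches
        writer.addRowBatch(batch);
        batch.reset();
      }
    }
    if (batch.size != 0) {
      writer.addRowBatch(batch);
    }
    writer.close();

    // Reading: create a Reader for the metadata, then a RecordReader for the rows.
    Reader reader = OrcFile.createReader(new Path("my-file.orc"),
        OrcFile.readerOptions(conf));
    RecordReader rows = reader.rows();
    VectorizedRowBatch readBatch = reader.getSchema().createRowBatch();
    while (rows.nextBatch(readBatch)) {
      System.out.println("read a batch of " + readBatch.size + " rows");
    }
    rows.close();
  }
}
~~~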
http://git-wip-us.apache.org/repos/asf/orc/blob/5b5c0d5b/site/_docs/mapred.md
----------------------------------------------------------------------
diff --git a/site/_docs/mapred.md b/site/_docs/mapred.md
index 3b1493a..137ec5c 100644
--- a/site/_docs/mapred.md
+++ b/site/_docs/mapred.md
@@ -7,7 +7,7 @@ permalink: /docs/mapred.html
 This page describes how to read and write ORC files from Hadoop's
 older org.apache.hadoop.mapred MapReduce APIs. If you want to use the
 new org.apache.hadoop.mapreduce API, please look at the [next
-page]({{site.url}}/docs/mapreduce.html).
+page](/docs/mapreduce.html).
 
 ## Reading ORC files
 
@@ -30,7 +30,7 @@ Add ORC and your desired version of Hadoop to your `pom.xml`:
 
 Set the minimal properties in your JobConf:
 
-* **mapreduce.job.inputformat.class** = [org.apache.orc.mapred.OrcInputFormat]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcInputFormat.html)
+* **mapreduce.job.inputformat.class** = [org.apache.orc.mapred.OrcInputFormat](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcInputFormat.html)
 * **mapreduce.input.fileinputformat.inputdir** = your input directory
 
 ORC files contain a series of values of the same type and that type
@@ -44,23 +44,23 @@ the key and a value based on the table below expanded recursively.
 
 | ORC Type | Writable Type |
 | -------- | ------------- |
-| array | [org.apache.orc.mapred.OrcList]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html) |
+| array | [org.apache.orc.mapred.OrcList](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html) |
 | binary | org.apache.hadoop.io.BytesWritable |
 | bigint | org.apache.hadoop.io.LongWritable |
 | boolean | org.apache.hadoop.io.BooleanWritable |
 | char | org.apache.hadoop.io.Text |
-| date | [org.apache.hadoop.hive.serde2.io.DateWritable]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/DateWritable.html) |
-| decimal | [org.apache.hadoop.hive.serde2.io.HiveDecimalWritable]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.html) |
+| date | [org.apache.hadoop.hive.serde2.io.DateWritable](/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/DateWritable.html) |
+| decimal | [org.apache.hadoop.hive.serde2.io.HiveDecimalWritable](/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.html) |
 | double | org.apache.hadoop.io.DoubleWritable |
 | float | org.apache.hadoop.io.FloatWritable |
 | int | org.apache.hadoop.io.IntWritable |
-| map | [org.apache.orc.mapred.OrcMap]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcMap.html) |
+| map | [org.apache.orc.mapred.OrcMap](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcMap.html) |
 | smallint | org.apache.hadoop.io.ShortWritable |
 | string | org.apache.hadoop.io.Text |
-| struct | [org.apache.orc.mapred.OrcStruct]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html) |
-| timestamp | [org.apache.orc.mapred.OrcTimestamp]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcTimestamp.html) |
+| struct | [org.apache.orc.mapred.OrcStruct](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html) |
+| timestamp | [org.apache.orc.mapred.OrcTimestamp](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcTimestamp.html) |
 | tinyint | org.apache.hadoop.io.ByteWritable |
-| uniontype | [org.apache.orc.mapred.OrcUnion]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcUnion.html) |
+| uniontype | [org.apache.orc.mapred.OrcUnion](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcUnion.html) |
 | varchar | org.apache.hadoop.io.Text |
 
 Let's assume that your input directory contains ORC files with the
@@ -90,7 +90,7 @@ public class MyMapper
 
 To write ORC files from your MapReduce job, you'll need to set
 
-* **mapreduce.job.outputformat.class** = [org.apache.orc.mapred.OrcOutputFormat]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcOutputFormat.html)
+* **mapreduce.job.outputformat.class** = [org.apache.orc.mapred.OrcOutputFormat](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcOutputFormat.html)
 * **mapreduce.output.fileoutputformat.outputdir** = your output directory
 * **orc.mapred.output.schema** = the schema to write to the ORC file
 
@@ -140,9 +140,9 @@ MapReduce shuffle. The complex ORC types, since they are generic
 types, need to have their full type information provided to create the
 object. To enable MapReduce to properly instantiate the OrcStruct and
 other ORC types, we need to wrap it in either an
-[OrcKey]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcKey.html)
+[OrcKey](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcKey.html)
 for the shuffle key or
-[OrcValue]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcValue.html)
+[OrcValue](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcValue.html)
 for the shuffle value.
 
 To send two OrcStructs through the shuffle, define the following properties

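The mapred.md page above names the job properties (mapreduce.job.inputformat.class, mapreduce.job.outputformat.class, orc.mapred.output.schema) needed to read and write ORC with the older mapred API. Below is a hedged sketch of that wiring; the driver class name, paths, and schema are illustrative.

~~~ java
// A sketch of the JobConf setup described in mapred.md (older mapred API).
// The property keys come from the page; directories and schema are illustrative.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.orc.mapred.OrcInputFormat;
import org.apache.orc.mapred.OrcOutputFormat;

public class MapredDriverSketch {
  public static JobConf configure() {
    JobConf conf = new JobConf();
    // Equivalent to setting mapreduce.job.inputformat.class / outputformat.class.
    conf.setInputFormat(OrcInputFormat.class);
    conf.setOutputFormat(OrcOutputFormat.class);
    FileInputFormat.addInputPath(conf, new Path("/in"));        // input directory
    FileOutputFormat.setOutputPath(conf, new Path("/out"));     // output directory
    // Schema for the ORC files the job writes.
    conf.set("orc.mapred.output.schema", "struct<x:int,y:string>");
    return conf;
  }
}
~~~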
http://git-wip-us.apache.org/repos/asf/orc/blob/5b5c0d5b/site/_docs/mapreduce.md
----------------------------------------------------------------------
diff --git a/site/_docs/mapreduce.md b/site/_docs/mapreduce.md
index bba6148..2a88de6 100644
--- a/site/_docs/mapreduce.md
+++ b/site/_docs/mapreduce.md
@@ -7,7 +7,7 @@ permalink: /docs/mapreduce.html
 This page describes how to read and write ORC files from Hadoop's
 newer org.apache.hadoop.mapreduce MapReduce APIs. If you want to use the
 older org.apache.hadoop.mapred API, please look at the [previous
-page]({{site.url}}/docs/mapred.html).
+page](/docs/mapred.html).
 
 ## Reading ORC files
 
@@ -30,7 +30,7 @@ Add ORC and your desired version of Hadoop to your `pom.xml`:
 
 Set the minimal properties in your JobConf:
 
-* **mapreduce.job.inputformat.class** = [org.apache.orc.mapreduce.OrcInputFormat]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapreduce/OrcInputFormat.html)
+* **mapreduce.job.inputformat.class** = [org.apache.orc.mapreduce.OrcInputFormat](/api/orc-mapreduce/index.html?org/apache/orc/mapreduce/OrcInputFormat.html)
 * **mapreduce.input.fileinputformat.inputdir** = your input directory
 
 ORC files contain a series of values of the same type and that type
@@ -44,23 +44,23 @@ the key and a value based on the table below expanded recursively.
 
 | ORC Type | Writable Type |
 | -------- | ------------- |
-| array | [org.apache.orc.mapred.OrcList]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html) |
+| array | [org.apache.orc.mapred.OrcList](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html) |
 | binary | org.apache.hadoop.io.BytesWritable |
 | bigint | org.apache.hadoop.io.LongWritable |
 | boolean | org.apache.hadoop.io.BooleanWritable |
 | char | org.apache.hadoop.io.Text |
-| date | [org.apache.hadoop.hive.serde2.io.DateWritable]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/DateWritable.html) |
-| decimal | [org.apache.hadoop.hive.serde2.io.HiveDecimalWritable]({{site.url}}/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.html) |
+| date | [org.apache.hadoop.hive.serde2.io.DateWritable](/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/DateWritable.html) |
+| decimal | [org.apache.hadoop.hive.serde2.io.HiveDecimalWritable](/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.html) |
 | double | org.apache.hadoop.io.DoubleWritable |
 | float | org.apache.hadoop.io.FloatWritable |
 | int | org.apache.hadoop.io.IntWritable |
-| map | [org.apache.orc.mapred.OrcMap]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcMap.html) |
+| map | [org.apache.orc.mapred.OrcMap](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcMap.html) |
 | smallint | org.apache.hadoop.io.ShortWritable |
 | string | org.apache.hadoop.io.Text |
-| struct | [org.apache.orc.mapred.OrcStruct]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html) |
-| timestamp | [org.apache.orc.mapred.OrcTimestamp]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcTimestamp.html) |
+| struct | [org.apache.orc.mapred.OrcStruct](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html) |
+| timestamp | [org.apache.orc.mapred.OrcTimestamp](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcTimestamp.html) |
 | tinyint | org.apache.hadoop.io.ByteWritable |
-| uniontype | [org.apache.orc.mapred.OrcUnion]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcUnion.html) |
+| uniontype | [org.apache.orc.mapred.OrcUnion](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcUnion.html) |
 | varchar | org.apache.hadoop.io.Text |
 
 Let's assume that your input directory contains ORC files with the
@@ -86,7 +86,7 @@ public static class MyMapper
 
 To write ORC files from your MapReduce job, you'll need to set
 
-* **mapreduce.job.outputformat.class** = [org.apache.orc.mapreduce.OrcOutputFormat]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapreduce/OrcOutputFormat.html)
+* **mapreduce.job.outputformat.class** = [org.apache.orc.mapreduce.OrcOutputFormat](/api/orc-mapreduce/index.html?org/apache/orc/mapreduce/OrcOutputFormat.html)
 * **mapreduce.output.fileoutputformat.outputdir** = your output directory
 * **orc.mapred.output.schema** = the schema to write to the ORC file
 
@@ -132,9 +132,9 @@ MapReduce shuffle. The complex ORC types, since they are generic
 types, need to have their full type information provided to create the
 object. To enable MapReduce to properly instantiate the OrcStruct and
 other ORC types, we need to wrap it in either an
-[OrcKey]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcKey.html)
+[OrcKey](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcKey.html)
 for the shuffle key or
-[OrcValue]({{site.url}}/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcValue.html)
+[OrcValue](/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcValue.html)
 for the shuffle value.
 
 To send two OrcStructs through the shuffle, define the following properties

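mapreduce.md describes the same setup for the newer org.apache.hadoop.mapreduce API, using org.apache.orc.mapreduce.OrcInputFormat and OrcOutputFormat. A comparable sketch, again with illustrative names:

~~~ java
// A sketch of the equivalent setup for the newer mapreduce API, per mapreduce.md.
// The job name, paths, and schema are illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.orc.mapreduce.OrcInputFormat;
import org.apache.orc.mapreduce.OrcOutputFormat;

public class MapreduceDriverSketch {
  public static Job configure(Configuration conf) throws Exception {
    // Schema for the ORC files the job writes (same key as the mapred page).
    conf.set("orc.mapred.output.schema", "struct<x:int,y:string>");
    Job job = Job.getInstance(conf, "orc example");
    job.setInputFormatClass(OrcInputFormat.class);
    job.setOutputFormatClass(OrcOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path("/in"));       // input directory
    FileOutputFormat.setOutputPath(job, new Path("/out"));    // output directory
    return job;
  }
}
~~~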
http://git-wip-us.apache.org/repos/asf/orc/blob/5b5c0d5b/site/_docs/types.md
----------------------------------------------------------------------
diff --git a/site/_docs/types.md b/site/_docs/types.md
index c58580e..6dc16ad 100644
--- a/site/_docs/types.md
+++ b/site/_docs/types.md
@@ -59,5 +59,5 @@ file would form the given tree.
 );
 ```
 
-![ORC column structure]({{ site.url }}/img/TreeWriters.png)
+![ORC column structure](/img/TreeWriters.png)
 

http://git-wip-us.apache.org/repos/asf/orc/blob/5b5c0d5b/site/_posts/2015-06-26-new-logo.md
----------------------------------------------------------------------
diff --git a/site/_posts/2015-06-26-new-logo.md b/site/_posts/2015-06-26-new-logo.md
index a5121a7..b22a89d 100644
--- a/site/_posts/2015-06-26-new-logo.md
+++ b/site/_posts/2015-06-26-new-logo.md
@@ -8,6 +8,6 @@ categories: [project]
 
 The ORC project has adopted a new logo. We hope you like it.
 
-![orc logo]({{ site.url }}/img/logo.png "orc logo")
+![orc logo](/img/logo.png "orc logo")
 
 Other great options included a big white hand on a black shield. *smile*
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/orc/blob/5b5c0d5b/site/specification/ORCv0.md
----------------------------------------------------------------------
diff --git a/site/specification/ORCv0.md b/site/specification/ORCv0.md
index 9a7168d..9cecd34 100644
--- a/site/specification/ORCv0.md
+++ b/site/specification/ORCv0.md
@@ -27,7 +27,7 @@ include the minimum and maximum values for each column in each set of
 file reader can skip entire sets of rows that aren't important for
 this query.
 
-![ORC file structure]({{ site.url }}/img/OrcFileLayout.png)
+![ORC file structure](/img/OrcFileLayout.png)
 
 # File Tail
 
@@ -154,7 +154,7 @@ All of the rows in an ORC file must have the same schema. Logically
 the schema is expressed as a tree as in the figure below, where
 the compound types have subcolumns under them.
 
-![ORC column structure]({{ site.url }}/img/TreeWriters.png)
+![ORC column structure](/img/TreeWriters.png)
 
 The equivalent Hive DDL would be:
 
@@ -363,7 +363,7 @@ for a chunk that compressed to 100,000 bytes would be [0x40, 0x0d,
 that as long as a decompressor starts at the top of a header, it can
 start decompressing without the previous bytes.
 
-![compression streams]({{ site.url }}/img/CompressionStream.png)
+![compression streams](/img/CompressionStream.png)
 
 The default compression chunk size is 256K, but writers can choose
 their own value. Larger chunks lead to better compression, but require

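The specification hunks above (ORCv0, and the same text in ORCv1/ORCv2 below) describe the 3-byte compression chunk header: the chunk length shifted left one bit, with the low bit marking an uncompressed "original" chunk, stored little-endian, so a chunk that compressed to 100,000 bytes gets the header [0x40, 0x0d, 0x03]. A small sketch reproducing that arithmetic:

~~~ java
// A worked example of the compression chunk header from the spec pages:
// 3 bytes, little-endian, value = (chunkLength << 1) | isOriginal.
public class ChunkHeaderSketch {
  static byte[] header(int chunkLength, boolean isOriginal) {
    int value = (chunkLength << 1) | (isOriginal ? 1 : 0);
    return new byte[] {
        (byte) (value & 0xff),           // least significant byte first
        (byte) ((value >> 8) & 0xff),
        (byte) ((value >> 16) & 0xff)
    };
  }

  public static void main(String[] args) {
    // A compressed chunk of 100,000 bytes -> 0x40 0x0d 0x03, matching the spec text.
    for (byte b : header(100_000, false)) {
      System.out.printf("0x%02x ", b);
    }
    System.out.println();
  }
}
~~~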
http://git-wip-us.apache.org/repos/asf/orc/blob/5b5c0d5b/site/specification/ORCv1.md
----------------------------------------------------------------------
diff --git a/site/specification/ORCv1.md b/site/specification/ORCv1.md
index 23e9935..57a6758 100644
--- a/site/specification/ORCv1.md
+++ b/site/specification/ORCv1.md
@@ -27,7 +27,7 @@ include the minimum and maximum values for each column in each set of
 file reader can skip entire sets of rows that aren't important for
 this query.
 
-![ORC file structure]({{ site.url }}/img/OrcFileLayout.png)
+![ORC file structure](/img/OrcFileLayout.png)
 
 # File Tail
 
@@ -154,7 +154,7 @@ All of the rows in an ORC file must have the same schema. Logically
 the schema is expressed as a tree as in the figure below, where
 the compound types have subcolumns under them.
 
-![ORC column structure]({{ site.url }}/img/TreeWriters.png)
+![ORC column structure](/img/TreeWriters.png)
 
 The equivalent Hive DDL would be:
 
@@ -363,7 +363,7 @@ for a chunk that compressed to 100,000 bytes would be [0x40, 0x0d,
 that as long as a decompressor starts at the top of a header, it can
 start decompressing without the previous bytes.
 
-![compression streams]({{ site.url }}/img/CompressionStream.png)
+![compression streams](/img/CompressionStream.png)
 
 The default compression chunk size is 256K, but writers can choose
 their own value. Larger chunks lead to better compression, but require
@@ -1009,4 +1009,4 @@ Bloom filter streams are interlaced with row group indexes. This placement
 makes it convenient to read the bloom filter stream and row index stream
 together in single read operation.
 
-![bloom filter]({{ site.url }}/img/BloomFilter.png)
+![bloom filter](/img/BloomFilter.png)

http://git-wip-us.apache.org/repos/asf/orc/blob/5b5c0d5b/site/specification/ORCv2.md
----------------------------------------------------------------------
diff --git a/site/specification/ORCv2.md b/site/specification/ORCv2.md
index 74fc974..9a5f8c3 100644
--- a/site/specification/ORCv2.md
+++ b/site/specification/ORCv2.md
@@ -47,7 +47,7 @@ include the minimum and maximum values for each column in each set of
 file reader can skip entire sets of rows that aren't important for
 this query.
 
-![ORC file structure]({{ site.url }}/img/OrcFileLayout.png)
+![ORC file structure](/img/OrcFileLayout.png)
 
 # File Tail
 
@@ -174,7 +174,7 @@ All of the rows in an ORC file must have the same schema. Logically
 the schema is expressed as a tree as in the figure below, where
 the compound types have subcolumns under them.
 
-![ORC column structure]({{ site.url }}/img/TreeWriters.png)
+![ORC column structure](/img/TreeWriters.png)
 
 The equivalent Hive DDL would be:
 
@@ -383,7 +383,7 @@ for a chunk that compressed to 100,000 bytes would be [0x40, 0x0d,
 that as long as a decompressor starts at the top of a header, it can
 start decompressing without the previous bytes.
 
-![compression streams]({{ site.url }}/img/CompressionStream.png)
+![compression streams](/img/CompressionStream.png)
 
 The default compression chunk size is 256K, but writers can choose
 their own value. Larger chunks lead to better compression, but require
@@ -1026,4 +1026,4 @@ Bloom filter streams are interlaced with row group indexes. This placement
 makes it convenient to read the bloom filter stream and row index stream
 together in single read operation.
 
-![bloom filter]({{ site.url }}/img/BloomFilter.png)
+![bloom filter](/img/BloomFilter.png)