Posted to commits@orc.apache.org by om...@apache.org on 2018/05/17 22:52:11 UTC
orc git commit: Push additional link fixes.
Repository: orc
Updated Branches:
refs/heads/asf-site fa6a04b8e -> 56663dffe
Push additional link fixes.
Signed-off-by: Owen O'Malley <om...@apache.org>
Project: http://git-wip-us.apache.org/repos/asf/orc/repo
Commit: http://git-wip-us.apache.org/repos/asf/orc/commit/56663dff
Tree: http://git-wip-us.apache.org/repos/asf/orc/tree/56663dff
Diff: http://git-wip-us.apache.org/repos/asf/orc/diff/56663dff
Branch: refs/heads/asf-site
Commit: 56663dffe4f716c7df99ea6c57f3cb9b2f1b8937
Parents: fa6a04b
Author: Owen O'Malley <om...@apache.org>
Authored: Thu May 17 15:51:55 2018 -0700
Committer: Owen O'Malley <om...@apache.org>
Committed: Thu May 17 15:51:55 2018 -0700
----------------------------------------------------------------------
docs/core-java.html | 70 ++++++++++++++++----------------
docs/mapred.html | 24 +++++------
docs/mapreduce.html | 24 +++++------
docs/types.html | 2 +-
news/2015/06/26/new-logo/index.html | 2 +-
news/index.html | 2 +-
security/CVE-2018-8015/index.html | 2 +-
specification/ORCv0/index.html | 6 +--
specification/ORCv1/index.html | 8 ++--
specification/ORCv2/index.html | 8 ++--
10 files changed, 74 insertions(+), 74 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/docs/core-java.html
----------------------------------------------------------------------
diff --git a/docs/core-java.html b/docs/core-java.html
index 4424fa9..96eaef9 100644
--- a/docs/core-java.html
+++ b/docs/core-java.html
@@ -681,10 +681,10 @@ read and write the data.</p>
<h2 id="vectorized-row-batch">Vectorized Row Batch</h2>
<p>Data is passed to ORC as instances of
-<a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch.html">VectorizedRowBatch</a>
+<a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch.html">VectorizedRowBatch</a>
that contain the data for 1024 rows. The focus is on speed and
accessing the data fields directly. <code class="highlighter-rouge">cols</code> is an array of
-<a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ColumnVector.html">ColumnVector</a>
+<a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ColumnVector.html">ColumnVector</a>
and <code class="highlighter-rouge">size</code> is the number of rows.</p>
<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">package</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hadoop</span><span class="o">.</span><span class="na">hive</span><span class="o">.</span><span class="na">ql</span><span class="o">.</span><span class="na">exec</span><span class="o">.</span><span class="na">vector</span><span class="o">;</span>
@@ -696,7 +696,7 @@ and <code class="highlighter-rouge">size</code> is the number of rows.</p>
<span class="o">}</span>
</code></pre></div></div>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ColumnVector.html">ColumnVector</a>
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ColumnVector.html">ColumnVector</a>
is the parent type of the different kinds of columns and has some
fields that are shared across all of the column types. In particular,
the <code class="highlighter-rouge">noNulls</code> flag if there are no nulls in this column for this batch
@@ -734,80 +734,80 @@ true if that value is null.</p>
<tbody>
<tr>
<td>array</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ListColumnVector.html">ListColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ListColumnVector.html">ListColumnVector</a></td>
</tr>
<tr>
<td>binary</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a></td>
</tr>
<tr>
<td>bigint</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
</tr>
<tr>
<td>boolean</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
</tr>
<tr>
<td>char</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a></td>
</tr>
<tr>
<td>date</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
</tr>
<tr>
<td>decimal</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DecimalColumnVector.html">DecimalColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DecimalColumnVector.html">DecimalColumnVector</a></td>
</tr>
<tr>
<td>double</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html">DoubleColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html">DoubleColumnVector</a></td>
</tr>
<tr>
<td>float</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html">DoubleColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html">DoubleColumnVector</a></td>
</tr>
<tr>
<td>int</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
</tr>
<tr>
<td>map</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/MapColumnVector.html">MapColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/MapColumnVector.html">MapColumnVector</a></td>
</tr>
<tr>
<td>smallint</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
</tr>
<tr>
<td>string</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a></td>
</tr>
<tr>
<td>struct</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/StructColumnVector.html">StructColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/StructColumnVector.html">StructColumnVector</a></td>
</tr>
<tr>
<td>timestamp</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/TimestampColumnVector.html">TimestampColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/TimestampColumnVector.html">TimestampColumnVector</a></td>
</tr>
<tr>
<td>tinyint</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a></td>
</tr>
<tr>
<td>uniontype</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/UnionColumnVector.html">UnionColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/UnionColumnVector.html">UnionColumnVector</a></td>
</tr>
<tr>
<td>varchar</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a></td>
</tr>
</tbody>
</table>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a> handles all of the integer types (boolean, bigint,
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.html">LongColumnVector</a> handles all of the integer types (boolean, bigint,
date, int, smallint, and tinyint). The data is represented as an array of
longs where each value is sign-extended as necessary.</p>
@@ -817,7 +817,7 @@ longs where each value is sign-extended as necessary.</p>
<span class="o">}</span>
</code></pre></div></div>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/TimestampColumnVector.html">TimestampColumnVector</a>
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/TimestampColumnVector.html">TimestampColumnVector</a>
handles timestamp values. The data is represented as an array of longs
and an array of ints.</p>
@@ -832,7 +832,7 @@ and an array of ints.</p>
<span class="o">}</span>
</code></pre></div></div>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html">DoubleColumnVector</a>
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.html">DoubleColumnVector</a>
handles all of the floating point types (double, and float). The data
is represented as an array of doubles.</p>
@@ -842,7 +842,7 @@ is represented as an array of doubles.</p>
<span class="o">}</span>
</code></pre></div></div>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DecimalColumnVector.html">DecimalColumnVector</a>
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/DecimalColumnVector.html">DecimalColumnVector</a>
handles decimal columns. The data is represented as an array of
HiveDecimalWritable. Note that this implementation is not performant
and will likely be replaced.</p>
@@ -853,7 +853,7 @@ and will likely be replaced.</p>
<span class="o">}</span>
</code></pre></div></div>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a>
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.html">BytesColumnVector</a>
handles all of the binary types (binary, char, string, and
varchar). The data is represented as a byte array, offset, and
length. The byte arrays may or may not be shared between values.</p>
@@ -866,7 +866,7 @@ length. The byte arrays may or may not be shared between values.</p>
<span class="o">}</span>
</code></pre></div></div>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/StructColumnVector.html">StructColumnVector</a>
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/StructColumnVector.html">StructColumnVector</a>
handles the struct columns and represents the data as an array of
<code class="highlighter-rouge">ColumnVector</code>. The value for row 5 consists of the fifth value from
each of the <code class="highlighter-rouge">fields</code> values.</p>
@@ -877,7 +877,7 @@ each of the <code class="highlighter-rouge">fields</code> values.</p>
<span class="o">}</span>
</code></pre></div></div>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/UnionColumnVector.html">UnionColumnVector</a>
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/UnionColumnVector.html">UnionColumnVector</a>
handles the union columns and represents the data as an array of
integers that pick the subtype and a <code class="highlighter-rouge">fields</code> array one per a
subtype. Only the value of the <code class="highlighter-rouge">fields</code> that corresponds to
@@ -890,7 +890,7 @@ subtype. Only the value of the <code class="highlighter-rouge">fields</code> tha
<span class="o">}</span>
</code></pre></div></div>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ListColumnVector.html">ListColumnVector</a>
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/ListColumnVector.html">ListColumnVector</a>
handles the array columns and represents the data as two arrays of
integers for the offset and lengths and a <code class="highlighter-rouge">ColumnVector</code> for the
children values.</p>
@@ -909,7 +909,7 @@ children values.</p>
<span class="o">}</span>
</code></pre></div></div>
-<p><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/MapColumnVector.html">MapColumnVector</a>
+<p><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/ql/exec/vector/MapColumnVector.html">MapColumnVector</a>
handles the map columns and represents the data as two arrays of
integers for the offset and lengths and two <code class="highlighter-rouge">ColumnVector</code>s for the
keys and values.</p>
@@ -933,9 +933,9 @@ keys and values.</p>
<h3 id="simple-example">Simple Example</h3>
<p>To write an ORC file, you need to define the schema and use the
-<a href="http://localhost:4000/api/orc-core/index.html?org/apache/orc/OrcFile.html">OrcFile</a>
+<a href="/api/orc-core/index.html?org/apache/orc/OrcFile.html">OrcFile</a>
class to create a
-<a href="http://localhost:4000/api/orc-core/index.html?org/apache/orc/Writer.html">Writer</a>
+<a href="/api/orc-core/index.html?org/apache/orc/Writer.html">Writer</a>
with the desired filename. This example sets the required schema
parameter, but there are many other options to control the ORC writer.</p>
@@ -1036,9 +1036,9 @@ ranging from “<row>.0” to “<row>.4”.</p>
<h2 id="reading-orc-files">Reading ORC Files</h2>
<p>To read ORC files, use the
-<a href="http://localhost:4000/api/orc-core/index.html?org/apache/orc/OrcFile.html">OrcFile</a>
+<a href="/api/orc-core/index.html?org/apache/orc/OrcFile.html">OrcFile</a>
class to create a
-<a href="http://localhost:4000/api/orc-core/index.html?org/apache/orc/Reader.html">Reader</a>
+<a href="/api/orc-core/index.html?org/apache/orc/Reader.html">Reader</a>
that contains the metadata about the file. There are a few options to
the ORC reader, but far fewer than the writer and none of them are
required. The reader has methods for getting the number of rows,
@@ -1049,7 +1049,7 @@ schema, compression, etc. from the file.</p>
</code></pre></div></div>
<p>To get the data, create a
-<a href="http://localhost:4000/api/orc-core/index.html?org/apache/orc/RecordReader.html">RecordReader</a>
+<a href="/api/orc-core/index.html?org/apache/orc/RecordReader.html">RecordReader</a>
object. By default, the RecordReader reads all rows and all columns,
but there are options to control the data that is read.</p>
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/docs/mapred.html
----------------------------------------------------------------------
diff --git a/docs/mapred.html b/docs/mapred.html
index d23b006..dfcd516 100644
--- a/docs/mapred.html
+++ b/docs/mapred.html
@@ -676,7 +676,7 @@
<h1>Using in MapRed</h1>
<p>This page describes how to read and write ORC files from Hadoop’s
older org.apache.hadoop.mapred MapReduce APIs. If you want to use the
-new org.apache.hadoop.mapreduce API, please look at the <a href="http://localhost:4000/docs/mapreduce.html">next
+new org.apache.hadoop.mapreduce API, please look at the <a href="/docs/mapreduce.html">next
page</a>.</p>
<h2 id="reading-orc-files">Reading ORC files</h2>
@@ -700,7 +700,7 @@ page</a>.</p>
<p>Set the minimal properties in your JobConf:</p>
<ul>
- <li><strong>mapreduce.job.inputformat.class</strong> = <a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcInputFormat.html">org.apache.orc.mapred.OrcInputFormat</a></li>
+ <li><strong>mapreduce.job.inputformat.class</strong> = <a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcInputFormat.html">org.apache.orc.mapred.OrcInputFormat</a></li>
<li><strong>mapreduce.input.fileinputformat.inputdir</strong> = your input directory</li>
</ul>
@@ -723,7 +723,7 @@ the key and a value based on the table below expanded recursively.</p>
<tbody>
<tr>
<td>array</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html">org.apache.orc.mapred.OrcList</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html">org.apache.orc.mapred.OrcList</a></td>
</tr>
<tr>
<td>binary</td>
@@ -743,11 +743,11 @@ the key and a value based on the table below expanded recursively.</p>
</tr>
<tr>
<td>date</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/DateWritable.html">org.apache.hadoop.hive.serde2.io.DateWritable</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/DateWritable.html">org.apache.hadoop.hive.serde2.io.DateWritable</a></td>
</tr>
<tr>
<td>decimal</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.html">org.apache.hadoop.hive.serde2.io.HiveDecimalWritable</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.html">org.apache.hadoop.hive.serde2.io.HiveDecimalWritable</a></td>
</tr>
<tr>
<td>double</td>
@@ -763,7 +763,7 @@ the key and a value based on the table below expanded recursively.</p>
</tr>
<tr>
<td>map</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcMap.html">org.apache.orc.mapred.OrcMap</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcMap.html">org.apache.orc.mapred.OrcMap</a></td>
</tr>
<tr>
<td>smallint</td>
@@ -775,11 +775,11 @@ the key and a value based on the table below expanded recursively.</p>
</tr>
<tr>
<td>struct</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html">org.apache.orc.mapred.OrcStruct</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html">org.apache.orc.mapred.OrcStruct</a></td>
</tr>
<tr>
<td>timestamp</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcTimestamp.html">org.apache.orc.mapred.OrcTimestamp</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcTimestamp.html">org.apache.orc.mapred.OrcTimestamp</a></td>
</tr>
<tr>
<td>tinyint</td>
@@ -787,7 +787,7 @@ the key and a value based on the table below expanded recursively.</p>
</tr>
<tr>
<td>uniontype</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcUnion.html">org.apache.orc.mapred.OrcUnion</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcUnion.html">org.apache.orc.mapred.OrcUnion</a></td>
</tr>
<tr>
<td>varchar</td>
@@ -823,7 +823,7 @@ mapper code would look like:</p>
<p>To write ORC files from your MapReduce job, you’ll need to set</p>
<ul>
- <li><strong>mapreduce.job.outputformat.class</strong> = <a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcOutputFormat.html">org.apache.orc.mapred.OrcOutputFormat</a></li>
+ <li><strong>mapreduce.job.outputformat.class</strong> = <a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcOutputFormat.html">org.apache.orc.mapred.OrcOutputFormat</a></li>
<li><strong>mapreduce.output.fileoutputformat.outputdir</strong> = your output directory</li>
<li><strong>orc.mapred.output.schema</strong> = the schema to write to the ORC file</li>
</ul>
@@ -873,9 +873,9 @@ MapReduce shuffle. The complex ORC types, since they are generic
types, need to have their full type information provided to create the
object. To enable MapReduce to properly instantiate the OrcStruct and
other ORC types, we need to wrap it in either an
-<a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcKey.html">OrcKey</a>
+<a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcKey.html">OrcKey</a>
for the shuffle key or
-<a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcValue.html">OrcValue</a>
+<a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcValue.html">OrcValue</a>
for the shuffle value.</p>
<p>To send two OrcStructs through the shuffle, define the following properties
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/docs/mapreduce.html
----------------------------------------------------------------------
diff --git a/docs/mapreduce.html b/docs/mapreduce.html
index 162e3b7..45eb8fd 100644
--- a/docs/mapreduce.html
+++ b/docs/mapreduce.html
@@ -676,7 +676,7 @@
<h1>Using in MapReduce</h1>
<p>This page describes how to read and write ORC files from Hadoop’s
newer org.apache.hadoop.mapreduce MapReduce APIs. If you want to use the
-older org.apache.hadoop.mapred API, please look at the <a href="http://localhost:4000/docs/mapred.html">previous
+older org.apache.hadoop.mapred API, please look at the <a href="/docs/mapred.html">previous
page</a>.</p>
<h2 id="reading-orc-files">Reading ORC files</h2>
@@ -700,7 +700,7 @@ page</a>.</p>
<p>Set the minimal properties in your JobConf:</p>
<ul>
- <li><strong>mapreduce.job.inputformat.class</strong> = <a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapreduce/OrcInputFormat.html">org.apache.orc.mapreduce.OrcInputFormat</a></li>
+ <li><strong>mapreduce.job.inputformat.class</strong> = <a href="/api/orc-mapreduce/index.html?org/apache/orc/mapreduce/OrcInputFormat.html">org.apache.orc.mapreduce.OrcInputFormat</a></li>
<li><strong>mapreduce.input.fileinputformat.inputdir</strong> = your input directory</li>
</ul>
@@ -723,7 +723,7 @@ the key and a value based on the table below expanded recursively.</p>
<tbody>
<tr>
<td>array</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html">org.apache.orc.mapred.OrcList</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html">org.apache.orc.mapred.OrcList</a></td>
</tr>
<tr>
<td>binary</td>
@@ -743,11 +743,11 @@ the key and a value based on the table below expanded recursively.</p>
</tr>
<tr>
<td>date</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/DateWritable.html">org.apache.hadoop.hive.serde2.io.DateWritable</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/DateWritable.html">org.apache.hadoop.hive.serde2.io.DateWritable</a></td>
</tr>
<tr>
<td>decimal</td>
- <td><a href="http://localhost:4000/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.html">org.apache.hadoop.hive.serde2.io.HiveDecimalWritable</a></td>
+ <td><a href="/api/hive-storage-api/index.html?org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.html">org.apache.hadoop.hive.serde2.io.HiveDecimalWritable</a></td>
</tr>
<tr>
<td>double</td>
@@ -763,7 +763,7 @@ the key and a value based on the table below expanded recursively.</p>
</tr>
<tr>
<td>map</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcMap.html">org.apache.orc.mapred.OrcMap</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcMap.html">org.apache.orc.mapred.OrcMap</a></td>
</tr>
<tr>
<td>smallint</td>
@@ -775,11 +775,11 @@ the key and a value based on the table below expanded recursively.</p>
</tr>
<tr>
<td>struct</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html">org.apache.orc.mapred.OrcStruct</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcStruct.html">org.apache.orc.mapred.OrcStruct</a></td>
</tr>
<tr>
<td>timestamp</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcTimestamp.html">org.apache.orc.mapred.OrcTimestamp</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcTimestamp.html">org.apache.orc.mapred.OrcTimestamp</a></td>
</tr>
<tr>
<td>tinyint</td>
@@ -787,7 +787,7 @@ the key and a value based on the table below expanded recursively.</p>
</tr>
<tr>
<td>uniontype</td>
- <td><a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcUnion.html">org.apache.orc.mapred.OrcUnion</a></td>
+ <td><a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcUnion.html">org.apache.orc.mapred.OrcUnion</a></td>
</tr>
<tr>
<td>varchar</td>
@@ -819,7 +819,7 @@ mapper code would look like:</p>
<p>To write ORC files from your MapReduce job, you’ll need to set</p>
<ul>
- <li><strong>mapreduce.job.outputformat.class</strong> = <a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapreduce/OrcOutputFormat.html">org.apache.orc.mapreduce.OrcOutputFormat</a></li>
+ <li><strong>mapreduce.job.outputformat.class</strong> = <a href="/api/orc-mapreduce/index.html?org/apache/orc/mapreduce/OrcOutputFormat.html">org.apache.orc.mapreduce.OrcOutputFormat</a></li>
<li><strong>mapreduce.output.fileoutputformat.outputdir</strong> = your output directory</li>
<li><strong>orc.mapred.output.schema</strong> = the schema to write to the ORC file</li>
</ul>
@@ -865,9 +865,9 @@ MapReduce shuffle. The complex ORC types, since they are generic
types, need to have their full type information provided to create the
object. To enable MapReduce to properly instantiate the OrcStruct and
other ORC types, we need to wrap it in either an
-<a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcKey.html">OrcKey</a>
+<a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcKey.html">OrcKey</a>
for the shuffle key or
-<a href="http://localhost:4000/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcValue.html">OrcValue</a>
+<a href="/api/orc-mapreduce/index.html?org/apache/orc/mapred/OrcValue.html">OrcValue</a>
for the shuffle value.</p>
<p>To send two OrcStructs through the shuffle, define the following properties
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/docs/types.html
----------------------------------------------------------------------
diff --git a/docs/types.html b/docs/types.html
index 9538981..38d239a 100644
--- a/docs/types.html
+++ b/docs/types.html
@@ -749,7 +749,7 @@ file would form the given tree.</p>
);
</code></p>
-<p><img src="http://localhost:4000/img/TreeWriters.png" alt="ORC column structure" /></p>
+<p><img src="/img/TreeWriters.png" alt="ORC column structure" /></p>
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/news/2015/06/26/new-logo/index.html
----------------------------------------------------------------------
diff --git a/news/2015/06/26/new-logo/index.html b/news/2015/06/26/new-logo/index.html
index e731556..7b0a20d 100644
--- a/news/2015/06/26/new-logo/index.html
+++ b/news/2015/06/26/new-logo/index.html
@@ -173,7 +173,7 @@
<div class="post-content">
<p>The ORC project has adopted a new logo. We hope you like it.</p>
-<p><img src="http://localhost:4000/img/logo.png" alt="orc logo" title="orc logo" /></p>
+<p><img src="/img/logo.png" alt="orc logo" title="orc logo" /></p>
<p>Other great options included a big white hand on a black shield. <em>smile</em></p>
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/news/index.html
----------------------------------------------------------------------
diff --git a/news/index.html b/news/index.html
index 78b11de..c16cffe 100644
--- a/news/index.html
+++ b/news/index.html
@@ -1607,7 +1607,7 @@ more.</p>
<div class="post-content">
<p>The ORC project has adopted a new logo. We hope you like it.</p>
-<p><img src="http://localhost:4000/img/logo.png" alt="orc logo" title="orc logo" /></p>
+<p><img src="/img/logo.png" alt="orc logo" title="orc logo" /></p>
<p>Other great options included a big white hand on a black shield. <em>smile</em></p>
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/security/CVE-2018-8015/index.html
----------------------------------------------------------------------
diff --git a/security/CVE-2018-8015/index.html b/security/CVE-2018-8015/index.html
index a51e319..6eb6fd3 100644
--- a/security/CVE-2018-8015/index.html
+++ b/security/CVE-2018-8015/index.html
@@ -128,7 +128,7 @@ a child will cause the parser to infinitely recurse until the stack overflows.</
<p>This issue was discovered by Terry Chia.</p>
<h2 id="references">References:</h2>
-<p><a href="http://localhost:4000/security/">Apache ORC security</a></p>
+<p><a href="/security">Apache ORC security</a></p>
</article>
</div>
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/specification/ORCv0/index.html
----------------------------------------------------------------------
diff --git a/specification/ORCv0/index.html b/specification/ORCv0/index.html
index d6d5fde..7781b3e 100644
--- a/specification/ORCv0/index.html
+++ b/specification/ORCv0/index.html
@@ -108,7 +108,7 @@ include the minimum and maximum values for each column in each set of
file reader can skip entire sets of rows that aren’t important for
this query.</p>
-<p><img src="http://localhost:4000/img/OrcFileLayout.png" alt="ORC file structure" /></p>
+<p><img src="/img/OrcFileLayout.png" alt="ORC file structure" /></p>
<h1 id="file-tail">File Tail</h1>
@@ -233,7 +233,7 @@ needs to be read.
the schema is expressed as a tree as in the figure below, where
the compound types have subcolumns under them.</p>
-<p><img src="http://localhost:4000/img/TreeWriters.png" alt="ORC column structure" /></p>
+<p><img src="/img/TreeWriters.png" alt="ORC column structure" /></p>
<p>The equivalent Hive DDL would be:</p>
@@ -435,7 +435,7 @@ for a chunk that compressed to 100,000 bytes would be [0x40, 0x0d,
that as long as a decompressor starts at the top of a header, it can
start decompressing without the previous bytes.</p>
-<p><img src="http://localhost:4000/img/CompressionStream.png" alt="compression streams" /></p>
+<p><img src="/img/CompressionStream.png" alt="compression streams" /></p>
<p>The default compression chunk size is 256K, but writers can choose
their own value. Larger chunks lead to better compression, but require
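[Aside, not part of this commit: the hunk context above mentions the compression chunk header whose bytes for a 100,000-byte chunk begin [0x40, 0x0d, …]. A minimal Python sketch of that 3-byte little-endian encoding — length shifted left one bit, low bit marking an uncompressed ("original") chunk — assuming the layout described in the ORC specification:]

```python
def compression_header(chunk_len: int, is_original: bool = False) -> bytes:
    """Encode the 3-byte little-endian ORC compression chunk header.

    The stored value is the chunk length shifted left one bit; the low
    bit is set when the chunk is stored uncompressed.
    """
    value = (chunk_len << 1) | (1 if is_original else 0)
    return value.to_bytes(3, "little")

# The spec's example: a chunk that compressed to 100,000 bytes.
# 100000 * 2 = 200000 = 0x030D40, little endian -> 0x40, 0x0D, 0x03.
assert compression_header(100_000) == bytes([0x40, 0x0D, 0x03])
```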
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/specification/ORCv1/index.html
----------------------------------------------------------------------
diff --git a/specification/ORCv1/index.html b/specification/ORCv1/index.html
index 08c07eb..16ce862 100644
--- a/specification/ORCv1/index.html
+++ b/specification/ORCv1/index.html
@@ -108,7 +108,7 @@ include the minimum and maximum values for each column in each set of
file reader can skip entire sets of rows that aren’t important for
this query.</p>
-<p><img src="http://localhost:4000/img/OrcFileLayout.png" alt="ORC file structure" /></p>
+<p><img src="/img/OrcFileLayout.png" alt="ORC file structure" /></p>
<h1 id="file-tail">File Tail</h1>
@@ -233,7 +233,7 @@ needs to be read.
the schema is expressed as a tree as in the figure below, where
the compound types have subcolumns under them.</p>
-<p><img src="http://localhost:4000/img/TreeWriters.png" alt="ORC column structure" /></p>
+<p><img src="/img/TreeWriters.png" alt="ORC column structure" /></p>
<p>The equivalent Hive DDL would be:</p>
@@ -435,7 +435,7 @@ for a chunk that compressed to 100,000 bytes would be [0x40, 0x0d,
that as long as a decompressor starts at the top of a header, it can
start decompressing without the previous bytes.</p>
-<p><img src="http://localhost:4000/img/CompressionStream.png" alt="compression streams" /></p>
+<p><img src="/img/CompressionStream.png" alt="compression streams" /></p>
<p>The default compression chunk size is 256K, but writers can choose
their own value. Larger chunks lead to better compression, but require
@@ -1258,7 +1258,7 @@ within bitset and bit position within the long uses little endian order.
makes it convenient to read the bloom filter stream and row index stream
together in single read operation.</p>
-<p><img src="http://localhost:4000/img/BloomFilter.png" alt="bloom filter" /></p>
+<p><img src="/img/BloomFilter.png" alt="bloom filter" /></p>
</article>
</div>
http://git-wip-us.apache.org/repos/asf/orc/blob/56663dff/specification/ORCv2/index.html
----------------------------------------------------------------------
diff --git a/specification/ORCv2/index.html b/specification/ORCv2/index.html
index 0c7a42f..1616307 100644
--- a/specification/ORCv2/index.html
+++ b/specification/ORCv2/index.html
@@ -133,7 +133,7 @@ include the minimum and maximum values for each column in each set of
file reader can skip entire sets of rows that aren’t important for
this query.</p>
-<p><img src="http://localhost:4000/img/OrcFileLayout.png" alt="ORC file structure" /></p>
+<p><img src="/img/OrcFileLayout.png" alt="ORC file structure" /></p>
<h1 id="file-tail">File Tail</h1>
@@ -258,7 +258,7 @@ needs to be read.
the schema is expressed as a tree as in the figure below, where
the compound types have subcolumns under them.</p>
-<p><img src="http://localhost:4000/img/TreeWriters.png" alt="ORC column structure" /></p>
+<p><img src="/img/TreeWriters.png" alt="ORC column structure" /></p>
<p>The equivalent Hive DDL would be:</p>
@@ -460,7 +460,7 @@ for a chunk that compressed to 100,000 bytes would be [0x40, 0x0d,
that as long as a decompressor starts at the top of a header, it can
start decompressing without the previous bytes.</p>
-<p><img src="http://localhost:4000/img/CompressionStream.png" alt="compression streams" /></p>
+<p><img src="/img/CompressionStream.png" alt="compression streams" /></p>
<p>The default compression chunk size is 256K, but writers can choose
their own value. Larger chunks lead to better compression, but require
@@ -1280,7 +1280,7 @@ within bitset and bit position within the long uses little endian order.
makes it convenient to read the bloom filter stream and row index stream
together in single read operation.</p>
-<p><img src="http://localhost:4000/img/BloomFilter.png" alt="bloom filter" /></p>
+<p><img src="/img/BloomFilter.png" alt="bloom filter" /></p>
</article>
</div>