Posted to commits@druid.apache.org by cw...@apache.org on 2019/09/25 00:32:47 UTC
[incubator-druid-website-src] branch 0.16.0-incubating updated:
update to latest 0.16.0-incubating, set download versions
This is an automated email from the ASF dual-hosted git repository.
cwylie pushed a commit to branch 0.16.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid-website-src.git
The following commit(s) were added to refs/heads/0.16.0-incubating by this push:
new 19e4c4d update to latest 0.16.0-incubating, set download versions
19e4c4d is described below
commit 19e4c4d7c1a623a9c09b55a25db4e6aba1316fdf
Author: Clint Wylie <cw...@apache.org>
AuthorDate: Tue Sep 24 17:32:35 2019 -0700
update to latest 0.16.0-incubating, set download versions
---
_config.yml | 8 +++----
docs/0.16.0-incubating/operations/pull-deps.html | 14 ++++++------
docs/0.16.0-incubating/querying/query-context.html | 2 +-
docs/0.16.0-incubating/tutorials/cluster.html | 10 ++++-----
docs/0.16.0-incubating/tutorials/index.html | 26 +++++++++++-----------
.../tutorials/tutorial-batch-hadoop.html | 4 ++--
.../tutorials/tutorial-batch.html | 2 +-
.../tutorials/tutorial-ingestion-spec.html | 2 +-
.../tutorials/tutorial-rollup.html | 2 +-
docs/latest/operations/pull-deps.html | 14 ++++++------
docs/latest/querying/query-context.html | 2 +-
docs/latest/tutorials/cluster.html | 10 ++++-----
docs/latest/tutorials/index.html | 26 +++++++++++-----------
docs/latest/tutorials/tutorial-batch-hadoop.html | 4 ++--
docs/latest/tutorials/tutorial-batch.html | 2 +-
docs/latest/tutorials/tutorial-ingestion-spec.html | 2 +-
docs/latest/tutorials/tutorial-rollup.html | 2 +-
17 files changed, 66 insertions(+), 66 deletions(-)
diff --git a/_config.yml b/_config.yml
index 731618f..4d41436 100644
--- a/_config.yml
+++ b/_config.yml
@@ -26,14 +26,14 @@ description: 'Real-time Exploratory Analytics on Large Datasets'
druid_versions:
+ - release: 0.16
+ versions:
+ - version: 0.16.0-incubating
+ date: 2019-09-24
- release: 0.15
versions:
- version: 0.15.1-incubating
date: 2019-08-15
- - release: 0.14
- versions:
- - version: 0.14.2-incubating
- date: 2019-05-27
tranquility_stable_version: 0.8.3
diff --git a/docs/0.16.0-incubating/operations/pull-deps.html b/docs/0.16.0-incubating/operations/pull-deps.html
index bf08df6..9acac93 100644
--- a/docs/0.16.0-incubating/operations/pull-deps.html
+++ b/docs/0.16.0-incubating/operations/pull-deps.html
@@ -94,7 +94,7 @@
<p><code>--no-default-remote-repositories</code></p>
<p>Don't use the default remote repositories; only use the repositories provided directly via --remoteRepository.</p>
<p><code>-d</code> or <code>--defaultVersion</code></p>
-<p>Version to use for extension coordinate that doesn't have a version information. For example, if extension coordinate is <code>org.apache.druid.extensions:mysql-metadata-storage</code>, and default version is <code>#{DRUIDVERSION}</code>, then this coordinate will be treated as <code>org.apache.druid.extensions:mysql-metadata-storage:#{DRUIDVERSION}</code></p>
+<p>Version to use for an extension coordinate that doesn't have version information. For example, if the extension coordinate is <code>org.apache.druid.extensions:mysql-metadata-storage</code> and the default version is <code>0.16.0-incubating</code>, then this coordinate will be treated as <code>org.apache.druid.extensions:mysql-metadata-storage:0.16.0-incubating</code></p>
<p><code>--use-proxy</code></p>
<p>Use an http/https proxy to send requests to the remote repository servers. <code>--proxy-host</code> and <code>--proxy-port</code> must be set explicitly if this option is enabled.</p>
<p><code>--proxy-type</code></p>
@@ -113,15 +113,15 @@
<li><p>Tell <code>pull-deps</code> what to download using the <code>-c</code> or <code>-h</code> option, each followed by a Maven coordinate.</p></li>
</ol>
<p>Example:</p>
-<p>Suppose you want to download <code>mysql-metadata-storage</code> and <code>hadoop-client</code>(both 2.3.0 and 2.4.0) with a specific version, you can run <code>pull-deps</code> command with <code>-c org.apache.druid.extensions:mysql-metadata-storage:#{DRUIDVERSION}</code>, <code>-h org.apache.hadoop:hadoop-client:2.3.0</code> and <code>-h org.apache.hadoop:hadoop-client:2.4.0</code>, an example command would be:</p>
-<pre><code class="hljs">java -classpath <span class="hljs-string">"/my/druid/lib/*"</span> org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.cli</span><span class="hljs-selector-class">.Main</span> tools pull-deps --clean -c org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.extensions</span>:mysql-metadata-storage:#{DRUIDVER [...]
+<p>Suppose you want to download <code>mysql-metadata-storage</code> at a specific version and <code>hadoop-client</code> (both 2.3.0 and 2.4.0). You can run the <code>pull-deps</code> command with <code>-c org.apache.druid.extensions:mysql-metadata-storage:0.16.0-incubating</code>, <code>-h org.apache.hadoop:hadoop-client:2.3.0</code>, and <code>-h org.apache.hadoop:hadoop-client:2.4.0</code>. An example command would be:</p>
+<pre><code class="hljs">java -classpath <span class="hljs-string">"/my/druid/lib/*"</span> org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.cli</span><span class="hljs-selector-class">.Main</span> tools pull-deps --clean -c org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.extensions</span>:mysql-metadata-storage:0.16.0-inc [...]
</code></pre>
<p>Because <code>--clean</code> is supplied, this command will first remove the directories specified by <code>druid.extensions.directory</code> and <code>druid.extensions.hadoopDependenciesDir</code>, then recreate them and start downloading the extensions there. After the downloads finish, if you go to the extension directories you specified, you will see</p>
<pre><code class="hljs">tree <span class="hljs-keyword">extensions
</span><span class="hljs-keyword">extensions
</span>└── mysql-metadata-storage
- └── mysql-metadata-storage-<span class="hljs-comment">#{DRUIDVERSION}.jar</span>
-</code></pre>
+ └── mysql-metadata-storage-0.16.0-incubating.<span class="hljs-keyword">jar
+</span></code></pre>
<pre><code class="hljs">tree hadoop-dependencies
hadoop-dependencies/
└── hadoop-client
@@ -142,8 +142,8 @@ hadoop-dependencies/
├── commons-codec-<span class="hljs-number">1.4</span><span class="hljs-selector-class">.jar</span>
..... lots of jars
</code></pre>
-<p>Note that if you specify <code>--defaultVersion</code>, you don't have to put version information in the coordinate. For example, if you want <code>mysql-metadata-storage</code> to use version <code>#{DRUIDVERSION}</code>, you can change the command above to</p>
-<pre><code class="hljs">java -classpath <span class="hljs-string">"/my/druid/lib/*"</span> org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.cli</span><span class="hljs-selector-class">.Main</span> tools pull-deps --defaultVersion #{DRUIDVERSION} --clean -c org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.extensions</span>: [...]
+<p>Note that if you specify <code>--defaultVersion</code>, you don't have to put version information in the coordinate. For example, if you want <code>mysql-metadata-storage</code> to use version <code>0.16.0-incubating</code>, you can change the command above to</p>
+<pre><code class="hljs">java -classpath <span class="hljs-string">"/my/druid/lib/*"</span> org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.cli</span><span class="hljs-selector-class">.Main</span> tools pull-deps --defaultVersion 0.16.0-incubating --clean -c org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.extensions</span [...]
</code></pre>
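The expansion that <code>--defaultVersion</code> performs, as described above, amounts to appending the default version to any coordinate that lacks a third <code>groupId:artifactId:version</code> component. A minimal shell sketch of that rule (illustrative only, not code from the pull-deps tool itself):

```shell
# Sketch of how --defaultVersion completes a version-less Maven coordinate.
# A full coordinate has three colon-separated parts: groupId:artifactId:version.
coordinate="org.apache.druid.extensions:mysql-metadata-storage"
default_version="0.16.0-incubating"

case "$coordinate" in
  *:*:*) ;;  # a version is already present; leave the coordinate alone
  *) coordinate="${coordinate}:${default_version}" ;;
esac
echo "$coordinate"
```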
<blockquote>
<p>Please note that to use the pull-deps tool you must know the Maven groupId, artifactId, and version of your extension.</p>
diff --git a/docs/0.16.0-incubating/querying/query-context.html b/docs/0.16.0-incubating/querying/query-context.html
index 5fe512e..c53ffc9 100644
--- a/docs/0.16.0-incubating/querying/query-context.html
+++ b/docs/0.16.0-incubating/querying/query-context.html
@@ -138,7 +138,7 @@ include "selector", "bound", "in", "like"
</ul>
<p>Other query types (like TopN, Scan, Select, and Search) ignore the "vectorize" parameter and always execute without vectorization, even if the parameter is set to <code>"force"</code>.</p>
-<p>Vectorization is an alpha-quality feature as of Druid #{DRUIDVERSION}. We heartily welcome any feedback and testing
+<p>Vectorization is an alpha-quality feature as of Druid 0.16.0-incubating. We heartily welcome any feedback and testing
from the community as we work to battle-test it.</p>
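As a concrete illustration of the context parameter discussed above, here is a minimal native timeseries query that opts in to vectorized execution. The datasource, interval, and aggregator are placeholder values, not taken from this page:

```shell
# Hypothetical sketch: write a native timeseries query that requests
# vectorized execution via the "vectorize" context key.
# Datasource, interval, and aggregator below are placeholders.
cat <<'EOF' > /tmp/vectorized-query.json
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "intervals": ["2015-09-12/2015-09-13"],
  "granularity": "all",
  "aggregations": [{ "type": "count", "name": "rows" }],
  "context": { "vectorize": "force" }
}
EOF
# The query could then be POSTed to the Broker, e.g.:
#   curl -X POST -H 'Content-Type: application/json' \
#        -d @/tmp/vectorized-query.json http://localhost:8082/druid/v2
grep -c '"vectorize": "force"' /tmp/vectorized-query.json
```

With <code>"force"</code>, a query that cannot be vectorized fails rather than silently falling back, which makes it useful for testing this alpha-quality feature.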
<table>
<thead>
diff --git a/docs/0.16.0-incubating/tutorials/cluster.html b/docs/0.16.0-incubating/tutorials/cluster.html
index 731b97c..db6191e 100644
--- a/docs/0.16.0-incubating/tutorials/cluster.html
+++ b/docs/0.16.0-incubating/tutorials/cluster.html
@@ -154,11 +154,11 @@ OSes</a>.</p>
<p>First, download and unpack the release archive. It's best to do this on a single machine at first,
since you will be editing the configurations and then copying the modified distribution out to all
of your servers.</p>
-<p><a href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/#{DRUIDVERSION}/apache-druid-#{DRUIDVERSION}-bin.tar.gz">Download</a>
-the #{DRUIDVERSION} release.</p>
+<p><a href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.16.0-incubating/apache-druid-0.16.0-incubating-bin.tar.gz">Download</a>
+the 0.16.0-incubating release.</p>
<p>Extract Druid by running the following commands in your terminal:</p>
-<pre><code class="hljs css language-bash">tar -xzf apache-druid-<span class="hljs-comment">#{DRUIDVERSION}-bin.tar.gz</span>
-<span class="hljs-built_in">cd</span> apache-druid-<span class="hljs-comment">#{DRUIDVERSION}</span>
+<pre><code class="hljs css language-bash">tar -xzf apache-druid-0.16.0-incubating-bin.tar.gz
+<span class="hljs-built_in">cd</span> apache-druid-0.16.0-incubating
</code></pre>
<p>In the package, you should find:</p>
<ul>
@@ -360,7 +360,7 @@ rather than on the Master server.</p>
<h2><a class="anchor" aria-hidden="true" id="start-master-server"></a><a href="#start-master-server" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
<p>Copy the Druid distribution and your edited configurations to your Master server.</p>
<p>If you have been editing the configurations on your local machine, you can use <em>rsync</em> to copy them:</p>
-<pre><code class="hljs css language-bash">rsync -az apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/ MASTER_SERVER:apache-druid-#{DRUIDVERSION}/</span>
+<pre><code class="hljs css language-bash">rsync -az apache-druid-0.16.0-incubating/ MASTER_SERVER:apache-druid-0.16.0-incubating/
</code></pre>
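The rsync command above targets a single Master server; the Data and Query servers receive the same distribution in the same way. A hedged dry-run sketch looping over placeholder hostnames (the hostnames are assumptions, not from this tutorial):

```shell
# Placeholder hostnames; substitute your actual Master/Data/Query servers.
servers="master-server data-server-1 query-server-1"
copied=0
for host in $servers; do
  # A real deployment would run the rsync itself; echoed here as a dry run.
  echo "rsync -az apache-druid-0.16.0-incubating/ ${host}:apache-druid-0.16.0-incubating/"
  copied=$((copied + 1))
done
echo "copied to $copied hosts"
```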
<h3><a class="anchor" aria-hidden="true" id="no-zookeeper-on-master"></a><a href="#no-zookeeper-on-master" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
<p>From the distribution root, run the following command to start the Master server:</p>
diff --git a/docs/0.16.0-incubating/tutorials/index.html b/docs/0.16.0-incubating/tutorials/index.html
index e1f5f5c..1b0b822 100644
--- a/docs/0.16.0-incubating/tutorials/index.html
+++ b/docs/0.16.0-incubating/tutorials/index.html
@@ -96,11 +96,11 @@ a good choice, sized for a 4CPU/16GB RAM environment.</p>
<p>If you plan to use the single-machine deployment for further evaluation beyond the tutorials, we recommend a larger
configuration than <code>micro-quickstart</code>.</p>
<h2><a class="anchor" aria-hidden="true" id="getting-started"></a><a href="#getting-started" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 [...]
-<p><a href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/#{DRUIDVERSION}/apache-druid-#{DRUIDVERSION}-bin.tar.gz">Download</a>
-the #{DRUIDVERSION} release.</p>
+<p><a href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.16.0-incubating/apache-druid-0.16.0-incubating-bin.tar.gz">Download</a>
+the 0.16.0-incubating release.</p>
<p>Extract Druid by running the following commands in your terminal:</p>
-<pre><code class="hljs css language-bash">tar -xzf apache-druid-<span class="hljs-comment">#{DRUIDVERSION}-bin.tar.gz</span>
-<span class="hljs-built_in">cd</span> apache-druid-<span class="hljs-comment">#{DRUIDVERSION}</span>
+<pre><code class="hljs css language-bash">tar -xzf apache-druid-0.16.0-incubating-bin.tar.gz
+<span class="hljs-built_in">cd</span> apache-druid-0.16.0-incubating
</code></pre>
<p>In the package, you should find:</p>
<ul>
@@ -121,24 +121,24 @@ tar -xzf zookeeper-3.4.14.tar.gz
mv zookeeper-3.4.14 zk
</code></pre>
<p>The startup scripts for the tutorial will expect the contents of the Zookeeper tarball to be located at <code>zk</code> under the
-apache-druid-#{DRUIDVERSION} package root.</p>
+apache-druid-0.16.0-incubating package root.</p>
<h2><a class="anchor" aria-hidden="true" id="start-up-druid-services"></a><a href="#start-up-druid-services" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
<p>The following commands will assume that you are using the <code>micro-quickstart</code> single-machine configuration. If you are
using a different configuration, the <code>bin</code> directory has equivalent scripts for each configuration, such as
<code>bin/start-single-server-small</code>.</p>
-<p>From the apache-druid-#{DRUIDVERSION} package root, run the following command:</p>
+<p>From the apache-druid-0.16.0-incubating package root, run the following command:</p>
<pre><code class="hljs css language-bash">./bin/start-micro-quickstart
</code></pre>
<p>This will bring up instances of Zookeeper and the Druid services, all running on the local machine, e.g.:</p>
<pre><code class="hljs css language-bash">$ ./bin/start-micro-quickstart
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[zk], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/zk.log]: bin/run-zk conf</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[coordinator-overlord], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[broker], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[router], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[historical], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[middleManager], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart</span>
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[zk], logging to[/apache-druid-0.16.0-incubating/var/sv/zk.log]: bin/run-zk conf
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[coordinator-overlord], logging to[/apache-druid-0.16.0-incubating/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[broker], logging to[/apache-druid-0.16.0-incubating/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[router], logging to[/apache-druid-0.16.0-incubating/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[historical], logging to[/apache-druid-0.16.0-incubating/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[middleManager], logging to[/apache-druid-0.16.0-incubating/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart
</code></pre>
-<p>All persistent state such as the cluster metadata store and segments for the services will be kept in the <code>var</code> directory under the apache-druid-#{DRUIDVERSION} package root. Logs for the services are located at <code>var/sv</code>.</p>
+<p>All persistent state, such as the cluster metadata store and segments for the services, will be kept in the <code>var</code> directory under the apache-druid-0.16.0-incubating package root. Logs for the services are located at <code>var/sv</code>.</p>
<p>Later on, if you'd like to stop the services, press CTRL-C to exit the <code>bin/start-micro-quickstart</code> script, which will terminate the Druid processes.</p>
<p>Once the cluster has started, you can navigate to <a href="http://localhost:8888">http://localhost:8888</a>.
The <a href="/docs/0.16.0-incubating/design/router.html">Druid router process</a>, which serves the <a href="/docs/0.16.0-incubating/operations/druid-console.html">Druid console</a>, resides at this address.</p>
diff --git a/docs/0.16.0-incubating/tutorials/tutorial-batch-hadoop.html b/docs/0.16.0-incubating/tutorials/tutorial-batch-hadoop.html
index 45abf9e..1c83077 100644
--- a/docs/0.16.0-incubating/tutorials/tutorial-batch-hadoop.html
+++ b/docs/0.16.0-incubating/tutorials/tutorial-batch-hadoop.html
@@ -87,7 +87,7 @@
<h2><a class="anchor" aria-hidden="true" id="build-the-hadoop-docker-image"></a><a href="#build-the-hadoop-docker-image" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 [...]
<p>For this tutorial, we've provided a Dockerfile for a Hadoop 2.8.3 cluster, which we'll use to run the batch indexing task.</p>
<p>This Dockerfile and related files are located at <code>quickstart/tutorial/hadoop/docker</code>.</p>
-<p>From the apache-druid-#{DRUIDVERSION} package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.3":</p>
+<p>From the apache-druid-0.16.0-incubating package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.3":</p>
<pre><code class="hljs css language-bash"><span class="hljs-built_in">cd</span> quickstart/tutorial/hadoop/docker
docker build -t druid-hadoop-demo:2.8.3 .
</code></pre>
@@ -128,7 +128,7 @@ bash-4.1<span class="hljs-comment">#</span>
<pre><code class="hljs"><span class="hljs-symbol">docker</span> exec -<span class="hljs-keyword">it </span>druid-hadoop-demo <span class="hljs-keyword">bash
</span></code></pre>
<h3><a class="anchor" aria-hidden="true" id="copy-input-data-to-the-hadoop-container"></a><a href="#copy-input-data-to-the-hadoop-container" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 [...]
-<p>From the apache-druid-#{DRUIDVERSION} package root on the host, copy the <code>quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz</code> sample data to the shared folder:</p>
+<p>From the apache-druid-0.16.0-incubating package root on the host, copy the <code>quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz</code> sample data to the shared folder:</p>
<pre><code class="hljs css language-bash">cp quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz /tmp/shared/wikiticker-2015-09-12-sampled.json.gz
</code></pre>
<h3><a class="anchor" aria-hidden="true" id="setup-hdfs-directories"></a><a href="#setup-hdfs-directories" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
diff --git a/docs/0.16.0-incubating/tutorials/tutorial-batch.html b/docs/0.16.0-incubating/tutorials/tutorial-batch.html
index 5f51b7e..147ca62 100644
--- a/docs/0.16.0-incubating/tutorials/tutorial-batch.html
+++ b/docs/0.16.0-incubating/tutorials/tutorial-batch.html
@@ -231,7 +231,7 @@ wikipedia loading complete! You may now query your data
<p>Once the spec is submitted, you can follow the same instructions as above to wait for the data to load and then query it.</p>
<h2><a class="anchor" aria-hidden="true" id="loading-data-without-the-script"></a><a href="#loading-data-without-the-script" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 [...]
<p>Let's briefly discuss how we would've submitted the ingestion task without using the script. You do not need to run these commands.</p>
-<p>To submit the task, POST it to Druid in a new terminal window from the apache-druid-#{DRUIDVERSION} directory:</p>
+<p>To submit the task, POST it to Druid in a new terminal window from the apache-druid-0.16.0-incubating directory:</p>
<pre><code class="hljs css language-bash">curl -X <span class="hljs-string">'POST'</span> -H <span class="hljs-string">'Content-Type:application/json'</span> -d @quickstart/tutorial/wikipedia-index.json http://localhost:8081/druid/indexer/v1/task
</code></pre>
<p>This will print the ID of the task if the submission was successful:</p>
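The response body itself is elided in this diff, but Druid's task API returns a small JSON object containing the task ID. A sketch of pulling the ID out of a captured response (the response string below is a stand-in illustrating the documented <code>{"task": "&lt;id&gt;"}</code> shape, not real output from this tutorial):

```shell
# Stand-in response string; a real run would capture this from the
# curl submission command shown above.
response='{"task":"index_wikipedia_2019-09-24T00:00:00.000Z"}'
# Extract the value of the "task" field.
task_id=$(printf '%s' "$response" | sed 's/.*"task":"\([^"]*\)".*/\1/')
echo "$task_id"
# The task's status endpoint could then be polled, e.g.:
#   curl "http://localhost:8081/druid/indexer/v1/task/${task_id}/status"
```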
diff --git a/docs/0.16.0-incubating/tutorials/tutorial-ingestion-spec.html b/docs/0.16.0-incubating/tutorials/tutorial-ingestion-spec.html
index 77f2e14..cb1d6b1 100644
--- a/docs/0.16.0-incubating/tutorials/tutorial-ingestion-spec.html
+++ b/docs/0.16.0-incubating/tutorials/tutorial-ingestion-spec.html
@@ -575,7 +575,7 @@ the <a href="index.html">single-machine quickstart</a> and have it running on yo
}
</code></pre>
<h2><a class="anchor" aria-hidden="true" id="submit-the-task-and-query-the-data"></a><a href="#submit-the-task-and-query-the-data" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5 [...]
-<p>From the apache-druid-#{DRUIDVERSION} package root, run the following command:</p>
+<p>From the apache-druid-0.16.0-incubating package root, run the following command:</p>
<pre><code class="hljs css language-bash">bin/post-index-task --file quickstart/ingestion-tutorial-index.json --url http://localhost:8081
</code></pre>
<p>After the script completes, we will query the data.</p>
diff --git a/docs/0.16.0-incubating/tutorials/tutorial-rollup.html b/docs/0.16.0-incubating/tutorials/tutorial-rollup.html
index f792c21..1f26208 100644
--- a/docs/0.16.0-incubating/tutorials/tutorial-rollup.html
+++ b/docs/0.16.0-incubating/tutorials/tutorial-rollup.html
@@ -151,7 +151,7 @@ the <a href="index.html">single-machine quickstart</a> and have it running on yo
<p>Note that we have <code>srcIP</code> and <code>dstIP</code> defined as dimensions, a longSum metric is defined for the <code>packets</code> and <code>bytes</code> columns, and the <code>queryGranularity</code> has been defined as <code>minute</code>.</p>
<p>We will see how these definitions are used after we load this data.</p>
<h2><a class="anchor" aria-hidden="true" id="load-the-example-data"></a><a href="#load-the-example-data" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
-<p>From the apache-druid-#{DRUIDVERSION} package root, run the following command:</p>
+<p>From the apache-druid-0.16.0-incubating package root, run the following command:</p>
<pre><code class="hljs css language-bash">bin/post-index-task --file quickstart/tutorial/rollup-index.json --url http://localhost:8081
</code></pre>
<p>After the script completes, we will query the data.</p>
diff --git a/docs/latest/operations/pull-deps.html b/docs/latest/operations/pull-deps.html
index 68ae1cb..b99d386 100644
--- a/docs/latest/operations/pull-deps.html
+++ b/docs/latest/operations/pull-deps.html
@@ -94,7 +94,7 @@
<p><code>--no-default-remote-repositories</code></p>
<p>Don't use the default remote repositories; only use the repositories provided directly via --remoteRepository.</p>
<p><code>-d</code> or <code>--defaultVersion</code></p>
-<p>Version to use for extension coordinate that doesn't have a version information. For example, if extension coordinate is <code>org.apache.druid.extensions:mysql-metadata-storage</code>, and default version is <code>#{DRUIDVERSION}</code>, then this coordinate will be treated as <code>org.apache.druid.extensions:mysql-metadata-storage:#{DRUIDVERSION}</code></p>
+<p>Version to use for an extension coordinate that doesn't have version information. For example, if the extension coordinate is <code>org.apache.druid.extensions:mysql-metadata-storage</code> and the default version is <code>latest</code>, then this coordinate will be treated as <code>org.apache.druid.extensions:mysql-metadata-storage:latest</code></p>
<p><code>--use-proxy</code></p>
<p>Use an http/https proxy to send requests to the remote repository servers. <code>--proxy-host</code> and <code>--proxy-port</code> must be set explicitly if this option is enabled.</p>
<p><code>--proxy-type</code></p>
@@ -113,15 +113,15 @@
<li><p>Tell <code>pull-deps</code> what to download using the <code>-c</code> or <code>-h</code> option, each followed by a Maven coordinate.</p></li>
</ol>
<p>Example:</p>
-<p>Suppose you want to download <code>mysql-metadata-storage</code> and <code>hadoop-client</code>(both 2.3.0 and 2.4.0) with a specific version, you can run <code>pull-deps</code> command with <code>-c org.apache.druid.extensions:mysql-metadata-storage:#{DRUIDVERSION}</code>, <code>-h org.apache.hadoop:hadoop-client:2.3.0</code> and <code>-h org.apache.hadoop:hadoop-client:2.4.0</code>, an example command would be:</p>
-<pre><code class="hljs">java -classpath <span class="hljs-string">"/my/druid/lib/*"</span> org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.cli</span><span class="hljs-selector-class">.Main</span> tools pull-deps --clean -c org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.extensions</span>:mysql-metadata-storage:#{DRUIDVER [...]
+<p>Suppose you want to download <code>mysql-metadata-storage</code> at a specific version and <code>hadoop-client</code> (both 2.3.0 and 2.4.0). You can run the <code>pull-deps</code> command with <code>-c org.apache.druid.extensions:mysql-metadata-storage:latest</code>, <code>-h org.apache.hadoop:hadoop-client:2.3.0</code>, and <code>-h org.apache.hadoop:hadoop-client:2.4.0</code>. An example command would be:</p>
+<pre><code class="hljs">java -classpath <span class="hljs-string">"/my/druid/lib/*"</span> org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.cli</span><span class="hljs-selector-class">.Main</span> tools pull-deps --clean -c org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.extensions</span>:mysql-metadata-storage:latest -h [...]
</code></pre>
<p>Because <code>--clean</code> is supplied, this command will first remove the directories specified by <code>druid.extensions.directory</code> and <code>druid.extensions.hadoopDependenciesDir</code>, then recreate them and start downloading the extensions there. After the downloads finish, if you go to the extension directories you specified, you will see</p>
<pre><code class="hljs">tree <span class="hljs-keyword">extensions
</span><span class="hljs-keyword">extensions
</span>└── mysql-metadata-storage
- └── mysql-metadata-storage-<span class="hljs-comment">#{DRUIDVERSION}.jar</span>
-</code></pre>
+ └── mysql-metadata-storage-latest.<span class="hljs-keyword">jar
+</span></code></pre>
<pre><code class="hljs">tree hadoop-dependencies
hadoop-dependencies/
└── hadoop-client
@@ -142,8 +142,8 @@ hadoop-dependencies/
├── commons-codec-<span class="hljs-number">1.4</span><span class="hljs-selector-class">.jar</span>
..... lots of jars
</code></pre>
-<p>Note that if you specify <code>--defaultVersion</code>, you don't have to put version information in the coordinate. For example, if you want <code>mysql-metadata-storage</code> to use version <code>#{DRUIDVERSION}</code>, you can change the command above to</p>
-<pre><code class="hljs">java -classpath <span class="hljs-string">"/my/druid/lib/*"</span> org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.cli</span><span class="hljs-selector-class">.Main</span> tools pull-deps --defaultVersion #{DRUIDVERSION} --clean -c org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.extensions</span>: [...]
+<p>Note that if you specify <code>--defaultVersion</code>, you don't have to put version information in the coordinate. For example, if you want <code>mysql-metadata-storage</code> to use version <code>latest</code>, you can change the command above to</p>
+<pre><code class="hljs">java -classpath <span class="hljs-string">"/my/druid/lib/*"</span> org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.cli</span><span class="hljs-selector-class">.Main</span> tools pull-deps --defaultVersion latest --clean -c org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.extensions</span>:mysql-met [...]
</code></pre>
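The coordinate strings passed to `-c` and `-h` above follow Maven's `groupId:artifactId[:version]` form. As a rough sketch of how such a coordinate decomposes (a hypothetical helper, not part of the pull-deps tool):

```python
# Hypothetical helper (not part of Druid): split a Maven coordinate of the
# form groupId:artifactId[:version], as accepted by pull-deps -c and -h.
def parse_coordinate(coord):
    parts = coord.split(":")
    if len(parts) == 2:
        group_id, artifact_id = parts
        version = None  # pull-deps would fall back to --defaultVersion
    elif len(parts) == 3:
        group_id, artifact_id, version = parts
    else:
        raise ValueError("expected groupId:artifactId[:version], got: " + coord)
    return group_id, artifact_id, version

print(parse_coordinate("org.apache.hadoop:hadoop-client:2.4.0"))
# → ('org.apache.hadoop', 'hadoop-client', '2.4.0')
```

A two-part coordinate such as `org.apache.druid.extensions:mysql-metadata-storage` leaves the version unset, which is exactly the case `--defaultVersion` covers.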
<blockquote>
<p>Please note that to use the pull-deps tool, you must know the Maven groupId, artifactId, and version of your extension.</p>
diff --git a/docs/latest/querying/query-context.html b/docs/latest/querying/query-context.html
index 46e09a9..316fbd6 100644
--- a/docs/latest/querying/query-context.html
+++ b/docs/latest/querying/query-context.html
@@ -138,7 +138,7 @@ include "selector", "bound", "in", "like"
</ul>
<p>Other query types (like TopN, Scan, Select, and Search) ignore the "vectorize" parameter, and will execute without
vectorization. These query types will ignore the "vectorize" parameter even if it is set to <code>"force"</code>.</p>
-<p>Vectorization is an alpha-quality feature as of Druid #{DRUIDVERSION}. We heartily welcome any feedback and testing
+<p>Vectorization is an alpha-quality feature as of Druid latest. We heartily welcome any feedback and testing
from the community as we work to battle-test it.</p>
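As a concrete illustration, a minimal sketch of a native JSON query that opts in to vectorization through its query context (the datasource, interval, and `vectorSize` value here are illustrative assumptions):

```python
import json

# Sketch (assumes Druid's native JSON query format): a timeseries query
# that requests vectorized execution via its query context.
query = {
    "queryType": "timeseries",
    "dataSource": "wikipedia",          # illustrative datasource
    "intervals": ["2015-09-12/2015-09-13"],
    "granularity": "all",
    "aggregations": [{"type": "count", "name": "rows"}],
    "context": {
        "vectorize": "force",  # fail rather than fall back if unsupported
        "vectorSize": 512,     # rows per batch; assumed example value
    },
}
print(json.dumps(query, indent=2))
```

With `"vectorize": "force"`, an unsupported query fails outright instead of silently running non-vectorized, which makes it useful for testing.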
<table>
<thead>
diff --git a/docs/latest/tutorials/cluster.html b/docs/latest/tutorials/cluster.html
index 981e4ec..f5b3f89 100644
--- a/docs/latest/tutorials/cluster.html
+++ b/docs/latest/tutorials/cluster.html
@@ -154,11 +154,11 @@ OSes</a>.</p>
<p>First, download and unpack the release archive. It's best to do this on a single machine at first,
since you will be editing the configurations and then copying the modified distribution out to all
of your servers.</p>
-<p><a href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/#{DRUIDVERSION}/apache-druid-#{DRUIDVERSION}-bin.tar.gz">Download</a>
-the #{DRUIDVERSION} release.</p>
+<p><a href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/latest/apache-druid-latest-bin.tar.gz">Download</a>
+the latest release.</p>
<p>Extract Druid by running the following commands in your terminal:</p>
-<pre><code class="hljs css language-bash">tar -xzf apache-druid-<span class="hljs-comment">#{DRUIDVERSION}-bin.tar.gz</span>
-<span class="hljs-built_in">cd</span> apache-druid-<span class="hljs-comment">#{DRUIDVERSION}</span>
+<pre><code class="hljs css language-bash">tar -xzf apache-druid-latest-bin.tar.gz
+<span class="hljs-built_in">cd</span> apache-druid-latest
</code></pre>
<p>In the package, you should find:</p>
<ul>
@@ -360,7 +360,7 @@ rather than on the Master server.</p>
<h2><a class="anchor" aria-hidden="true" id="start-master-server"></a><a href="#start-master-server" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
<p>Copy the Druid distribution and your edited configurations to your Master server.</p>
<p>If you have been editing the configurations on your local machine, you can use <em>rsync</em> to copy them:</p>
-<pre><code class="hljs css language-bash">rsync -az apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/ MASTER_SERVER:apache-druid-#{DRUIDVERSION}/</span>
+<pre><code class="hljs css language-bash">rsync -az apache-druid-latest/ MASTER_SERVER:apache-druid-latest/
</code></pre>
<h3><a class="anchor" aria-hidden="true" id="no-zookeeper-on-master"></a><a href="#no-zookeeper-on-master" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
<p>From the distribution root, run the following command to start the Master server:</p>
diff --git a/docs/latest/tutorials/index.html b/docs/latest/tutorials/index.html
index 7a28767..5d555e7 100644
--- a/docs/latest/tutorials/index.html
+++ b/docs/latest/tutorials/index.html
@@ -96,11 +96,11 @@ a good choice, sized for a 4CPU/16GB RAM environment.</p>
<p>If you plan to use the single-machine deployment for further evaluation beyond the tutorials, we recommend a larger
configuration than <code>micro-quickstart</code>.</p>
<h2><a class="anchor" aria-hidden="true" id="getting-started"></a><a href="#getting-started" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 [...]
-<p><a href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/#{DRUIDVERSION}/apache-druid-#{DRUIDVERSION}-bin.tar.gz">Download</a>
-the #{DRUIDVERSION} release.</p>
+<p><a href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/latest/apache-druid-latest-bin.tar.gz">Download</a>
+the latest release.</p>
<p>Extract Druid by running the following commands in your terminal:</p>
-<pre><code class="hljs css language-bash">tar -xzf apache-druid-<span class="hljs-comment">#{DRUIDVERSION}-bin.tar.gz</span>
-<span class="hljs-built_in">cd</span> apache-druid-<span class="hljs-comment">#{DRUIDVERSION}</span>
+<pre><code class="hljs css language-bash">tar -xzf apache-druid-latest-bin.tar.gz
+<span class="hljs-built_in">cd</span> apache-druid-latest
</code></pre>
<p>In the package, you should find:</p>
<ul>
@@ -121,24 +121,24 @@ tar -xzf zookeeper-3.4.14.tar.gz
mv zookeeper-3.4.14 zk
</code></pre>
<p>The startup scripts for the tutorial will expect the contents of the Zookeeper tarball to be located at <code>zk</code> under the
-apache-druid-#{DRUIDVERSION} package root.</p>
+apache-druid-latest package root.</p>
<h2><a class="anchor" aria-hidden="true" id="start-up-druid-services"></a><a href="#start-up-druid-services" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
<p>The following commands will assume that you are using the <code>micro-quickstart</code> single-machine configuration. If you are
using a different configuration, the <code>bin</code> directory has equivalent scripts for each configuration, such as
<code>bin/start-single-server-small</code>.</p>
-<p>From the apache-druid-#{DRUIDVERSION} package root, run the following command:</p>
+<p>From the apache-druid-latest package root, run the following command:</p>
<pre><code class="hljs css language-bash">./bin/start-micro-quickstart
</code></pre>
<p>This will bring up instances of Zookeeper and the Druid services, all running on the local machine, e.g.:</p>
<pre><code class="hljs css language-bash">$ ./bin/start-micro-quickstart
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[zk], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/zk.log]: bin/run-zk conf</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[coordinator-overlord], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[broker], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[router], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[historical], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart</span>
-[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[middleManager], logging to[/apache-druid-<span class="hljs-comment">#{DRUIDVERSION}/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart</span>
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[zk], logging to[/apache-druid-latest/var/sv/zk.log]: bin/run-zk conf
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[coordinator-overlord], logging to[/apache-druid-latest/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[broker], logging to[/apache-druid-latest/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[router], logging to[/apache-druid-latest/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[historical], logging to[/apache-druid-latest/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart
+[Fri May 3 11:40:50 2019] Running <span class="hljs-built_in">command</span>[middleManager], logging to[/apache-druid-latest/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart
</code></pre>
-<p>All persistent state such as the cluster metadata store and segments for the services will be kept in the <code>var</code> directory under the apache-druid-#{DRUIDVERSION} package root. Logs for the services are located at <code>var/sv</code>.</p>
+<p>All persistent state such as the cluster metadata store and segments for the services will be kept in the <code>var</code> directory under the apache-druid-latest package root. Logs for the services are located at <code>var/sv</code>.</p>
<p>Later on, if you'd like to stop the services, CTRL-C to exit the <code>bin/start-micro-quickstart</code> script, which will terminate the Druid processes.</p>
<p>Once the cluster has started, you can navigate to <a href="http://localhost:8888">http://localhost:8888</a>.
The <a href="/docs/latest/design/router.html">Druid router process</a>, which serves the <a href="/docs/latest/operations/druid-console.html">Druid console</a>, resides at this address.</p>
diff --git a/docs/latest/tutorials/tutorial-batch-hadoop.html b/docs/latest/tutorials/tutorial-batch-hadoop.html
index 06f2168..cf1f3c3 100644
--- a/docs/latest/tutorials/tutorial-batch-hadoop.html
+++ b/docs/latest/tutorials/tutorial-batch-hadoop.html
@@ -87,7 +87,7 @@
<h2><a class="anchor" aria-hidden="true" id="build-the-hadoop-docker-image"></a><a href="#build-the-hadoop-docker-image" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 [...]
<p>For this tutorial, we've provided a Dockerfile for a Hadoop 2.8.3 cluster, which we'll use to run the batch indexing task.</p>
<p>This Dockerfile and related files are located at <code>quickstart/tutorial/hadoop/docker</code>.</p>
-<p>From the apache-druid-#{DRUIDVERSION} package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.3":</p>
+<p>From the apache-druid-latest package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.3":</p>
<pre><code class="hljs css language-bash"><span class="hljs-built_in">cd</span> quickstart/tutorial/hadoop/docker
docker build -t druid-hadoop-demo:2.8.3 .
</code></pre>
@@ -128,7 +128,7 @@ bash-4.1<span class="hljs-comment">#</span>
<pre><code class="hljs"><span class="hljs-symbol">docker</span> exec -<span class="hljs-keyword">it </span>druid-hadoop-demo <span class="hljs-keyword">bash
</span></code></pre>
<h3><a class="anchor" aria-hidden="true" id="copy-input-data-to-the-hadoop-container"></a><a href="#copy-input-data-to-the-hadoop-container" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 [...]
-<p>From the apache-druid-#{DRUIDVERSION} package root on the host, copy the <code>quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz</code> sample data to the shared folder:</p>
+<p>From the apache-druid-latest package root on the host, copy the <code>quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz</code> sample data to the shared folder:</p>
<pre><code class="hljs css language-bash">cp quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz /tmp/shared/wikiticker-2015-09-12-sampled.json.gz
</code></pre>
<h3><a class="anchor" aria-hidden="true" id="setup-hdfs-directories"></a><a href="#setup-hdfs-directories" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
diff --git a/docs/latest/tutorials/tutorial-batch.html b/docs/latest/tutorials/tutorial-batch.html
index 374f9fe..491d2e4 100644
--- a/docs/latest/tutorials/tutorial-batch.html
+++ b/docs/latest/tutorials/tutorial-batch.html
@@ -231,7 +231,7 @@ wikipedia loading complete! You may now query your data
<p>Once the spec is submitted, you can follow the same instructions as above to wait for the data to load and then query it.</p>
<h2><a class="anchor" aria-hidden="true" id="loading-data-without-the-script"></a><a href="#loading-data-without-the-script" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 [...]
<p>Let's briefly discuss how we would've submitted the ingestion task without using the script. You do not need to run these commands.</p>
-<p>To submit the task, POST it to Druid in a new terminal window from the apache-druid-#{DRUIDVERSION} directory:</p>
+<p>To submit the task, POST it to Druid in a new terminal window from the apache-druid-latest directory:</p>
<pre><code class="hljs css language-bash">curl -X <span class="hljs-string">'POST'</span> -H <span class="hljs-string">'Content-Type:application/json'</span> -d @quickstart/tutorial/wikipedia-index.json http://localhost:8081/druid/indexer/v1/task
</code></pre>
<p>Which will print the ID of the task if the submission was successful:</p>
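The response is a small JSON object. A hedged sketch of pulling the task ID out of it (the `task` field name and the sample ID below are assumptions for illustration):

```python
import json

# Sketch: the indexer responds with a small JSON object; the "task" field
# (assumed) carries the ID you would use when checking on the task later.
response_body = '{"task": "index_wikipedia_2019-09-24T00:00:00.000Z"}'
task_id = json.loads(response_body)["task"]
print(task_id)
```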
diff --git a/docs/latest/tutorials/tutorial-ingestion-spec.html b/docs/latest/tutorials/tutorial-ingestion-spec.html
index cb586d9..f5ee86c 100644
--- a/docs/latest/tutorials/tutorial-ingestion-spec.html
+++ b/docs/latest/tutorials/tutorial-ingestion-spec.html
@@ -575,7 +575,7 @@ the <a href="index.html">single-machine quickstart</a> and have it running on yo
}
</code></pre>
<h2><a class="anchor" aria-hidden="true" id="submit-the-task-and-query-the-data"></a><a href="#submit-the-task-and-query-the-data" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5 [...]
-<p>From the apache-druid-#{DRUIDVERSION} package root, run the following command:</p>
+<p>From the apache-druid-latest package root, run the following command:</p>
<pre><code class="hljs css language-bash">bin/post-index-task --file quickstart/ingestion-tutorial-index.json --url http://localhost:8081
</code></pre>
<p>After the script completes, we will query the data.</p>
diff --git a/docs/latest/tutorials/tutorial-rollup.html b/docs/latest/tutorials/tutorial-rollup.html
index f1145eb..9c20e09 100644
--- a/docs/latest/tutorials/tutorial-rollup.html
+++ b/docs/latest/tutorials/tutorial-rollup.html
@@ -151,7 +151,7 @@ the <a href="index.html">single-machine quickstart</a> and have it running on yo
<p>Note that we have <code>srcIP</code> and <code>dstIP</code> defined as dimensions, a longSum metric is defined for the <code>packets</code> and <code>bytes</code> columns, and the <code>queryGranularity</code> has been defined as <code>minute</code>.</p>
<p>We will see how these definitions are used after we load this data.</p>
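The rollup behavior these definitions produce — group by `srcIP` and `dstIP` at minute granularity, longSum over `packets` and `bytes` — can be sketched in plain Python (illustrative only, not how Druid implements it; sample rows modeled on the tutorial's netflow data):

```python
from collections import defaultdict

# Illustrative sketch of rollup (not Druid's implementation): truncate the
# timestamp to the minute, group by (minute, srcIP, dstIP), and sum the
# packets/bytes metrics — mirroring longSum with minute queryGranularity.
rows = [
    {"ts": "2018-01-01T01:01:35Z", "srcIP": "1.1.1.1", "dstIP": "2.2.2.2", "packets": 20, "bytes": 9024},
    {"ts": "2018-01-01T01:01:51Z", "srcIP": "1.1.1.1", "dstIP": "2.2.2.2", "packets": 255, "bytes": 21133},
    {"ts": "2018-01-01T01:02:14Z", "srcIP": "1.1.1.1", "dstIP": "2.2.2.2", "packets": 38, "bytes": 6289},
]

rolled = defaultdict(lambda: {"packets": 0, "bytes": 0, "count": 0})
for row in rows:
    minute = row["ts"][:16] + ":00Z"  # drop seconds -> minute granularity
    key = (minute, row["srcIP"], row["dstIP"])
    rolled[key]["packets"] += row["packets"]
    rolled[key]["bytes"] += row["bytes"]
    rolled[key]["count"] += 1

for key, agg in sorted(rolled.items()):
    print(key, agg)
```

The two events in the 01:01 minute collapse into one stored row with summed metrics, while the 01:02 event remains its own row — which is the space saving rollup is designed for.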
<h2><a class="anchor" aria-hidden="true" id="load-the-example-data"></a><a href="#load-the-example-data" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
-<p>From the apache-druid-#{DRUIDVERSION} package root, run the following command:</p>
+<p>From the apache-druid-latest package root, run the following command:</p>
<pre><code class="hljs css language-bash">bin/post-index-task --file quickstart/tutorial/rollup-index.json --url http://localhost:8081
</code></pre>
<p>After the script completes, we will query the data.</p>