Posted to commits@druid.apache.org by jo...@apache.org on 2020/10/09 06:36:55 UTC

[druid-website] branch 20rc2_update created (now bb16010)

This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


      at bb16010  0.20.0-rc2 updates

This branch includes the following new commits:

     new bb16010  0.20.0-rc2 updates

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[druid-website] 01/01: 0.20.0-rc2 updates

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website.git

commit bb160106366d98022f4340f4336e27e2ed24da24
Author: jon-wei <jo...@imply.io>
AuthorDate: Thu Oct 8 23:36:41 2020 -0700

    0.20.0-rc2 updates
---
 community/index.html                              |   1 +
 docs/0.20.0/development/extensions-core/avro.html |  13 ++++++++++++-
 docs/0.20.0/ingestion/data-formats.html           |   9 +++++++++
 docs/latest/development/extensions-core/avro.html |  13 ++++++++++++-
 docs/latest/ingestion/data-formats.html           |   9 +++++++++
 img/favicon.png                                   | Bin 4514 -> 1156 bytes
 index.html                                        |  16 ++++++++--------
 libraries.html                                    |   1 +
 technology.html                                   |   2 +-
 use-cases.html                                    |   2 +-
 10 files changed, 54 insertions(+), 12 deletions(-)

diff --git a/community/index.html b/community/index.html
index 685a0d1..ae2159a 100644
--- a/community/index.html
+++ b/community/index.html
@@ -159,6 +159,7 @@ new features, on <a href="https://github.com/apache/druid">GitHub</a>.</p>
 <ul>
 <li><a href="https://www.cloudera.com/">Cloudera</a></li>
 <li><a href="https://datumo.io/">Datumo</a></li>
+<li><a href="https://www.deep.bi/solutions/apache-druid">Deep.BI</a></li>
 <li><a href="https://imply.io/">Imply</a></li>
 </ul>
 
diff --git a/docs/0.20.0/development/extensions-core/avro.html b/docs/0.20.0/development/extensions-core/avro.html
index b12a1e7..38a808e 100644
--- a/docs/0.20.0/development/extensions-core/avro.html
+++ b/docs/0.20.0/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
 See <a href="/docs/0.20.0/ingestion/data-formats.html#avro-hadoop-parser">Avro Hadoop Parser</a> and <a href="/docs/0.20.0/ingestion/data-formats.html#avro-stream-parser">Avro Stream Parser</a>
 for more details about how to use these in an ingestion spec.</p>
+<p>Additionally, it provides an InputFormat for reading Avro OCF files when using
+<a href="/docs/0.20.0/ingestion/native-batch.html">native batch indexing</a>; see <a href="/docs/0.20.0/ingestion/data-formats.html#avro-ocf">Avro OCF</a>
+for details on how to ingest OCF files.</p>
 <p>Make sure to <a href="/docs/0.20.0/development/extensions.html#loading-extensions">include</a> <code>druid-avro-extensions</code> as an extension.</p>
-</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.20.0/development/extensions-core/approximate-histograms.html"><span class="arrow-prev">← </span><span>Approximate Histogram aggregators</span></a><a class="docs-next button" href="/docs/0.20.0/development/extensions-core/azure.html"><span>Microsoft Azure</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#avro-extension" [...]
+<h3><a class="anchor" aria-hidden="true" id="avro-types"></a><a href="#avro-types" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<p>Druid supports most Avro types natively; the exceptions are detailed here.</p>
+<p><code>union</code> types that aren't of the form <code>[null, otherType]</code> aren't supported at this time.</p>
+<p><code>bytes</code> and <code>fixed</code> Avro types are returned as base64-encoded strings by default. When the <code>binaryAsString</code> option is enabled on the Avro parser,
+they are decoded as UTF-8 strings instead.</p>
+<p><code>enum</code> types are returned as a <code>string</code> of the enum symbol.</p>
+<p><code>record</code> and <code>map</code> types representing nested data can be ingested using <a href="/docs/0.20.0/ingestion/data-formats.html#flattenspec">flattenSpec</a> on the parser.</p>
+<p>Druid doesn't currently support Avro logical types; they are ignored, and fields are handled according to the underlying primitive type.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.20.0/development/extensions-core/approximate-histograms.html"><span class="arrow-prev">← </span><span>Approximate Histogram aggregators</span></a><a class="docs-next button" href="/docs/0.20.0/development/extensions-core/azure.html"><span>Microsoft Azure</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#avro-extension" [...]
                 document.addEventListener('keyup', function(e) {
                   if (e.target !== document.body) {
                     return;
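
For context on the options described in the new Avro docs above: a minimal sketch of an "avro_ocf" inputFormat that enables binaryAsString and flattens a nested record with a flattenSpec could look like the snippet below. The field names someRecord and someLeaf are hypothetical placeholders, not part of this commit.

    "ioConfig": {
      "inputFormat": {
        "type": "avro_ocf",
        "binaryAsString": true,
        "flattenSpec": {
          "useFieldDiscovery": true,
          "fields": [
            { "type": "path", "name": "someLeaf", "expr": "$.someRecord.someLeaf" }
          ]
        }
      },
      ...
    }

With binaryAsString enabled, bytes and fixed fields arrive as UTF-8 strings rather than base64-encoded strings.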
diff --git a/docs/0.20.0/ingestion/data-formats.html b/docs/0.20.0/ingestion/data-formats.html
index 7846c68..9553c2b 100644
--- a/docs/0.20.0/ingestion/data-formats.html
+++ b/docs/0.20.0/ingestion/data-formats.html
@@ -262,6 +262,9 @@ please read <a href="/docs/0.20.0/development/extensions-core/orc.html#migration
 <blockquote>
 <p>You need to include the <a href="/docs/0.20.0/development/extensions-core/avro.html"><code>druid-avro-extensions</code></a> as an extension to use the Avro OCF input format.</p>
 </blockquote>
+<blockquote>
+<p>See the <a href="/docs/0.20.0/development/extensions-core/avro.html#avro-types">Avro Types</a> section for how Avro types are handled in Druid.</p>
+</blockquote>
 <p>The <code>inputFormat</code> to load data of Avro OCF format. An example is:</p>
 <pre><code class="hljs css language-json">"ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using <a href="#parsespec"><code>parseSpec</code
 <blockquote>
 <p>You need to include the <a href="/docs/0.20.0/development/extensions-core/avro.html"><code>druid-avro-extensions</code></a> as an extension to use the Avro Hadoop Parser.</p>
 </blockquote>
+<blockquote>
+<p>See the <a href="/docs/0.20.0/development/extensions-core/avro.html#avro-types">Avro Types</a> section for how Avro types are handled in Druid.</p>
+</blockquote>
 <p>This parser is for <a href="/docs/0.20.0/ingestion/hadoop.html">Hadoop batch ingestion</a>.
 The <code>inputFormat</code> of <code>inputSpec</code> in <code>ioConfig</code> must be set to <code>&quot;org.apache.druid.data.input.avro.AvroValueInputFormat&quot;</code>.
 You may want to set Avro reader's schema in <code>jobProperties</code> in <code>tuningConfig</code>,
@@ -880,6 +886,9 @@ an explicitly defined <a href="http://www.joda.org/joda-time/apidocs/org/joda/ti
 <blockquote>
 <p>You need to include the <a href="/docs/0.20.0/development/extensions-core/avro.html"><code>druid-avro-extensions</code></a> as an extension to use the Avro Stream Parser.</p>
 </blockquote>
+<blockquote>
+<p>See the <a href="/docs/0.20.0/development/extensions-core/avro.html#avro-types">Avro Types</a> section for how Avro types are handled in Druid.</p>
+</blockquote>
 <p>This parser is for <a href="/docs/0.20.0/ingestion/index.html#streaming">stream ingestion</a> and reads Avro data from a stream directly.</p>
 <table>
 <thead>
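
The data-formats changes above also touch the Avro Hadoop Parser, whose reader schema is set through jobProperties in tuningConfig. A minimal sketch of that configuration, assuming a schema file at a placeholder path (consult the linked data-formats page for the authoritative property names):

    "tuningConfig": {
      "jobProperties": {
        "avro.schema.input.value.path": "/path/to/reader/schema.avsc"
      }
    }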
diff --git a/docs/latest/development/extensions-core/avro.html b/docs/latest/development/extensions-core/avro.html
index 1b8a32b..32e689b 100644
--- a/docs/latest/development/extensions-core/avro.html
+++ b/docs/latest/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
 See <a href="/docs/latest/ingestion/data-formats.html#avro-hadoop-parser">Avro Hadoop Parser</a> and <a href="/docs/latest/ingestion/data-formats.html#avro-stream-parser">Avro Stream Parser</a>
 for more details about how to use these in an ingestion spec.</p>
+<p>Additionally, it provides an InputFormat for reading Avro OCF files when using
+<a href="/docs/latest/ingestion/native-batch.html">native batch indexing</a>; see <a href="/docs/latest/ingestion/data-formats.html#avro-ocf">Avro OCF</a>
+for details on how to ingest OCF files.</p>
 <p>Make sure to <a href="/docs/latest/development/extensions.html#loading-extensions">include</a> <code>druid-avro-extensions</code> as an extension.</p>
-</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/latest/development/extensions-core/approximate-histograms.html"><span class="arrow-prev">← </span><span>Approximate Histogram aggregators</span></a><a class="docs-next button" href="/docs/latest/development/extensions-core/azure.html"><span>Microsoft Azure</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#avro-extension" [...]
+<h3><a class="anchor" aria-hidden="true" id="avro-types"></a><a href="#avro-types" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<p>Druid supports most Avro types natively; the exceptions are detailed here.</p>
+<p><code>union</code> types that aren't of the form <code>[null, otherType]</code> aren't supported at this time.</p>
+<p><code>bytes</code> and <code>fixed</code> Avro types are returned as base64-encoded strings by default. When the <code>binaryAsString</code> option is enabled on the Avro parser,
+they are decoded as UTF-8 strings instead.</p>
+<p><code>enum</code> types are returned as a <code>string</code> of the enum symbol.</p>
+<p><code>record</code> and <code>map</code> types representing nested data can be ingested using <a href="/docs/latest/ingestion/data-formats.html#flattenspec">flattenSpec</a> on the parser.</p>
+<p>Druid doesn't currently support Avro logical types; they are ignored, and fields are handled according to the underlying primitive type.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/latest/development/extensions-core/approximate-histograms.html"><span class="arrow-prev">← </span><span>Approximate Histogram aggregators</span></a><a class="docs-next button" href="/docs/latest/development/extensions-core/azure.html"><span>Microsoft Azure</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#avro-extension" [...]
                 document.addEventListener('keyup', function(e) {
                   if (e.target !== document.body) {
                     return;
diff --git a/docs/latest/ingestion/data-formats.html b/docs/latest/ingestion/data-formats.html
index efe7249..192e9c0 100644
--- a/docs/latest/ingestion/data-formats.html
+++ b/docs/latest/ingestion/data-formats.html
@@ -262,6 +262,9 @@ please read <a href="/docs/latest/development/extensions-core/orc.html#migration
 <blockquote>
 <p>You need to include the <a href="/docs/latest/development/extensions-core/avro.html"><code>druid-avro-extensions</code></a> as an extension to use the Avro OCF input format.</p>
 </blockquote>
+<blockquote>
+<p>See the <a href="/docs/latest/development/extensions-core/avro.html#avro-types">Avro Types</a> section for how Avro types are handled in Druid.</p>
+</blockquote>
 <p>The <code>inputFormat</code> to load data of Avro OCF format. An example is:</p>
 <pre><code class="hljs css language-json">"ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using <a href="#parsespec"><code>parseSpec</code
 <blockquote>
 <p>You need to include the <a href="/docs/latest/development/extensions-core/avro.html"><code>druid-avro-extensions</code></a> as an extension to use the Avro Hadoop Parser.</p>
 </blockquote>
+<blockquote>
+<p>See the <a href="/docs/latest/development/extensions-core/avro.html#avro-types">Avro Types</a> section for how Avro types are handled in Druid.</p>
+</blockquote>
 <p>This parser is for <a href="/docs/latest/ingestion/hadoop.html">Hadoop batch ingestion</a>.
 The <code>inputFormat</code> of <code>inputSpec</code> in <code>ioConfig</code> must be set to <code>&quot;org.apache.druid.data.input.avro.AvroValueInputFormat&quot;</code>.
 You may want to set Avro reader's schema in <code>jobProperties</code> in <code>tuningConfig</code>,
@@ -880,6 +886,9 @@ an explicitly defined <a href="http://www.joda.org/joda-time/apidocs/org/joda/ti
 <blockquote>
 <p>You need to include the <a href="/docs/latest/development/extensions-core/avro.html"><code>druid-avro-extensions</code></a> as an extension to use the Avro Stream Parser.</p>
 </blockquote>
+<blockquote>
+<p>See the <a href="/docs/latest/development/extensions-core/avro.html#avro-types">Avro Types</a> section for how Avro types are handled in Druid.</p>
+</blockquote>
 <p>This parser is for <a href="/docs/latest/ingestion/index.html#streaming">stream ingestion</a> and reads Avro data from a stream directly.</p>
 <table>
 <thead>
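
The latest-docs hunks above also reference the Avro Stream Parser. A minimal sketch of such a parser with an inline reader schema, where SomeEvent, timestamp, and someDim are hypothetical names and schema_inline is one possible avroBytesDecoder type:

    "parser": {
      "type": "avro_stream",
      "avroBytesDecoder": {
        "type": "schema_inline",
        "schema": {
          "type": "record",
          "name": "SomeEvent",
          "fields": [
            { "name": "timestamp", "type": "long" },
            { "name": "someDim", "type": "string" }
          ]
        }
      },
      "parseSpec": {
        "format": "avro",
        "timestampSpec": { "column": "timestamp", "format": "auto" },
        "dimensionsSpec": { "dimensions": ["someDim"] }
      }
    }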
diff --git a/img/favicon.png b/img/favicon.png
index caf8e68..96b9aff 100644
Binary files a/img/favicon.png and b/img/favicon.png differ
diff --git a/index.html b/index.html
index 9e35cc2..9427feb 100644
--- a/index.html
+++ b/index.html
@@ -139,35 +139,35 @@
       <div class="features">
         <div class="feature">
           <span class="fa fa-chart-line fa"></span>
-          <h5>A modern cloud-native, stream-native, analytics database</h5>
+          <h5>A fast, modern analytics database</h5>
           <p>
-            Druid is designed for workflows where fast queries and ingest really matter. Druid excels at instant data visibility, ad-hoc queries, operational analytics, and handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of <a href='/use-cases'>use cases</a>.
+            Druid is designed for <a href='/use-cases'>workflows</a> where fast ad-hoc analytics, instant data visibility, or high concurrency is important. As such, Druid is often used to power UIs where an interactive, consistent user experience is desired.
           </p>
         </div>
         <div class="feature">
           <span class="fa fa-forward fa"></span>
           <h5>Easy integration with your existing data pipelines</h5>
           <p>
-            Druid can natively stream data from message buses such as <a href='http://kafka.apache.org/'>Kafka</a>, <a href='https://aws.amazon.com/kinesis/'>Amazon Kinesis</a>, and more, and batch load files from data lakes such as <a href='https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html'>HDFS</a>, <a href='https://aws.amazon.com/s3/'>Amazon S3</a>, and more.
+            Druid streams data from message buses such as <a href='http://kafka.apache.org/'>Kafka</a> and <a href='https://aws.amazon.com/kinesis/'>Amazon Kinesis</a>, and batch-loads files from data lakes such as <a href='https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html'>HDFS</a> and <a href='https://aws.amazon.com/s3/'>Amazon S3</a>. Druid supports most popular file formats for structured and semi-structured data.
           </p>
         </div>
         <div class="feature">
           <span class="fa fa-lightbulb fa"></span>
-          <h5>Up to 100x faster than traditional solutions</h5>
+          <h5>Fast, consistent queries at high concurrency</h5>
           <p>
-            Druid has been <a href='https://imply.io/post/performance-benchmark-druid-presto-hive'>benchmarked</a> to greatly outperform legacy solutions for data ingestion and data querying. Druid's novel architecture combines the best of <a href='https://en.wikipedia.org/wiki/Data_warehouse'>data warehouses</a>, <a href='https://en.wikipedia.org/wiki/Time_series_database'>timeseries databases</a>, and <a href='https://en.wikipedia.org/wiki/Search_engine_(computing)'>search systems</a>.
+            Druid has been <a href='https://imply.io/post/performance-benchmark-druid-presto-hive'>benchmarked</a> to greatly outperform legacy solutions. Druid combines novel storage ideas, indexing structures, and both exact and approximate queries to return most results in under a second.
           </p>
         </div>
         <div class="feature">
           <span class="fa fa-unlock fa"></span>
-          <h5>Unlock new workflows</h5>
+          <h5>Broad applicability</h5>
           <p>
-            Druid <a href='/use-cases'>unlocks new types of queries and workflows</a> for clickstream, APM, supply chain, network telemetry, digital marketing, and many other forms of event-driven data. Druid is purpose built for rapid, ad-hoc queries on both real-time and historical data.
+            Druid <a href='/use-cases'>unlocks new types of queries and workflows</a> for clickstream, APM, supply chain, network telemetry, digital marketing, risk/fraud, and many other types of data. Druid is purpose-built for rapid, ad-hoc queries on both real-time and historical data.
           </p>
         </div>
         <div class="feature">
           <span class="fa fa-cloud fa"></span>
-          <h5>Deploy in AWS/GCP/Azure, hybrid clouds, Kubernetes, and bare metal</h5>
+          <h5>Deploy in public, private, and hybrid clouds</h5>
           <p>
             Druid can be deployed in any *NIX environment on commodity hardware, both in the cloud and on premise. Deploying Druid is easy: scaling up and down is as simple as adding and removing Druid services.
           </p>
diff --git a/libraries.html b/libraries.html
index 331600e..7fe46bc 100644
--- a/libraries.html
+++ b/libraries.html
@@ -219,6 +219,7 @@
 
 <ul>
 <li><a href="https://github.com/airbnb/superset">airbnb/superset</a> - A web application to slice, dice and visualize data out of Druid. Formerly Caravel and Panoramix</li>
+<li><a href="https://www.deep.bi/solutions/apache-druid">Deep.Explorer</a> - A UI built for slice &amp; dice analytics, adhoc queries and powerful, easy data visualizations</li>
 <li><a href="https://github.com/societe-generale/druidplugin">Grafana</a> - A plugin for <a href="http://grafana.org/">Grafana</a></li>
 <li><a href="https://github.com/Quantiply/grafana-plugins/tree/master/features/druid">grafana</a> - A plugin for <a href="http://grafana.org/">Grafana</a></li>
 <li><a href="https://github.com/implydata/pivot">Pivot</a> - An exploratory analytics UI for Druid</li>
diff --git a/technology.html b/technology.html
index 19c3de0..2ce98ad 100644
--- a/technology.html
+++ b/technology.html
@@ -126,7 +126,7 @@
   <div class="row">
     <div class="col-md-10 col-md-offset-1">
       <p>Apache Druid is an open source distributed data store.
-Druid’s core design combines ideas from <a href="https://en.wikipedia.org/wiki/Data_warehouse">data warehouses</a>, <a href="https://en.wikipedia.org/wiki/Time_series_database">timeseries databases</a>, and <a href="https://en.wikipedia.org/wiki/Full-text_search">search systems</a> to create a unified system for real-time analytics for a broad range of <a href="/use-cases">use cases</a>. Druid merges key characteristics of each of the 3 systems into its ingestion layer, storage format, q [...]
+Druid’s core design combines ideas from <a href="https://en.wikipedia.org/wiki/Data_warehouse">data warehouses</a>, <a href="https://en.wikipedia.org/wiki/Time_series_database">timeseries databases</a>, and <a href="https://en.wikipedia.org/wiki/Full-text_search">search systems</a> to create a high performance real-time analytics database for a broad range of <a href="/use-cases">use cases</a>. Druid merges key characteristics of each of the 3 systems into its ingestion layer, storage fo [...]
 
 <div class="image-large">
   <img src="img/diagram-2.png" style="max-width: 360px">
diff --git a/use-cases.html b/use-cases.html
index 287a849..858a2f5 100644
--- a/use-cases.html
+++ b/use-cases.html
@@ -125,7 +125,7 @@
 <div class="container">
   <div class="row">
     <div class="col-md-10 col-md-offset-1">
-      <h2 id="real-time-analytics-and-intelligence">Real-time analytics and intelligence</h2>
+      <h2 id="power-real-time-analytics-data-applications-and-more">Power real-time analytics, data applications, and more</h2>
 
 <p>Apache Druid is a database that is most often used for powering use cases where real-time ingest, fast query performance, and high uptime are important. As such, Druid is commonly used for powering GUIs of analytical applications, or as a backend for highly-concurrent APIs that need fast aggregations. Druid works best with event-oriented data.</p>
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org