Posted to commits@druid.apache.org by cw...@apache.org on 2019/05/10 04:37:01 UTC

[incubator-druid] branch 0.14.2-incubating created (now 283f5c1)

This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a change to branch 0.14.2-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git.


      at 283f5c1  make-redirects.py requires python3, explicitly specify it (#7625)

This branch includes the following new commits:

     new cf9db11  fix issue #7607 (#7619)
     new 578304e  Fix exception when using complex aggs with result level caching (#7614)
     new d4b35d4  Fix resultLevelCache for timeseries with grandTotal (#7624)
     new e4f7160  Add plain text README.txt, use relative link from README.md to build.md (#7611)
     new 283f5c1  make-redirects.py requires python3, explicitly specify it (#7625)

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[incubator-druid] 04/05: Add plain text README.txt, use relative link from README.md to build.md (#7611)

Posted by cw...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch 0.14.2-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git

commit e4f7160fd5ae2f015d7ee9a095f1a20454e225c5
Author: Clint Wylie <cw...@apache.org>
AuthorDate: Thu May 9 21:29:26 2019 -0700

    Add plain text README.txt, use relative link from README.md to build.md (#7611)
    
    * use relative link to build instructions from top level readme
    
    * add textfile to readme
    
    * formatting
    
    * make README.BINARY plaintext, move LABELS.md to LABELS, README.txt to README
    
    * exclude README.BINARY still
    
    * remove jdk links/recommendations
    
    * add script to use DRUIDVERSION in textfile README instead of latest, add links to recommended jdk to build.md
    
    * license
    
    * better readme template, links to latest if it does not detect an apache release version
    
    * fix
---
 .gitignore                                    |   1 +
 LABELS                                        | 105 +++++++++++++++++++++++++
 LABELS.md                                     | 106 --------------------------
 README.md                                     |   2 +-
 README.template                               |  89 +++++++++++++++++++++
 distribution/pom.xml                          |  18 ++++-
 distribution/src/assembly/source-assembly.xml |   3 +-
 docs/_bin/build-textfile-readme.sh            |  28 +++++++
 docs/content/development/build.md             |   5 +-
 docs/content/tutorials/index.md               |  14 +---
 pom.xml                                       |   2 +-
 11 files changed, 251 insertions(+), 122 deletions(-)

diff --git a/.gitignore b/.gitignore
index da1117c..40e0adb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -15,3 +15,4 @@ target
 _site
 dependency-reduced-pom.xml
 README.BINARY
+README
diff --git a/LABELS b/LABELS
new file mode 100644
index 0000000..aff8285
--- /dev/null
+++ b/LABELS
@@ -0,0 +1,105 @@
+
+Licenses
+-----------
+
+This product bundles fonts from Font Awesome Free version 4.2.0, copyright Font Awesome,
+ which is available under the SIL OFL 1.1. For details, see licenses/bin/font-awesome.silofl
+  * https://fontawesome.com/
+
+This product bundles JavaBeans Activation Framework version 1.2.0, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/javax.activation.CDDL11
+  * https://github.com/javaee/activation
+  * com.sun.activation:javax.activation
+
+This product bundles Jersey version 1.19.3, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/jersey.CDDL11
+  * https://jersey.github.io/
+  * com.sun.jersey:jersey-core
+  * com.sun.jersey:jersey-server
+  * com.sun.jersey:jersey-servlet
+  * com.sun.jersey:contribs
+
+This product bundles Expression Language 3.0 API version 3.0.0., copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
+  * https://github.com/javaee/el-spec
+  * javax.el:javax.el-api
+
+This product bundles Java Servlet API version 3.1.0, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
+  * https://github.com/javaee/servlet-spec
+  * javax.servlet:javax.servlet-api
+
+This product bundles JSR311 API version 1.1.1, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/jsr311-api.CDDL11
+  * https://github.com/javaee/jsr311
+  * javax.ws.rs:jsr311-api
+
+This product bundles Expression Language 3.0 version 3.0.0., copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
+  * https://github.com/javaee/el-spec
+  * org.glassfish:javax.el
+  
+This product bundles Jersey version 1.9, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/jersey.CDDL11
+  * https://jersey.github.io/
+  * com.sun.jersey:jersey-client
+  * com.sun.jersey:jersey-core
+
+This product bundles JavaBeans Activation Framework version 1.1, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/javaxCDDL11
+  * https://github.com/javaee/activation
+  * javax.activation:activation
+
+This product bundles Java Servlet API version 2.5, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
+  * https://github.com/javaee/servlet-spec
+  * javax.servlet:javax.servlet-api
+
+This product bundles JAXB version 2.2.2, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
+  * https://github.com/javaee/jaxb-v2
+  * javax.xml.bind:jaxb-api
+
+This product bundles stax-api version 1.0-2, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
+  * https://github.com/javaee/
+  * javax.xml.stream:stax-api
+
+This product bundles jsp-api version 2.1, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
+  * https://github.com/javaee/javaee-jsp-api
+  * javax.servlet.jsp:jsp-api
+
+This product bundles Jersey version 1.15, copyright Oracle and/or its affiliates.,
+ which is available under the CDDL 1.1. For details, see licenses/bin/jersey.CDDL11
+  * https://jersey.github.io/
+  * com.sun.jersey:jersey-client
+
+This product bundles OkHttp Aether Connector version 0.0.9, copyright to original author or authors,
+ which is available under the Eclipse Public License 1.0. For details, see licenses/bin/aether-connector-okhttp.EPL1.
+  * https://github.com/takari/aether-connector-okhttp
+  * io.tesla.aether:aether-connector-okhttp
+
+This product bundles Tesla Aether version 0.0.5, copyright to original author or authors,
+ which is available under the Eclipse Public License 1.0. For details, see licenses/bin/tesla-aether.EPL1.
+  * https://github.com/tesla/tesla-aether
+  * io.tesla.aether:tesla-aether
+
+This product bundles Eclipse Aether libraries version 0.9.0.M2, copyright Sonatype, Inc.,
+ which is available under the Eclipse Public License 1.0. For details, see licenses/bin/aether-core.EPL1.
+  * https://github.com/eclipse/aether-core
+  * org.eclipse.aether:aether-api
+  * org.eclipse.aether:aether-connector-file
+  * org.eclipse.aether:aether-impl
+  * org.eclipse.aether:aether-spi
+  * org.eclipse.aether:aether-util
+
+This product bundles Rhino version 1.7R5, copyright Mozilla and individual contributors.,
+ which is available under the Mozilla Public License Version 2.0. For details, see licenses/bin/rhino.MPL2.
+  * https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino
+  * org.mozilla:rhino
+  
+This product bundles "Java Concurrency In Practice" Book Annotations, copyright Brian Goetz and Tim Peierls,
+ which is available under the Creative Commons Attribution 2.5 license. For details, see licenses/bin/creative-commons-2.5.LICENSE.
+  * http://jcip.net/
+  * net.jcip:jcip-annotations
diff --git a/LABELS.md b/LABELS.md
deleted file mode 100644
index 26866c4..0000000
--- a/LABELS.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-### Licensing Labels
-
-#### Binary-only
-
-    This product bundles fonts from Font Awesome Free version 4.2.0, copyright Font Awesome,
-     which is available under the SIL OFL 1.1. For details, see licenses/bin/font-awesome.silofl
-      * https://fontawesome.com/
-
-    This product bundles JavaBeans Activation Framework version 1.2.0, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/javax.activation.CDDL11
-      * https://github.com/javaee/activation
-      * com.sun.activation:javax.activation
-
-    This product bundles Jersey version 1.19.3, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/jersey.CDDL11
-      * https://jersey.github.io/
-      * com.sun.jersey:jersey-core
-      * com.sun.jersey:jersey-server
-      * com.sun.jersey:jersey-servlet
-      * com.sun.jersey:contribs
-
-    This product bundles Expression Language 3.0 API version 3.0.0., copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
-      * https://github.com/javaee/el-spec
-      * javax.el:javax.el-api
-
-    This product bundles Java Servlet API version 3.1.0, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
-      * https://github.com/javaee/servlet-spec
-      * javax.servlet:javax.servlet-api
-
-    This product bundles JSR311 API version 1.1.1, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/jsr311-api.CDDL11
-      * https://github.com/javaee/jsr311
-      * javax.ws.rs:jsr311-api
-
-    This product bundles Expression Language 3.0 version 3.0.0., copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
-      * https://github.com/javaee/el-spec
-      * org.glassfish:javax.el
-      
-    This product bundles Jersey version 1.9, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/jersey.CDDL11
-      * https://jersey.github.io/
-      * com.sun.jersey:jersey-client
-      * com.sun.jersey:jersey-core
-
-    This product bundles JavaBeans Activation Framework version 1.1, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/javaxCDDL11
-      * https://github.com/javaee/activation
-      * javax.activation:activation
-
-    This product bundles Java Servlet API version 2.5, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
-      * https://github.com/javaee/servlet-spec
-      * javax.servlet:javax.servlet-api
-
-    This product bundles JAXB version 2.2.2, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
-      * https://github.com/javaee/jaxb-v2
-      * javax.xml.bind:jaxb-api
-
-    This product bundles stax-api version 1.0-2, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
-      * https://github.com/javaee/
-      * javax.xml.stream:stax-api
-
-    This product bundles jsp-api version 2.1, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/javax.CDDL11
-      * https://github.com/javaee/javaee-jsp-api
-      * javax.servlet.jsp:jsp-api
-
-    This product bundles Jersey version 1.15, copyright Oracle and/or its affiliates.,
-     which is available under the CDDL 1.1. For details, see licenses/bin/jersey.CDDL11
-      * https://jersey.github.io/
-      * com.sun.jersey:jersey-client
-
-    This product bundles OkHttp Aether Connector version 0.0.9, copyright to original author or authors,
-     which is available under the Eclipse Public License 1.0. For details, see licenses/bin/aether-connector-okhttp.EPL1.
-      * https://github.com/takari/aether-connector-okhttp
-      * io.tesla.aether:aether-connector-okhttp
-
-    This product bundles Tesla Aether version 0.0.5, copyright to original author or authors,
-     which is available under the Eclipse Public License 1.0. For details, see licenses/bin/tesla-aether.EPL1.
-      * https://github.com/tesla/tesla-aether
-      * io.tesla.aether:tesla-aether
-
-    This product bundles Eclipse Aether libraries version 0.9.0.M2, copyright Sonatype, Inc.,
-     which is available under the Eclipse Public License 1.0. For details, see licenses/bin/aether-core.EPL1.
-      * https://github.com/eclipse/aether-core
-      * org.eclipse.aether:aether-api
-      * org.eclipse.aether:aether-connector-file
-      * org.eclipse.aether:aether-impl
-      * org.eclipse.aether:aether-spi
-      * org.eclipse.aether:aether-util
-
-    This product bundles Rhino version 1.7R5, copyright Mozilla and individual contributors.,
-     which is available under the Mozilla Public License Version 2.0. For details, see licenses/bin/rhino.MPL2.
-      * https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino
-      * org.mozilla:rhino
-      
-    This product bundles "Java Concurrency In Practice" Book Annotations, copyright Brian Goetz and Tim Peierls,
-     which is available under the Creative Commons Attribution 2.5 license. For details, see licenses/bin/creative-commons-2.5.LICENSE.
-      * http://jcip.net/
-      * net.jcip:jcip-annotations
diff --git a/README.md b/README.md
index 7ea739f..8202e8c 100644
--- a/README.md
+++ b/README.md
@@ -68,7 +68,7 @@ We also have a couple people hanging out on IRC in `#druid-dev` on
 
 Please note that JDK 8 is required to build Druid.
 
-For instructions on building Druid from source, see [docs/content/development/build.md](https://github.com/apache/incubator-druid/blob/0.14.0-incubating/docs/content/development/build.md)
+For instructions on building Druid from source, see [docs/content/development/build.md](https://github.com/apache/incubator-druid/blob/0.14.1-incubating/docs/content/development/build.md)
 
 ### Contributing
 
diff --git a/README.template b/README.template
new file mode 100644
index 0000000..fbf4d8d
--- /dev/null
+++ b/README.template
@@ -0,0 +1,89 @@
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+
+
+Apache Druid (incubating) is a high performance analytics data store for event-driven data. More information about Druid
+can be found on http://www.druid.io.
+
+The Druid community is in the process of migrating to Apache by way of the Apache Incubator. Eventually, as we proceed
+along this path, our site will move from http://druid.io/ to https://druid.apache.org/.
+
+
+Documentation
+-------------
+You can find the documentation for {THIS_OR_THE_LATEST} Druid release on the project website http://druid.io/docs/{DRUIDVERSION}/.
+
+You can get started with Druid with our quickstart at http://druid.io/docs/{DRUIDVERSION}/tutorials/quickstart.html.
+
+
+Build from Source
+-----------------
+You can build Apache Druid (incubating) directly from source.
+
+Prerequisites:
+  JDK 8, 8u92+
+  Maven version 3.x
+
+
+The basic command to build Druid from source is:
+
+    mvn clean install
+
+
+This will run static analysis, unit tests, compile classes, and package the projects into JARs. It will not generate the
+source or binary distribution tarball.
+
+In addition to the basic stages, you may also want to add the following profiles and properties:
+
+  -Pdist           - Distribution profile: Generates the binary distribution tarball by pulling in core extensions and
+                     dependencies and packaging the files as 'distribution/target/apache-druid-x.x.x-bin.tar.gz'
+  -Papache-release - Apache release profile: Generates GPG signature and checksums, and builds the source distribution
+                     tarball as `distribution/target/apache-druid-x.x.x-src.tar.gz`
+  -Prat            - Apache Rat profile: Runs the Apache Rat license audit tool
+  -DskipTests      - Skips unit tests (which reduces build time)
+
+Putting these together, if you wish to build the source and binary distributions with signatures and checksums, audit
+licenses, and skip the unit tests, you would run:
+
+    mvn clean install -Papache-release,dist,rat -DskipTests
+
+Note: the AWS S3 unit tests require the 'AWS_DEFAULT_REGION' environment variable to be set to function correctly.
+
+
+Community
+---------
+Community support is available on the druid-user mailing list druid-user@googlegroups.com also available at
+https://groups.google.com/forum/#!forum/druid-user.
+
+Development discussions occur on dev@druid.apache.org (archive available at
+https://lists.apache.org/list.html?dev@druid.apache.org), which you can subscribe to by emailing
+dev-subscribe@druid.apache.org.
+
+
+Contributing
+------------
+If you find any bugs, please file a GitHub issue at https://github.com/apache/incubator-druid/issues.
+
+If you wish to contribute, please follow the guidelines listed at http://druid.io/community/.
+
+
+Disclaimer: Apache Druid is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the
+Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the
+infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful
+ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it
+does indicate that the project has yet to be fully endorsed by the ASF.
diff --git a/distribution/pom.xml b/distribution/pom.xml
index e6efc1b..322167f 100644
--- a/distribution/pom.xml
+++ b/distribution/pom.xml
@@ -86,8 +86,8 @@
                         <configuration>
                             <target>
                                 <concat destfile="${project.build.directory}/../../README.BINARY">
-                                    <fileset file="${project.build.directory}/../../README.md" />
-                                    <fileset file="${project.build.directory}/../../LABELS.md" />
+                                    <fileset file="${project.build.directory}/../../README" />
+                                    <fileset file="${project.build.directory}/../../LABELS" />
                                 </concat>
                             </target>
                         </configuration>
@@ -116,6 +116,20 @@
                         <artifactId>exec-maven-plugin</artifactId>
                         <executions>
                             <execution>
+                                <id>versionify-readme</id>
+                                <phase>initialize</phase>
+                                <goals>
+                                    <goal>exec</goal>
+                                </goals>
+                                <configuration>
+                                    <executable>${project.parent.basedir}/docs/_bin/build-textfile-readme.sh</executable>
+                                    <arguments>
+                                        <argument>${project.basedir}/../</argument>
+                                        <argument>${project.parent.version}</argument>
+                                    </arguments>
+                                </configuration>
+                            </execution>
+                            <execution>
                                 <id>pull-deps</id>
                                 <phase>package</phase>
                                 <goals>
diff --git a/distribution/src/assembly/source-assembly.xml b/distribution/src/assembly/source-assembly.xml
index b7d1550..db66bda 100644
--- a/distribution/src/assembly/source-assembly.xml
+++ b/distribution/src/assembly/source-assembly.xml
@@ -46,6 +46,7 @@
 
                 <exclude>.gitignore</exclude>
                 <exclude>.travis.yml</exclude>
+                <exclude>README.md</exclude>
                 <exclude>README.BINARY</exclude>
                 <exclude>publications/**</exclude>
                 <exclude>upload.sh</exclude>
@@ -67,7 +68,7 @@
             <directory>${project.build.directory}</directory>
             <includes>
                 <include>git.version</include>
-                <include>README.md</include>
+                <include>README</include>
             </includes>
             <outputDirectory/>
         </fileSet>
diff --git a/docs/_bin/build-textfile-readme.sh b/docs/_bin/build-textfile-readme.sh
new file mode 100755
index 0000000..dbce463
--- /dev/null
+++ b/docs/_bin/build-textfile-readme.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+BASEDIR=$1
+DRUID_VERSION=$2
+THIS_OR_THE_LATEST="this"
+
+if ! [[ "$DRUID_VERSION" =~ [0-9]+\.[0-9]+\.[0-9]+(\-incubating)?$ ]];
+then
+  DRUID_VERSION="latest"
+  THIS_OR_THE_LATEST="the latest"
+fi
+
+sed -e "s/{THIS_OR_THE_LATEST}/${THIS_OR_THE_LATEST}/;s/{DRUIDVERSION}/${DRUID_VERSION}/" ${BASEDIR}/README.template > ${BASEDIR}/README
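
For reference, a sketch of how this script behaves (hypothetical invocations;
the distribution pom wires in the same two arguments during the initialize
phase, as shown in the pom.xml diff above):

    # A version matching the release regex is substituted verbatim:
    docs/_bin/build-textfile-readme.sh . 0.14.2-incubating
    #   -> README links point at http://druid.io/docs/0.14.2-incubating/...

    # A non-release version (e.g. a SNAPSHOT) fails the regex, so the script
    # falls back to "the latest" and links to http://druid.io/docs/latest/...
    docs/_bin/build-textfile-readme.sh . 0.15.0-SNAPSHOT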
diff --git a/docs/content/development/build.md b/docs/content/development/build.md
index 3600406..b28a836 100644
--- a/docs/content/development/build.md
+++ b/docs/content/development/build.md
@@ -31,9 +31,12 @@ For building the latest code in master, follow the instructions [here](https://g
 #### Prerequisites
 
 ##### Installing Java and Maven:
-- [JDK 8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)
+- JDK 8, 8u92+. We recommend using an OpenJDK distribution that provides long-term support and open-source licensing,
+  like [Amazon Corretto](https://aws.amazon.com/corretto/) or [Azul Zulu](https://www.azul.com/downloads/zulu/).
 - [Maven version 3.x](http://maven.apache.org/download.cgi)
 
+
+
 ##### Downloading the source:
 
 ```bash
diff --git a/docs/content/tutorials/index.md b/docs/content/tutorials/index.md
index 9f79165..38f920f 100644
--- a/docs/content/tutorials/index.md
+++ b/docs/content/tutorials/index.md
@@ -22,7 +22,7 @@ title: "Apache Druid (incubating) Quickstart"
   ~ under the License.
   -->
 
-# Druid Quickstart
+# Apache Druid (incubating) Quickstart
 
 In this quickstart, we will download Druid and set it up on a single machine. The cluster will be ready to load data
 after completing this initial setup.
@@ -32,20 +32,14 @@ Before beginning the quickstart, it is helpful to read the [general Druid overvi
 
 ## Prerequisites
 
-You will need:
+### Software
 
-  * Java 8
+You will need:
+  * Java 8 (8u92+)
   * Linux, Mac OS X, or other Unix-like OS (Windows is not supported)
   * 8G of RAM
   * 2 vCPUs
 
-On Mac OS X, you can use [Oracle's JDK
-8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) to install
-Java.
-
-On Linux, your OS package manager should be able to help for Java. If your Ubuntu-
-based OS does not have a recent enough version of Java, WebUpd8 offers [packages for those
-OSes](http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html).
 
 ## Getting started
 
diff --git a/pom.xml b/pom.xml
index 7b00c5d..28c90db 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1485,7 +1485,7 @@
                                 <exclude>LICENSE.BINARY</exclude>
                                 <exclude>NOTICE</exclude>
                                 <exclude>NOTICE.BINARY</exclude>
-                                <exclude>LABELS.md</exclude>
+                                <exclude>LABELS</exclude>
                                 <exclude>.github/ISSUE_TEMPLATE/*.md</exclude>
                                 <exclude>git.version</exclude>
                                 <exclude>node_modules/**</exclude>




[incubator-druid] 01/05: fix issue #7607 (#7619)

Posted by cw...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch 0.14.2-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git

commit cf9db11bd9b68d75fc5b52508f6bbd9d03d2eb3c
Author: Alexander Saydakov <13...@users.noreply.github.com>
AuthorDate: Thu May 9 17:33:29 2019 -0700

    fix issue #7607 (#7619)
    
    * fix issue #7607
    
    * exclude com.google.code.findbugs:annotations
---
 extensions-core/datasketches/pom.xml | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/extensions-core/datasketches/pom.xml b/extensions-core/datasketches/pom.xml
index a3e2c2d..284024d 100644
--- a/extensions-core/datasketches/pom.xml
+++ b/extensions-core/datasketches/pom.xml
@@ -37,7 +37,13 @@
     <dependency>
       <groupId>com.yahoo.datasketches</groupId>
       <artifactId>sketches-core</artifactId>
-      <version>0.13.1</version>
+      <version>0.13.3</version>
+      <exclusions>
+        <exclusion>
+          <groupId>com.google.code.findbugs</groupId>
+          <artifactId>annotations</artifactId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.commons</groupId>
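
A quick way to verify the exclusion took effect (a hypothetical check, assuming
a local checkout) is to inspect the dependency tree of the datasketches module:

    mvn -pl extensions-core/datasketches dependency:tree | grep findbugs
    # with the exclusion in place, com.google.code.findbugs:annotations
    # should no longer appear in the output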




[incubator-druid] 02/05: Fix exception when using complex aggs with result level caching (#7614)

Posted by cw...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch 0.14.2-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git

commit 578304eacc1c393a6f5c3361c217196a9614848f
Author: Jonathan Wei <jo...@users.noreply.github.com>
AuthorDate: Thu May 9 13:49:11 2019 -0700

    Fix exception when using complex aggs with result level caching (#7614)
    
    * Fix exception when using complex aggs with result level caching
    
    * Add test comments
    
    * checkstyle
    
    * Add helper function for getting aggs from cache
    
    * Move method to CacheStrategy
    
    * Revert QueryToolChest changes
    
    * Update test comments
---
 .../java/org/apache/druid/query/CacheStrategy.java |  30 ++++
 .../query/groupby/GroupByQueryQueryToolChest.java  |  18 ++-
 .../timeseries/TimeseriesQueryQueryToolChest.java  |  16 ++-
 .../druid/query/topn/TopNQueryQueryToolChest.java  |  15 +-
 .../groupby/GroupByQueryQueryToolChestTest.java    | 154 +++++++++++++++++++++
 .../TimeseriesQueryQueryToolChestTest.java         |  18 ++-
 .../query/topn/TopNQueryQueryToolChestTest.java    |  83 ++++++++++-
 7 files changed, 312 insertions(+), 22 deletions(-)

diff --git a/processing/src/main/java/org/apache/druid/query/CacheStrategy.java b/processing/src/main/java/org/apache/druid/query/CacheStrategy.java
index 8b106a6..f93a395 100644
--- a/processing/src/main/java/org/apache/druid/query/CacheStrategy.java
+++ b/processing/src/main/java/org/apache/druid/query/CacheStrategy.java
@@ -22,8 +22,11 @@ package org.apache.druid.query;
 import com.fasterxml.jackson.core.type.TypeReference;
 import com.google.common.base.Function;
 import org.apache.druid.guice.annotations.ExtensionPoint;
+import org.apache.druid.query.aggregation.AggregatorFactory;
 
+import java.util.Iterator;
 import java.util.concurrent.ExecutorService;
+import java.util.function.BiFunction;
 
 /**
  */
@@ -98,4 +101,31 @@ public interface CacheStrategy<T, CacheType, QueryType extends Query<T>>
   {
     return pullFromCache(false);
   }
+
+  /**
+   * Helper function used by TopN, GroupBy, Timeseries queries in {@link #pullFromCache(boolean)}.
+   * When using the result level cache, the agg values seen here are
+   * finalized values generated by AggregatorFactory.finalizeComputation().
+   * These finalized values are deserialized from the cache as generic Objects, which will
+   * later be reserialized and returned to the user without further modification.
+   * Because the agg values are deserialized as generic Objects, the values are subject to the same
+   * type consistency issues handled by DimensionHandlerUtils.convertObjectToType() in the pullFromCache implementations
+   * for dimension values (e.g., a Float would become Double).
+   */
+  static void fetchAggregatorsFromCache(
+      Iterator<AggregatorFactory> aggIter,
+      Iterator<Object> resultIter,
+      boolean isResultLevelCache,
+      BiFunction<String, Object, Void> addToResultFunction
+  )
+  {
+    while (aggIter.hasNext() && resultIter.hasNext()) {
+      final AggregatorFactory factory = aggIter.next();
+      if (isResultLevelCache) {
+        addToResultFunction.apply(factory.getName(), resultIter.next());
+      } else {
+        addToResultFunction.apply(factory.getName(), factory.deserialize(resultIter.next()));
+      }
+    }
+  }
 }
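
A minimal sketch (hypothetical class and method names) of the call pattern the
toolchest changes below adopt, pulling each aggregator value from a cached row
into the result event:

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import org.apache.druid.query.CacheStrategy;
    import org.apache.druid.query.aggregation.AggregatorFactory;

    class CachedRowReader
    {
      static Map<String, Object> readAggs(
          Iterator<AggregatorFactory> aggIter,
          Iterator<Object> resultIter,
          boolean isResultLevelCache
      )
      {
        final Map<String, Object> event = new LinkedHashMap<>();
        CacheStrategy.fetchAggregatorsFromCache(
            aggIter,
            resultIter,
            isResultLevelCache, // result-level values are already finalized, so no deserialize()
            (aggName, aggValue) -> {
              event.put(aggName, aggValue);
              return null;      // satisfies the BiFunction<String, Object, Void> contract
            }
        );
        return event;
      }
    }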
diff --git a/processing/src/main/java/org/apache/druid/query/groupby/GroupByQueryQueryToolChest.java b/processing/src/main/java/org/apache/druid/query/groupby/GroupByQueryQueryToolChest.java
index c94d427..e7d1e27 100644
--- a/processing/src/main/java/org/apache/druid/query/groupby/GroupByQueryQueryToolChest.java
+++ b/processing/src/main/java/org/apache/druid/query/groupby/GroupByQueryQueryToolChest.java
@@ -555,7 +555,7 @@ public class GroupByQueryQueryToolChest extends QueryToolChest<Row, GroupByQuery
 
             DateTime timestamp = granularity.toDateTime(((Number) results.next()).longValue());
 
-            Map<String, Object> event = Maps.newLinkedHashMap();
+            final Map<String, Object> event = Maps.newLinkedHashMap();
             Iterator<DimensionSpec> dimsIter = dims.iterator();
             while (dimsIter.hasNext() && results.hasNext()) {
               final DimensionSpec dimensionSpec = dimsIter.next();
@@ -566,12 +566,18 @@ public class GroupByQueryQueryToolChest extends QueryToolChest<Row, GroupByQuery
                   DimensionHandlerUtils.convertObjectToType(results.next(), dimensionSpec.getOutputType())
               );
             }
-
             Iterator<AggregatorFactory> aggsIter = aggs.iterator();
-            while (aggsIter.hasNext() && results.hasNext()) {
-              final AggregatorFactory factory = aggsIter.next();
-              event.put(factory.getName(), factory.deserialize(results.next()));
-            }
+
+            CacheStrategy.fetchAggregatorsFromCache(
+                aggsIter,
+                results,
+                isResultLevelCache,
+                (aggName, aggValueObject) -> {
+                  event.put(aggName, aggValueObject);
+                  return null;
+                }
+            );
+
             if (isResultLevelCache) {
               Iterator<PostAggregator> postItr = query.getPostAggregatorSpecs().iterator();
               while (postItr.hasNext() && results.hasNext()) {
diff --git a/processing/src/main/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChest.java b/processing/src/main/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChest.java
index f8f5aa0..d625c31 100644
--- a/processing/src/main/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChest.java
+++ b/processing/src/main/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChest.java
@@ -327,17 +327,23 @@ public class TimeseriesQueryQueryToolChest extends QueryToolChest<Result<Timeser
           public Result<TimeseriesResultValue> apply(@Nullable Object input)
           {
             List<Object> results = (List<Object>) input;
-            Map<String, Object> retVal = Maps.newLinkedHashMap();
+            final Map<String, Object> retVal = Maps.newLinkedHashMap();
 
             Iterator<AggregatorFactory> aggsIter = aggs.iterator();
             Iterator<Object> resultIter = results.iterator();
 
             DateTime timestamp = granularity.toDateTime(((Number) resultIter.next()).longValue());
 
-            while (aggsIter.hasNext() && resultIter.hasNext()) {
-              final AggregatorFactory factory = aggsIter.next();
-              retVal.put(factory.getName(), factory.deserialize(resultIter.next()));
-            }
+            CacheStrategy.fetchAggregatorsFromCache(
+                aggsIter,
+                resultIter,
+                isResultLevelCache,
+                (aggName, aggValueObject) -> {
+                  retVal.put(aggName, aggValueObject);
+                  return null;
+                }
+            );
+
             if (isResultLevelCache) {
               Iterator<PostAggregator> postItr = query.getPostAggregatorSpecs().iterator();
               while (postItr.hasNext() && resultIter.hasNext()) {
diff --git a/processing/src/main/java/org/apache/druid/query/topn/TopNQueryQueryToolChest.java b/processing/src/main/java/org/apache/druid/query/topn/TopNQueryQueryToolChest.java
index 2c3bd2b..d87a178 100644
--- a/processing/src/main/java/org/apache/druid/query/topn/TopNQueryQueryToolChest.java
+++ b/processing/src/main/java/org/apache/druid/query/topn/TopNQueryQueryToolChest.java
@@ -398,7 +398,7 @@ public class TopNQueryQueryToolChest extends QueryToolChest<Result<TopNResultVal
 
             while (inputIter.hasNext()) {
               List<Object> result = (List<Object>) inputIter.next();
-              Map<String, Object> vals = Maps.newLinkedHashMap();
+              final Map<String, Object> vals = Maps.newLinkedHashMap();
 
               Iterator<AggregatorFactory> aggIter = aggs.iterator();
               Iterator<Object> resultIter = result.iterator();
@@ -409,10 +409,15 @@ public class TopNQueryQueryToolChest extends QueryToolChest<Result<TopNResultVal
                   DimensionHandlerUtils.convertObjectToType(resultIter.next(), query.getDimensionSpec().getOutputType())
               );
 
-              while (aggIter.hasNext() && resultIter.hasNext()) {
-                final AggregatorFactory factory = aggIter.next();
-                vals.put(factory.getName(), factory.deserialize(resultIter.next()));
-              }
+              CacheStrategy.fetchAggregatorsFromCache(
+                  aggIter,
+                  resultIter,
+                  isResultLevelCache,
+                  (aggName, aggValueObject) -> {
+                    vals.put(aggName, aggValueObject);
+                    return null;
+                  }
+              );
 
               for (PostAggregator postAgg : postAggs) {
                 vals.put(postAgg.getName(), postAgg.compute(vals));
diff --git a/processing/src/test/java/org/apache/druid/query/groupby/GroupByQueryQueryToolChestTest.java b/processing/src/test/java/org/apache/druid/query/groupby/GroupByQueryQueryToolChestTest.java
index 2bad8f8..94842e0 100644
--- a/processing/src/test/java/org/apache/druid/query/groupby/GroupByQueryQueryToolChestTest.java
+++ b/processing/src/test/java/org/apache/druid/query/groupby/GroupByQueryQueryToolChestTest.java
@@ -19,15 +19,26 @@
 
 package org.apache.druid.query.groupby;
 
+import com.fasterxml.jackson.databind.ObjectMapper;
 import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Lists;
+import org.apache.druid.collections.SerializablePair;
+import org.apache.druid.data.input.MapBasedRow;
 import org.apache.druid.data.input.Row;
 import org.apache.druid.java.util.common.DateTimes;
 import org.apache.druid.query.CacheStrategy;
 import org.apache.druid.query.QueryRunnerTestHelper;
+import org.apache.druid.query.aggregation.AggregatorFactory;
 import org.apache.druid.query.aggregation.DoubleSumAggregatorFactory;
 import org.apache.druid.query.aggregation.FloatSumAggregatorFactory;
 import org.apache.druid.query.aggregation.LongSumAggregatorFactory;
+import org.apache.druid.query.aggregation.SerializablePairLongString;
+import org.apache.druid.query.aggregation.last.DoubleLastAggregatorFactory;
+import org.apache.druid.query.aggregation.last.FloatLastAggregatorFactory;
+import org.apache.druid.query.aggregation.last.LongLastAggregatorFactory;
+import org.apache.druid.query.aggregation.last.StringLastAggregatorFactory;
+import org.apache.druid.query.aggregation.post.ConstantPostAggregator;
 import org.apache.druid.query.aggregation.post.ExpressionPostAggregator;
 import org.apache.druid.query.dimension.DefaultDimensionSpec;
 import org.apache.druid.query.expression.TestExprMacroTable;
@@ -46,10 +57,14 @@ import org.apache.druid.query.groupby.having.OrHavingSpec;
 import org.apache.druid.query.groupby.orderby.DefaultLimitSpec;
 import org.apache.druid.query.groupby.orderby.OrderByColumnSpec;
 import org.apache.druid.query.ordering.StringComparators;
+import org.apache.druid.segment.TestHelper;
+import org.apache.druid.segment.column.ValueType;
 import org.junit.Assert;
 import org.junit.Test;
 
+import java.io.IOException;
 import java.util.Arrays;
+import java.util.Collections;
 import java.util.List;
 
 public class GroupByQueryQueryToolChestTest
@@ -483,4 +498,143 @@ public class GroupByQueryQueryToolChestTest
     ));
   }
 
+  @Test
+  public void testCacheStrategy() throws Exception
+  {
+    doTestCacheStrategy(ValueType.STRING, "val1");
+    doTestCacheStrategy(ValueType.FLOAT, 2.1f);
+    doTestCacheStrategy(ValueType.DOUBLE, 2.1d);
+    doTestCacheStrategy(ValueType.LONG, 2L);
+  }
+
+  private AggregatorFactory getComplexAggregatorFactoryForValueType(final ValueType valueType)
+  {
+    switch (valueType) {
+      case LONG:
+        return new LongLastAggregatorFactory("complexMetric", "test");
+      case DOUBLE:
+        return new DoubleLastAggregatorFactory("complexMetric", "test");
+      case FLOAT:
+        return new FloatLastAggregatorFactory("complexMetric", "test");
+      case STRING:
+        return new StringLastAggregatorFactory("complexMetric", "test", null);
+      default:
+        throw new IllegalArgumentException("bad valueType: " + valueType);
+    }
+  }
+
+  private SerializablePair getIntermediateComplexValue(final ValueType valueType, final Object dimValue)
+  {
+    switch (valueType) {
+      case LONG:
+      case DOUBLE:
+      case FLOAT:
+        return new SerializablePair<>(123L, dimValue);
+      case STRING:
+        return new SerializablePairLongString(123L, (String) dimValue);
+      default:
+        throw new IllegalArgumentException("bad valueType: " + valueType);
+    }
+  }
+
+  private void doTestCacheStrategy(final ValueType valueType, final Object dimValue) throws IOException
+  {
+    final GroupByQuery query1 = GroupByQuery
+        .builder()
+        .setDataSource(QueryRunnerTestHelper.dataSource)
+        .setQuerySegmentSpec(QueryRunnerTestHelper.firstToThird)
+        .setDimensions(Collections.singletonList(
+            new DefaultDimensionSpec("test", "test", valueType)
+        ))
+        .setAggregatorSpecs(
+            Arrays.asList(
+                QueryRunnerTestHelper.rowsCount,
+                getComplexAggregatorFactoryForValueType(valueType)
+            )
+        )
+        .setPostAggregatorSpecs(
+            ImmutableList.of(new ConstantPostAggregator("post", 10))
+        )
+        .setGranularity(QueryRunnerTestHelper.dayGran)
+        .build();
+
+    CacheStrategy<Row, Object, GroupByQuery> strategy =
+        new GroupByQueryQueryToolChest(null, null).getCacheStrategy(
+            query1
+        );
+
+    final Row result1 = new MapBasedRow(
+        // test timestamps that result in integer size millis
+        DateTimes.utc(123L),
+        ImmutableMap.of(
+            "test", dimValue,
+            "rows", 1,
+            "complexMetric", getIntermediateComplexValue(valueType, dimValue)
+        )
+    );
+
+    Object preparedValue = strategy.prepareForSegmentLevelCache().apply(
+        result1
+    );
+
+    ObjectMapper objectMapper = TestHelper.makeJsonMapper();
+    Object fromCacheValue = objectMapper.readValue(
+        objectMapper.writeValueAsBytes(preparedValue),
+        strategy.getCacheObjectClazz()
+    );
+
+    Row fromCacheResult = strategy.pullFromSegmentLevelCache().apply(fromCacheValue);
+
+    Assert.assertEquals(result1, fromCacheResult);
+
+    final Row result2 = new MapBasedRow(
+        // test timestamps that result in integer size millis
+        DateTimes.utc(123L),
+        ImmutableMap.of(
+            "test", dimValue,
+            "rows", 1,
+            "complexMetric", dimValue,
+            "post", 10
+        )
+    );
+
+    // Please see the comments on aggregator serde and type handling in CacheStrategy.fetchAggregatorsFromCache()
+    final Row typeAdjustedResult2;
+    if (valueType == ValueType.FLOAT) {
+      typeAdjustedResult2 = new MapBasedRow(
+          DateTimes.utc(123L),
+          ImmutableMap.of(
+              "test", dimValue,
+              "rows", 1,
+              "complexMetric", 2.1d,
+              "post", 10
+          )
+      );
+    } else if (valueType == ValueType.LONG) {
+      typeAdjustedResult2 = new MapBasedRow(
+          DateTimes.utc(123L),
+          ImmutableMap.of(
+              "test", dimValue,
+              "rows", 1,
+              "complexMetric", 2,
+              "post", 10
+          )
+      );
+    } else {
+      typeAdjustedResult2 = result2;
+    }
+
+
+    Object preparedResultCacheValue = strategy.prepareForCache(true).apply(
+        result2
+    );
+
+    Object fromResultCacheValue = objectMapper.readValue(
+        objectMapper.writeValueAsBytes(preparedResultCacheValue),
+        strategy.getCacheObjectClazz()
+    );
+
+    Row fromResultCacheResult = strategy.pullFromCache(true).apply(fromResultCacheValue);
+    Assert.assertEquals(typeAdjustedResult2, fromResultCacheResult);
+  }
 }
diff --git a/processing/src/test/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChestTest.java b/processing/src/test/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChestTest.java
index 6d07c59..304ac9b 100644
--- a/processing/src/test/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChestTest.java
+++ b/processing/src/test/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChestTest.java
@@ -32,6 +32,8 @@ import org.apache.druid.query.Result;
 import org.apache.druid.query.TableDataSource;
 import org.apache.druid.query.aggregation.CountAggregatorFactory;
 import org.apache.druid.query.aggregation.LongSumAggregatorFactory;
+import org.apache.druid.query.aggregation.SerializablePairLongString;
+import org.apache.druid.query.aggregation.last.StringLastAggregatorFactory;
 import org.apache.druid.query.aggregation.post.ArithmeticPostAggregator;
 import org.apache.druid.query.aggregation.post.ConstantPostAggregator;
 import org.apache.druid.query.aggregation.post.FieldAccessPostAggregator;
@@ -77,7 +79,8 @@ public class TimeseriesQueryQueryToolChestTest
                 Granularities.ALL,
                 ImmutableList.of(
                     new CountAggregatorFactory("metric1"),
-                    new LongSumAggregatorFactory("metric0", "metric0")
+                    new LongSumAggregatorFactory("metric0", "metric0"),
+                    new StringLastAggregatorFactory("complexMetric", "test", null)
                 ),
                 ImmutableList.of(new ConstantPostAggregator("post", 10)),
                 0,
@@ -89,7 +92,11 @@ public class TimeseriesQueryQueryToolChestTest
         // test timestamps that result in integer size millis
         DateTimes.utc(123L),
         new TimeseriesResultValue(
-            ImmutableMap.of("metric1", 2, "metric0", 3)
+            ImmutableMap.of(
+                "metric1", 2,
+                "metric0", 3,
+                "complexMetric", new SerializablePairLongString(123L, "val1")
+            )
         )
     );
 
@@ -109,7 +116,12 @@ public class TimeseriesQueryQueryToolChestTest
         // test timestamps that result in integer size millis
         DateTimes.utc(123L),
         new TimeseriesResultValue(
-            ImmutableMap.of("metric1", 2, "metric0", 3, "post", 10)
+            ImmutableMap.of(
+                "metric1", 2,
+                "metric0", 3,
+                "complexMetric", "val1",
+                "post", 10
+            )
         )
     );
 
diff --git a/processing/src/test/java/org/apache/druid/query/topn/TopNQueryQueryToolChestTest.java b/processing/src/test/java/org/apache/druid/query/topn/TopNQueryQueryToolChestTest.java
index cede671..50b0498 100644
--- a/processing/src/test/java/org/apache/druid/query/topn/TopNQueryQueryToolChestTest.java
+++ b/processing/src/test/java/org/apache/druid/query/topn/TopNQueryQueryToolChestTest.java
@@ -23,6 +23,7 @@ import com.fasterxml.jackson.databind.ObjectMapper;
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableMap;
 import org.apache.druid.collections.CloseableStupidPool;
+import org.apache.druid.collections.SerializablePair;
 import org.apache.druid.java.util.common.DateTimes;
 import org.apache.druid.java.util.common.Intervals;
 import org.apache.druid.java.util.common.granularity.Granularities;
@@ -35,8 +36,14 @@ import org.apache.druid.query.QueryRunnerTestHelper;
 import org.apache.druid.query.Result;
 import org.apache.druid.query.TableDataSource;
 import org.apache.druid.query.TestQueryRunners;
+import org.apache.druid.query.aggregation.AggregatorFactory;
 import org.apache.druid.query.aggregation.CountAggregatorFactory;
 import org.apache.druid.query.aggregation.LongSumAggregatorFactory;
+import org.apache.druid.query.aggregation.SerializablePairLongString;
+import org.apache.druid.query.aggregation.last.DoubleLastAggregatorFactory;
+import org.apache.druid.query.aggregation.last.FloatLastAggregatorFactory;
+import org.apache.druid.query.aggregation.last.LongLastAggregatorFactory;
+import org.apache.druid.query.aggregation.last.StringLastAggregatorFactory;
 import org.apache.druid.query.aggregation.post.ArithmeticPostAggregator;
 import org.apache.druid.query.aggregation.post.ConstantPostAggregator;
 import org.apache.druid.query.aggregation.post.FieldAccessPostAggregator;
@@ -269,6 +276,36 @@ public class TopNQueryQueryToolChestTest
     }
   }
 
+  private AggregatorFactory getComplexAggregatorFactoryForValueType(final ValueType valueType)
+  {
+    switch (valueType) {
+      case LONG:
+        return new LongLastAggregatorFactory("complexMetric", "test");
+      case DOUBLE:
+        return new DoubleLastAggregatorFactory("complexMetric", "test");
+      case FLOAT:
+        return new FloatLastAggregatorFactory("complexMetric", "test");
+      case STRING:
+        return new StringLastAggregatorFactory("complexMetric", "test", null);
+      default:
+        throw new IllegalArgumentException("bad valueType: " + valueType);
+    }
+  }
+
+  private SerializablePair getIntermediateComplexValue(final ValueType valueType, final Object dimValue)
+  {
+    switch (valueType) {
+      case LONG:
+      case DOUBLE:
+      case FLOAT:
+        return new SerializablePair<>(123L, dimValue);
+      case STRING:
+        return new SerializablePairLongString(123L, (String) dimValue);
+      default:
+        throw new IllegalArgumentException("bad valueType: " + valueType);
+    }
+  }
+
   private void doTestCacheStrategy(final ValueType valueType, final Object dimValue) throws IOException
   {
     CacheStrategy<Result<TopNResultValue>, Object, TopNQuery> strategy =
@@ -282,7 +319,10 @@ public class TopNQueryQueryToolChestTest
                 new MultipleIntervalSegmentSpec(ImmutableList.of(Intervals.of("2015-01-01/2015-01-02"))),
                 null,
                 Granularities.ALL,
-                ImmutableList.of(new CountAggregatorFactory("metric1")),
+                ImmutableList.of(
+                    new CountAggregatorFactory("metric1"),
+                    getComplexAggregatorFactoryForValueType(valueType)
+                ),
                 ImmutableList.of(new ConstantPostAggregator("post", 10)),
                 null
             )
@@ -295,7 +335,8 @@ public class TopNQueryQueryToolChestTest
             Collections.singletonList(
                 ImmutableMap.of(
                     "test", dimValue,
-                    "metric1", 2
+                    "metric1", 2,
+                    "complexMetric", getIntermediateComplexValue(valueType, dimValue)
                 )
             )
         )
@@ -323,12 +364,48 @@ public class TopNQueryQueryToolChestTest
                 ImmutableMap.of(
                     "test", dimValue,
                     "metric1", 2,
+                    "complexMetric", dimValue,
                     "post", 10
                 )
             )
         )
     );
 
+    // Please see the comments on aggregator serde and type handling in CacheStrategy.fetchAggregatorsFromCache()
+    final Result<TopNResultValue> typeAdjustedResult2;
+    if (valueType == ValueType.FLOAT) {
+      typeAdjustedResult2 = new Result<>(
+          DateTimes.utc(123L),
+          new TopNResultValue(
+              Collections.singletonList(
+                  ImmutableMap.of(
+                      "test", dimValue,
+                      "metric1", 2,
+                      "complexMetric", 2.1d,
+                      "post", 10
+                  )
+              )
+          )
+      );
+    } else if (valueType == ValueType.LONG) {
+      typeAdjustedResult2 = new Result<>(
+          DateTimes.utc(123L),
+          new TopNResultValue(
+              Collections.singletonList(
+                  ImmutableMap.of(
+                      "test", dimValue,
+                      "metric1", 2,
+                      "complexMetric", 2,
+                      "post", 10
+                  )
+              )
+          )
+      );
+    } else {
+      typeAdjustedResult2 = result2;
+    }
+
+
     Object preparedResultCacheValue = strategy.prepareForCache(true).apply(
         result2
     );
@@ -339,7 +416,7 @@ public class TopNQueryQueryToolChestTest
     );
 
     Result<TopNResultValue> fromResultCacheResult = strategy.pullFromCache(true).apply(fromResultCacheValue);
-    Assert.assertEquals(result2, fromResultCacheResult);
+    Assert.assertEquals(typeAdjustedResult2, fromResultCacheResult);
   }
 
   static class MockQueryRunner implements QueryRunner<Result<TopNResultValue>>
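
The type adjustments in these tests follow from how Jackson deserializes
generic Objects; a standalone sketch of the round-trip behavior the test
comments describe:

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class CacheRoundTripDemo
    {
      public static void main(String[] args) throws Exception
      {
        ObjectMapper mapper = new ObjectMapper();
        // A finalized Float agg value comes back from the cache as a Double...
        Object f = mapper.readValue(mapper.writeValueAsBytes(2.1f), Object.class);
        System.out.println(f.getClass()); // class java.lang.Double
        // ...and a small Long comes back as an Integer, hence typeAdjustedResult2.
        Object l = mapper.readValue(mapper.writeValueAsBytes(2L), Object.class);
        System.out.println(l.getClass()); // class java.lang.Integer
      }
    }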




[incubator-druid] 05/05: make-redirects.py requires python3, explicitly specify it (#7625)

Posted by cw...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch 0.14.2-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git

commit 283f5c10f8fdc1fa6446636b8e0fae1ca505e8bd
Author: Clint Wylie <cw...@apache.org>
AuthorDate: Thu May 9 21:32:58 2019 -0700

    make-redirects.py requires python3, explicitly specify it (#7625)
---
 docs/_bin/make-redirects.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/_bin/make-redirects.py b/docs/_bin/make-redirects.py
index 5affcb5..ac645c7 100755
--- a/docs/_bin/make-redirects.py
+++ b/docs/_bin/make-redirects.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
 
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with




[incubator-druid] 03/05: Fix resultLevelCache for timeseries with grandTotal (#7624)

Posted by cw...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch 0.14.2-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git

commit d4b35d49dcf739164efe0d38ce4b2f6856a6aef0
Author: Jihoon Son <ji...@apache.org>
AuthorDate: Thu May 9 18:11:04 2019 -0700

    Fix resultLevelCache for timeseries with grandTotal (#7624)
    
    * Fix resultLevelCache for timeseries with grandTotal
    
    * Address comment
    
    * fix test
---
 .../main/java/org/apache/druid/query/Result.java   |  26 +++--
 .../timeseries/TimeseriesQueryQueryToolChest.java  |  19 +++-
 .../java/org/apache/druid/query/ResultTest.java    |  38 ++++++++
 .../TimeseriesQueryQueryToolChestTest.java         |  17 ++++
 .../druid/query/ResultLevelCachingQueryRunner.java | 105 +++++++++++----------
 5 files changed, 140 insertions(+), 65 deletions(-)

diff --git a/processing/src/main/java/org/apache/druid/query/Result.java b/processing/src/main/java/org/apache/druid/query/Result.java
index c1ec3a3..9ec75ad 100644
--- a/processing/src/main/java/org/apache/druid/query/Result.java
+++ b/processing/src/main/java/org/apache/druid/query/Result.java
@@ -24,6 +24,9 @@ import com.fasterxml.jackson.annotation.JsonProperty;
 import org.apache.druid.guice.annotations.PublicApi;
 import org.joda.time.DateTime;
 
+import javax.annotation.Nullable;
+import java.util.Comparator;
+import java.util.Objects;
 import java.util.function.Function;
 
 /**
@@ -33,11 +36,12 @@ public class Result<T> implements Comparable<Result<T>>
 {
   public static String MISSING_SEGMENTS_KEY = "missingSegments";
 
+  @Nullable
   private final DateTime timestamp;
   private final T value;
 
   @JsonCreator
-  public Result(@JsonProperty("timestamp") DateTime timestamp, @JsonProperty("result") T value)
+  public Result(@JsonProperty("timestamp") @Nullable DateTime timestamp, @JsonProperty("result") T value)
   {
     this.timestamp = timestamp;
     this.value = value;
@@ -51,10 +55,12 @@ public class Result<T> implements Comparable<Result<T>>
   @Override
   public int compareTo(Result<T> tResult)
   {
-    return timestamp.compareTo(tResult.timestamp);
+    // timestamp is null for grandTotal which should come last.
+    return Comparator.nullsLast(DateTime::compareTo).compare(this.timestamp, tResult.timestamp);
   }
 
   @JsonProperty
+  @Nullable
   public DateTime getTimestamp()
   {
     return timestamp;
@@ -78,22 +84,22 @@ public class Result<T> implements Comparable<Result<T>>
 
     Result result = (Result) o;
 
-    if (timestamp != null ? !(timestamp.isEqual(result.timestamp) && timestamp.getZone().getOffset(timestamp) == result.timestamp.getZone().getOffset(result.timestamp)) : result.timestamp != null) {
-      return false;
-    }
-    if (value != null ? !value.equals(result.value) : result.value != null) {
+    if (timestamp != null && result.timestamp != null) {
+      if (!timestamp.isEqual(result.timestamp)
+          && timestamp.getZone().getOffset(timestamp) == result.timestamp.getZone().getOffset(result.timestamp)) {
+        return false;
+      }
+    } else if (timestamp == null ^ result.timestamp == null) {
       return false;
     }
 
-    return true;
+    return Objects.equals(value, result.value);
   }
 
   @Override
   public int hashCode()
   {
-    int result = timestamp != null ? timestamp.hashCode() : 0;
-    result = 31 * result + (value != null ? value.hashCode() : 0);
-    return result;
+    return Objects.hash(timestamp, value);
   }
 
   @Override
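
Aside on the compareTo change in this hunk: Comparator.nullsLast wraps a
comparator so that null sorts after every non-null value. A minimal,
self-contained sketch of that ordering, assuming joda-time on the classpath
as in the diff (the sample values are illustrative only):

    import org.joda.time.DateTime;

    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;

    public class NullsLastSketch
    {
      public static void main(String[] args)
      {
        // nullsLast treats null as greater than any non-null timestamp,
        // so a null (grandTotal) row sorts after all real rows.
        final Comparator<DateTime> cmp = Comparator.nullsLast(DateTime::compareTo);

        final List<DateTime> timestamps = Arrays.asList(
            null,
            new DateTime("2019-05-09T00:00:00Z"),
            new DateTime("2019-05-08T00:00:00Z")
        );
        timestamps.sort(cmp);
        System.out.println(timestamps); // 2019-05-08..., 2019-05-09..., null
      }
    }
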
diff --git a/processing/src/main/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChest.java b/processing/src/main/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChest.java
index d625c31..0ae9a70 100644
--- a/processing/src/main/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChest.java
+++ b/processing/src/main/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChest.java
@@ -22,6 +22,7 @@ package org.apache.druid.query.timeseries;
 import com.fasterxml.jackson.core.type.TypeReference;
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Lists;
@@ -52,7 +53,6 @@ import org.apache.druid.query.cache.CacheKeyBuilder;
 import org.apache.druid.query.groupby.RowBasedColumnSelectorFactory;
 import org.joda.time.DateTime;
 
-import javax.annotation.Nullable;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.Iterator;
@@ -303,7 +303,12 @@ public class TimeseriesQueryQueryToolChest extends QueryToolChest<Result<Timeser
           TimeseriesResultValue results = input.getValue();
           final List<Object> retVal = Lists.newArrayListWithCapacity(1 + aggs.size());
 
-          retVal.add(input.getTimestamp().getMillis());
+          // Timestamp can be null if grandTotal is true.
+          if (isResultLevelCache) {
+            retVal.add(input.getTimestamp() == null ? null : input.getTimestamp().getMillis());
+          } else {
+            retVal.add(Preconditions.checkNotNull(input.getTimestamp(), "timestamp of input[%s]", input).getMillis());
+          }
           for (AggregatorFactory agg : aggs) {
             retVal.add(results.getMetric(agg.getName()));
           }
@@ -324,7 +329,7 @@ public class TimeseriesQueryQueryToolChest extends QueryToolChest<Result<Timeser
           private final Granularity granularity = query.getGranularity();
 
           @Override
-          public Result<TimeseriesResultValue> apply(@Nullable Object input)
+          public Result<TimeseriesResultValue> apply(Object input)
           {
             List<Object> results = (List<Object>) input;
             final Map<String, Object> retVal = Maps.newLinkedHashMap();
@@ -332,7 +337,13 @@ public class TimeseriesQueryQueryToolChest extends QueryToolChest<Result<Timeser
             Iterator<AggregatorFactory> aggsIter = aggs.iterator();
             Iterator<Object> resultIter = results.iterator();
 
-            DateTime timestamp = granularity.toDateTime(((Number) resultIter.next()).longValue());
+            final Number timestampNumber = (Number) resultIter.next();
+            final DateTime timestamp;
+            if (isResultLevelCache) {
+              timestamp = timestampNumber == null ? null : granularity.toDateTime(timestampNumber.longValue());
+            } else {
+              timestamp = granularity.toDateTime(Preconditions.checkNotNull(timestampNumber, "timestamp").longValue());
+            }
 
             CacheStrategy.fetchAggregatorsFromCache(
                 aggsIter,
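
The two isResultLevelCache branches above amount to a null-safe encode/decode
of the row timestamp. A stripped-down sketch of that pair, where the class and
method names are illustrative (not Druid API) and a plain UTC DateTime stands
in for the granularity.toDateTime mapping used in the real code:

    import org.joda.time.DateTime;
    import org.joda.time.chrono.ISOChronology;

    public class TimestampCacheCodecSketch
    {
      // Encode: a null timestamp (grandTotal row) is stored as null rather
      // than failing fast, mirroring the isResultLevelCache == true path.
      static Long encode(DateTime timestamp)
      {
        return timestamp == null ? null : timestamp.getMillis();
      }

      // Decode: only convert back to a DateTime when a value was stored.
      static DateTime decode(Long millis)
      {
        return millis == null ? null : new DateTime(millis, ISOChronology.getInstanceUTC());
      }

      public static void main(String[] args)
      {
        System.out.println(decode(encode(null)));            // null survives the round trip
        System.out.println(decode(encode(DateTime.now())));  // real timestamps round-trip too
      }
    }
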
diff --git a/processing/src/test/java/org/apache/druid/query/ResultTest.java b/processing/src/test/java/org/apache/druid/query/ResultTest.java
new file mode 100644
index 0000000..7fe6815
--- /dev/null
+++ b/processing/src/test/java/org/apache/druid/query/ResultTest.java
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.query;
+
+import org.apache.druid.java.util.common.DateTimes;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class ResultTest
+{
+  @Test
+  public void testCompareNullTimestamp()
+  {
+    final Result<Object> nullTimestamp = new Result<>(null, null);
+    final Result<Object> nullTimestamp2 = new Result<>(null, null);
+    final Result<Object> nonNullTimestamp = new Result<>(DateTimes.nowUtc(), null);
+
+    Assert.assertEquals(0, nullTimestamp.compareTo(nullTimestamp2));
+    Assert.assertEquals(1, nullTimestamp.compareTo(nonNullTimestamp));
+  }
+}
diff --git a/processing/src/test/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChestTest.java b/processing/src/test/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChestTest.java
index 304ac9b..89fdbb7 100644
--- a/processing/src/test/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChestTest.java
+++ b/processing/src/test/java/org/apache/druid/query/timeseries/TimeseriesQueryQueryToolChestTest.java
@@ -133,6 +133,23 @@ public class TimeseriesQueryQueryToolChestTest
 
     Result<TimeseriesResultValue> fromResultLevelCacheRes = strategy.pullFromCache(true).apply(fromResultLevelCacheValue);
     Assert.assertEquals(result2, fromResultLevelCacheRes);
+
+    final Result<TimeseriesResultValue> result3 = new Result<>(
+        // null timestamp similar to grandTotal
+        null,
+        new TimeseriesResultValue(
+            ImmutableMap.of("metric1", 2, "metric0", 3, "complexMetric", "val1", "post", 10)
+        )
+    );
+
+    preparedResultLevelCacheValue = strategy.prepareForCache(true).apply(result3);
+    fromResultLevelCacheValue = objectMapper.readValue(
+        objectMapper.writeValueAsBytes(preparedResultLevelCacheValue),
+        strategy.getCacheObjectClazz()
+    );
+
+    fromResultLevelCacheRes = strategy.pullFromCache(true).apply(fromResultLevelCacheValue);
+    Assert.assertEquals(result3, fromResultLevelCacheRes);
   }
 
   @Test
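
The added test case exercises a plain Jackson round trip of the prepared cache
value. In isolation the mechanism looks like the following sketch; the
flat-list layout (timestamp millis first, aggregator values after) is taken
from prepareForCache above, everything else is a stand-in:

    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.util.Arrays;
    import java.util.List;

    public class CacheRoundTripSketch
    {
      public static void main(String[] args) throws Exception
      {
        final ObjectMapper objectMapper = new ObjectMapper();

        // A cached timeseries row as a flat list: [timestampMillis, aggs...];
        // null in the first slot plays the role of the grandTotal timestamp.
        final List<Object> prepared = Arrays.asList(null, 2, 3, "val1");

        final byte[] bytes = objectMapper.writeValueAsBytes(prepared);
        final List<?> restored = objectMapper.readValue(bytes, List.class);

        // The null timestamp must survive serialization for a cache hit to
        // reconstruct the grandTotal row correctly.
        System.out.println(restored); // [null, 2, 3, val1]
      }
    }
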
diff --git a/server/src/main/java/org/apache/druid/query/ResultLevelCachingQueryRunner.java b/server/src/main/java/org/apache/druid/query/ResultLevelCachingQueryRunner.java
index 6a303b8..6a9a640 100644
--- a/server/src/main/java/org/apache/druid/query/ResultLevelCachingQueryRunner.java
+++ b/server/src/main/java/org/apache/druid/query/ResultLevelCachingQueryRunner.java
@@ -108,43 +108,46 @@ public class ResultLevelCachingQueryRunner<T> implements QueryRunner<T>
         }
         final Function<T, Object> cacheFn = strategy.prepareForCache(true);
 
-        return Sequences.wrap(Sequences.map(
-            resultFromClient,
-            new Function<T, T>()
+        return Sequences.wrap(
+            Sequences.map(
+                resultFromClient,
+                new Function<T, T>()
+                {
+                  @Override
+                  public T apply(T input)
+                  {
+                    if (resultLevelCachePopulator.isShouldPopulate()) {
+                      resultLevelCachePopulator.cacheResultEntry(input, cacheFn);
+                    }
+                    return input;
+                  }
+                }
+            ),
+            new SequenceWrapper()
             {
               @Override
-              public T apply(T input)
+              public void after(boolean isDone, Throwable thrown)
               {
-                if (resultLevelCachePopulator.isShouldPopulate()) {
-                  resultLevelCachePopulator.cacheResultEntry(resultLevelCachePopulator, input, cacheFn);
+                Preconditions.checkNotNull(
+                    resultLevelCachePopulator,
+                    "ResultLevelCachePopulator cannot be null during cache population"
+                );
+                if (thrown != null) {
+                  log.error(
+                      thrown,
+                      "Error while preparing for result level caching for query %s with error %s ",
+                      query.getId(),
+                      thrown.getMessage()
+                  );
+                } else if (resultLevelCachePopulator.isShouldPopulate()) {
+                  // The resultset identifier and its length is cached along with the resultset
+                  resultLevelCachePopulator.populateResults();
+                  log.debug("Cache population complete for query %s", query.getId());
                 }
-                return input;
+                resultLevelCachePopulator.stopPopulating();
               }
             }
-        ), new SequenceWrapper()
-        {
-          @Override
-          public void after(boolean isDone, Throwable thrown)
-          {
-            Preconditions.checkNotNull(
-                resultLevelCachePopulator,
-                "ResultLevelCachePopulator cannot be null during cache population"
-            );
-            if (thrown != null) {
-              log.error(
-                  thrown,
-                  "Error while preparing for result level caching for query %s with error %s ",
-                  query.getId(),
-                  thrown.getMessage()
-              );
-            } else if (resultLevelCachePopulator.isShouldPopulate()) {
-              // The resultset identifier and its length is cached along with the resultset
-              resultLevelCachePopulator.populateResults();
-              log.debug("Cache population complete for query %s", query.getId());
-            }
-            resultLevelCachePopulator.cacheObjectStream = null;
-          }
-        });
+        );
       }
     } else {
       return baseRunner.run(
@@ -234,20 +237,14 @@ public class ResultLevelCachingQueryRunner<T> implements QueryRunner<T>
     }
   }
 
-  public class ResultLevelCachePopulator
+  private class ResultLevelCachePopulator
   {
     private final Cache cache;
     private final ObjectMapper mapper;
     private final Cache.NamedKey key;
     private final CacheConfig cacheConfig;
-    private ByteArrayOutputStream cacheObjectStream = new ByteArrayOutputStream();
-
-    public boolean isShouldPopulate()
-    {
-      return shouldPopulate;
-    }
-
-    private boolean shouldPopulate;
+    @Nullable
+    private ByteArrayOutputStream cacheObjectStream;
 
     private ResultLevelCachePopulator(
         Cache cache,
@@ -261,29 +258,35 @@ public class ResultLevelCachingQueryRunner<T> implements QueryRunner<T>
       this.mapper = mapper;
       this.key = key;
       this.cacheConfig = cacheConfig;
-      this.shouldPopulate = shouldPopulate;
+      this.cacheObjectStream = shouldPopulate ? new ByteArrayOutputStream() : null;
+    }
+
+    boolean isShouldPopulate()
+    {
+      return cacheObjectStream != null;
+    }
+
+    void stopPopulating()
+    {
+      cacheObjectStream = null;
     }
 
     private void cacheResultEntry(
-        ResultLevelCachePopulator resultLevelCachePopulator,
         T resultEntry,
         Function<T, Object> cacheFn
     )
     {
-
+      Preconditions.checkNotNull(cacheObjectStream, "cacheObjectStream");
       int cacheLimit = cacheConfig.getResultLevelCacheLimit();
-      try (JsonGenerator gen = mapper.getFactory().createGenerator(resultLevelCachePopulator.cacheObjectStream)) {
+      try (JsonGenerator gen = mapper.getFactory().createGenerator(cacheObjectStream)) {
         gen.writeObject(cacheFn.apply(resultEntry));
-        if (cacheLimit > 0 && resultLevelCachePopulator.cacheObjectStream.size() > cacheLimit) {
-          shouldPopulate = false;
-          resultLevelCachePopulator.cacheObjectStream = null;
-          return;
+        if (cacheLimit > 0 && cacheObjectStream.size() > cacheLimit) {
+          stopPopulating();
         }
       }
       catch (IOException ex) {
         log.error(ex, "Failed to retrieve entry to be cached. Result Level caching will not be performed!");
-        shouldPopulate = false;
-        resultLevelCachePopulator.cacheObjectStream = null;
+        stopPopulating();
       }
     }
 
@@ -292,7 +295,7 @@ public class ResultLevelCachingQueryRunner<T> implements QueryRunner<T>
       ResultLevelCacheUtil.populate(
           cache,
           key,
-          cacheObjectStream.toByteArray()
+          Preconditions.checkNotNull(cacheObjectStream, "cacheObjectStream").toByteArray()
       );
     }
   }
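
The populator refactor above folds the boolean shouldPopulate flag into the
nullability of cacheObjectStream. A condensed sketch of that lifecycle,
mirroring the names in the diff but otherwise simplified (no Jackson, no
cache; byte arrays stand in for serialized result entries):

    import java.io.ByteArrayOutputStream;

    public class PopulatorLifecycleSketch
    {
      // Nullability of the stream doubles as the "should populate" flag:
      // non-null means still populating, null means stopped (limit hit or error).
      private ByteArrayOutputStream cacheObjectStream;
      private final int cacheLimit;

      PopulatorLifecycleSketch(boolean shouldPopulate, int cacheLimit)
      {
        this.cacheObjectStream = shouldPopulate ? new ByteArrayOutputStream() : null;
        this.cacheLimit = cacheLimit;
      }

      boolean isShouldPopulate()
      {
        return cacheObjectStream != null;
      }

      void stopPopulating()
      {
        cacheObjectStream = null;
      }

      void cacheResultEntry(byte[] entry)
      {
        if (!isShouldPopulate()) {
          return;
        }
        cacheObjectStream.write(entry, 0, entry.length);
        // Stop buffering once the configured result-level cache limit is exceeded.
        if (cacheLimit > 0 && cacheObjectStream.size() > cacheLimit) {
          stopPopulating();
        }
      }

      public static void main(String[] args)
      {
        final PopulatorLifecycleSketch populator = new PopulatorLifecycleSketch(true, 4);
        populator.cacheResultEntry(new byte[]{1, 2, 3});
        populator.cacheResultEntry(new byte[]{4, 5, 6}); // crosses the 4-byte limit
        System.out.println(populator.isShouldPopulate()); // false
      }
    }

Collapsing the flag and the stream into one field removes the inconsistent
state where shouldPopulate was true but the stream had already been nulled out.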

