Posted to commits@druid.apache.org by ab...@apache.org on 2021/11/13 06:01:21 UTC
[druid] branch master updated: SQL: Add type headers to response formats. (#11914)
This is an automated email from the ASF dual-hosted git repository.
abhishek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git
The following commit(s) were added to refs/heads/master by this push:
new 6f6e88e SQL: Add type headers to response formats. (#11914)
6f6e88e is described below
commit 6f6e88e02ed0a767ed3d08bfeecca9ad16ce1cda
Author: Gian Merlino <gi...@gmail.com>
AuthorDate: Fri Nov 12 22:00:57 2021 -0800
SQL: Add type headers to response formats. (#11914)
This allows clients to interpret the results of SQL queries without having to guess types.
---
docs/querying/sql.md | 57 +++--
.../apache/druid/tests/query/ITSqlCancelTest.java | 4 +-
.../server/AsyncQueryForwardingServletTest.java | 2 +-
.../ManualTieredBrokerSelectorStrategyTest.java | 2 +
.../router/TieredBrokerHostSelectorTest.java | 2 +
.../org/apache/druid/sql/SqlRowTransformer.java | 7 +
.../apache/druid/sql/http/ArrayLinesWriter.java | 16 +-
.../org/apache/druid/sql/http/ArrayWriter.java | 50 +++-
.../java/org/apache/druid/sql/http/CsvWriter.java | 33 ++-
.../apache/druid/sql/http/ObjectLinesWriter.java | 16 +-
.../org/apache/druid/sql/http/ObjectWriter.java | 61 ++++-
.../org/apache/druid/sql/http/ResultFormat.java | 4 +-
.../java/org/apache/druid/sql/http/SqlQuery.java | 37 ++-
.../org/apache/druid/sql/http/SqlResource.java | 25 +-
.../druid/sql/calcite/http/SqlQueryTest.java | 2 +
.../org/apache/druid/sql/http/SqlResourceTest.java | 258 ++++++++++++++++++---
16 files changed, 474 insertions(+), 102 deletions(-)
diff --git a/docs/querying/sql.md b/docs/querying/sql.md
index 35dfc35..cb1ecb5 100644
--- a/docs/querying/sql.md
+++ b/docs/querying/sql.md
@@ -879,6 +879,8 @@ Submit your query as the value of a "query" field in the JSON object within the
|`query`|SQL query string.| none (required)|
|`resultFormat`|Format of query results. See [Responses](#responses) for details.|`"object"`|
|`header`|Whether or not to include a header row for the query result. See [Responses](#responses) for details.|`false`|
+|`typesHeader`|Whether or not to include type information in the header. Can only be set when `header` is also `true`. See [Responses](#responses) for details.|`false`|
+|`sqlTypesHeader`|Whether or not to include SQL type information in the header. Can only be set when `header` is also `true`. See [Responses](#responses) for details.|`false`|
|`context`|JSON object containing [connection context parameters](#connection-context).|`{}` (empty)|
|`parameters`|List of query parameters for parameterized queries. Each parameter in the list should be a JSON object like `{"type": "VARCHAR", "value": "foo"}`. The type should be a SQL type; see [Data types](#data-types) for a list of supported SQL types.|`[]` (empty)|
@@ -920,44 +922,60 @@ Metadata is available over HTTP POST by querying [metadata tables](#metadata-tab
#### Responses
+##### Result formats
+
Druid SQL's HTTP POST API supports a variety of result formats. You can specify these by adding a "resultFormat"
parameter, like:
```json
{
"query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar' AND __time > TIMESTAMP '2000-01-01 00:00:00'",
- "resultFormat" : "object"
+ "resultFormat" : "array"
}
```
-The supported result formats are:
-
-|Format|Description|Content-Type|
-|------|-----------|------------|
-|`object`|The default, a JSON array of JSON objects. Each object's field names match the columns returned by the SQL query, and are provided in the same order as the SQL query.|application/json|
-|`array`|JSON array of JSON arrays. Each inner array has elements matching the columns returned by the SQL query, in order.|application/json|
-|`objectLines`|Like "object", but the JSON objects are separated by newlines instead of being wrapped in a JSON array. This can make it easier to parse the entire response set as a stream, if you do not have ready access to a streaming JSON parser. To make it possible to detect a truncated response, this format includes a trailer of one blank line.|text/plain|
-|`arrayLines`|Like "array", but the JSON arrays are separated by newlines instead of being wrapped in a JSON array. This can make it easier to parse the entire response set as a stream, if you do not have ready access to a streaming JSON parser. To make it possible to detect a truncated response, this format includes a trailer of one blank line.|text/plain|
-|`csv`|Comma-separated values, with one row per line. Individual field values may be escaped by being surrounded in double quotes. If double quotes appear in a field value, they will be escaped by replacing them with double-double-quotes like `""this""`. To make it possible to detect a truncated response, this format includes a trailer of one blank line.|text/csv|
-
-You can additionally request a header by setting "header" to true in your request, like:
+You can additionally request a header with information about column names by setting `header` to true in your request.
+When you set `header` to true, you can optionally include `typesHeader` and `sqlTypesHeader` as well, which give
+you information about [Druid runtime and SQL types](#data-types), respectively. You can request all these headers
+with a request like:
```json
{
"query" : "SELECT COUNT(*) FROM data_source WHERE foo = 'bar' AND __time > TIMESTAMP '2000-01-01 00:00:00'",
- "resultFormat" : "arrayLines",
- "header" : true
+ "resultFormat" : "array",
+ "header" : true,
+ "typesHeader" : true,
+ "sqlTypesHeader" : true
}
```
-In this case, the first result of the response body is the header row. For the `csv`, `array`, and `arrayLines` formats, the header
-will be a list of column names. For the `object` and `objectLines` formats, the header will be an object where the
-keys are column names, and the values are null.
+The supported result formats are:
+
+|Format|Description|Header description|Content-Type|
+|------|-----------|------------------|------------|
+|`object`|The default, a JSON array of JSON objects. Each object's field names match the columns returned by the SQL query, and are provided in the same order as the SQL query.|If `header` is true, the first row is an object where the fields are column names. Each field's value is either null (if `typesHeader` and `sqlTypesHeader` are false) or an object that contains the Druid type as `type` (if `typesHeader` is true) and the SQL type as `sqlType` (if `sqlTypesHeader` is true).|application/json|
+|`array`|JSON array of JSON arrays. Each inner array has elements matching the columns returned by the SQL query, in order.|If `header` is true, the first row is an array of column names. If `typesHeader` is true, the next row is an array of Druid types. If `sqlTypesHeader` is true, the next row is an array of SQL types.|application/json|
+|`objectLines`|Like "object", but the JSON objects are separated by newlines instead of being wrapped in a JSON array. This can make it easier to parse the entire response set as a stream, if you do not have ready access to a streaming JSON parser. To make it possible to detect a truncated response, this format includes a trailer of one blank line.|Same as "object".|text/plain|
+|`arrayLines`|Like "array", but the JSON arrays are separated by newlines instead of being wrapped in a JSON array. This can make it easier to parse the entire response set as a stream, if you do not have ready access to a streaming JSON parser. To make it possible to detect a truncated response, this format includes a trailer of one blank line.|Same as "array", except the rows are separated by newlines.|text/plain|
+|`csv`|Comma-separated values, with one row per line. Individual field values may be escaped by being surrounded in double quotes. If double quotes appear in a field value, they will be escaped by replacing them with double-double-quotes like `""this""`. To make it possible to detect a truncated response, this format includes a trailer of one blank line.|Same as "array", except the lists are in CSV format.|text/csv|
+
+If `typesHeader` is set to true, [Druid type](#data-types) information is included in the response. Complex types,
+like sketches, will be reported as `COMPLEX<typeName>` if a particular complex type name is known for that field,
+or as `COMPLEX` if the particular type name is unknown or mixed. If `sqlTypesHeader` is set to true,
+[SQL type](#data-types) information is included in the response. It is possible to set both `typesHeader` and
+`sqlTypesHeader` at once. Both parameters require that `header` is also set.
+
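For example, an `array`-format response with all three headers enabled begins with three header rows before any data rows. The following is an illustrative sketch (not from this commit) for a query returning a single `COUNT(*) AS cnt` column, whose Druid type is LONG and whose SQL type is BIGINT; the data value is made up:

```json
[
  ["cnt"],
  ["LONG"],
  ["BIGINT"],
  [6]
]
```
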
+To aid in building clients that are compatible with older Druid versions, Druid returns the HTTP header
+`X-Druid-SQL-Header-Included: yes` if `header` was set to true and if the version of Druid the client is connected to
+understands the `typesHeader` and `sqlTypesHeader` parameters. This HTTP response header is present irrespective of
+whether `typesHeader` or `sqlTypesHeader` is set.
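
A minimal client-side sketch of consuming this contract (not part of this commit; it assumes a Broker at the default http://localhost:8082, a datasource named `foo`, Java 11+, and Jackson on the classpath). It posts an `array`-format query with all headers requested, and uses `X-Druid-SQL-Header-Included` to decide whether the type rows are present:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TypedHeaderClient
{
  public static void main(String[] args) throws Exception
  {
    final String body = "{\"query\": \"SELECT COUNT(*) AS cnt FROM foo\","
                        + " \"resultFormat\": \"array\","
                        + " \"header\": true, \"typesHeader\": true, \"sqlTypesHeader\": true}";

    final HttpResponse<String> response = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(URI.create("http://localhost:8082/druid/v2/sql"))
                   .header("Content-Type", "application/json")
                   .POST(HttpRequest.BodyPublishers.ofString(body))
                   .build(),
        HttpResponse.BodyHandlers.ofString()
    );

    // Always present; assigned by Druid unless sqlQueryId was set in the context.
    System.out.println("query id: " + response.headers().firstValue("X-Druid-SQL-Query-Id").orElse("?"));

    // "yes" only when header = true and the server understands the new parameters.
    final boolean typedHeaders =
        "yes".equals(response.headers().firstValue("X-Druid-SQL-Header-Included").orElse(null));

    final JsonNode rows = new ObjectMapper().readTree(response.body());
    if (typedHeaders) {
      // Row 0: column names; row 1: Druid types; row 2: SQL types; data follows.
      System.out.println("columns    = " + rows.get(0));
      System.out.println("druidTypes = " + rows.get(1));
      System.out.println("sqlTypes   = " + rows.get(2));
    } else {
      System.out.println("columns    = " + rows.get(0)); // name-only header row
    }
  }
}
```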
Druid returns the SQL query identifier in the `X-Druid-SQL-Query-Id` HTTP header.
This query id will be assigned the value of `sqlQueryId` from the [connection context parameters](#connection-context)
if specified, else Druid will generate a SQL query id for you.
+##### Errors
+
Errors that occur before the response body is sent will be reported in JSON, with an HTTP 500 status code, in the
same format as [native Druid query errors](../querying/querying.md#query-errors). If an error occurs while the response body is
being sent, at that point it is too late to change the HTTP status code or report a JSON error, so the response will
@@ -969,8 +987,9 @@ trailer they all include: one blank line at the end of the result set. If you de
through a JSON parsing error or through a missing trailing newline, you should assume the response was not fully
delivered due to an error.
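
A sketch of that truncation check for the line-based formats (`arrayLines`, `objectLines`, and `csv`), assuming the raw body is already in memory: a fully delivered response ends with the one-blank-line trailer, so the text ends in two newlines. The class and method names here are hypothetical:

```java
public class TrailerCheck
{
  // True when the one-blank-line trailer is present, i.e. the body ends "\n\n".
  static boolean isComplete(final String rawBody)
  {
    return rawBody.endsWith("\n\n");
  }

  public static void main(String[] args)
  {
    System.out.println(isComplete("[1]\n[2]\n\n")); // true: complete
    System.out.println(isComplete("[1]\n[2]\n"));   // false: likely truncated
  }
}
```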
-### HTTP DELETE
-You can use the HTTP `DELETE` method to cancel a SQL query on either the Router or the Broker. When you cancel a query, Druid handles the cancellation in a best-effort manner. It marks the query canceled immediately and aborts the query execution as soon as possible. However, your query may run for a short time after your cancellation request.
+### Cancelling queries
+
+You can use the HTTP DELETE method to cancel a SQL query on either the Router or the Broker. When you cancel a query, Druid handles the cancellation in a best-effort manner. It marks the query canceled immediately and aborts the query execution as soon as possible. However, your query may run for a short time after your cancellation request.
Druid SQL's HTTP DELETE method uses the following syntax:
```
diff --git a/integration-tests/src/test/java/org/apache/druid/tests/query/ITSqlCancelTest.java b/integration-tests/src/test/java/org/apache/druid/tests/query/ITSqlCancelTest.java
index 73b7746..d77a911 100644
--- a/integration-tests/src/test/java/org/apache/druid/tests/query/ITSqlCancelTest.java
+++ b/integration-tests/src/test/java/org/apache/druid/tests/query/ITSqlCancelTest.java
@@ -88,7 +88,7 @@ public class ITSqlCancelTest
queryResponseFutures.add(
sqlClient.queryAsync(
sqlHelper.getQueryURL(config.getRouterUrl()),
- new SqlQuery(QUERY, null, false, ImmutableMap.of("sqlQueryId", queryId), null)
+ new SqlQuery(QUERY, null, false, false, false, ImmutableMap.of("sqlQueryId", queryId), null)
)
);
}
@@ -125,7 +125,7 @@ public class ITSqlCancelTest
final Future<StatusResponseHolder> queryResponseFuture = sqlClient
.queryAsync(
sqlHelper.getQueryURL(config.getRouterUrl()),
- new SqlQuery(QUERY, null, false, ImmutableMap.of("sqlQueryId", "validId"), null)
+ new SqlQuery(QUERY, null, false, false, false, ImmutableMap.of("sqlQueryId", "validId"), null)
);
// Wait until the sqlLifecycle is authorized and registered
diff --git a/services/src/test/java/org/apache/druid/server/AsyncQueryForwardingServletTest.java b/services/src/test/java/org/apache/druid/server/AsyncQueryForwardingServletTest.java
index 634f8a4..526a934 100644
--- a/services/src/test/java/org/apache/druid/server/AsyncQueryForwardingServletTest.java
+++ b/services/src/test/java/org/apache/druid/server/AsyncQueryForwardingServletTest.java
@@ -205,7 +205,7 @@ public class AsyncQueryForwardingServletTest extends BaseJettyTest
@Test
public void testSqlQueryProxy() throws Exception
{
- final SqlQuery query = new SqlQuery("SELECT * FROM foo", ResultFormat.ARRAY, false, null, null);
+ final SqlQuery query = new SqlQuery("SELECT * FROM foo", ResultFormat.ARRAY, false, false, false, null, null);
final QueryHostFinder hostFinder = EasyMock.createMock(QueryHostFinder.class);
EasyMock.expect(hostFinder.findServerSql(query))
.andReturn(new TestServer("http", "1.2.3.4", 9999)).once();
diff --git a/services/src/test/java/org/apache/druid/server/router/ManualTieredBrokerSelectorStrategyTest.java b/services/src/test/java/org/apache/druid/server/router/ManualTieredBrokerSelectorStrategyTest.java
index 68855fc..6bb0958 100644
--- a/services/src/test/java/org/apache/druid/server/router/ManualTieredBrokerSelectorStrategyTest.java
+++ b/services/src/test/java/org/apache/druid/server/router/ManualTieredBrokerSelectorStrategyTest.java
@@ -247,6 +247,8 @@ public class ManualTieredBrokerSelectorStrategyTest
"SELECT * FROM test",
null,
false,
+ false,
+ false,
queryContext,
null
);
diff --git a/services/src/test/java/org/apache/druid/server/router/TieredBrokerHostSelectorTest.java b/services/src/test/java/org/apache/druid/server/router/TieredBrokerHostSelectorTest.java
index bf1bfed..670b617 100644
--- a/services/src/test/java/org/apache/druid/server/router/TieredBrokerHostSelectorTest.java
+++ b/services/src/test/java/org/apache/druid/server/router/TieredBrokerHostSelectorTest.java
@@ -391,6 +391,8 @@ public class TieredBrokerHostSelectorTest
"SELECT * FROM test",
null,
false,
+ false,
+ false,
queryContext,
null
);
diff --git a/sql/src/main/java/org/apache/druid/sql/SqlRowTransformer.java b/sql/src/main/java/org/apache/druid/sql/SqlRowTransformer.java
index 5570c42..125ea9a 100644
--- a/sql/src/main/java/org/apache/druid/sql/SqlRowTransformer.java
+++ b/sql/src/main/java/org/apache/druid/sql/SqlRowTransformer.java
@@ -36,6 +36,7 @@ import java.util.List;
public class SqlRowTransformer
{
private final DateTimeZone timeZone;
+ private final RelDataType rowType;
private final List<String> fieldList;
// Remember which columns are time-typed, so we can emit ISO8601 instead of millis values.
@@ -45,6 +46,7 @@ public class SqlRowTransformer
SqlRowTransformer(DateTimeZone timeZone, RelDataType rowType)
{
this.timeZone = timeZone;
+ this.rowType = rowType;
this.fieldList = new ArrayList<>(rowType.getFieldCount());
this.timeColumns = new boolean[rowType.getFieldCount()];
this.dateColumns = new boolean[rowType.getFieldCount()];
@@ -56,6 +58,11 @@ public class SqlRowTransformer
}
}
+ public RelDataType getRowType()
+ {
+ return rowType;
+ }
+
public List<String> getFieldList()
{
return fieldList;
diff --git a/sql/src/main/java/org/apache/druid/sql/http/ArrayLinesWriter.java b/sql/src/main/java/org/apache/druid/sql/http/ArrayLinesWriter.java
index 29eae27..dc5849d 100644
--- a/sql/src/main/java/org/apache/druid/sql/http/ArrayLinesWriter.java
+++ b/sql/src/main/java/org/apache/druid/sql/http/ArrayLinesWriter.java
@@ -22,11 +22,11 @@ package org.apache.druid.sql.http;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.core.io.SerializedString;
import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.calcite.rel.type.RelDataType;
import javax.annotation.Nullable;
import java.io.IOException;
import java.io.OutputStream;
-import java.util.List;
public class ArrayLinesWriter implements ResultFormat.Writer
{
@@ -57,15 +57,13 @@ public class ArrayLinesWriter implements ResultFormat.Writer
}
@Override
- public void writeHeader(final List<String> columnNames) throws IOException
+ public void writeHeader(
+ final RelDataType rowType,
+ final boolean includeTypes,
+ final boolean includeSqlTypes
+ ) throws IOException
{
- jsonGenerator.writeStartArray();
-
- for (String columnName : columnNames) {
- jsonGenerator.writeString(columnName);
- }
-
- jsonGenerator.writeEndArray();
+ ArrayWriter.writeHeader(jsonGenerator, rowType, includeTypes, includeSqlTypes);
}
@Override
diff --git a/sql/src/main/java/org/apache/druid/sql/http/ArrayWriter.java b/sql/src/main/java/org/apache/druid/sql/http/ArrayWriter.java
index 678df49..67db8a2 100644
--- a/sql/src/main/java/org/apache/druid/sql/http/ArrayWriter.java
+++ b/sql/src/main/java/org/apache/druid/sql/http/ArrayWriter.java
@@ -21,11 +21,13 @@ package org.apache.druid.sql.http;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.druid.segment.column.RowSignature;
+import org.apache.druid.sql.calcite.table.RowSignatures;
import javax.annotation.Nullable;
import java.io.IOException;
import java.io.OutputStream;
-import java.util.List;
public class ArrayWriter implements ResultFormat.Writer
{
@@ -58,15 +60,13 @@ public class ArrayWriter implements ResultFormat.Writer
}
@Override
- public void writeHeader(final List<String> columnNames) throws IOException
+ public void writeHeader(
+ final RelDataType rowType,
+ final boolean includeTypes,
+ final boolean includeSqlTypes
+ ) throws IOException
{
- jsonGenerator.writeStartArray();
-
- for (String columnName : columnNames) {
- jsonGenerator.writeString(columnName);
- }
-
- jsonGenerator.writeEndArray();
+ writeHeader(jsonGenerator, rowType, includeTypes, includeSqlTypes);
}
@Override
@@ -92,4 +92,36 @@ public class ArrayWriter implements ResultFormat.Writer
{
jsonGenerator.close();
}
+
+ static void writeHeader(
+ final JsonGenerator jsonGenerator,
+ final RelDataType rowType,
+ final boolean includeTypes,
+ final boolean includeSqlTypes
+ ) throws IOException
+ {
+ final RowSignature signature = RowSignatures.fromRelDataType(rowType.getFieldNames(), rowType);
+
+ jsonGenerator.writeStartArray();
+ for (String columnName : signature.getColumnNames()) {
+ jsonGenerator.writeString(columnName);
+ }
+ jsonGenerator.writeEndArray();
+
+ if (includeTypes) {
+ jsonGenerator.writeStartArray();
+ for (int i = 0; i < signature.size(); i++) {
+ jsonGenerator.writeString(signature.getColumnType(i).get().asTypeString());
+ }
+ jsonGenerator.writeEndArray();
+ }
+
+ if (includeSqlTypes) {
+ jsonGenerator.writeStartArray();
+ for (int i = 0; i < signature.size(); i++) {
+ jsonGenerator.writeString(rowType.getFieldList().get(i).getType().getSqlTypeName().getName());
+ }
+ jsonGenerator.writeEndArray();
+ }
+ }
}
diff --git a/sql/src/main/java/org/apache/druid/sql/http/CsvWriter.java b/sql/src/main/java/org/apache/druid/sql/http/CsvWriter.java
index d89c752..253c957 100644
--- a/sql/src/main/java/org/apache/druid/sql/http/CsvWriter.java
+++ b/sql/src/main/java/org/apache/druid/sql/http/CsvWriter.java
@@ -20,6 +20,9 @@
package org.apache.druid.sql.http;
import com.opencsv.CSVWriter;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.druid.segment.column.RowSignature;
+import org.apache.druid.sql.calcite.table.RowSignatures;
import javax.annotation.Nullable;
import java.io.BufferedWriter;
@@ -59,9 +62,35 @@ public class CsvWriter implements ResultFormat.Writer
}
@Override
- public void writeHeader(final List<String> columnNames)
+ public void writeHeader(
+ final RelDataType rowType,
+ final boolean includeTypes,
+ final boolean includeSqlTypes
+ )
{
- writer.writeNext(columnNames.toArray(new String[0]), false);
+ final RowSignature signature = RowSignatures.fromRelDataType(rowType.getFieldNames(), rowType);
+
+ writer.writeNext(signature.getColumnNames().toArray(new String[0]), false);
+
+ if (includeTypes) {
+ final String[] types = new String[rowType.getFieldCount()];
+
+ for (int i = 0; i < signature.size(); i++) {
+ types[i] = signature.getColumnType(i).get().asTypeString();
+ }
+
+ writer.writeNext(types, false);
+ }
+
+ if (includeSqlTypes) {
+ final String[] sqlTypes = new String[rowType.getFieldCount()];
+
+ for (int i = 0; i < signature.size(); i++) {
+ sqlTypes[i] = rowType.getFieldList().get(i).getType().getSqlTypeName().getName();
+ }
+
+ writer.writeNext(sqlTypes, false);
+ }
}
@Override
diff --git a/sql/src/main/java/org/apache/druid/sql/http/ObjectLinesWriter.java b/sql/src/main/java/org/apache/druid/sql/http/ObjectLinesWriter.java
index 887b272..0d91514 100644
--- a/sql/src/main/java/org/apache/druid/sql/http/ObjectLinesWriter.java
+++ b/sql/src/main/java/org/apache/druid/sql/http/ObjectLinesWriter.java
@@ -22,11 +22,11 @@ package org.apache.druid.sql.http;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.core.io.SerializedString;
import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.calcite.rel.type.RelDataType;
import javax.annotation.Nullable;
import java.io.IOException;
import java.io.OutputStream;
-import java.util.List;
public class ObjectLinesWriter implements ResultFormat.Writer
{
@@ -57,15 +57,13 @@ public class ObjectLinesWriter implements ResultFormat.Writer
}
@Override
- public void writeHeader(final List<String> columnNames) throws IOException
+ public void writeHeader(
+ final RelDataType rowType,
+ final boolean includeTypes,
+ final boolean includeSqlTypes
+ ) throws IOException
{
- jsonGenerator.writeStartObject();
-
- for (String columnName : columnNames) {
- jsonGenerator.writeNullField(columnName);
- }
-
- jsonGenerator.writeEndObject();
+ ObjectWriter.writeHeader(jsonGenerator, rowType, includeTypes, includeSqlTypes);
}
@Override
diff --git a/sql/src/main/java/org/apache/druid/sql/http/ObjectWriter.java b/sql/src/main/java/org/apache/druid/sql/http/ObjectWriter.java
index ac7b0cf..0f7d9f2 100644
--- a/sql/src/main/java/org/apache/druid/sql/http/ObjectWriter.java
+++ b/sql/src/main/java/org/apache/druid/sql/http/ObjectWriter.java
@@ -21,14 +21,19 @@ package org.apache.druid.sql.http;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.druid.segment.column.RowSignature;
+import org.apache.druid.sql.calcite.table.RowSignatures;
import javax.annotation.Nullable;
import java.io.IOException;
import java.io.OutputStream;
-import java.util.List;
public class ObjectWriter implements ResultFormat.Writer
{
+ static final String TYPE_HEADER_NAME = "type";
+ static final String SQL_TYPE_HEADER_NAME = "sqlType";
+
private final JsonGenerator jsonGenerator;
private final OutputStream outputStream;
@@ -58,15 +63,13 @@ public class ObjectWriter implements ResultFormat.Writer
}
@Override
- public void writeHeader(final List<String> columnNames) throws IOException
+ public void writeHeader(
+ final RelDataType rowType,
+ final boolean includeTypes,
+ final boolean includeSqlTypes
+ ) throws IOException
{
- jsonGenerator.writeStartObject();
-
- for (String columnName : columnNames) {
- jsonGenerator.writeNullField(columnName);
- }
-
- jsonGenerator.writeEndObject();
+ writeHeader(jsonGenerator, rowType, includeTypes, includeSqlTypes);
}
@Override
@@ -93,4 +96,44 @@ public class ObjectWriter implements ResultFormat.Writer
{
jsonGenerator.close();
}
+
+ static void writeHeader(
+ final JsonGenerator jsonGenerator,
+ final RelDataType rowType,
+ final boolean includeTypes,
+ final boolean includeSqlTypes
+ ) throws IOException
+ {
+ final RowSignature signature = RowSignatures.fromRelDataType(rowType.getFieldNames(), rowType);
+
+ jsonGenerator.writeStartObject();
+
+ for (int i = 0; i < signature.size(); i++) {
+ jsonGenerator.writeFieldName(signature.getColumnName(i));
+
+ if (!includeTypes && !includeSqlTypes) {
+ jsonGenerator.writeNull();
+ } else {
+ jsonGenerator.writeStartObject();
+
+ if (includeTypes) {
+ jsonGenerator.writeStringField(
+ ObjectWriter.TYPE_HEADER_NAME,
+ signature.getColumnType(i).get().asTypeString()
+ );
+ }
+
+ if (includeSqlTypes) {
+ jsonGenerator.writeStringField(
+ ObjectWriter.SQL_TYPE_HEADER_NAME,
+ rowType.getFieldList().get(i).getType().getSqlTypeName().getName()
+ );
+ }
+
+ jsonGenerator.writeEndObject();
+ }
+ }
+
+ jsonGenerator.writeEndObject();
+ }
}
diff --git a/sql/src/main/java/org/apache/druid/sql/http/ResultFormat.java b/sql/src/main/java/org/apache/druid/sql/http/ResultFormat.java
index 3e77076..0c450b5 100644
--- a/sql/src/main/java/org/apache/druid/sql/http/ResultFormat.java
+++ b/sql/src/main/java/org/apache/druid/sql/http/ResultFormat.java
@@ -22,6 +22,7 @@ package org.apache.druid.sql.http;
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonValue;
import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.calcite.rel.type.RelDataType;
import org.apache.druid.java.util.common.StringUtils;
import javax.annotation.Nullable;
@@ -29,7 +30,6 @@ import javax.ws.rs.core.MediaType;
import java.io.Closeable;
import java.io.IOException;
import java.io.OutputStream;
-import java.util.List;
public enum ResultFormat
{
@@ -128,7 +128,7 @@ public enum ResultFormat
*/
void writeResponseStart() throws IOException;
- void writeHeader(List<String> columnNames) throws IOException;
+ void writeHeader(RelDataType rowType, boolean includeTypes, boolean includeSqlTypes) throws IOException;
/**
* Start of each result row.
diff --git a/sql/src/main/java/org/apache/druid/sql/http/SqlQuery.java b/sql/src/main/java/org/apache/druid/sql/http/SqlQuery.java
index 893f90f..460a8be 100644
--- a/sql/src/main/java/org/apache/druid/sql/http/SqlQuery.java
+++ b/sql/src/main/java/org/apache/druid/sql/http/SqlQuery.java
@@ -20,11 +20,13 @@
package org.apache.druid.sql.http;
import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import org.apache.calcite.avatica.remote.TypedValue;
+import org.apache.druid.java.util.common.ISE;
import java.util.List;
import java.util.Map;
@@ -46,6 +48,8 @@ public class SqlQuery
private final String query;
private final ResultFormat resultFormat;
private final boolean header;
+ private final boolean typesHeader;
+ private final boolean sqlTypesHeader;
private final Map<String, Object> context;
private final List<SqlParameter> parameters;
@@ -54,6 +58,8 @@ public class SqlQuery
@JsonProperty("query") final String query,
@JsonProperty("resultFormat") final ResultFormat resultFormat,
@JsonProperty("header") final boolean header,
+ @JsonProperty("typesHeader") final boolean typesHeader,
+ @JsonProperty("sqlTypesHeader") final boolean sqlTypesHeader,
@JsonProperty("context") final Map<String, Object> context,
@JsonProperty("parameters") final List<SqlParameter> parameters
)
@@ -61,8 +67,18 @@ public class SqlQuery
this.query = Preconditions.checkNotNull(query, "query");
this.resultFormat = resultFormat == null ? ResultFormat.OBJECT : resultFormat;
this.header = header;
+ this.typesHeader = typesHeader;
+ this.sqlTypesHeader = sqlTypesHeader;
this.context = context == null ? ImmutableMap.of() : context;
this.parameters = parameters == null ? ImmutableList.of() : parameters;
+
+ if (typesHeader && !header) {
+ throw new ISE("Cannot include 'typesHeader' without 'header'");
+ }
+
+ if (sqlTypesHeader && !header) {
+ throw new ISE("Cannot include 'sqlTypesHeader' without 'header'");
+ }
}
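
For illustration (a sketch, not part of the commit), the new constructor arguments in use; assumes the druid-sql module is on the classpath. Type headers ride along with `header`, and requesting them alone fails fast:

```java
import org.apache.druid.sql.http.ResultFormat;
import org.apache.druid.sql.http.SqlQuery;

public class SqlQueryContractDemo
{
  public static void main(String[] args)
  {
    // OK: header = true, so typesHeader and sqlTypesHeader may both be requested.
    new SqlQuery("SELECT 1", ResultFormat.ARRAY, true, true, true, null, null);

    // Throws ISE: "Cannot include 'typesHeader' without 'header'".
    new SqlQuery("SELECT 1", ResultFormat.ARRAY, false, true, false, null, null);
  }
}
```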
@JsonProperty
@@ -78,11 +94,26 @@ public class SqlQuery
}
@JsonProperty("header")
+ @JsonInclude(JsonInclude.Include.NON_DEFAULT)
public boolean includeHeader()
{
return header;
}
+ @JsonProperty("typesHeader")
+ @JsonInclude(JsonInclude.Include.NON_DEFAULT)
+ public boolean includeTypesHeader()
+ {
+ return typesHeader;
+ }
+
+ @JsonProperty("sqlTypesHeader")
+ @JsonInclude(JsonInclude.Include.NON_DEFAULT)
+ public boolean includeSqlTypesHeader()
+ {
+ return sqlTypesHeader;
+ }
+
@JsonProperty
public Map<String, Object> getContext()
{
@@ -111,6 +142,8 @@ public class SqlQuery
}
final SqlQuery sqlQuery = (SqlQuery) o;
return header == sqlQuery.header &&
+ typesHeader == sqlQuery.typesHeader &&
+ sqlTypesHeader == sqlQuery.sqlTypesHeader &&
Objects.equals(query, sqlQuery.query) &&
resultFormat == sqlQuery.resultFormat &&
Objects.equals(context, sqlQuery.context) &&
@@ -120,7 +153,7 @@ public class SqlQuery
@Override
public int hashCode()
{
- return Objects.hash(query, resultFormat, header, context, parameters);
+ return Objects.hash(query, resultFormat, header, typesHeader, sqlTypesHeader, context, parameters);
}
@Override
@@ -130,6 +163,8 @@ public class SqlQuery
"query='" + query + '\'' +
", resultFormat=" + resultFormat +
", header=" + header +
+ ", typesHeader=" + typesHeader +
+ ", sqlTypesHeader=" + sqlTypesHeader +
", context=" + context +
", parameters=" + parameters +
'}';
diff --git a/sql/src/main/java/org/apache/druid/sql/http/SqlResource.java b/sql/src/main/java/org/apache/druid/sql/http/SqlResource.java
index 3e76696..5c92051 100644
--- a/sql/src/main/java/org/apache/druid/sql/http/SqlResource.java
+++ b/sql/src/main/java/org/apache/druid/sql/http/SqlResource.java
@@ -74,6 +74,8 @@ import java.util.stream.Collectors;
public class SqlResource
{
public static final String SQL_QUERY_ID_RESPONSE_HEADER = "X-Druid-SQL-Query-Id";
+ public static final String SQL_HEADER_RESPONSE_HEADER = "X-Druid-SQL-Header-Included";
+ public static final String SQL_HEADER_VALUE = "yes";
private static final Logger log = new Logger(SqlResource.class);
private final ObjectMapper jsonMapper;
@@ -126,7 +128,7 @@ public class SqlResource
final Yielder<Object[]> yielder0 = Yielders.each(sequence);
try {
- return Response
+ final Response.ResponseBuilder responseBuilder = Response
.ok(
(StreamingOutput) outputStream -> {
Exception e = null;
@@ -138,7 +140,11 @@ public class SqlResource
writer.writeResponseStart();
if (sqlQuery.includeHeader()) {
- writer.writeHeader(rowTransformer.getFieldList());
+ writer.writeHeader(
+ rowTransformer.getRowType(),
+ sqlQuery.includeTypesHeader(),
+ sqlQuery.includeSqlTypesHeader()
+ );
}
while (!yielder.isDone()) {
@@ -165,8 +171,13 @@ public class SqlResource
}
}
)
- .header(SQL_QUERY_ID_RESPONSE_HEADER, sqlQueryId)
- .build();
+ .header(SQL_QUERY_ID_RESPONSE_HEADER, sqlQueryId);
+
+ if (sqlQuery.includeHeader()) {
+ responseBuilder.header(SQL_HEADER_RESPONSE_HEADER, SQL_HEADER_VALUE);
+ }
+
+ return responseBuilder.build();
}
catch (Throwable e) {
// make sure to close yielder if anything happened before starting to serialize the response.
@@ -192,7 +203,8 @@ public class SqlResource
}
catch (ForbiddenException e) {
endLifecycleWithoutEmittingMetrics(sqlQueryId, lifecycle);
- throw (ForbiddenException) serverConfig.getErrorResponseTransformStrategy().transformIfNeeded(e); // let ForbiddenExceptionMapper handle this
+ throw (ForbiddenException) serverConfig.getErrorResponseTransformStrategy()
+ .transformIfNeeded(e); // let ForbiddenExceptionMapper handle this
}
// calcite throws a java.lang.AssertionError which is type error not exception. using throwable will catch all
catch (Throwable e) {
@@ -238,7 +250,8 @@ public class SqlResource
sqlLifecycleManager.remove(sqlQueryId, lifecycle);
}
- private Response buildNonOkResponse(int status, SanitizableException e, String sqlQueryId) throws JsonProcessingException
+ private Response buildNonOkResponse(int status, SanitizableException e, String sqlQueryId)
+ throws JsonProcessingException
{
return Response.status(status)
.type(MediaType.APPLICATION_JSON_TYPE)
diff --git a/sql/src/test/java/org/apache/druid/sql/calcite/http/SqlQueryTest.java b/sql/src/test/java/org/apache/druid/sql/calcite/http/SqlQueryTest.java
index 65275f0..8d0507b 100644
--- a/sql/src/test/java/org/apache/druid/sql/calcite/http/SqlQueryTest.java
+++ b/sql/src/test/java/org/apache/druid/sql/calcite/http/SqlQueryTest.java
@@ -42,6 +42,8 @@ public class SqlQueryTest extends CalciteTestBase
"SELECT ?",
ResultFormat.ARRAY,
true,
+ true,
+ true,
ImmutableMap.of("useCache", false),
ImmutableList.of(new SqlParameter(SqlType.INTEGER, 1))
);
diff --git a/sql/src/test/java/org/apache/druid/sql/http/SqlResourceTest.java b/sql/src/test/java/org/apache/druid/sql/http/SqlResourceTest.java
index 8ab76d2..c09a611 100644
--- a/sql/src/test/java/org/apache/druid/sql/http/SqlResourceTest.java
+++ b/sql/src/test/java/org/apache/druid/sql/http/SqlResourceTest.java
@@ -98,6 +98,7 @@ import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
+import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
@@ -113,6 +114,15 @@ public class SqlResourceTest extends CalciteTestBase
private static final ObjectMapper JSON_MAPPER = new DefaultObjectMapper();
private static final String DUMMY_SQL_QUERY_ID = "dummy";
+ private static final List<String> EXPECTED_COLUMNS_FOR_RESULT_FORMAT_TESTS =
+ Arrays.asList("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1", "EXPR$8");
+
+ private static final List<String> EXPECTED_TYPES_FOR_RESULT_FORMAT_TESTS =
+ Arrays.asList("LONG", "LONG", "STRING", "STRING", "STRING", "FLOAT", "DOUBLE", "COMPLEX<hyperUnique>", "STRING");
+
+ private static final List<String> EXPECTED_SQL_TYPES_FOR_RESULT_FORMAT_TESTS =
+ Arrays.asList("TIMESTAMP", "BIGINT", "VARCHAR", "VARCHAR", "VARCHAR", "FLOAT", "DOUBLE", "OTHER", "VARCHAR");
+
private static QueryRunnerFactoryConglomerate conglomerate;
private static Closer resourceCloser;
@Rule
@@ -362,6 +372,8 @@ public class SqlResourceTest extends CalciteTestBase
"SELECT __time, CAST(__time AS DATE) AS t2 FROM druid.foo LIMIT 1",
ResultFormat.OBJECT,
false,
+ false,
+ false,
null,
null
)
@@ -383,6 +395,8 @@ public class SqlResourceTest extends CalciteTestBase
"SELECT __time, CAST(__time AS DATE) AS t2 FROM druid.foo LIMIT ?",
ResultFormat.OBJECT,
false,
+ false,
+ false,
null,
ImmutableList.of(new SqlParameter(SqlType.INTEGER, 1))
)
@@ -404,6 +418,8 @@ public class SqlResourceTest extends CalciteTestBase
"SELECT __time, CAST(__time AS DATE) AS t2 FROM druid.foo LIMIT 1",
ResultFormat.OBJECT,
false,
+ false,
+ false,
ImmutableMap.of(PlannerContext.CTX_SQL_TIME_ZONE, "America/Los_Angeles"),
null
)
@@ -425,6 +441,8 @@ public class SqlResourceTest extends CalciteTestBase
"SELECT MAX(__time) as t1, MAX(__time) FILTER(WHERE dim1 = 'non_existing') as t2 FROM druid.foo",
ResultFormat.OBJECT,
false,
+ false,
+ false,
null,
null
)
@@ -433,10 +451,14 @@ public class SqlResourceTest extends CalciteTestBase
Assert.assertEquals(
NullHandling.replaceWithDefault() ?
ImmutableList.of(
- ImmutableMap.of("t1", "2001-01-03T00:00:00.000Z", "t2", "-292275055-05-16T16:47:04.192Z") // t2 represents Long.MIN converted to a timestamp
+ ImmutableMap.of("t1", "2001-01-03T00:00:00.000Z", "t2", "-292275055-05-16T16:47:04.192Z")
+ // t2 represents Long.MIN converted to a timestamp
) :
ImmutableList.of(
- Maps.transformValues(ImmutableMap.of("t1", "2001-01-03T00:00:00.000Z", "t2", ""), (val) -> "".equals(val) ? null : val)
+ Maps.transformValues(
+ ImmutableMap.of("t1", "2001-01-03T00:00:00.000Z", "t2", ""),
+ (val) -> "".equals(val) ? null : val
+ )
),
rows
);
@@ -446,7 +468,15 @@ public class SqlResourceTest extends CalciteTestBase
public void testFieldAliasingSelect() throws Exception
{
final List<Map<String, Object>> rows = doPost(
- new SqlQuery("SELECT dim2 \"x\", dim2 \"y\" FROM druid.foo LIMIT 1", ResultFormat.OBJECT, false, null, null)
+ new SqlQuery(
+ "SELECT dim2 \"x\", dim2 \"y\" FROM druid.foo LIMIT 1",
+ ResultFormat.OBJECT,
+ false,
+ false,
+ false,
+ null,
+ null
+ )
).rhs;
Assert.assertEquals(
@@ -461,7 +491,15 @@ public class SqlResourceTest extends CalciteTestBase
public void testFieldAliasingGroupBy() throws Exception
{
final List<Map<String, Object>> rows = doPost(
- new SqlQuery("SELECT dim2 \"x\", dim2 \"y\" FROM druid.foo GROUP BY dim2", ResultFormat.OBJECT, false, null, null)
+ new SqlQuery(
+ "SELECT dim2 \"x\", dim2 \"y\" FROM druid.foo GROUP BY dim2",
+ ResultFormat.OBJECT,
+ false,
+ false,
+ false,
+ null,
+ null
+ )
).rhs;
Assert.assertEquals(
@@ -513,7 +551,10 @@ public class SqlResourceTest extends CalciteTestBase
nullStr
)
),
- doPost(new SqlQuery(query, ResultFormat.ARRAY, false, null, null), new TypeReference<List<List<Object>>>() {}).rhs
+ doPost(
+ new SqlQuery(query, ResultFormat.ARRAY, false, false, false, null, null),
+ new TypeReference<List<List<Object>>>() {}
+ ).rhs
);
}
@@ -524,7 +565,7 @@ public class SqlResourceTest extends CalciteTestBase
final String query = "SELECT cnt FROM foo";
final Pair<QueryException, String> response =
- doPostRaw(new SqlQuery(query, ResultFormat.ARRAY, false, null, null), req);
+ doPostRaw(new SqlQuery(query, ResultFormat.ARRAY, false, false, false, null, null), req);
// Truncated response: missing final ]
Assert.assertNull(response.lhs);
@@ -538,7 +579,7 @@ public class SqlResourceTest extends CalciteTestBase
final String query = "SELECT cnt FROM foo";
final Pair<QueryException, String> response =
- doPostRaw(new SqlQuery(query, ResultFormat.OBJECT, false, null, null), req);
+ doPostRaw(new SqlQuery(query, ResultFormat.OBJECT, false, false, false, null, null), req);
// Truncated response: missing final ]
Assert.assertNull(response.lhs);
@@ -552,7 +593,7 @@ public class SqlResourceTest extends CalciteTestBase
final String query = "SELECT cnt FROM foo";
final Pair<QueryException, String> response =
- doPostRaw(new SqlQuery(query, ResultFormat.ARRAYLINES, false, null, null), req);
+ doPostRaw(new SqlQuery(query, ResultFormat.ARRAYLINES, false, false, false, null, null), req);
// Truncated response: missing final LFLF
Assert.assertNull(response.lhs);
@@ -566,7 +607,7 @@ public class SqlResourceTest extends CalciteTestBase
final String query = "SELECT cnt FROM foo";
final Pair<QueryException, String> response =
- doPostRaw(new SqlQuery(query, ResultFormat.OBJECTLINES, false, null, null), req);
+ doPostRaw(new SqlQuery(query, ResultFormat.OBJECTLINES, false, false, false, null, null), req);
// Truncated response: missing final LFLF
Assert.assertNull(response.lhs);
@@ -580,7 +621,7 @@ public class SqlResourceTest extends CalciteTestBase
final String query = "SELECT cnt FROM foo";
final Pair<QueryException, String> response =
- doPostRaw(new SqlQuery(query, ResultFormat.CSV, false, null, null), req);
+ doPostRaw(new SqlQuery(query, ResultFormat.CSV, false, false, false, null, null), req);
// Truncated response: missing final LFLF
Assert.assertNull(response.lhs);
@@ -595,7 +636,9 @@ public class SqlResourceTest extends CalciteTestBase
Assert.assertEquals(
ImmutableList.of(
- Arrays.asList("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1", "EXPR$8"),
+ EXPECTED_COLUMNS_FOR_RESULT_FORMAT_TESTS,
+ EXPECTED_TYPES_FOR_RESULT_FORMAT_TESTS,
+ EXPECTED_SQL_TYPES_FOR_RESULT_FORMAT_TESTS,
Arrays.asList(
"2000-01-01T00:00:00.000Z",
1,
@@ -619,7 +662,10 @@ public class SqlResourceTest extends CalciteTestBase
nullStr
)
),
- doPost(new SqlQuery(query, ResultFormat.ARRAY, true, null, null), new TypeReference<List<List<Object>>>() {}).rhs
+ doPost(
+ new SqlQuery(query, ResultFormat.ARRAY, true, true, true, null, null),
+ new TypeReference<List<List<Object>>>() {}
+ ).rhs
);
}
@@ -628,7 +674,7 @@ public class SqlResourceTest extends CalciteTestBase
{
final String query = "SELECT *, CASE dim2 WHEN '' THEN dim2 END FROM foo LIMIT 2";
final Pair<QueryException, String> pair = doPostRaw(
- new SqlQuery(query, ResultFormat.ARRAYLINES, false, null, null)
+ new SqlQuery(query, ResultFormat.ARRAYLINES, false, false, false, null, null)
);
Assert.assertNull(pair.lhs);
final String response = pair.rhs;
@@ -673,18 +719,17 @@ public class SqlResourceTest extends CalciteTestBase
{
final String query = "SELECT *, CASE dim2 WHEN '' THEN dim2 END FROM foo LIMIT 2";
final Pair<QueryException, String> pair = doPostRaw(
- new SqlQuery(query, ResultFormat.ARRAYLINES, true, null, null)
+ new SqlQuery(query, ResultFormat.ARRAYLINES, true, true, true, null, null)
);
Assert.assertNull(pair.lhs);
final String response = pair.rhs;
final String nullStr = NullHandling.replaceWithDefault() ? "" : null;
final List<String> lines = Splitter.on('\n').splitToList(response);
- Assert.assertEquals(5, lines.size());
- Assert.assertEquals(
- Arrays.asList("__time", "cnt", "dim1", "dim2", "dim3", "m1", "m2", "unique_dim1", "EXPR$8"),
- JSON_MAPPER.readValue(lines.get(0), List.class)
- );
+ Assert.assertEquals(7, lines.size());
+ Assert.assertEquals(EXPECTED_COLUMNS_FOR_RESULT_FORMAT_TESTS, JSON_MAPPER.readValue(lines.get(0), List.class));
+ Assert.assertEquals(EXPECTED_TYPES_FOR_RESULT_FORMAT_TESTS, JSON_MAPPER.readValue(lines.get(1), List.class));
+ Assert.assertEquals(EXPECTED_SQL_TYPES_FOR_RESULT_FORMAT_TESTS, JSON_MAPPER.readValue(lines.get(2), List.class));
Assert.assertEquals(
Arrays.asList(
"2000-01-01T00:00:00.000Z",
@@ -697,7 +742,7 @@ public class SqlResourceTest extends CalciteTestBase
"org.apache.druid.hll.VersionOneHyperLogLogCollector",
nullStr
),
- JSON_MAPPER.readValue(lines.get(1), List.class)
+ JSON_MAPPER.readValue(lines.get(3), List.class)
);
Assert.assertEquals(
Arrays.asList(
@@ -711,10 +756,10 @@ public class SqlResourceTest extends CalciteTestBase
"org.apache.druid.hll.VersionOneHyperLogLogCollector",
nullStr
),
- JSON_MAPPER.readValue(lines.get(2), List.class)
+ JSON_MAPPER.readValue(lines.get(4), List.class)
);
- Assert.assertEquals("", lines.get(3));
- Assert.assertEquals("", lines.get(4));
+ Assert.assertEquals("", lines.get(5));
+ Assert.assertEquals("", lines.get(6));
}
@Test
@@ -757,7 +802,7 @@ public class SqlResourceTest extends CalciteTestBase
.build()
).stream().map(transformer).collect(Collectors.toList()),
doPost(
- new SqlQuery(query, ResultFormat.OBJECT, false, null, null),
+ new SqlQuery(query, ResultFormat.OBJECT, false, false, false, null, null),
new TypeReference<List<Map<String, Object>>>() {}
).rhs
);
@@ -768,7 +813,7 @@ public class SqlResourceTest extends CalciteTestBase
{
final String query = "SELECT *, CASE dim2 WHEN '' THEN dim2 END FROM foo LIMIT 2";
final Pair<QueryException, String> pair = doPostRaw(
- new SqlQuery(query, ResultFormat.OBJECTLINES, false, null, null)
+ new SqlQuery(query, ResultFormat.OBJECTLINES, false, false, false, null, null)
);
Assert.assertNull(pair.lhs);
final String response = pair.rhs;
@@ -821,11 +866,137 @@ public class SqlResourceTest extends CalciteTestBase
}
@Test
+ public void testObjectLinesResultFormatWithMinimalHeader() throws Exception
+ {
+ final String query = "SELECT *, CASE dim2 WHEN '' THEN dim2 END FROM foo LIMIT 2";
+ final Pair<QueryException, String> pair =
+ doPostRaw(new SqlQuery(query, ResultFormat.OBJECTLINES, true, false, false, null, null));
+ Assert.assertNull(pair.lhs);
+ final String response = pair.rhs;
+ final String nullStr = NullHandling.replaceWithDefault() ? "" : null;
+ final Function<Map<String, Object>, Map<String, Object>> transformer = m -> Maps.transformEntries(
+ m,
+ (k, v) -> "EXPR$8".equals(k) || ("dim2".equals(k) && v.toString().isEmpty()) ? nullStr : v
+ );
+ final List<String> lines = Splitter.on('\n').splitToList(response);
+
+ final Map<String, Object> expectedHeader = new HashMap<>();
+ for (final String column : EXPECTED_COLUMNS_FOR_RESULT_FORMAT_TESTS) {
+ expectedHeader.put(column, null);
+ }
+
+ Assert.assertEquals(5, lines.size());
+ Assert.assertEquals(expectedHeader, JSON_MAPPER.readValue(lines.get(0), Object.class));
+ Assert.assertEquals(
+ transformer.apply(
+ ImmutableMap
+ .<String, Object>builder()
+ .put("__time", "2000-01-01T00:00:00.000Z")
+ .put("cnt", 1)
+ .put("dim1", "")
+ .put("dim2", "a")
+ .put("dim3", "[\"a\",\"b\"]")
+ .put("m1", 1.0)
+ .put("m2", 1.0)
+ .put("unique_dim1", "org.apache.druid.hll.VersionOneHyperLogLogCollector")
+ .put("EXPR$8", "")
+ .build()
+ ),
+ JSON_MAPPER.readValue(lines.get(1), Object.class)
+ );
+ Assert.assertEquals(
+ transformer.apply(
+ ImmutableMap
+ .<String, Object>builder()
+ .put("__time", "2000-01-02T00:00:00.000Z")
+ .put("cnt", 1)
+ .put("dim1", "10.1")
+ .put("dim2", "")
+ .put("dim3", "[\"b\",\"c\"]")
+ .put("m1", 2.0)
+ .put("m2", 2.0)
+ .put("unique_dim1", "org.apache.druid.hll.VersionOneHyperLogLogCollector")
+ .put("EXPR$8", "")
+ .build()
+ ),
+ JSON_MAPPER.readValue(lines.get(2), Object.class)
+ );
+ Assert.assertEquals("", lines.get(3));
+ Assert.assertEquals("", lines.get(4));
+ }
+
+ @Test
+ public void testObjectLinesResultFormatWithFullHeader() throws Exception
+ {
+ final String query = "SELECT *, CASE dim2 WHEN '' THEN dim2 END FROM foo LIMIT 2";
+ final Pair<QueryException, String> pair =
+ doPostRaw(new SqlQuery(query, ResultFormat.OBJECTLINES, true, true, true, null, null));
+ Assert.assertNull(pair.lhs);
+ final String response = pair.rhs;
+ final String nullStr = NullHandling.replaceWithDefault() ? "" : null;
+ final Function<Map<String, Object>, Map<String, Object>> transformer = m -> Maps.transformEntries(
+ m,
+ (k, v) -> "EXPR$8".equals(k) || ("dim2".equals(k) && v.toString().isEmpty()) ? nullStr : v
+ );
+ final List<String> lines = Splitter.on('\n').splitToList(response);
+
+ final Map<String, Object> expectedHeader = new HashMap<>();
+ for (int i = 0; i < EXPECTED_COLUMNS_FOR_RESULT_FORMAT_TESTS.size(); i++) {
+ expectedHeader.put(
+ EXPECTED_COLUMNS_FOR_RESULT_FORMAT_TESTS.get(i),
+ ImmutableMap.of(
+ ObjectWriter.TYPE_HEADER_NAME, EXPECTED_TYPES_FOR_RESULT_FORMAT_TESTS.get(i),
+ ObjectWriter.SQL_TYPE_HEADER_NAME, EXPECTED_SQL_TYPES_FOR_RESULT_FORMAT_TESTS.get(i)
+ )
+ );
+ }
+
+ Assert.assertEquals(5, lines.size());
+ Assert.assertEquals(expectedHeader, JSON_MAPPER.readValue(lines.get(0), Object.class));
+ Assert.assertEquals(
+ transformer.apply(
+ ImmutableMap
+ .<String, Object>builder()
+ .put("__time", "2000-01-01T00:00:00.000Z")
+ .put("cnt", 1)
+ .put("dim1", "")
+ .put("dim2", "a")
+ .put("dim3", "[\"a\",\"b\"]")
+ .put("m1", 1.0)
+ .put("m2", 1.0)
+ .put("unique_dim1", "org.apache.druid.hll.VersionOneHyperLogLogCollector")
+ .put("EXPR$8", "")
+ .build()
+ ),
+ JSON_MAPPER.readValue(lines.get(1), Object.class)
+ );
+ Assert.assertEquals(
+ transformer.apply(
+ ImmutableMap
+ .<String, Object>builder()
+ .put("__time", "2000-01-02T00:00:00.000Z")
+ .put("cnt", 1)
+ .put("dim1", "10.1")
+ .put("dim2", "")
+ .put("dim3", "[\"b\",\"c\"]")
+ .put("m1", 2.0)
+ .put("m2", 2.0)
+ .put("unique_dim1", "org.apache.druid.hll.VersionOneHyperLogLogCollector")
+ .put("EXPR$8", "")
+ .build()
+ ),
+ JSON_MAPPER.readValue(lines.get(2), Object.class)
+ );
+ Assert.assertEquals("", lines.get(3));
+ Assert.assertEquals("", lines.get(4));
+ }
+
+ @Test
public void testCsvResultFormat() throws Exception
{
final String query = "SELECT *, CASE dim2 WHEN '' THEN dim2 END FROM foo LIMIT 2";
final Pair<QueryException, String> pair = doPostRaw(
- new SqlQuery(query, ResultFormat.CSV, false, null, null)
+ new SqlQuery(query, ResultFormat.CSV, false, false, false, null, null)
);
Assert.assertNull(pair.lhs);
final String response = pair.rhs;
@@ -847,7 +1018,7 @@ public class SqlResourceTest extends CalciteTestBase
{
final String query = "SELECT *, CASE dim2 WHEN '' THEN dim2 END FROM foo LIMIT 2";
final Pair<QueryException, String> pair = doPostRaw(
- new SqlQuery(query, ResultFormat.CSV, true, null, null)
+ new SqlQuery(query, ResultFormat.CSV, true, true, true, null, null)
);
Assert.assertNull(pair.lhs);
final String response = pair.rhs;
@@ -855,7 +1026,9 @@ public class SqlResourceTest extends CalciteTestBase
Assert.assertEquals(
ImmutableList.of(
- "__time,cnt,dim1,dim2,dim3,m1,m2,unique_dim1,EXPR$8",
+ String.join(",", EXPECTED_COLUMNS_FOR_RESULT_FORMAT_TESTS),
+ String.join(",", EXPECTED_TYPES_FOR_RESULT_FORMAT_TESTS),
+ String.join(",", EXPECTED_SQL_TYPES_FOR_RESULT_FORMAT_TESTS),
"2000-01-01T00:00:00.000Z,1,,a,\"[\"\"a\"\",\"\"b\"\"]\",1.0,1.0,org.apache.druid.hll.VersionOneHyperLogLogCollector,",
"2000-01-02T00:00:00.000Z,1,10.1,,\"[\"\"b\"\",\"\"c\"\"]\",2.0,2.0,org.apache.druid.hll.VersionOneHyperLogLogCollector,",
"",
@@ -870,7 +1043,15 @@ public class SqlResourceTest extends CalciteTestBase
{
Map<String, Object> queryContext = ImmutableMap.of(PlannerContext.CTX_SQL_QUERY_ID, DUMMY_SQL_QUERY_ID);
final List<Map<String, Object>> rows = doPost(
- new SqlQuery("EXPLAIN PLAN FOR SELECT COUNT(*) AS cnt FROM druid.foo", ResultFormat.OBJECT, false, queryContext, null)
+ new SqlQuery(
+ "EXPLAIN PLAN FOR SELECT COUNT(*) AS cnt FROM druid.foo",
+ ResultFormat.OBJECT,
+ false,
+ false,
+ false,
+ queryContext,
+ null
+ )
).rhs;
Assert.assertEquals(
@@ -946,6 +1127,8 @@ public class SqlResourceTest extends CalciteTestBase
"SELECT DISTINCT dim1 FROM foo",
ResultFormat.OBJECT,
false,
+ false,
+ false,
ImmutableMap.of("maxMergingDictionarySize", 1, "sqlQueryId", "id"),
null
)
@@ -1018,12 +1201,14 @@ public class SqlResourceTest extends CalciteTestBase
CalciteTests.TEST_AUTHORIZER_MAPPER,
sqlLifecycleFactory,
lifecycleManager,
- new ServerConfig() {
+ new ServerConfig()
+ {
@Override
public boolean isShowDetailedJettyErrors()
{
return true;
}
+
@Override
public ErrorResponseTransformStrategy getErrorResponseTransformStrategy()
{
@@ -1056,12 +1241,14 @@ public class SqlResourceTest extends CalciteTestBase
CalciteTests.TEST_AUTHORIZER_MAPPER,
sqlLifecycleFactory,
lifecycleManager,
- new ServerConfig() {
+ new ServerConfig()
+ {
@Override
public boolean isShowDetailedJettyErrors()
{
return true;
}
+
@Override
public ErrorResponseTransformStrategy getErrorResponseTransformStrategy()
{
@@ -1102,6 +1289,8 @@ public class SqlResourceTest extends CalciteTestBase
"SELECT COUNT(*) AS cnt, 'foo' AS TheFoo FROM druid.foo",
null,
false,
+ false,
+ false,
ImmutableMap.of("priority", -5, "sqlQueryId", sqlQueryId),
null
),
@@ -1149,6 +1338,8 @@ public class SqlResourceTest extends CalciteTestBase
"SELECT CAST(__time AS DATE), dim1, dim2, dim3 FROM druid.foo GROUP by __time, dim1, dim2, dim3 ORDER BY dim2 DESC",
ResultFormat.OBJECT,
false,
+ false,
+ false,
queryContext,
null
)
@@ -1292,7 +1483,7 @@ public class SqlResourceTest extends CalciteTestBase
private static SqlQuery createSimpleQueryWithId(String sqlQueryId, String sql)
{
- return new SqlQuery(sql, null, false, ImmutableMap.of("sqlQueryId", sqlQueryId), null);
+ return new SqlQuery(sql, null, false, false, false, ImmutableMap.of("sqlQueryId", sqlQueryId), null);
}
private Pair<QueryException, List<Map<String, Object>>> doPost(final SqlQuery query) throws Exception
@@ -1316,7 +1507,8 @@ public class SqlResourceTest extends CalciteTestBase
return doPostRaw(query, req);
}
- private Pair<QueryException, List<Map<String, Object>>> doPost(final SqlQuery query, HttpServletRequest req) throws Exception
+ private Pair<QueryException, List<Map<String, Object>>> doPost(final SqlQuery query, HttpServletRequest req)
+ throws Exception
{
return doPost(query, req, new TypeReference<List<Map<String, Object>>>()
{