Posted to reviews@spark.apache.org by kevinyu98 <gi...@git.apache.org> on 2017/02/13 19:07:59 UTC
[GitHub] spark pull request #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/N...
GitHub user kevinyu98 opened a pull request:
https://github.com/apache/spark/pull/16915
[SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN subquery 4th batch
## What changes were proposed in this pull request?
This is the 4th batch of test cases for IN/NOT IN subqueries. This PR adds the following test files:
`in-set-operations.sql`
`in-with-cte.sql`
`not-in-joins.sql`
Here are the queries and their results from running them on DB2:
[in-set-operations DB2 version](https://github.com/apache/spark/files/772107/in-set-operations.sql.db2.txt)
[Output of in-set-operations](https://github.com/apache/spark/files/772109/in-set-operations.sql.db2.out.txt)
[in-with-cte DB2 version](https://github.com/apache/spark/files/772111/in-with-cte.sql.db2.txt)
[Output of in-with-cte](https://github.com/apache/spark/files/772113/in-with-cte.sql.db2.out.txt)
[not-in-joins DB2 version](https://github.com/apache/spark/files/772114/not-in-joins.sql.db2.txt)
[Output of not-in-joins](https://github.com/apache/spark/files/772116/not-in-joins.sql.db2.out.txt)
## How was this patch tested?
This PR adds new test cases. We compare the results from Spark with the results from another RDBMS (we used DB2 LUW). If the results match, we assume they are correct.
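The comparison methodology above can be sketched as follows. This is a minimal illustration using Python's `sqlite3` as a stand-in for both engines (an assumption for the sketch; the PR itself compares Spark SQL output against DB2 LUW via `SQLQueryTestSuite`):

```python
import sqlite3

def run(conn, sql):
    """Run a query and return its rows sorted, for order-insensitive comparison."""
    return sorted(conn.execute(sql).fetchall())

# Build the same tiny t1-style table in two independent "engines".
engine_a = sqlite3.connect(":memory:")
engine_b = sqlite3.connect(":memory:")
for conn in (engine_a, engine_b):
    conn.execute("CREATE TABLE t1 (t1a TEXT, t1b INTEGER)")
    conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                     [("val1a", 6), ("val1b", 8), ("val1d", None)])

query = "SELECT t1a FROM t1 WHERE t1b IN (SELECT t1b FROM t1 WHERE t1b > 0)"

# If both engines agree on the (order-insensitive) result,
# we take the result as correct.
assert run(engine_a, query) == run(engine_b, query)
print(run(engine_a, query))  # -> [('val1a',), ('val1b',)]
```

Note that the `val1d` row is absent from the output: its `t1b` is NULL, so `NULL IN (...)` evaluates to NULL and the row is filtered, in both engines.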
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/kevinyu98/spark spark-18871-44
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/16915.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #16915
----
commit 3b44c5978bd44db986621d3e8511e9165b66926b
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-04-20T18:06:30Z
adding testcase
commit 18b4a31c687b264b50aa5f5a74455956911f738a
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-04-22T21:48:00Z
Merge remote-tracking branch 'upstream/master'
commit 4f4d1c8f2801b1e662304ab2b33351173e71b427
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-04-23T16:50:19Z
Merge remote-tracking branch 'upstream/master'
get latest code from upstream
commit f5f0cbed1eb5754c04c36933b374c3b3d2ae4f4e
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-04-23T22:20:53Z
Merge remote-tracking branch 'upstream/master'
adding trim characters support
commit d8b2edbd13ee9a4f057bca7dcb0c0940e8e867b8
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-04-25T20:24:33Z
Merge remote-tracking branch 'upstream/master'
get latest code for pr12646
commit 196b6c66b0d55232f427c860c0e7c6876c216a67
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-04-25T23:45:57Z
Merge remote-tracking branch 'upstream/master'
merge latest code
commit f37a01e005f3e27ae2be056462d6eb6730933ba5
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-04-27T14:15:06Z
Merge remote-tracking branch 'upstream/master'
merge upstream/master
commit bb5b01fd3abeea1b03315eccf26762fcc23f80c0
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-04-30T23:49:31Z
Merge remote-tracking branch 'upstream/master'
commit bde5820a181cf84e0879038ad8c4cebac63c1e24
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-04T03:52:31Z
Merge remote-tracking branch 'upstream/master'
commit 5f7cd96d495f065cd04e8e4cc58461843e45bc8d
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-10T21:14:50Z
Merge remote-tracking branch 'upstream/master'
commit 893a49af0bfd153ccb59ba50b63a232660e0eada
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-13T18:20:39Z
Merge remote-tracking branch 'upstream/master'
commit 4bbe1fd4a3ebd50338ccbe07dc5887fe289cd53d
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-17T21:58:14Z
Merge remote-tracking branch 'upstream/master'
commit b2dd795e23c36cbbd022f07a10c0cf21c85eb421
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-18T06:37:13Z
Merge remote-tracking branch 'upstream/master'
commit 8c3e5da458dbff397ed60fcb68f2a46d87ab7ba4
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-18T16:18:16Z
Merge remote-tracking branch 'upstream/master'
commit a0eaa408e847fbdc3ac5b26348588ee0a1e276c7
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-19T04:28:20Z
Merge remote-tracking branch 'upstream/master'
commit d03c940ed89795fa7fe1d1e9f511363b22cdf19d
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-19T21:24:33Z
Merge remote-tracking branch 'upstream/master'
commit d728d5e002082e571ac47292226eb8b2614f479f
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-24T20:32:57Z
Merge remote-tracking branch 'upstream/master'
commit ea104ddfbf7d180ed1bc53dd9a1005010264aa1f
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-25T22:52:57Z
Merge remote-tracking branch 'upstream/master'
commit 6ab1215b781ad0cccf1752f3a625b4e4e371c38e
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-05-27T17:18:46Z
Merge remote-tracking branch 'upstream/master'
commit 0c566533705331697eb1b287b30c8b16111f6fa2
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-06-01T06:48:57Z
Merge remote-tracking branch 'upstream/master'
commit d7a187490b31185d0a803cbbdeda67cb26c40056
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-06-01T22:55:17Z
Merge remote-tracking branch 'upstream/master'
commit 85d35002ce864d5ce6fd3be7215a868a8867caf9
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-06-02T14:08:30Z
Merge remote-tracking branch 'upstream/master'
commit c056f91036ec75d1e2c93f6f47ad842eb28a3e0b
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-06-03T06:06:51Z
Merge remote-tracking branch 'upstream/master'
commit 0b8189dd454897ae73bb3a5ffc245b2c65f6b226
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-06-03T23:33:39Z
Merge remote-tracking branch 'upstream/master'
commit c2ea31de654bd8c32ab0dd9a0362ee90b47c8756
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-06-06T21:04:09Z
Merge remote-tracking branch 'upstream/master'
commit a2d3056e148a1b38b1b2bf84eabf13962a452e36
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-06-08T02:52:43Z
Merge remote-tracking branch 'upstream/master'
commit 39e564887667b57ec15b87f9ee6c760fd1938a15
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-06-08T18:05:50Z
Merge remote-tracking branch 'upstream/master'
commit b9370a35acbc47d1ed30417d843284ac0cf1a6d5
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-07-25T18:41:03Z
Merge remote-tracking branch 'upstream/master'
commit 01224a4c133fa2b47b921ff9fff170ff22f5de44
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-08-03T05:21:29Z
Merge remote-tracking branch 'upstream/master'
commit d05d39a0e51d087ff2102a0685b66a85d0e5a984
Author: Kevin Yu <qy...@us.ibm.com>
Date: 2016-08-19T21:55:21Z
Merge remote-tracking branch 'upstream/master'
----
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16915
**[Test build #72952 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72952/testReport)** for PR 16915 at commit [`3dd57fd`](https://github.com/apache/spark/commit/3dd57fd7017e173cb00a53280a41783be634d6fe).
[GitHub] spark pull request #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/N...
Posted by kevinyu98 <gi...@git.apache.org>.
Github user kevinyu98 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16915#discussion_r100942352
--- Diff: sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/in-with-cte.sql.out ---
@@ -0,0 +1,368 @@
+-- Automatically generated by SQLQueryTestSuite
+-- Number of queries: 13
+
+
+-- !query 0
+create temporary view t1 as select * from values
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:00:00.000', date '2014-04-04'),
+ ("val1b", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1a", 16S, 12, 21L, float(15.0), 20D, 20E2, timestamp '2014-06-04 01:02:00.001', date '2014-06-04'),
+ ("val1a", 16S, 12, 10L, float(15.0), 20D, 20E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:02:00.001', date '2014-05-05'),
+ ("val1d", null, 16, 22L, float(17.0), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', null),
+ ("val1d", null, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-07-04 01:02:00.001', null),
+ ("val1e", 10S, null, 25L, float(17.0), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-09-04 01:02:00.001', date '2014-09-04'),
+ ("val1d", 10S, null, 12L, float(17.0), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:02:00.001', date '2014-04-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04')
+ as t1(t1a, t1b, t1c, t1d, t1e, t1f, t1g, t1h, t1i)
+-- !query 0 schema
+struct<>
+-- !query 0 output
+
+
+
+-- !query 1
+create temporary view t2 as select * from values
+ ("val2a", 6S, 12, 14L, float(15), 20D, 20E2, timestamp '2014-04-04 01:01:00.000', date '2014-04-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 119L, float(17), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1c", 12S, 16, 219L, float(17), 25D, 26E2, timestamp '2016-05-04 01:01:00.000', date '2016-05-04'),
+ ("val1b", null, 16, 319L, float(17), 25D, 26E2, timestamp '2017-05-04 01:01:00.000', null),
+ ("val2e", 8S, null, 419L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1f", 19S, null, 519L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 12S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-05'),
+ ("val1e", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:01:00.000', date '2014-09-04'),
+ ("val1f", 19S, null, 19L, float(17), 25D, 26E2, timestamp '2014-10-04 01:01:00.000', date '2014-10-04'),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', null)
+ as t2(t2a, t2b, t2c, t2d, t2e, t2f, t2g, t2h, t2i)
+-- !query 1 schema
+struct<>
+-- !query 1 output
+
+
+
+-- !query 2
+create temporary view t3 as select * from values
+ ("val3a", 6S, 12, 110L, float(15), 20D, 20E2, timestamp '2014-04-04 01:02:00.000', date '2014-04-04'),
+ ("val3a", 6S, 12, 10L, float(15), 20D, 20E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 219L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 319L, float(17), 25D, 26E2, timestamp '2014-06-04 01:02:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:02:00.000', date '2014-07-04'),
+ ("val3c", 17S, 16, 519L, float(17), 25D, 26E2, timestamp '2014-08-04 01:02:00.000', date '2014-08-04'),
+ ("val3c", 17S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:02:00.000', date '2014-09-05'),
+ ("val1b", null, 16, 419L, float(17), 25D, 26E2, timestamp '2014-10-04 01:02:00.000', null),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-11-04 01:02:00.000', null),
+ ("val3b", 8S, null, 719L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val3b", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2015-05-04 01:02:00.000', date '2015-05-04')
+ as t3(t3a, t3b, t3c, t3d, t3e, t3f, t3g, t3h, t3i)
+-- !query 2 schema
+struct<>
+-- !query 2 output
+
+
+
+-- !query 3
+WITH cte1
+ AS (SELECT t1a,
+ t1b
+ FROM t1
+ WHERE t1a = "val1a")
+SELECT t1a,
+ t1b,
+ t1c,
+ t1d,
+ t1h
+FROM t1
+WHERE t1b IN (SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b > 0)
+-- !query 3 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1d:bigint,t1h:timestamp>
+-- !query 3 output
+val1a 16 12 10 2014-07-04 01:01:00
+val1a 16 12 21 2014-06-04 01:02:00.001
+val1a 6 8 10 2014-04-04 01:00:00
+val1a 6 8 10 2014-04-04 01:02:00.001
+
+
+-- !query 4
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b
+ FROM t1)
+SELECT count(distinct(t1a)), t1b, t1c
+FROM t1
+WHERE t1b IN
+ (
+ SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b > 0
+ UNION
+ SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b > 5
+ UNION ALL
+ SELECT cte1.t1b
+ FROM cte1
+ INTERSECT
+ SELECT cte1.t1b
+ FROM cte1
+ UNION
+ SELECT cte1.t1b
+ FROM cte1 )
+GROUP BY t1a, t1b, t1c
+HAVING t1c IS NOT NULL
+-- !query 4 schema
+struct<count(DISTINCT t1a):bigint,t1b:smallint,t1c:int>
+-- !query 4 output
+1 16 12
+1 6 8
+1 8 16
+1 8 16
+
+
+-- !query 5
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b,
+ t1c,
+ t1d,
+ t1e
+ FROM t1)
+SELECT t1a,
+ t1b,
+ t1c,
+ t1h
+FROM t1
+WHERE t1c IN
+ (
+ SELECT cte1.t1c
+ FROM cte1
+ JOIN cte1 cte2
+ on cte1.t1b > cte2.t1b
+ FULL OUTER JOIN cte1 cte3
+ ON cte1.t1c = cte3.t1c
+ LEFT JOIN cte1 cte4
+ ON cte1.t1d = cte4.t1d
+ INNER JOIN cte1 cte5
+ ON cte1.t1b < cte5.t1b
+ LEFT OUTER JOIN cte1 cte6
+ ON cte1.t1d > cte6.t1d)
+-- !query 5 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1h:timestamp>
+-- !query 5 output
+val1b 8 16 2014-05-04 01:01:00
+val1c 8 16 2014-05-04 01:02:00.001
+val1d NULL 16 2014-06-04 01:01:00
+val1d NULL 16 2014-07-04 01:02:00.001
+
+
+-- !query 6
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b
+ FROM t1
+ WHERE t1b IN
+ (
+ SELECT t2b
+ FROM t2
+ RIGHT JOIN t1
+ ON t1c = t2c
+ LEFT JOIN t3
+ ON t2d = t3d ) AND
+ t1a = "val1b")
--- End diff --
Sure, I have adjusted the style and resubmitted. Thanks.
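Query 4 in the quoted diff nests UNION / UNION ALL / INTERSECT inside an IN subquery. A minimal runnable analogue of that pattern, on sqlite3 rather than Spark (an assumption for illustration; sqlite3 evaluates compound selects left-to-right, as Spark does here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (t1a TEXT, t1b INTEGER)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [("val1a", 6), ("val1b", 8), ("val1a", 16), ("val1e", 10)])

# IN accepts a subquery built from set operations. Duplicates produced
# by UNION ALL do not change the outcome, since IN only tests membership,
# and the trailing INTERSECT restricts the membership set to t1b > 9.
rows = conn.execute("""
    SELECT t1a, t1b FROM t1
    WHERE t1b IN (SELECT t1b FROM t1 WHERE t1b > 5
                  UNION ALL
                  SELECT t1b FROM t1
                  INTERSECT
                  SELECT t1b FROM t1 WHERE t1b > 9)
    ORDER BY t1b
""").fetchall()
print(rows)  # -> [('val1e', 10), ('val1a', 16)]
```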
[GitHub] spark pull request #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/N...
Posted by nsyca <gi...@git.apache.org>.
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/16915#discussion_r101375044
--- Diff: sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/in-with-cte.sql.out ---
@@ -0,0 +1,364 @@
+-- Automatically generated by SQLQueryTestSuite
+-- Number of queries: 13
+
+
+-- !query 0
+create temporary view t1 as select * from values
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:00:00.000', date '2014-04-04'),
+ ("val1b", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1a", 16S, 12, 21L, float(15.0), 20D, 20E2, timestamp '2014-06-04 01:02:00.001', date '2014-06-04'),
+ ("val1a", 16S, 12, 10L, float(15.0), 20D, 20E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:02:00.001', date '2014-05-05'),
+ ("val1d", null, 16, 22L, float(17.0), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', null),
+ ("val1d", null, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-07-04 01:02:00.001', null),
+ ("val1e", 10S, null, 25L, float(17.0), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-09-04 01:02:00.001', date '2014-09-04'),
+ ("val1d", 10S, null, 12L, float(17.0), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:02:00.001', date '2014-04-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04')
+ as t1(t1a, t1b, t1c, t1d, t1e, t1f, t1g, t1h, t1i)
+-- !query 0 schema
+struct<>
+-- !query 0 output
+
+
+
+-- !query 1
+create temporary view t2 as select * from values
+ ("val2a", 6S, 12, 14L, float(15), 20D, 20E2, timestamp '2014-04-04 01:01:00.000', date '2014-04-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 119L, float(17), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1c", 12S, 16, 219L, float(17), 25D, 26E2, timestamp '2016-05-04 01:01:00.000', date '2016-05-04'),
+ ("val1b", null, 16, 319L, float(17), 25D, 26E2, timestamp '2017-05-04 01:01:00.000', null),
+ ("val2e", 8S, null, 419L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1f", 19S, null, 519L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 12S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-05'),
+ ("val1e", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:01:00.000', date '2014-09-04'),
+ ("val1f", 19S, null, 19L, float(17), 25D, 26E2, timestamp '2014-10-04 01:01:00.000', date '2014-10-04'),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', null)
+ as t2(t2a, t2b, t2c, t2d, t2e, t2f, t2g, t2h, t2i)
+-- !query 1 schema
+struct<>
+-- !query 1 output
+
+
+
+-- !query 2
+create temporary view t3 as select * from values
+ ("val3a", 6S, 12, 110L, float(15), 20D, 20E2, timestamp '2014-04-04 01:02:00.000', date '2014-04-04'),
+ ("val3a", 6S, 12, 10L, float(15), 20D, 20E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 219L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 319L, float(17), 25D, 26E2, timestamp '2014-06-04 01:02:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:02:00.000', date '2014-07-04'),
+ ("val3c", 17S, 16, 519L, float(17), 25D, 26E2, timestamp '2014-08-04 01:02:00.000', date '2014-08-04'),
+ ("val3c", 17S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:02:00.000', date '2014-09-05'),
+ ("val1b", null, 16, 419L, float(17), 25D, 26E2, timestamp '2014-10-04 01:02:00.000', null),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-11-04 01:02:00.000', null),
+ ("val3b", 8S, null, 719L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val3b", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2015-05-04 01:02:00.000', date '2015-05-04')
+ as t3(t3a, t3b, t3c, t3d, t3e, t3f, t3g, t3h, t3i)
+-- !query 2 schema
+struct<>
+-- !query 2 output
+
+
+
+-- !query 3
+WITH cte1
+ AS (SELECT t1a,
+ t1b
+ FROM t1
+ WHERE t1a = "val1a")
+SELECT t1a,
+ t1b,
+ t1c,
+ t1d,
+ t1h
+FROM t1
+WHERE t1b IN (SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b > 0)
+-- !query 3 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1d:bigint,t1h:timestamp>
+-- !query 3 output
+val1a 16 12 10 2014-07-04 01:01:00
+val1a 16 12 21 2014-06-04 01:02:00.001
+val1a 6 8 10 2014-04-04 01:00:00
+val1a 6 8 10 2014-04-04 01:02:00.001
+
+
+-- !query 4
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b
+ FROM t1)
+SELECT count(distinct(t1a)), t1b, t1c
+FROM t1
+WHERE t1b IN
+ (
+ SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b > 0
+ UNION
+ SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b > 5
+ UNION ALL
+ SELECT cte1.t1b
+ FROM cte1
+ INTERSECT
+ SELECT cte1.t1b
+ FROM cte1
+ UNION
+ SELECT cte1.t1b
+ FROM cte1 )
+GROUP BY t1a, t1b, t1c
+HAVING t1c IS NOT NULL
+-- !query 4 schema
+struct<count(DISTINCT t1a):bigint,t1b:smallint,t1c:int>
+-- !query 4 output
+1 16 12
+1 6 8
+1 8 16
+1 8 16
+
+
+-- !query 5
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b,
+ t1c,
+ t1d,
+ t1e
+ FROM t1)
+SELECT t1a,
+ t1b,
+ t1c,
+ t1h
+FROM t1
+WHERE t1c IN
+ (
+ SELECT cte1.t1c
+ FROM cte1
+ JOIN cte1 cte2
+ on cte1.t1b > cte2.t1b
+ FULL OUTER JOIN cte1 cte3
+ ON cte1.t1c = cte3.t1c
+ LEFT JOIN cte1 cte4
+ ON cte1.t1d = cte4.t1d
+ INNER JOIN cte1 cte5
+ ON cte1.t1b < cte5.t1b
+ LEFT OUTER JOIN cte1 cte6
+ ON cte1.t1d > cte6.t1d)
+-- !query 5 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1h:timestamp>
+-- !query 5 output
+val1b 8 16 2014-05-04 01:01:00
+val1c 8 16 2014-05-04 01:02:00.001
+val1d NULL 16 2014-06-04 01:01:00
+val1d NULL 16 2014-07-04 01:02:00.001
+
+
+-- !query 6
+WITH cte1
+ AS (SELECT t1a,
+ t1b
+ FROM t1
+ WHERE t1b IN (SELECT t2b
+ FROM t2
+ RIGHT JOIN t1
+ ON t1c = t2c
+ LEFT JOIN t3
+ ON t2d = t3d)
+ AND t1a = "val1b")
+SELECT *
+FROM (SELECT *
+ FROM cte1
+ JOIN cte1 cte2
+ ON cte1.t1b > 5
+ AND cte1.t1a = cte2.t1a
+ FULL OUTER JOIN cte1 cte3
+ ON cte1.t1a = cte3.t1a
+ INNER JOIN cte1 cte4
+ ON cte1.t1b = cte4.t1b) s
+-- !query 6 schema
+struct<t1a:string,t1b:smallint,t1a:string,t1b:smallint,t1a:string,t1b:smallint,t1a:string,t1b:smallint>
+-- !query 6 output
+val1b 8 val1b 8 val1b 8 val1b 8
+
+
+-- !query 7
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b,
+ t1h
+ FROM t1
+ WHERE t1a IN
+ (
+ SELECT t2a
+ FROM t2
+ WHERE t1b < t2b))
+SELECT Count(DISTINCT t1a),
+ t1b
+FROM (
+ SELECT cte1.t1a,
+ cte1.t1b
+ FROM cte1
+ JOIN cte1 cte2
+ on cte1.t1h >= cte2.t1h) s
+WHERE t1b IN
+ (
+ SELECT t1b
+ FROM t1)
+GROUP BY t1b
+-- !query 7 schema
+struct<count(DISTINCT t1a):bigint,t1b:smallint>
+-- !query 7 output
+2 8
+
+
+-- !query 8
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b,
+ t1c
+ FROM t1
+ WHERE t1b IN
+ (
+ SELECT t2b
+ FROM t2 FULL OUTER JOIN T3 on t2a = t3a
+ WHERE t1c = t2c) AND
+ t1a = "val1b")
+SELECT *
+FROM (
+ SELECT *
+ FROM cte1
+ INNER JOIN cte1 cte2 ON cte1.t1a = cte2.t1a
+ RIGHT OUTER JOIN cte1 cte3 ON cte1.t1b = cte3.t1b
+ LEFT OUTER JOIN cte1 cte4 ON cte1.t1c = cte4.t1c
+ ) s
+-- !query 8 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1a:string,t1b:smallint,t1c:int,t1a:string,t1b:smallint,t1c:int,t1a:string,t1b:smallint,t1c:int>
+-- !query 8 output
+val1b 8 16 val1b 8 16 val1b 8 16 val1b 8 16
+
+
+-- !query 9
+WITH cte1
+ AS (SELECT t1a,
+ t1b
+ FROM t1
+ WHERE t1b IN (SELECT t2b
+ FROM t2
+ WHERE t1c = t2c))
+SELECT Count(DISTINCT( s.t1a )),
+ s.t1b
+FROM (SELECT cte1.t1a,
+ cte1.t1b
+ FROM cte1
+ RIGHT OUTER JOIN cte1 cte2
+ ON cte1.t1a = cte2.t1a) s
+GROUP BY s.t1b
+-- !query 9 schema
+struct<count(DISTINCT t1a):bigint,t1b:smallint>
+-- !query 9 output
+2 8
+
+
+-- !query 10
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b
+ FROM t1
+ WHERE t1b IN
+ (
+ SELECT t2b
+ FROM t2
+ WHERE t1c = t2c))
+SELECT DISTINCT(s.t1b)
+FROM (
+ SELECT cte1.t1b
+ FROM cte1
+ LEFT OUTER JOIN cte1 cte2
+ ON cte1.t1b = cte2.t1b) s
+WHERE s.t1b IN
+ (
+ SELECT t1.t1b
+ FROM t1 INNER
+ JOIN cte1
+ ON t1.t1a = cte1.t1a)
+-- !query 10 schema
+struct<t1b:smallint>
+-- !query 10 output
+8
+
+
+-- !query 11
+WITH cte1
+ AS (SELECT t1a,
+ t1b
+ FROM t1
+ WHERE t1a = "val1d")
+SELECT t1a,
+ t1b,
+ t1c,
+ t1h
+FROM t1
+WHERE t1b NOT IN (SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b < 0) AND
+ t1c > 10
+-- !query 11 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1h:timestamp>
+-- !query 11 output
+val1a 16 12 2014-06-04 01:02:00.001
+val1a 16 12 2014-07-04 01:01:00
+val1b 8 16 2014-05-04 01:01:00
+val1c 8 16 2014-05-04 01:02:00.001
+val1d NULL 16 2014-06-04 01:01:00
+val1d NULL 16 2014-07-04 01:02:00.001
+
+
+-- !query 12
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b,
+ t1c,
+ t1d,
+ t1h
+ FROM t1
+ WHERE t1d NOT IN
+ (
+ SELECT t2d
+ FROM t2
+ FULL OUTER JOIN t3 ON t2a = t3a
+ JOIN t1 on t1b = t2b))
+SELECT t1a,
+ t1b,
+ t1c,
+ t1d,
+ t1h
+FROM t1
+WHERE t1b NOT IN
+ (
+ SELECT cte1.t1b
+ FROM cte1 INNER
+ JOIN cte1 cte2 ON cte1.t1a = cte2.t1a
+ RIGHT JOIN cte1 cte3 ON cte1.t1b = cte3.t1b
+ JOIN cte1 cte4 ON cte1.t1c = cte4.t1c) AND
+ t1c IS NOT NULL
+ORDER BY t1c DESC
+-- !query 12 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1d:bigint,t1h:timestamp>
+-- !query 12 output
+val1b 8 16 19 2014-05-04 01:01:00
+val1c 8 16 19 2014-05-04 01:02:00.001
--- End diff --
All the results are equivalent to the ones from DB2.
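Queries 11 and 12 above exercise NOT IN, whose three-valued NULL semantics are the classic trap this kind of cross-engine test guards against. A minimal sqlite3 demonstration (an assumption for illustration; the PR itself validates Spark against DB2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (t1b INTEGER)")
conn.executemany("INSERT INTO t1 VALUES (?)", [(8,), (16,), (None,)])

# Subquery result contains no NULL: NOT IN behaves intuitively,
# except that the outer NULL row is still filtered
# (NULL NOT IN (...) evaluates to NULL, not TRUE).
no_null = conn.execute(
    "SELECT t1b FROM t1 WHERE t1b NOT IN (SELECT t1b FROM t1 WHERE t1b < 10)"
).fetchall()
print(no_null)   # -> [(16,)]

# Subquery result contains a NULL: every NOT IN test is either FALSE
# or NULL, so the query returns no rows at all.
with_null = conn.execute(
    "SELECT t1b FROM t1 WHERE t1b NOT IN (SELECT t1b FROM t1)"
).fetchall()
print(with_null)  # -> []
```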
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16915
Merged build finished. Test PASSed.
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16915
Merged build started.
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16915
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72952/
Test PASSed.
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16915
Merged build triggered.
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16915
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72846/
Test PASSed.
[GitHub] spark pull request #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/N...
Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16915#discussion_r100905620
--- Diff: sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/in-with-cte.sql.out ---
@@ -0,0 +1,368 @@
+-- Automatically generated by SQLQueryTestSuite
+-- Number of queries: 13
+
+
+-- !query 0
+create temporary view t1 as select * from values
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:00:00.000', date '2014-04-04'),
+ ("val1b", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1a", 16S, 12, 21L, float(15.0), 20D, 20E2, timestamp '2014-06-04 01:02:00.001', date '2014-06-04'),
+ ("val1a", 16S, 12, 10L, float(15.0), 20D, 20E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:02:00.001', date '2014-05-05'),
+ ("val1d", null, 16, 22L, float(17.0), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', null),
+ ("val1d", null, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-07-04 01:02:00.001', null),
+ ("val1e", 10S, null, 25L, float(17.0), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-09-04 01:02:00.001', date '2014-09-04'),
+ ("val1d", 10S, null, 12L, float(17.0), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:02:00.001', date '2014-04-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04')
+ as t1(t1a, t1b, t1c, t1d, t1e, t1f, t1g, t1h, t1i)
+-- !query 0 schema
+struct<>
+-- !query 0 output
+
+
+
+-- !query 1
+create temporary view t2 as select * from values
+ ("val2a", 6S, 12, 14L, float(15), 20D, 20E2, timestamp '2014-04-04 01:01:00.000', date '2014-04-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 119L, float(17), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1c", 12S, 16, 219L, float(17), 25D, 26E2, timestamp '2016-05-04 01:01:00.000', date '2016-05-04'),
+ ("val1b", null, 16, 319L, float(17), 25D, 26E2, timestamp '2017-05-04 01:01:00.000', null),
+ ("val2e", 8S, null, 419L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1f", 19S, null, 519L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 12S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-05'),
+ ("val1e", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:01:00.000', date '2014-09-04'),
+ ("val1f", 19S, null, 19L, float(17), 25D, 26E2, timestamp '2014-10-04 01:01:00.000', date '2014-10-04'),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', null)
+ as t2(t2a, t2b, t2c, t2d, t2e, t2f, t2g, t2h, t2i)
+-- !query 1 schema
+struct<>
+-- !query 1 output
+
+
+
+-- !query 2
+create temporary view t3 as select * from values
+ ("val3a", 6S, 12, 110L, float(15), 20D, 20E2, timestamp '2014-04-04 01:02:00.000', date '2014-04-04'),
+ ("val3a", 6S, 12, 10L, float(15), 20D, 20E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 219L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 319L, float(17), 25D, 26E2, timestamp '2014-06-04 01:02:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:02:00.000', date '2014-07-04'),
+ ("val3c", 17S, 16, 519L, float(17), 25D, 26E2, timestamp '2014-08-04 01:02:00.000', date '2014-08-04'),
+ ("val3c", 17S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:02:00.000', date '2014-09-05'),
+ ("val1b", null, 16, 419L, float(17), 25D, 26E2, timestamp '2014-10-04 01:02:00.000', null),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-11-04 01:02:00.000', null),
+ ("val3b", 8S, null, 719L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val3b", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2015-05-04 01:02:00.000', date '2015-05-04')
+ as t3(t3a, t3b, t3c, t3d, t3e, t3f, t3g, t3h, t3i)
+-- !query 2 schema
+struct<>
+-- !query 2 output
+
+
+
+-- !query 3
+WITH cte1
+ AS (SELECT t1a,
+ t1b
+ FROM t1
+ WHERE t1a = "val1a")
+SELECT t1a,
+ t1b,
+ t1c,
+ t1d,
+ t1h
+FROM t1
+WHERE t1b IN (SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b > 0)
+-- !query 3 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1d:bigint,t1h:timestamp>
+-- !query 3 output
+val1a 16 12 10 2014-07-04 01:01:00
+val1a 16 12 21 2014-06-04 01:02:00.001
+val1a 6 8 10 2014-04-04 01:00:00
+val1a 6 8 10 2014-04-04 01:02:00.001
+
+
+-- !query 4
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b
+ FROM t1)
+SELECT count(distinct(t1a)), t1b, t1c
+FROM t1
+WHERE t1b IN
+ (
+ SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b > 0
+ UNION
+ SELECT cte1.t1b
+ FROM cte1
+ WHERE cte1.t1b > 5
+ UNION ALL
+ SELECT cte1.t1b
+ FROM cte1
+ INTERSECT
+ SELECT cte1.t1b
+ FROM cte1
+ UNION
+ SELECT cte1.t1b
+ FROM cte1 )
+GROUP BY t1a, t1b, t1c
+HAVING t1c IS NOT NULL
+-- !query 4 schema
+struct<count(DISTINCT t1a):bigint,t1b:smallint,t1c:int>
+-- !query 4 output
+1 16 12
+1 6 8
+1 8 16
+1 8 16
+
+
+-- !query 5
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b,
+ t1c,
+ t1d,
+ t1e
+ FROM t1)
+SELECT t1a,
+ t1b,
+ t1c,
+ t1h
+FROM t1
+WHERE t1c IN
+ (
+ SELECT cte1.t1c
+ FROM cte1
+ JOIN cte1 cte2
+ on cte1.t1b > cte2.t1b
+ FULL OUTER JOIN cte1 cte3
+ ON cte1.t1c = cte3.t1c
+ LEFT JOIN cte1 cte4
+ ON cte1.t1d = cte4.t1d
+ INNER JOIN cte1 cte5
+ ON cte1.t1b < cte5.t1b
+ LEFT OUTER JOIN cte1 cte6
+ ON cte1.t1d > cte6.t1d)
+-- !query 5 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1h:timestamp>
+-- !query 5 output
+val1b 8 16 2014-05-04 01:01:00
+val1c 8 16 2014-05-04 01:02:00.001
+val1d NULL 16 2014-06-04 01:01:00
+val1d NULL 16 2014-07-04 01:02:00.001
+
+
+-- !query 6
+WITH cte1 AS
+(
+ SELECT t1a,
+ t1b
+ FROM t1
+ WHERE t1b IN
+ (
+ SELECT t2b
+ FROM t2
+ RIGHT JOIN t1
+ ON t1c = t2c
+ LEFT JOIN t3
+ ON t2d = t3d ) AND
+ t1a = "val1b")
--- End diff --
The style looks strange. Could you adjust it?
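As a side note for readers of this thread, the "IN with CTE" pattern exercised in the diff above can be reproduced outside Spark. The sketch below uses Python's sqlite3 (not Spark or DB2, so this is only an illustration of the semantics, not of the engines under test); the table and column names mirror the quoted test file, but the data is a hypothetical subset.

```python
import sqlite3

# In-memory stand-in for the t1 fixture from the diff (subset of columns/rows).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (t1a TEXT, t1b INTEGER);
    INSERT INTO t1 VALUES ('val1a', 6), ('val1a', 16), ('val1b', 8), ('val1d', NULL);
""")

# Same shape as query 3 in the diff: the CTE restricts to 'val1a' rows, and the
# outer query keeps rows whose t1b appears in the CTE's t1b values.
rows = conn.execute("""
    WITH cte1 AS (SELECT t1a, t1b FROM t1 WHERE t1a = 'val1a')
    SELECT t1a, t1b
    FROM t1
    WHERE t1b IN (SELECT cte1.t1b FROM cte1 WHERE cte1.t1b > 0)
    ORDER BY t1b
""").fetchall()
print(rows)  # [('val1a', 6), ('val1a', 16)]
```

The NULL in val1d's t1b never matches the IN list, which is the kind of edge the generated test output verifies.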
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16915
Can one of the admins verify this patch?
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16915
ok to test
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by kevinyu98 <gi...@git.apache.org>.
Github user kevinyu98 commented on the issue:
https://github.com/apache/spark/pull/16915
@gatorsmile thanks a lot.
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16915
Merged build finished. Test PASSed.
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by nsyca <gi...@git.apache.org>.
Github user nsyca commented on the issue:
https://github.com/apache/spark/pull/16915
It's larger than the typical test PRs we have submitted for the subquery JIRA, but since this is the last test PR, we wanted to avoid an additional round of administrative work.
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by hvanhovell <gi...@git.apache.org>.
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/16915
@kevinyu98 @nsyca @dilipbiswal could someone confirm that these results match DB2?
I also think that this PR is almost too large.
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16915
LGTM. Merging to master.
[GitHub] spark pull request #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/N...
Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16915
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16915
**[Test build #72846 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72846/testReport)** for PR 16915 at commit [`3dd57fd`](https://github.com/apache/spark/commit/3dd57fd7017e173cb00a53280a41783be634d6fe).
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16915
**[Test build #72846 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72846/testReport)** for PR 16915 at commit [`3dd57fd`](https://github.com/apache/spark/commit/3dd57fd7017e173cb00a53280a41783be634d6fe).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.
[GitHub] spark pull request #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/N...
Posted by nsyca <gi...@git.apache.org>.
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/16915#discussion_r101375513
--- Diff: sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/not-in-joins.sql.out ---
@@ -0,0 +1,229 @@
+-- Automatically generated by SQLQueryTestSuite
+-- Number of queries: 9
+
+
+-- !query 0
+create temporary view t1 as select * from values
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:00:00.000', date '2014-04-04'),
+ ("val1b", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1a", 16S, 12, 21L, float(15.0), 20D, 20E2, timestamp '2014-06-04 01:02:00.001', date '2014-06-04'),
+ ("val1a", 16S, 12, 10L, float(15.0), 20D, 20E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:02:00.001', date '2014-05-05'),
+ ("val1d", null, 16, 22L, float(17.0), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', null),
+ ("val1d", null, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-07-04 01:02:00.001', null),
+ ("val1e", 10S, null, 25L, float(17.0), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-09-04 01:02:00.001', date '2014-09-04'),
+ ("val1d", 10S, null, 12L, float(17.0), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:02:00.001', date '2014-04-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04')
+ as t1(t1a, t1b, t1c, t1d, t1e, t1f, t1g, t1h, t1i)
+-- !query 0 schema
+struct<>
+-- !query 0 output
+
+
+
+-- !query 1
+create temporary view t2 as select * from values
+ ("val2a", 6S, 12, 14L, float(15), 20D, 20E2, timestamp '2014-04-04 01:01:00.000', date '2014-04-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 119L, float(17), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1c", 12S, 16, 219L, float(17), 25D, 26E2, timestamp '2016-05-04 01:01:00.000', date '2016-05-04'),
+ ("val1b", null, 16, 319L, float(17), 25D, 26E2, timestamp '2017-05-04 01:01:00.000', null),
+ ("val2e", 8S, null, 419L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1f", 19S, null, 519L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 12S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-05'),
+ ("val1e", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:01:00.000', date '2014-09-04'),
+ ("val1f", 19S, null, 19L, float(17), 25D, 26E2, timestamp '2014-10-04 01:01:00.000', date '2014-10-04'),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', null)
+ as t2(t2a, t2b, t2c, t2d, t2e, t2f, t2g, t2h, t2i)
+-- !query 1 schema
+struct<>
+-- !query 1 output
+
+
+
+-- !query 2
+create temporary view t3 as select * from values
+ ("val3a", 6S, 12, 110L, float(15), 20D, 20E2, timestamp '2014-04-04 01:02:00.000', date '2014-04-04'),
+ ("val3a", 6S, 12, 10L, float(15), 20D, 20E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 219L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 319L, float(17), 25D, 26E2, timestamp '2014-06-04 01:02:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:02:00.000', date '2014-07-04'),
+ ("val3c", 17S, 16, 519L, float(17), 25D, 26E2, timestamp '2014-08-04 01:02:00.000', date '2014-08-04'),
+ ("val3c", 17S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:02:00.000', date '2014-09-05'),
+ ("val1b", null, 16, 419L, float(17), 25D, 26E2, timestamp '2014-10-04 01:02:00.000', null),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-11-04 01:02:00.000', null),
+ ("val3b", 8S, null, 719L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val3b", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2015-05-04 01:02:00.000', date '2015-05-04')
+ as t3(t3a, t3b, t3c, t3d, t3e, t3f, t3g, t3h, t3i)
+-- !query 2 schema
+struct<>
+-- !query 2 output
+
+
+
+-- !query 3
+SELECT t1a,
+ t1b,
+ t1c,
+ t3a,
+ t3b,
+ t3c
+FROM t1
+ JOIN t3
+WHERE t1a NOT IN (SELECT t2a
+ FROM t2)
+ AND t1b = t3b
+-- !query 3 schema
+struct<t1a:string,t1b:smallint,t1c:int,t3a:string,t3b:smallint,t3c:int>
+-- !query 3 output
+val1a 6 8 val3a 6 12
+val1a 6 8 val3a 6 12
+val1a 6 8 val3a 6 12
+val1a 6 8 val3a 6 12
+val1d 10 NULL val1b 10 12
+val1d 10 NULL val1b 10 12
+
+
+-- !query 4
+SELECT t1a,
+ t1b,
+ t1c,
+ count(distinct(t3a)),
+ t3b,
+ t3c
+FROM t1
+FULL OUTER JOIN t3 on t1b != t3b
+RIGHT JOIN t2 on t1c = t2c
+where t1a NOT IN
+ (
+ SELECT t2a
+ FROM t2
+ WHERE t2c NOT IN
+ (
+ SELECT t1c
+ FROM t1
+ WHERE t1a = t2a))
+AND t1b != t3b
+AND t1d = t2d
+GROUP BY t1a, t1b, t1c, t3a, t3b, t3c
+HAVING count(distinct(t3a)) >= 1
+ORDER BY t1a
+-- !query 4 schema
+struct<t1a:string,t1b:smallint,t1c:int,count(DISTINCT t3a):bigint,t3b:smallint,t3c:int>
+-- !query 4 output
+val1c 8 16 1 10 12
+val1c 8 16 1 6 12
+val1c 8 16 1 17 16
+
+
+-- !query 5
+SELECT t1a,
+ t1b,
+ t1c,
+ t1d,
+ t1h
+FROM t1
+WHERE t1a NOT IN
+ (
+ SELECT t2a
+ FROM t2
+ LEFT JOIN t3 on t2b = t3b
+ WHERE t1d = t2d
+ )
+AND t1d NOT IN
+ (
+ SELECT t2d
+ FROM t2
+ RIGHT JOIN t1 on t2e = t1e
+ WHERE t1a = t2a)
+-- !query 5 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1d:bigint,t1h:timestamp>
+-- !query 5 output
+val1a 16 12 10 2014-07-04 01:01:00
+val1a 16 12 21 2014-06-04 01:02:00.001
+val1a 6 8 10 2014-04-04 01:00:00
+val1a 6 8 10 2014-04-04 01:02:00.001
+val1d 10 NULL 12 2015-05-04 01:01:00
+val1d NULL 16 22 2014-06-04 01:01:00
+val1e 10 NULL 25 2014-08-04 01:01:00
+
+
+-- !query 6
+SELECT Count(DISTINCT( t1a )),
+ t1b,
+ t1c,
+ t1d
+FROM t1
+WHERE t1a NOT IN (SELECT t2a
+ FROM t2
+ JOIN t1
+ WHERE t2b <> t1b)
+GROUP BY t1b,
+ t1c,
+ t1d
+HAVING t1d NOT IN (SELECT t2d
+ FROM t2
+ WHERE t1d = t2d)
+ORDER BY t1b DESC
+-- !query 6 schema
+struct<count(DISTINCT t1a):bigint,t1b:smallint,t1c:int,t1d:bigint>
+-- !query 6 output
+1 16 12 10
+1 16 12 21
+1 10 NULL 12
+1 6 8 10
+1 NULL 16 22
+
+
+-- !query 7
+SELECT COUNT(DISTINCT(t1a)),
+ t1b,
+ t1c,
+ t1d
+FROM t1
+WHERE t1a NOT IN
+ (
+ SELECT t2a
+ FROM t2 INNER
+ JOIN t1 ON t1a = t2a)
+GROUP BY t1b,
+ t1c,
+ t1d
+HAVING t1b < sum(t1c)
+-- !query 7 schema
+struct<count(DISTINCT t1a):bigint,t1b:smallint,t1c:int,t1d:bigint>
+-- !query 7 output
+1 6 8 10
+
+
+-- !query 8
+SELECT COUNT(DISTINCT(t1a)),
+ t1b,
+ t1c,
+ t1d
+FROM t1
+WHERE t1a NOT IN
+ (
+ SELECT t2a
+ FROM t2 INNER
+ JOIN t1
+ ON t1a = t2a)
+AND t1d NOT IN
+ (
+ SELECT t2d
+ FROM t2
+ INNER JOIN t3
+ ON t2b = t3b )
+GROUP BY t1b,
+ t1c,
+ t1d
+HAVING t1b < sum(t1c)
+-- !query 8 schema
+struct<count(DISTINCT t1a):bigint,t1b:smallint,t1c:int,t1d:bigint>
+-- !query 8 output
+1 6 8 10
--- End diff --
All the results are equivalent to the ones from DB2.
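For readers wondering why cross-engine verification matters here: NOT IN against a subquery that can yield NULL follows SQL's three-valued logic, which is the classic trap these tests guard against. A minimal sketch, using Python's sqlite3 as a stand-in engine (an assumption: DB2 and Spark follow the same standard NULL semantics shown here; the tables below are hypothetical, not the PR's fixtures):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (a TEXT);
    CREATE TABLE t2 (a TEXT);
    INSERT INTO t1 VALUES ('x'), ('y');
    INSERT INTO t2 VALUES ('x');
""")

# With no NULLs in the subquery, NOT IN behaves as most people expect.
r1 = conn.execute("SELECT a FROM t1 WHERE a NOT IN (SELECT a FROM t2)").fetchall()
print(r1)  # [('y',)]

# Add a NULL to t2: 'y' NOT IN ('x', NULL) evaluates to UNKNOWN (not TRUE),
# so the predicate filters out every row and the query returns nothing.
conn.execute("INSERT INTO t2 VALUES (NULL)")
r2 = conn.execute("SELECT a FROM t1 WHERE a NOT IN (SELECT a FROM t2)").fetchall()
print(r2)  # []
```

This is why the fixtures above deliberately seed NULLs into t1b, t1c, and the other columns: a correct optimizer rewrite of NOT IN must preserve the empty result in the second case.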
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16915
retest this please
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16915
**[Test build #72952 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72952/testReport)** for PR 16915 at commit [`3dd57fd`](https://github.com/apache/spark/commit/3dd57fd7017e173cb00a53280a41783be634d6fe).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.
[GitHub] spark issue #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/NOT IN s...
Posted by gatorsmile <gi...@git.apache.org>.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16915
OK to test
[GitHub] spark pull request #16915: [SPARK-18871][SQL][TESTS] New test cases for IN/N...
Posted by nsyca <gi...@git.apache.org>.
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/16915#discussion_r101374593
--- Diff: sql/core/src/test/resources/sql-tests/results/subquery/in-subquery/in-set-operations.sql.out ---
@@ -0,0 +1,595 @@
+-- Automatically generated by SQLQueryTestSuite
+-- Number of queries: 16
+
+
+-- !query 0
+create temporary view t1 as select * from values
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:00:00.000', date '2014-04-04'),
+ ("val1b", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1a", 16S, 12, 21L, float(15.0), 20D, 20E2, timestamp '2014-06-04 01:02:00.001', date '2014-06-04'),
+ ("val1a", 16S, 12, 10L, float(15.0), 20D, 20E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 8S, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:02:00.001', date '2014-05-05'),
+ ("val1d", null, 16, 22L, float(17.0), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', null),
+ ("val1d", null, 16, 19L, float(17.0), 25D, 26E2, timestamp '2014-07-04 01:02:00.001', null),
+ ("val1e", 10S, null, 25L, float(17.0), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-09-04 01:02:00.001', date '2014-09-04'),
+ ("val1d", 10S, null, 12L, float(17.0), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1a", 6S, 8, 10L, float(15.0), 20D, 20E2, timestamp '2014-04-04 01:02:00.001', date '2014-04-04'),
+ ("val1e", 10S, null, 19L, float(17.0), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04')
+ as t1(t1a, t1b, t1c, t1d, t1e, t1f, t1g, t1h, t1i)
+-- !query 0 schema
+struct<>
+-- !query 0 output
+
+
+
+-- !query 1
+create temporary view t2 as select * from values
+ ("val2a", 6S, 12, 14L, float(15), 20D, 20E2, timestamp '2014-04-04 01:01:00.000', date '2014-04-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 119L, float(17), 25D, 26E2, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),
+ ("val1c", 12S, 16, 219L, float(17), 25D, 26E2, timestamp '2016-05-04 01:01:00.000', date '2016-05-04'),
+ ("val1b", null, 16, 319L, float(17), 25D, 26E2, timestamp '2017-05-04 01:01:00.000', null),
+ ("val2e", 8S, null, 419L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1f", 19S, null, 519L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),
+ ("val1c", 12S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-08-04 01:01:00.000', date '2014-08-05'),
+ ("val1e", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:01:00.000', date '2014-09-04'),
+ ("val1f", 19S, null, 19L, float(17), 25D, 26E2, timestamp '2014-10-04 01:01:00.000', date '2014-10-04'),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:01:00.000', null)
+ as t2(t2a, t2b, t2c, t2d, t2e, t2f, t2g, t2h, t2i)
+-- !query 1 schema
+struct<>
+-- !query 1 output
+
+
+
+-- !query 2
+create temporary view t3 as select * from values
+ ("val3a", 6S, 12, 110L, float(15), 20D, 20E2, timestamp '2014-04-04 01:02:00.000', date '2014-04-04'),
+ ("val3a", 6S, 12, 10L, float(15), 20D, 20E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 219L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 10S, 12, 19L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val1b", 8S, 16, 319L, float(17), 25D, 26E2, timestamp '2014-06-04 01:02:00.000', date '2014-06-04'),
+ ("val1b", 8S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-07-04 01:02:00.000', date '2014-07-04'),
+ ("val3c", 17S, 16, 519L, float(17), 25D, 26E2, timestamp '2014-08-04 01:02:00.000', date '2014-08-04'),
+ ("val3c", 17S, 16, 19L, float(17), 25D, 26E2, timestamp '2014-09-04 01:02:00.000', date '2014-09-05'),
+ ("val1b", null, 16, 419L, float(17), 25D, 26E2, timestamp '2014-10-04 01:02:00.000', null),
+ ("val1b", null, 16, 19L, float(17), 25D, 26E2, timestamp '2014-11-04 01:02:00.000', null),
+ ("val3b", 8S, null, 719L, float(17), 25D, 26E2, timestamp '2014-05-04 01:02:00.000', date '2014-05-04'),
+ ("val3b", 8S, null, 19L, float(17), 25D, 26E2, timestamp '2015-05-04 01:02:00.000', date '2015-05-04')
+ as t3(t3a, t3b, t3c, t3d, t3e, t3f, t3g, t3h, t3i)
+-- !query 2 schema
+struct<>
+-- !query 2 output
+
+
+
+-- !query 3
+SELECT t2a,
+ t2b,
+ t2c,
+ t2h,
+ t2i
+FROM (SELECT *
+ FROM t2
+ WHERE t2a IN (SELECT t1a
+ FROM t1)
+ UNION ALL
+ SELECT *
+ FROM t3
+ WHERE t3a IN (SELECT t1a
+ FROM t1)) AS t3
+WHERE t2i IS NOT NULL AND
+ 2 * t2b = t2c
+ORDER BY t2c DESC nulls first
+-- !query 3 schema
+struct<t2a:string,t2b:smallint,t2c:int,t2h:timestamp,t2i:date>
+-- !query 3 output
+val1b 8 16 2015-05-04 01:01:00 2015-05-04
+val1b 8 16 2014-07-04 01:01:00 2014-07-04
+val1b 8 16 2014-06-04 01:02:00 2014-06-04
+val1b 8 16 2014-07-04 01:02:00 2014-07-04
+
+
+-- !query 4
+SELECT t2a,
+ t2b,
+ t2d,
+ Count(DISTINCT( t2h )),
+ t2i
+FROM (SELECT *
+ FROM t2
+ WHERE t2a IN (SELECT t1a
+ FROM t1
+ WHERE t2b = t1b)
+ UNION
+ SELECT *
+ FROM t1
+ WHERE t1a IN (SELECT t3a
+ FROM t3
+ WHERE t1c = t3c)) AS t3
+GROUP BY t2a,
+ t2b,
+ t2d,
+ t2i
+ORDER BY t2d DESC
+-- !query 4 schema
+struct<t2a:string,t2b:smallint,t2d:bigint,count(DISTINCT t2h):bigint,t2i:date>
+-- !query 4 output
+val1b 8 119 1 2015-05-04
+val1b 8 19 1 2014-07-04
+val1b 8 19 1 2014-05-04
+
+
+-- !query 5
+SELECT t2a,
+ t2b,
+ t2c,
+ Min(t2d)
+FROM t2
+WHERE t2a IN (SELECT t1a
+ FROM t1
+ WHERE t1b = t2b)
+GROUP BY t2a, t2b, t2c
+UNION ALL
+SELECT t2a,
+ t2b,
+ t2c,
+ Max(t2d)
+FROM t2
+WHERE t2a IN (SELECT t1a
+ FROM t1
+ WHERE t2c = t1c)
+GROUP BY t2a, t2b, t2c
+UNION
+SELECT t3a,
+ t3b,
+ t3c,
+ Min(t3d)
+FROM t3
+WHERE t3a IN (SELECT t2a
+ FROM t2
+ WHERE t3c = t2c)
+GROUP BY t3a, t3b, t3c
+UNION DISTINCT
+SELECT t1a,
+ t1b,
+ t1c,
+ Max(t1d)
+FROM t1
+WHERE t1a IN (SELECT t3a
+ FROM t3
+ WHERE t3d = t1d)
+GROUP BY t1a, t1b, t1c
+-- !query 5 schema
+struct<t2a:string,t2b:smallint,t2c:int,min(t2d):bigint>
+-- !query 5 output
+val1b 10 12 19
+val1b 8 16 119
+val1b 8 16 19
+val1b NULL 16 19
+val1b NULL 16 319
+val1c 12 16 219
+
+
+-- !query 6
+SELECT DISTINCT( t2a ),
+ t2b,
+ Count(t2c),
+ t2d,
+ t2h,
+ t2i
+FROM t2
+WHERE t2a IN (SELECT t1a
+ FROM t1
+ WHERE t1b = t2b)
+GROUP BY t2a,
+ t2b,
+ t2c,
+ t2d,
+ t2h,
+ t2i
+UNION
+SELECT DISTINCT( t2a ),
+ t2b,
+ Count(t2c),
+ t2d,
+ t2h,
+ t2i
+FROM t2
+WHERE t2a IN (SELECT t1a
+ FROM t1
+ WHERE t2c = t1c)
+GROUP BY t2a,
+ t2b,
+ t2c,
+ t2d,
+ t2h,
+ t2i
+HAVING t2b IS NOT NULL
+-- !query 6 schema
+struct<t2a:string,t2b:smallint,count(t2c):bigint,t2d:bigint,t2h:timestamp,t2i:date>
+-- !query 6 output
+val1b 8 1 119 2015-05-04 01:01:00 2015-05-04
+val1b 8 1 19 2014-07-04 01:01:00 2014-07-04
+val1c 12 1 19 2014-08-04 01:01:00 2014-08-05
+val1c 12 1 219 2016-05-04 01:01:00 2016-05-04
+
+
+-- !query 7
+SELECT t2a,
+ t2b,
+ Count(t2c),
+ t2d,
+ t2h,
+ t2i
+FROM t2
+WHERE t2a IN (SELECT DISTINCT(t1a)
+ FROM t1
+ WHERE t1b = t2b)
+GROUP BY t2a,
+ t2b,
+ t2c,
+ t2d,
+ t2h,
+ t2i
+
+UNION
+SELECT DISTINCT( t2a ),
+ t2b,
+ Count(t2c),
+ t2d,
+ t2h,
+ t2i
+FROM t2
+WHERE t2b IN (SELECT Max(t1b)
+ FROM t1
+ WHERE t2c = t1c)
+GROUP BY t2a,
+ t2b,
+ t2c,
+ t2d,
+ t2h,
+ t2i
+HAVING t2b IS NOT NULL
+UNION DISTINCT
+SELECT t2a,
+ t2b,
+ t2c,
+ t2d,
+ t2h,
+ t2i
+FROM t2
+WHERE t2d IN (SELECT Min(t1d)
+ FROM t1
+ WHERE t2c = t1c)
+-- !query 7 schema
+struct<t2a:string,t2b:smallint,count(t2c):bigint,t2d:bigint,t2h:timestamp,t2i:date>
+-- !query 7 output
+val1b 8 1 119 2015-05-04 01:01:00 2015-05-04
+val1b 8 1 19 2014-07-04 01:01:00 2014-07-04
+val1b 8 16 19 2014-07-04 01:01:00 2014-07-04
+val1b NULL 16 19 2014-05-04 01:01:00 NULL
+val1c 12 16 19 2014-08-04 01:01:00 2014-08-05
+
+
+-- !query 8
+SELECT t2a,
+ t2b,
+ t2c,
+ t2d
+FROM t2
+WHERE t2a IN (SELECT t1a
+ FROM t1
+ WHERE t1b = t2b AND
+ t1d < t2d)
+INTERSECT
+SELECT t2a,
+ t2b,
+ t2c,
+ t2d
+FROM t2
+WHERE t2b IN (SELECT Max(t1b)
+ FROM t1
+ WHERE t2c = t1c)
+EXCEPT
+SELECT t2a,
+ t2b,
+ t2c,
+ t2d
+FROM t2
+WHERE t2d IN (SELECT Min(t3d)
+ FROM t3
+ WHERE t2c = t3c)
+UNION ALL
+SELECT t2a,
+ t2b,
+ t2c,
+ t2d
+FROM t2
+WHERE t2c IN (SELECT Max(t1c)
+ FROM t1
+ WHERE t1d = t2d)
+-- !query 8 schema
+struct<t2a:string,t2b:smallint,t2c:int,t2d:bigint>
+-- !query 8 output
+val1b 8 16 119
+val1b 8 16 19
+val1b NULL 16 19
+val1c 12 16 19
+
+
+-- !query 9
+SELECT DISTINCT(t1a),
+ t1b,
+ t1c,
+ t1d
+FROM t1
+WHERE t1a IN (SELECT t3a
+ FROM (SELECT t2a t3a
+ FROM t2
+ UNION ALL
+ SELECT t2a t3a
+ FROM t2) AS t3
+ UNION
+ SELECT t2a
+ FROM (SELECT t2a
+ FROM t2
+ WHERE t2b > 6
+ UNION
+ SELECT t2a
+ FROM t2
+ WHERE t2b > 6) AS t4
+ UNION DISTINCT
+ SELECT t2a
+ FROM (SELECT t2a
+ FROM t2
+ WHERE t2b > 6
+ UNION DISTINCT
+ SELECT t1a
+ FROM t1
+ WHERE t1b > 6) AS t5)
+GROUP BY t1a, t1b, t1c, t1d
+HAVING t1c IS NOT NULL AND t1b IS NOT NULL
+ORDER BY t1c DESC
+-- !query 9 schema
+struct<t1a:string,t1b:smallint,t1c:int,t1d:bigint>
+-- !query 9 output
+val1c 8 16 19
+val1b 8 16 19
+val1a 16 12 21
+val1a 16 12 10
+val1a 6 8 10
+
+
+-- !query 10
+SELECT t1a,
+ t1b,
+ t1c
+FROM t1
+WHERE t1b IN (SELECT t2b
+ FROM (SELECT t2b
+ FROM t2
+ WHERE t2b > 6
+ INTERSECT
+ SELECT t1b
+ FROM t1
+ WHERE t1b > 6) AS t3
+ WHERE t2b = t1b)
+-- !query 10 schema
+struct<t1a:string,t1b:smallint,t1c:int>
+-- !query 10 output
+val1b 8 16
+val1c 8 16
+val1d 10 NULL
+val1e 10 NULL
+val1e 10 NULL
+val1e 10 NULL
+
+
+-- !query 11
+SELECT t1a,
+ t1b,
+ t1c
+FROM t1
+WHERE t1h IN (SELECT t2h
+ FROM (SELECT t2h
+ FROM t2
+ EXCEPT
+ SELECT t3h
+ FROM t3) AS t3)
+ORDER BY t1b DESC NULLS FIRST, t1c DESC NULLS LAST
+-- !query 11 schema
+struct<t1a:string,t1b:smallint,t1c:int>
+-- !query 11 output
+val1d NULL 16
+val1a 16 12
+val1e 10 NULL
+val1d 10 NULL
+val1e 10 NULL
+val1b 8 16
+
+
+-- !query 12
+SELECT t1a,
+ t1b,
+ t1c
+FROM t1
+WHERE t1b IN
+ (
+ SELECT t2b
+ FROM (
+ SELECT t2b
+ FROM t2
+ WHERE t2b > 6
+ INTERSECT
+ SELECT t1b
+ FROM t1
+ WHERE t1b > 6) AS t3)
+UNION DISTINCT
+SELECT t1a,
+ t1b,
+ t1c
+FROM t1
+WHERE t1b IN
+ (
+ SELECT t2b
+ FROM (
+ SELECT t2b
+ FROM t2
+ WHERE t2b > 6
+ EXCEPT
+ SELECT t1b
+ FROM t1
+ WHERE t1b > 6) AS t4
+ WHERE t2b = t1b)
+ORDER BY t1c DESC NULLS LAST
+-- !query 12 schema
+struct<t1a:string,t1b:smallint,t1c:int>
+-- !query 12 output
+val1c 8 16
+val1b 8 16
+val1e 10 NULL
+val1d 10 NULL
+
+
+-- !query 13
+SELECT *
+FROM (SELECT *
+ FROM (SELECT *
+ FROM t2
+ WHERE t2h IN (SELECT t1h
+ FROM t1
+ WHERE t1a = t2a)
+ UNION DISTINCT
+ SELECT *
+ FROM t1
+ WHERE t1h IN (SELECT t3h
+ FROM t3
+ UNION
+ SELECT t1h
+ FROM t1)
+ UNION
+ SELECT *
+ FROM t3
+ WHERE t3a IN (SELECT t2a
+ FROM t2
+ UNION ALL
+ SELECT t1a
+ FROM t1
+ WHERE t1b > 0)
+ INTERSECT
+ SELECT *
+ FROM t1
+ WHERE t1b IN (SELECT t3b
+ FROM t3
+ UNION DISTINCT
+ SELECT t2b
+ FROM t2
+ )
+ EXCEPT
+ SELECT *
+ FROM t2
+ WHERE t2h IN (SELECT t1i
+ FROM t1)) t4
+ WHERE t4.t2b IN (SELECT Min(t3b)
+ FROM t3
+ WHERE t4.t2a = t3a))
+-- !query 13 schema
+struct<t2a:string,t2b:smallint,t2c:int,t2d:bigint,t2e:float,t2f:double,t2g:decimal(2,-2),t2h:timestamp,t2i:date>
+-- !query 13 output
+val1b 8 16 19 17.0 25.0 2600 2014-05-04 01:01:00 2014-05-04
+
+
+-- !query 14
+SELECT t2a,
+ t2b,
+ t2c,
+ t2i
+FROM (SELECT *
+ FROM t2
+ WHERE t2a NOT IN (SELECT t1a
+ FROM t1
+ UNION
+ SELECT t3a
+ FROM t3)
+ UNION ALL
+ SELECT *
+ FROM t2
+ WHERE t2a NOT IN (SELECT t1a
+ FROM t1
+ INTERSECT
+ SELECT t2a
+ FROM t2)) AS t3
+WHERE t3.t2a NOT IN (SELECT t1a
+ FROM t1
+ INTERSECT
+ SELECT t2a
+ FROM t2)
+ AND t2c IS NOT NULL
+ORDER BY t2a
+-- !query 14 schema
+struct<t2a:string,t2b:smallint,t2c:int,t2i:date>
+-- !query 14 output
+val2a 6 12 2014-04-04
+val2a 6 12 2014-04-04
+
+
+-- !query 15
+SELECT Count(DISTINCT(t1a)),
+ t1b,
+ t1c,
+ t1i
+FROM t1
+WHERE t1b NOT IN
+ (
+ SELECT t2b
+ FROM (
+ SELECT t2b
+ FROM t2
+ WHERE t2b NOT IN
+ (
+ SELECT t1b
+ FROM t1)
+ UNION
+ SELECT t1b
+ FROM t1
+ WHERE t1b NOT IN
+ (
+ SELECT t3b
+ FROM t3)
+ UNION DISTINCT
+ SELECT t3b
+ FROM t3
+ WHERE t3b NOT IN
+ (
+ SELECT t2b
+ FROM t2)) AS t3
+ WHERE t2b = t1b)
+GROUP BY t1a,
+ t1b,
+ t1c,
+ t1i
+HAVING t1b NOT IN
+ (
+ SELECT t2b
+ FROM t2
+ WHERE t2c IS NULL
+ EXCEPT
+ SELECT t3b
+ FROM t3)
+ORDER BY t1c DESC NULLS LAST
+-- !query 15 schema
+struct<count(DISTINCT t1a):bigint,t1b:smallint,t1c:int,t1i:date>
+-- !query 15 output
+1 8 16 2014-05-05
+1 8 16 2014-05-04
+1 16 12 2014-06-04
+1 16 12 2014-07-04
+1 6 8 2014-04-04
+1 10 NULL 2014-08-04
+1 10 NULL 2014-09-04
+1 10 NULL 2015-05-04
+1 10 NULL 2014-05-04
--- End diff --
All the results are equivalent to the ones from DB2.
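The equivalence check above amounts to comparing each Spark golden-file result block against the corresponding DB2 output. A minimal sketch of such a comparison is below; the helper names (`parse_rows`, `results_match`) are hypothetical, not part of the Spark test harness, and it assumes one whitespace-separated row per line. Rows are compared as multisets, since queries without an ORDER BY do not guarantee row order.

```python
def parse_rows(text):
    """Split a result block into a list of tuples of column values."""
    return [tuple(line.split()) for line in text.strip().splitlines() if line.strip()]

def results_match(spark_out, db2_out, ordered=False):
    """Compare two result blocks; ignore row order unless 'ordered' is set."""
    spark_rows, db2_rows = parse_rows(spark_out), parse_rows(db2_out)
    if ordered:
        return spark_rows == db2_rows
    return sorted(spark_rows) == sorted(db2_rows)

spark = """val1b 8 16 119
val1b 8 16 19"""
db2 = """val1b 8 16 19
val1b 8 16 119"""
print(results_match(spark, db2))                # True: same rows, any order
print(results_match(spark, db2, ordered=True))  # False: order differs
```

For the queries here that do end with ORDER BY (e.g. query 4 and query 9), the `ordered=True` form would be the appropriate check.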