Posted to commits@impala.apache.org by jo...@apache.org on 2021/09/01 18:04:08 UTC

[impala] branch master updated (beb8019 -> 45d3edd)

This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git.


    from beb8019  IMPALA-10888: sort partitions by name before comparing in test
     new 4f9f8c3  IMPALA-10840: Add support for "FOR SYSTEM_TIME AS OF" and "FOR SYSTEM_VERSION AS OF" for Iceberg tables
     new 731bb80  IMPALA-10885: Deflake test_get_table_req_without_fallback
     new 8a64397  IMPALA-10884: Improve pretty-printing of fragment instance name
     new 45d3edd  IMPALA-8680: Docker-based tests fail to archive the minicluster component logs

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 be/src/common/init.cc                              |   6 +-
 be/src/util/runtime-profile.cc                     |   8 +-
 docker/entrypoint.sh                               |  87 +++++++++---
 fe/src/main/cup/sql-parser.cup                     |  52 ++++---
 .../analysis/AlterTableSetTblProperties.java       |   2 +-
 .../org/apache/impala/analysis/BaseTableRef.java   |   6 +-
 .../java/org/apache/impala/analysis/ColumnDef.java |   3 +-
 .../org/apache/impala/analysis/RangePartition.java |   3 +-
 .../java/org/apache/impala/analysis/TableRef.java  |  40 +++++-
 .../org/apache/impala/analysis/TimeTravelSpec.java | 149 +++++++++++++++++++++
 .../org/apache/impala/catalog/FeIcebergTable.java  |   4 +-
 .../org/apache/impala/planner/IcebergScanNode.java |  61 +++++++--
 .../org/apache/impala/planner/KuduScanNode.java    |   6 +-
 .../main/java/org/apache/impala/util/ExprUtil.java |  68 ++++++++++
 .../java/org/apache/impala/util/IcebergUtil.java   |  26 +++-
 .../main/java/org/apache/impala/util/KuduUtil.java |  17 ---
 fe/src/main/jflex/sql-scanner.flex                 |   3 +
 .../apache/impala/analysis/AnalyzeStmtsTest.java   |  37 ++++-
 .../org/apache/impala/analysis/ParserTest.java     |  23 ++++
 ...le_log_tpcds_compute_stats_default.expected.txt |  48 ++++++-
 ...e_log_tpcds_compute_stats_extended.expected.txt |  48 ++++++-
 ...log_tpcds_compute_stats_v2_default.expected.txt |  48 ++++++-
 ...og_tpcds_compute_stats_v2_extended.expected.txt |  48 ++++++-
 .../queries/QueryTest/iceberg-negative.test        |  10 ++
 tests/custom_cluster/test_metastore_service.py     |   7 +-
 tests/query_test/test_iceberg.py                   | 124 +++++++++++++++++
 26 files changed, 823 insertions(+), 111 deletions(-)
 create mode 100644 fe/src/main/java/org/apache/impala/analysis/TimeTravelSpec.java
 create mode 100644 fe/src/main/java/org/apache/impala/util/ExprUtil.java

[impala] 02/04: IMPALA-10885: Deflake test_get_table_req_without_fallback

Posted by jo...@apache.org.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 731bb8029e9511db609206aaa5c3693f26caf8e4
Author: Vihang Karajgaonkar <vi...@apache.org>
AuthorDate: Mon Aug 30 15:41:10 2021 -0700

    IMPALA-10885: Deflake test_get_table_req_without_fallback
    
    The test was originally written when event processing
    was not turned on by default. However, after
    IMPALA-8795, event processing is on by default and
    this test fails intermittently.

    The error was reproduced by adding a simple
    sleep statement in the test just before issuing a
    get_table API call that was expected to fail.

    Testing:
    1. Looped the test 25 times with the change and with
    the sleep statement that reproduced the failure.
    
    Change-Id: I684ec07cc23617d64355df25420c45b0cbedd5a3
    Reviewed-on: http://gerrit.cloudera.org:8080/17817
    Reviewed-by: Vihang Karajgaonkar <vi...@cloudera.com>
    Reviewed-by: Quanlong Huang <hu...@gmail.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 tests/custom_cluster/test_metastore_service.py | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/tests/custom_cluster/test_metastore_service.py b/tests/custom_cluster/test_metastore_service.py
index eabe9a2..62585c4 100644
--- a/tests/custom_cluster/test_metastore_service.py
+++ b/tests/custom_cluster/test_metastore_service.py
@@ -223,13 +223,16 @@ class TestMetastoreService(CustomClusterTestSuite):
         catalogd_args="--catalog_topic_mode=minimal "
                       "--start_hms_server=true "
                       "--hms_port=5899 "
-                      "--fallback_to_hms_on_errors=false"
+                      "--fallback_to_hms_on_errors=false "
+                      "--hms_event_polling_interval_s=0"
     )
     def test_get_table_req_without_fallback(self):
       """
       Test the get_table_req APIs with fallback to HMS enabled. These calls
       throw exceptions since we do not fallback to HMS if the db/table is not
-      in Catalog cache.
+      in Catalog cache. We specifically disable events polling because this test
+      exercises the error code paths when table is not found in the catalogd. With
+      events processing turned on, it leads to flakiness.
       """
       catalog_hms_client = None
       db_name = ImpalaTestSuite.get_random_name(
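The race the commit message describes can be illustrated with a minimal, self-contained Python sketch (all names here are hypothetical stand-ins, not Impala code): a background "event processor" may populate the catalog cache at any time, so a test that expects a lookup to fail becomes flaky once a delay is introduced; disabling the background processing makes the error path deterministic, analogous to setting hms_event_polling_interval_s=0.

```python
import threading
import time

class CatalogCache:
    """Hypothetical stand-in for the catalogd table cache."""

    def __init__(self, event_polling_enabled):
        self._tables = set()
        self._lock = threading.Lock()
        if event_polling_enabled:
            # Background event processor may add the table at any moment,
            # mirroring hms_event_polling_interval_s > 0.
            threading.Thread(target=self._process_events, daemon=True).start()

    def _process_events(self):
        time.sleep(0.05)  # an HMS event arrives after a short delay
        with self._lock:
            self._tables.add("test_db.test_tbl")

    def get_table(self, name):
        with self._lock:
            if name not in self._tables:
                raise LookupError("table not found")  # the error path under test
            return name

def run_lookup(cache):
    # A sleep before the "expected to fail" call, as in the repro,
    # widens the window for the background thread to win the race.
    time.sleep(0.3)
    try:
        cache.get_table("test_db.test_tbl")
        return "unexpected success"
    except LookupError:
        return "failed as expected"

outcome_flaky = run_lookup(CatalogCache(event_polling_enabled=True))
outcome_stable = run_lookup(CatalogCache(event_polling_enabled=False))
print(outcome_flaky, "/", outcome_stable)
```

With polling enabled the lookup unexpectedly succeeds; with it disabled the test's expected exception is raised every time, which is the deflaking strategy the patch applies.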

[impala] 03/04: IMPALA-10884: Improve pretty-printing of fragment instance name

Posted by jo...@apache.org.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 8a6439715f3d7780a408fe97b5f80a7a9a0c57c8
Author: Riza Suminto <ri...@cloudera.com>
AuthorDate: Wed Aug 25 22:59:05 2021 -0700

    IMPALA-10884: Improve pretty-printing of fragment instance name
    
    The dense format of the runtime profile prints instance names on a
    single long line, which makes them hard to read, especially when a
    fragment has many instances. This patch fixes the issue by breaking
    the list into multiple lines, one line per instance. We also prefix
    each instance name with an index number for easy matching against
    pretty-printed counters, events, and info strings.
    
    Testing:
    - Fix and pass observability/test_profile_tool.py.
    - Manually verify that impala-profile-tool prints the instance names in
      multiple lines.
    
    Change-Id: I03908ed2b29e43e133bff92c0d6480f8c5342f31
    Reviewed-on: http://gerrit.cloudera.org:8080/17816
    Reviewed-by: Joe McDonnell <jo...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 be/src/util/runtime-profile.cc                     |  8 ++--
 ...le_log_tpcds_compute_stats_default.expected.txt | 48 +++++++++++++++++++---
 ...e_log_tpcds_compute_stats_extended.expected.txt | 48 +++++++++++++++++++---
 ...log_tpcds_compute_stats_v2_default.expected.txt | 48 +++++++++++++++++++---
 ...og_tpcds_compute_stats_v2_extended.expected.txt | 48 +++++++++++++++++++---
 5 files changed, 176 insertions(+), 24 deletions(-)

diff --git a/be/src/util/runtime-profile.cc b/be/src/util/runtime-profile.cc
index b6a200d..1585f8b 100644
--- a/be/src/util/runtime-profile.cc
+++ b/be/src/util/runtime-profile.cc
@@ -2639,10 +2639,10 @@ void AggregatedRuntimeProfile::PrettyPrintInfoStrings(
   {
     lock_guard<SpinLock> l(input_profile_name_lock_);
     if (!input_profile_names_.empty()) {
-      // TODO: IMPALA-9846: improve pretty-printing here
-      stream << prefix
-             << "Instances: " << boost::algorithm::join(input_profile_names_, ", ")
-             << endl;
+      stream << prefix << "Instances:" << endl;
+      for (int i = 0; i < input_profile_names_.size(); ++i) {
+          stream << prefix << "  [" << i << "] " << input_profile_names_[i] << endl;
+      }
     }
   }
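The formatting change in the C++ hunk above can be sketched in a few lines of Python (the sample names below are illustrative, not taken from a real profile): the old code emitted one comma-joined line, while the new code emits a header followed by one indexed line per instance, where the `[i]` index lines up with the `[0]`, `[1]`, ... suffixes on pretty-printed counters and events.

```python
names = [
    "Instance 454fb5fa46498311:3afee3190000000d (host=example-host:27001)",
    "Instance 454fb5fa46498311:3afee3190000000e (host=example-host:27001)",
    "Instance 454fb5fa46498311:3afee3190000000f (host=example-host:27002)",
]
prefix = "    "

# Old format: one long comma-joined line, hard to scan with many instances.
old = prefix + "Instances: " + ", ".join(names)

# New format: header line, then one indexed line per instance.
lines = [prefix + "Instances:"]
lines += ["%s  [%d] %s" % (prefix, i, n) for i, n in enumerate(names)]
new = "\n".join(lines)
print(new)
```

The expected-file hunks that follow show exactly this transformation applied to the recorded TPC-DS profiles.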
 
diff --git a/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_default.expected.txt b/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_default.expected.txt
index fea852a..6f58622 100644
--- a/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_default.expected.txt
+++ b/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_default.expected.txt
@@ -513,7 +513,8 @@ F00:EXCHANGE SENDER        3     12    1.666ms   3.999ms                     166
       completion times: min:1s203ms  max:1s203ms  mean: 1s203ms  stddev:0.000ns
       execution rates: min:0.00 /sec  max:0.00 /sec  mean:0.00 /sec  stddev:0.00 /sec
       num instances: 1
-    Instances: Instance 454fb5fa46498311:3afee31900000000 (host=tarmstrong-Precision-7540:27000)
+    Instances:
+      [0] Instance 454fb5fa46498311:3afee31900000000 (host=tarmstrong-Precision-7540:27000)
       Last report received time[0]: 2020-12-15 20:07:23.281
        - MemoryUsage[0] (500.000ms): 8.00 KB, 8.00 KB, 8.00 KB
       Fragment Instance Lifecycle Event Timeline[0]: 1s151ms
@@ -717,7 +718,19 @@ F00:EXCHANGE SENDER        3     12    1.666ms   3.999ms                     166
       completion times: min:1s191ms  max:1s203ms  mean: 1s197ms  stddev:4.149ms
       execution rates: min:0.00 /sec  max:0.00 /sec  mean:0.00 /sec  stddev:0.00 /sec
       num instances: 12
-    Instances: Instance 454fb5fa46498311:3afee3190000000d (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee3190000000e (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee3190000000f (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000010 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000011 (host=tarmstrong-Precision-7540:27000), Instance 454fb5fa46498311:3afee31900000012 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 454fb5fa46498311:3afee3190000000d (host=tarmstrong-Precision-7540:27001)
+      [1] Instance 454fb5fa46498311:3afee3190000000e (host=tarmstrong-Precision-7540:27001)
+      [2] Instance 454fb5fa46498311:3afee3190000000f (host=tarmstrong-Precision-7540:27001)
+      [3] Instance 454fb5fa46498311:3afee31900000010 (host=tarmstrong-Precision-7540:27001)
+      [4] Instance 454fb5fa46498311:3afee31900000011 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 454fb5fa46498311:3afee31900000012 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 454fb5fa46498311:3afee31900000013 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 454fb5fa46498311:3afee31900000014 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 454fb5fa46498311:3afee31900000015 (host=tarmstrong-Precision-7540:27002)
+      [9] Instance 454fb5fa46498311:3afee31900000016 (host=tarmstrong-Precision-7540:27002)
+      [10] Instance 454fb5fa46498311:3afee31900000017 (host=tarmstrong-Precision-7540:27002)
+      [11] Instance 454fb5fa46498311:3afee31900000018 (host=tarmstrong-Precision-7540:27002)
       Last report received time[8-11]: 2020-12-15 20:07:23.272
       Last report received time[0-3]: 2020-12-15 20:07:23.276
       Last report received time[4-7]: 2020-12-15 20:07:23.281
@@ -3092,7 +3105,19 @@ F00:EXCHANGE SENDER        3     12    1.666ms   3.999ms                     166
       completion times: min:1s191ms  max:1s203ms  mean: 1s197ms  stddev:4.149ms
       execution rates: min:13.22 MB/sec  max:17.54 MB/sec  mean:13.70 MB/sec  stddev:1.16 MB/sec
       num instances: 12
-    Instances: Instance 454fb5fa46498311:3afee31900000001 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000002 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000003 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000004 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000005 (host=tarmstrong-Precision-7540:27000), Instance 454fb5fa46498311:3afee31900000006 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 454fb5fa46498311:3afee31900000001 (host=tarmstrong-Precision-7540:27001)
+      [1] Instance 454fb5fa46498311:3afee31900000002 (host=tarmstrong-Precision-7540:27001)
+      [2] Instance 454fb5fa46498311:3afee31900000003 (host=tarmstrong-Precision-7540:27001)
+      [3] Instance 454fb5fa46498311:3afee31900000004 (host=tarmstrong-Precision-7540:27001)
+      [4] Instance 454fb5fa46498311:3afee31900000005 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 454fb5fa46498311:3afee31900000006 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 454fb5fa46498311:3afee31900000007 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 454fb5fa46498311:3afee31900000008 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 454fb5fa46498311:3afee31900000009 (host=tarmstrong-Precision-7540:27002)
+      [9] Instance 454fb5fa46498311:3afee3190000000a (host=tarmstrong-Precision-7540:27002)
+      [10] Instance 454fb5fa46498311:3afee3190000000b (host=tarmstrong-Precision-7540:27002)
+      [11] Instance 454fb5fa46498311:3afee3190000000c (host=tarmstrong-Precision-7540:27002)
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[11]: 0:152/15.76 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[9]: 0:152/15.86 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[0]: 0:152/15.87 MB
@@ -6261,7 +6286,8 @@ F00:EXCHANGE SENDER        3     12    0.000ns    0.000ns
       completion times: min:3s863ms  max:3s863ms  mean: 3s863ms  stddev:0.000ns
       execution rates: min:0.00 /sec  max:0.00 /sec  mean:0.00 /sec  stddev:0.00 /sec
       num instances: 1
-    Instances: Instance da4f8f953162a9b0:9a4d93eb00000000 (host=tarmstrong-Precision-7540:27000)
+    Instances:
+      [0] Instance da4f8f953162a9b0:9a4d93eb00000000 (host=tarmstrong-Precision-7540:27000)
       Last report received time[0]: 2020-12-15 20:07:27.327
        - MemoryUsage[0] (500.000ms): 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB
       Fragment Instance Lifecycle Event Timeline[0]: 3s819ms
@@ -6504,7 +6530,19 @@ F00:EXCHANGE SENDER        3     12    0.000ns    0.000ns
       completion times: min:3s731ms  max:3s863ms  mean: 3s816ms  stddev:58.197ms
       execution rates: min:4.12 MB/sec  max:5.47 MB/sec  mean:4.30 MB/sec  stddev:365.33 KB/sec
       num instances: 12
-    Instances: Instance da4f8f953162a9b0:9a4d93eb00000001 (host=tarmstrong-Precision-7540:27001), Instance da4f8f953162a9b0:9a4d93eb00000002 (host=tarmstrong-Precision-7540:27001), Instance da4f8f953162a9b0:9a4d93eb00000003 (host=tarmstrong-Precision-7540:27001), Instance da4f8f953162a9b0:9a4d93eb00000004 (host=tarmstrong-Precision-7540:27001), Instance da4f8f953162a9b0:9a4d93eb00000005 (host=tarmstrong-Precision-7540:27000), Instance da4f8f953162a9b0:9a4d93eb00000006 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance da4f8f953162a9b0:9a4d93eb00000001 (host=tarmstrong-Precision-7540:27001)
+      [1] Instance da4f8f953162a9b0:9a4d93eb00000002 (host=tarmstrong-Precision-7540:27001)
+      [2] Instance da4f8f953162a9b0:9a4d93eb00000003 (host=tarmstrong-Precision-7540:27001)
+      [3] Instance da4f8f953162a9b0:9a4d93eb00000004 (host=tarmstrong-Precision-7540:27001)
+      [4] Instance da4f8f953162a9b0:9a4d93eb00000005 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance da4f8f953162a9b0:9a4d93eb00000006 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance da4f8f953162a9b0:9a4d93eb00000007 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance da4f8f953162a9b0:9a4d93eb00000008 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance da4f8f953162a9b0:9a4d93eb00000009 (host=tarmstrong-Precision-7540:27002)
+      [9] Instance da4f8f953162a9b0:9a4d93eb0000000a (host=tarmstrong-Precision-7540:27002)
+      [10] Instance da4f8f953162a9b0:9a4d93eb0000000b (host=tarmstrong-Precision-7540:27002)
+      [11] Instance da4f8f953162a9b0:9a4d93eb0000000c (host=tarmstrong-Precision-7540:27002)
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[11]: 0:152/15.76 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[9]: 0:152/15.86 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[0]: 0:152/15.87 MB
diff --git a/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_extended.expected.txt b/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_extended.expected.txt
index b320057..327ead6 100644
--- a/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_extended.expected.txt
+++ b/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_extended.expected.txt
@@ -513,7 +513,8 @@ F00:EXCHANGE SENDER        3     12    1.666ms   3.999ms                     166
       completion times: min:1s203ms  max:1s203ms  mean: 1s203ms  stddev:0.000ns
       execution rates: min:0.00 /sec  max:0.00 /sec  mean:0.00 /sec  stddev:0.00 /sec
       num instances: 1
-    Instances: Instance 454fb5fa46498311:3afee31900000000 (host=tarmstrong-Precision-7540:27000)
+    Instances:
+      [0] Instance 454fb5fa46498311:3afee31900000000 (host=tarmstrong-Precision-7540:27000)
       Last report received time[0]: 2020-12-15 20:07:23.281
        - MemoryUsage[0] (500.000ms): 8.00 KB, 8.00 KB, 8.00 KB
       Fragment Instance Lifecycle Event Timeline[0]: 1s151ms
@@ -717,7 +718,19 @@ F00:EXCHANGE SENDER        3     12    1.666ms   3.999ms                     166
       completion times: min:1s191ms  max:1s203ms  mean: 1s197ms  stddev:4.149ms
       execution rates: min:0.00 /sec  max:0.00 /sec  mean:0.00 /sec  stddev:0.00 /sec
       num instances: 12
-    Instances: Instance 454fb5fa46498311:3afee3190000000d (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee3190000000e (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee3190000000f (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000010 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000011 (host=tarmstrong-Precision-7540:27000), Instance 454fb5fa46498311:3afee31900000012 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 454fb5fa46498311:3afee3190000000d (host=tarmstrong-Precision-7540:27001)
+      [1] Instance 454fb5fa46498311:3afee3190000000e (host=tarmstrong-Precision-7540:27001)
+      [2] Instance 454fb5fa46498311:3afee3190000000f (host=tarmstrong-Precision-7540:27001)
+      [3] Instance 454fb5fa46498311:3afee31900000010 (host=tarmstrong-Precision-7540:27001)
+      [4] Instance 454fb5fa46498311:3afee31900000011 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 454fb5fa46498311:3afee31900000012 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 454fb5fa46498311:3afee31900000013 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 454fb5fa46498311:3afee31900000014 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 454fb5fa46498311:3afee31900000015 (host=tarmstrong-Precision-7540:27002)
+      [9] Instance 454fb5fa46498311:3afee31900000016 (host=tarmstrong-Precision-7540:27002)
+      [10] Instance 454fb5fa46498311:3afee31900000017 (host=tarmstrong-Precision-7540:27002)
+      [11] Instance 454fb5fa46498311:3afee31900000018 (host=tarmstrong-Precision-7540:27002)
       Last report received time[8-11]: 2020-12-15 20:07:23.272
       Last report received time[0-3]: 2020-12-15 20:07:23.276
       Last report received time[4-7]: 2020-12-15 20:07:23.281
@@ -3290,7 +3303,19 @@ F00:EXCHANGE SENDER        3     12    1.666ms   3.999ms                     166
       completion times: min:1s191ms  max:1s203ms  mean: 1s197ms  stddev:4.149ms
       execution rates: min:13.22 MB/sec  max:17.54 MB/sec  mean:13.70 MB/sec  stddev:1.16 MB/sec
       num instances: 12
-    Instances: Instance 454fb5fa46498311:3afee31900000001 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000002 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000003 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000004 (host=tarmstrong-Precision-7540:27001), Instance 454fb5fa46498311:3afee31900000005 (host=tarmstrong-Precision-7540:27000), Instance 454fb5fa46498311:3afee31900000006 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 454fb5fa46498311:3afee31900000001 (host=tarmstrong-Precision-7540:27001)
+      [1] Instance 454fb5fa46498311:3afee31900000002 (host=tarmstrong-Precision-7540:27001)
+      [2] Instance 454fb5fa46498311:3afee31900000003 (host=tarmstrong-Precision-7540:27001)
+      [3] Instance 454fb5fa46498311:3afee31900000004 (host=tarmstrong-Precision-7540:27001)
+      [4] Instance 454fb5fa46498311:3afee31900000005 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 454fb5fa46498311:3afee31900000006 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 454fb5fa46498311:3afee31900000007 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 454fb5fa46498311:3afee31900000008 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 454fb5fa46498311:3afee31900000009 (host=tarmstrong-Precision-7540:27002)
+      [9] Instance 454fb5fa46498311:3afee3190000000a (host=tarmstrong-Precision-7540:27002)
+      [10] Instance 454fb5fa46498311:3afee3190000000b (host=tarmstrong-Precision-7540:27002)
+      [11] Instance 454fb5fa46498311:3afee3190000000c (host=tarmstrong-Precision-7540:27002)
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[11]: 0:152/15.76 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[9]: 0:152/15.86 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[0]: 0:152/15.87 MB
@@ -6763,7 +6788,8 @@ F00:EXCHANGE SENDER        3     12    0.000ns    0.000ns
       completion times: min:3s863ms  max:3s863ms  mean: 3s863ms  stddev:0.000ns
       execution rates: min:0.00 /sec  max:0.00 /sec  mean:0.00 /sec  stddev:0.00 /sec
       num instances: 1
-    Instances: Instance da4f8f953162a9b0:9a4d93eb00000000 (host=tarmstrong-Precision-7540:27000)
+    Instances:
+      [0] Instance da4f8f953162a9b0:9a4d93eb00000000 (host=tarmstrong-Precision-7540:27000)
       Last report received time[0]: 2020-12-15 20:07:27.327
        - MemoryUsage[0] (500.000ms): 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB
       Fragment Instance Lifecycle Event Timeline[0]: 3s819ms
@@ -7006,7 +7032,19 @@ F00:EXCHANGE SENDER        3     12    0.000ns    0.000ns
       completion times: min:3s731ms  max:3s863ms  mean: 3s816ms  stddev:58.197ms
       execution rates: min:4.12 MB/sec  max:5.47 MB/sec  mean:4.30 MB/sec  stddev:365.33 KB/sec
       num instances: 12
-    Instances: Instance da4f8f953162a9b0:9a4d93eb00000001 (host=tarmstrong-Precision-7540:27001), Instance da4f8f953162a9b0:9a4d93eb00000002 (host=tarmstrong-Precision-7540:27001), Instance da4f8f953162a9b0:9a4d93eb00000003 (host=tarmstrong-Precision-7540:27001), Instance da4f8f953162a9b0:9a4d93eb00000004 (host=tarmstrong-Precision-7540:27001), Instance da4f8f953162a9b0:9a4d93eb00000005 (host=tarmstrong-Precision-7540:27000), Instance da4f8f953162a9b0:9a4d93eb00000006 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance da4f8f953162a9b0:9a4d93eb00000001 (host=tarmstrong-Precision-7540:27001)
+      [1] Instance da4f8f953162a9b0:9a4d93eb00000002 (host=tarmstrong-Precision-7540:27001)
+      [2] Instance da4f8f953162a9b0:9a4d93eb00000003 (host=tarmstrong-Precision-7540:27001)
+      [3] Instance da4f8f953162a9b0:9a4d93eb00000004 (host=tarmstrong-Precision-7540:27001)
+      [4] Instance da4f8f953162a9b0:9a4d93eb00000005 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance da4f8f953162a9b0:9a4d93eb00000006 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance da4f8f953162a9b0:9a4d93eb00000007 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance da4f8f953162a9b0:9a4d93eb00000008 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance da4f8f953162a9b0:9a4d93eb00000009 (host=tarmstrong-Precision-7540:27002)
+      [9] Instance da4f8f953162a9b0:9a4d93eb0000000a (host=tarmstrong-Precision-7540:27002)
+      [10] Instance da4f8f953162a9b0:9a4d93eb0000000b (host=tarmstrong-Precision-7540:27002)
+      [11] Instance da4f8f953162a9b0:9a4d93eb0000000c (host=tarmstrong-Precision-7540:27002)
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[11]: 0:152/15.76 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[9]: 0:152/15.86 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[0]: 0:152/15.87 MB
diff --git a/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_v2_default.expected.txt b/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_v2_default.expected.txt
index 9d15fbc..13c42fd 100644
--- a/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_v2_default.expected.txt
+++ b/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_v2_default.expected.txt
@@ -509,7 +509,8 @@ F00:EXCHANGE SENDER        3     12    1.356ms    6.539ms                      1
              - PrepareTime: 27.139ms
              - TotalTime: 112.612ms
     Coordinator Fragment F02:
-    Instances: Instance 874b65988023a32b:71d209e900000000 (host=tarmstrong-Precision-7540:27000)
+    Instances:
+      [0] Instance 874b65988023a32b:71d209e900000000 (host=tarmstrong-Precision-7540:27000)
       Last report received time[0]: 2021-02-09 08:53:03.675
        - MemoryUsage[0] (500.000ms): 8.00 KB, 8.00 KB, 8.00 KB
       Fragment Instance Lifecycle Event Timeline[0]: 1s208ms
@@ -608,7 +609,19 @@ F00:EXCHANGE SENDER        3     12    1.356ms    6.539ms                      1
            - TotalRPCsDeferred: 0 (0)
            - TotalTime: 0.000ns
     Fragment F01 [12 instances]:
-    Instances: Instance 874b65988023a32b:71d209e90000000d (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e90000000e (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e90000000f (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000010 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000011 (host=tarmstrong-Precision-7540:27000), Instance 874b65988023a32b:71d209e900000012 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 874b65988023a32b:71d209e90000000d (host=tarmstrong-Precision-7540:27002)
+      [1] Instance 874b65988023a32b:71d209e90000000e (host=tarmstrong-Precision-7540:27002)
+      [2] Instance 874b65988023a32b:71d209e90000000f (host=tarmstrong-Precision-7540:27002)
+      [3] Instance 874b65988023a32b:71d209e900000010 (host=tarmstrong-Precision-7540:27002)
+      [4] Instance 874b65988023a32b:71d209e900000011 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 874b65988023a32b:71d209e900000012 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 874b65988023a32b:71d209e900000013 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 874b65988023a32b:71d209e900000014 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 874b65988023a32b:71d209e900000015 (host=tarmstrong-Precision-7540:27001)
+      [9] Instance 874b65988023a32b:71d209e900000016 (host=tarmstrong-Precision-7540:27001)
+      [10] Instance 874b65988023a32b:71d209e900000017 (host=tarmstrong-Precision-7540:27001)
+      [11] Instance 874b65988023a32b:71d209e900000018 (host=tarmstrong-Precision-7540:27001)
       Last report received time[0-3]: 2021-02-09 08:53:03.669
       Last report received time[8-11]: 2021-02-09 08:53:03.674
       Last report received time[4-7]: 2021-02-09 08:53:03.675
@@ -996,7 +1009,19 @@ F00:EXCHANGE SENDER        3     12    1.356ms    6.539ms                      1
            - TotalRPCsDeferred: mean=0 (0) min=0 (0) max=0 (0)
            - TotalTime: mean=0.000ns min=0.000ns max=0.000ns
     Fragment F00 [12 instances]:
-    Instances: Instance 874b65988023a32b:71d209e900000001 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000002 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000003 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000004 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000005 (host=tarmstrong-Precision-7540:27000), Instance 874b65988023a32b:71d209e900000006 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 874b65988023a32b:71d209e900000001 (host=tarmstrong-Precision-7540:27002)
+      [1] Instance 874b65988023a32b:71d209e900000002 (host=tarmstrong-Precision-7540:27002)
+      [2] Instance 874b65988023a32b:71d209e900000003 (host=tarmstrong-Precision-7540:27002)
+      [3] Instance 874b65988023a32b:71d209e900000004 (host=tarmstrong-Precision-7540:27002)
+      [4] Instance 874b65988023a32b:71d209e900000005 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 874b65988023a32b:71d209e900000006 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 874b65988023a32b:71d209e900000007 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 874b65988023a32b:71d209e900000008 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 874b65988023a32b:71d209e900000009 (host=tarmstrong-Precision-7540:27001)
+      [9] Instance 874b65988023a32b:71d209e90000000a (host=tarmstrong-Precision-7540:27001)
+      [10] Instance 874b65988023a32b:71d209e90000000b (host=tarmstrong-Precision-7540:27001)
+      [11] Instance 874b65988023a32b:71d209e90000000c (host=tarmstrong-Precision-7540:27001)
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[11]: 0:152/15.76 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[9]: 0:152/15.86 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[0]: 0:152/15.87 MB
@@ -1861,7 +1886,8 @@ F00:EXCHANGE SENDER        3     12  265.136us  490.561us
              - PrepareTime: 16.210ms
              - TotalTime: 369.983ms
     Coordinator Fragment F01:
-    Instances: Instance 504e9a5d292585e8:7c56749700000000 (host=tarmstrong-Precision-7540:27000)
+    Instances:
+      [0] Instance 504e9a5d292585e8:7c56749700000000 (host=tarmstrong-Precision-7540:27000)
       Last report received time[0]: 2021-02-09 08:53:08.375
        - MemoryUsage[0] (500.000ms): 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB
       Fragment Instance Lifecycle Event Timeline[0]: 4s434ms
@@ -1979,7 +2005,19 @@ F00:EXCHANGE SENDER        3     12  265.136us  490.561us
            - TotalRPCsDeferred: 0 (0)
            - TotalTime: 0.000ns
     Fragment F00 [12 instances]:
-    Instances: Instance 504e9a5d292585e8:7c56749700000001 (host=tarmstrong-Precision-7540:27002), Instance 504e9a5d292585e8:7c56749700000002 (host=tarmstrong-Precision-7540:27002), Instance 504e9a5d292585e8:7c56749700000003 (host=tarmstrong-Precision-7540:27002), Instance 504e9a5d292585e8:7c56749700000004 (host=tarmstrong-Precision-7540:27002), Instance 504e9a5d292585e8:7c56749700000005 (host=tarmstrong-Precision-7540:27000), Instance 504e9a5d292585e8:7c56749700000006 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 504e9a5d292585e8:7c56749700000001 (host=tarmstrong-Precision-7540:27002)
+      [1] Instance 504e9a5d292585e8:7c56749700000002 (host=tarmstrong-Precision-7540:27002)
+      [2] Instance 504e9a5d292585e8:7c56749700000003 (host=tarmstrong-Precision-7540:27002)
+      [3] Instance 504e9a5d292585e8:7c56749700000004 (host=tarmstrong-Precision-7540:27002)
+      [4] Instance 504e9a5d292585e8:7c56749700000005 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 504e9a5d292585e8:7c56749700000006 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 504e9a5d292585e8:7c56749700000007 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 504e9a5d292585e8:7c56749700000008 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 504e9a5d292585e8:7c56749700000009 (host=tarmstrong-Precision-7540:27001)
+      [9] Instance 504e9a5d292585e8:7c5674970000000a (host=tarmstrong-Precision-7540:27001)
+      [10] Instance 504e9a5d292585e8:7c5674970000000b (host=tarmstrong-Precision-7540:27001)
+      [11] Instance 504e9a5d292585e8:7c5674970000000c (host=tarmstrong-Precision-7540:27001)
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[11]: 0:152/15.76 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[9]: 0:152/15.86 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[0]: 0:152/15.87 MB
diff --git a/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_v2_extended.expected.txt b/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_v2_extended.expected.txt
index 28d0340..99b674f 100644
--- a/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_v2_extended.expected.txt
+++ b/testdata/impala-profiles/impala_profile_log_tpcds_compute_stats_v2_extended.expected.txt
@@ -509,7 +509,8 @@ F00:EXCHANGE SENDER        3     12    1.356ms    6.539ms                      1
              - PrepareTime: 27.139ms
              - TotalTime: 112.612ms
     Coordinator Fragment F02:
-    Instances: Instance 874b65988023a32b:71d209e900000000 (host=tarmstrong-Precision-7540:27000)
+    Instances:
+      [0] Instance 874b65988023a32b:71d209e900000000 (host=tarmstrong-Precision-7540:27000)
       Last report received time[0]: 2021-02-09 08:53:03.675
        - MemoryUsage[0] (500.000ms): 8.00 KB, 8.00 KB, 8.00 KB
       Fragment Instance Lifecycle Event Timeline[0]: 1s208ms
@@ -608,7 +609,19 @@ F00:EXCHANGE SENDER        3     12    1.356ms    6.539ms                      1
            - TotalRPCsDeferred: 0 (0)
            - TotalTime: 0.000ns
     Fragment F01 [12 instances]:
-    Instances: Instance 874b65988023a32b:71d209e90000000d (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e90000000e (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e90000000f (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000010 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000011 (host=tarmstrong-Precision-7540:27000), Instance 874b65988023a32b:71d209e900000012 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 874b65988023a32b:71d209e90000000d (host=tarmstrong-Precision-7540:27002)
+      [1] Instance 874b65988023a32b:71d209e90000000e (host=tarmstrong-Precision-7540:27002)
+      [2] Instance 874b65988023a32b:71d209e90000000f (host=tarmstrong-Precision-7540:27002)
+      [3] Instance 874b65988023a32b:71d209e900000010 (host=tarmstrong-Precision-7540:27002)
+      [4] Instance 874b65988023a32b:71d209e900000011 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 874b65988023a32b:71d209e900000012 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 874b65988023a32b:71d209e900000013 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 874b65988023a32b:71d209e900000014 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 874b65988023a32b:71d209e900000015 (host=tarmstrong-Precision-7540:27001)
+      [9] Instance 874b65988023a32b:71d209e900000016 (host=tarmstrong-Precision-7540:27001)
+      [10] Instance 874b65988023a32b:71d209e900000017 (host=tarmstrong-Precision-7540:27001)
+      [11] Instance 874b65988023a32b:71d209e900000018 (host=tarmstrong-Precision-7540:27001)
       Last report received time[0-3]: 2021-02-09 08:53:03.669
       Last report received time[8-11]: 2021-02-09 08:53:03.674
       Last report received time[4-7]: 2021-02-09 08:53:03.675
@@ -1202,7 +1215,19 @@ F00:EXCHANGE SENDER        3     12    1.356ms    6.539ms                      1
            - TotalRPCsDeferred: mean=0 (0) min=0 (0) p50=0 (0) p75=0 (0) p90=0 (0) p95=0 (0) max=0 (0)
            - TotalTime: mean=0.000ns min=0.000ns p50=0.000ns p75=0.000ns p90=0.000ns p95=0.000ns max=0.000ns
     Fragment F00 [12 instances]:
-    Instances: Instance 874b65988023a32b:71d209e900000001 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000002 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000003 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000004 (host=tarmstrong-Precision-7540:27002), Instance 874b65988023a32b:71d209e900000005 (host=tarmstrong-Precision-7540:27000), Instance 874b65988023a32b:71d209e900000006 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 874b65988023a32b:71d209e900000001 (host=tarmstrong-Precision-7540:27002)
+      [1] Instance 874b65988023a32b:71d209e900000002 (host=tarmstrong-Precision-7540:27002)
+      [2] Instance 874b65988023a32b:71d209e900000003 (host=tarmstrong-Precision-7540:27002)
+      [3] Instance 874b65988023a32b:71d209e900000004 (host=tarmstrong-Precision-7540:27002)
+      [4] Instance 874b65988023a32b:71d209e900000005 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 874b65988023a32b:71d209e900000006 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 874b65988023a32b:71d209e900000007 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 874b65988023a32b:71d209e900000008 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 874b65988023a32b:71d209e900000009 (host=tarmstrong-Precision-7540:27001)
+      [9] Instance 874b65988023a32b:71d209e90000000a (host=tarmstrong-Precision-7540:27001)
+      [10] Instance 874b65988023a32b:71d209e90000000b (host=tarmstrong-Precision-7540:27001)
+      [11] Instance 874b65988023a32b:71d209e90000000c (host=tarmstrong-Precision-7540:27001)
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[11]: 0:152/15.76 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[9]: 0:152/15.86 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[0]: 0:152/15.87 MB
@@ -2379,7 +2404,8 @@ F00:EXCHANGE SENDER        3     12  265.136us  490.561us
              - PrepareTime: 16.210ms
              - TotalTime: 369.983ms
     Coordinator Fragment F01:
-    Instances: Instance 504e9a5d292585e8:7c56749700000000 (host=tarmstrong-Precision-7540:27000)
+    Instances:
+      [0] Instance 504e9a5d292585e8:7c56749700000000 (host=tarmstrong-Precision-7540:27000)
       Last report received time[0]: 2021-02-09 08:53:08.375
        - MemoryUsage[0] (500.000ms): 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB, 12.00 KB
       Fragment Instance Lifecycle Event Timeline[0]: 4s434ms
@@ -2497,7 +2523,19 @@ F00:EXCHANGE SENDER        3     12  265.136us  490.561us
            - TotalRPCsDeferred: 0 (0)
            - TotalTime: 0.000ns
     Fragment F00 [12 instances]:
-    Instances: Instance 504e9a5d292585e8:7c56749700000001 (host=tarmstrong-Precision-7540:27002), Instance 504e9a5d292585e8:7c56749700000002 (host=tarmstrong-Precision-7540:27002), Instance 504e9a5d292585e8:7c56749700000003 (host=tarmstrong-Precision-7540:27002), Instance 504e9a5d292585e8:7c56749700000004 (host=tarmstrong-Precision-7540:27002), Instance 504e9a5d292585e8:7c56749700000005 (host=tarmstrong-Precision-7540:27000), Instance 504e9a5d292585e8:7c56749700000006 (host=tarmstrong-Pr [...]
+    Instances:
+      [0] Instance 504e9a5d292585e8:7c56749700000001 (host=tarmstrong-Precision-7540:27002)
+      [1] Instance 504e9a5d292585e8:7c56749700000002 (host=tarmstrong-Precision-7540:27002)
+      [2] Instance 504e9a5d292585e8:7c56749700000003 (host=tarmstrong-Precision-7540:27002)
+      [3] Instance 504e9a5d292585e8:7c56749700000004 (host=tarmstrong-Precision-7540:27002)
+      [4] Instance 504e9a5d292585e8:7c56749700000005 (host=tarmstrong-Precision-7540:27000)
+      [5] Instance 504e9a5d292585e8:7c56749700000006 (host=tarmstrong-Precision-7540:27000)
+      [6] Instance 504e9a5d292585e8:7c56749700000007 (host=tarmstrong-Precision-7540:27000)
+      [7] Instance 504e9a5d292585e8:7c56749700000008 (host=tarmstrong-Precision-7540:27000)
+      [8] Instance 504e9a5d292585e8:7c56749700000009 (host=tarmstrong-Precision-7540:27001)
+      [9] Instance 504e9a5d292585e8:7c5674970000000a (host=tarmstrong-Precision-7540:27001)
+      [10] Instance 504e9a5d292585e8:7c5674970000000b (host=tarmstrong-Precision-7540:27001)
+      [11] Instance 504e9a5d292585e8:7c5674970000000c (host=tarmstrong-Precision-7540:27001)
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[11]: 0:152/15.76 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[9]: 0:152/15.86 MB
       Hdfs split stats (<volume id>:<# splits>/<split lengths>)[0]: 0:152/15.87 MB

[impala] 04/04: IMPALA-8680: Docker-based tests fail to archive the minicluster component logs

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 45d3eddc056bc28ee43efa6307cd673062936a41
Author: Zoltan Garaguly <zg...@cloudera.com>
AuthorDate: Fri May 8 15:34:44 2020 +0200

    IMPALA-8680: Docker-based tests fail to archive the minicluster component logs
    
    Inside the docker container, copy the logs of the cluster components
    (hdfs, yarn, kudu) from
    testdata/cluster/cdh<version-number>/node-<node-id>/var/log/
    to logs/cluster/.
    
    Testing:
     - ran docker-based tests and checked that minicluster logs are preserved
       and archived
     - verified that minicluster logs are also copied when something goes
       wrong during the build
    
    Change-Id: I23e25d42992cec47c593dc388bcf0bcef828c05e
    Reviewed-on: http://gerrit.cloudera.org:8080/15898
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 docker/entrypoint.sh | 87 +++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 66 insertions(+), 21 deletions(-)

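The path parsing that the new copy_cluster_logs helper performs with sed can be sketched in Python. This is illustrative only — split_log_path is a made-up helper name and "cdh7" is just an example version, neither appears in the patch:

```python
import re

# Python equivalent of the two sed expressions in copy_cluster_logs:
#   CDH_VERSION: testdata/cluster/<X>/node-.../var/log/ -> <X>
#   NODE_NUMBER: testdata/cluster/cdh.../<node-Y>/var/log/ -> <node-Y>
def split_log_path(path):
    m = re.match(r"testdata/cluster/(cdh[^/]+)/(node-[^/]+)/var/log/?$", path)
    if m is None:
        return None
    cdh_version, node_number = m.groups()
    # Destination mirrors logs/cluster/${CDH_VERSION}-${NODE_NUMBER}
    return "logs/cluster/{}-{}".format(cdh_version, node_number)
```

Non-matching paths yield None, which corresponds to the `if [ -d $x ]` guard skipping anything that is not a node log directory.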
diff --git a/docker/entrypoint.sh b/docker/entrypoint.sh
index 81750e6..3c3cfbb 100755
--- a/docker/entrypoint.sh
+++ b/docker/entrypoint.sh
@@ -262,38 +262,36 @@ function build_impdev() {
   # can be built when executing those tests. We use "-noclean" to
   # avoid deleting the log for this invocation which is in logs/,
   # and, this is a first build anyway.
-  ./buildall.sh -noclean -format -testdata -notests
+  if ! ./buildall.sh -noclean -format -testdata -notests; then
+    echo "Build + dataload failed!"
+    copy_cluster_logs
+    return 1
+  fi
 
   # We make one exception to "-notests":
   # test_insert_parquet.py, which is used in all the end-to-end test
   # shards, depends on this binary. We build it here once,
   # instead of building it during the startup of each container running
   # a subset of E2E tests. Building it here is also a lot faster.
-  make -j$(nproc) --load-average=$(nproc) parquet-reader impala-profile-tool
+  if ! make -j$(nproc) --load-average=$(nproc) parquet-reader impala-profile-tool; then
+    echo "Impala profile tool build failed!"
+    copy_cluster_logs
+    return 1
+  fi
 
   # Dump current memory usage to logs, before shutting things down.
-  memory_usage
+  memory_usage || true
 
   # Shut down things cleanly.
-  testdata/bin/kill-all.sh
+  testdata/bin/kill-all.sh || true
 
-  # "Compress" HDFS data by de-duplicating blocks. As a result of
-  # having three datanodes, our data load is 3x larger than it needs
-  # to be. To alleviate this (to the tune of ~20GB savings), we
-  # use hardlinks to link together the identical blocks. This is absolutely
-  # taking advantage of an implementation detail of HDFS.
-  echo "Hardlinking duplicate HDFS block data."
-  set +x
-  for x in $(find testdata/cluster/*/node-1/data/dfs/dn/current/ -name 'blk_*[0-9]'); do
-    for n in 2 3; do
-      xn=${x/node-1/node-$n}
-      if [ -f $xn ]; then
-        rm $xn
-        ln $x $xn
-      fi
-    done
-  done
-  set -x
+  if ! hardlink_duplicate_hdfs_data; then
+    echo "Hardlink duplicate HDFS data failed!"
+    copy_cluster_logs
+    return 1
+  fi
+
+  copy_cluster_logs
 
   # Shutting down PostgreSQL nicely speeds up it's start time for new containers.
   _pg_ctl stop
@@ -309,6 +307,26 @@ function build_impdev() {
   popd
 }
 
+# "Compress" HDFS data by de-duplicating blocks. As a result of
+# having three datanodes, our data load is 3x larger than it needs
+# to be. To alleviate this (to the tune of ~20GB savings), we
+# use hardlinks to link together the identical blocks. This is absolutely
+# taking advantage of an implementation detail of HDFS.
+function hardlink_duplicate_hdfs_data() {
+  echo "Hardlinking duplicate HDFS block data."
+  set +x
+  for x in $(find testdata/cluster/*/node-1/data/dfs/dn/current/ -name 'blk_*[0-9]'); do
+    for n in 2 3; do
+      xn=${x/node-1/node-$n}
+      if [ -f $xn ]; then
+        rm $xn
+        ln $x $xn
+      fi
+    done
+  done
+  set -x
+}
+
 # Prints top 20 RSS consumers (and other, total), in megabytes Common culprits
 # are Java processes without Xmx set. Since most things don't reclaim memory,
 # this is a decent proxy for peak memory usage by long-lived processes.
@@ -329,6 +347,30 @@ function memory_usage() {
   ) >& /logs/memory_usage.txt
 }
 
+# Some components (hdfs, yarn, kudu) create their logs under the
+# testdata/cluster/cdh<version-number>/node-<node-id>/var/log/ folder;
+# these log folders are symlinked into the logs/cluster/ folder.
+# Remove the symlinks and copy the logs to logs/cluster/ instead.
+function copy_cluster_logs() {
+  echo ">>> Copy cluster logs..."
+  pushd /home/impdev/Impala
+
+  for x in testdata/cluster/cdh*/node-*/var/log/; do
+    echo $x
+    if [ -d $x ]; then
+
+      CDH_VERSION=`echo $x | sed  "s#testdata/cluster/\(.*\)/node-.*#\1#"`
+      NODE_NUMBER=`echo $x | sed  "s#testdata/cluster/cdh.*/\(.*\)/var.*#\1#"`
+
+      rm -rf logs/cluster/${CDH_VERSION}-${NODE_NUMBER}
+      mkdir -p logs/cluster/${CDH_VERSION}-${NODE_NUMBER}
+      cp -R $x/* logs/cluster/${CDH_VERSION}-${NODE_NUMBER}
+    fi
+  done
+
+  popd
+}
+
 # Runs a suite passed in as the first argument. Tightly
 # coupled with Impala's run-all-tests and the suite names.
 # from test-with-docker.py.
@@ -443,6 +485,9 @@ function test_suite() {
   # leading to test-with-docker.py hitting a timeout. Killing the minicluster
   # daemons fixes this.
   testdata/bin/kill-all.sh || true
+
+  copy_cluster_logs
+
   return $ret
 }
 
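The block-deduplication step that this patch extracts into hardlink_duplicate_hdfs_data can be sketched in Python. This is a rough illustration of the shell loop under the assumption of minicluster-style node directories; hardlink_duplicate_blocks is a made-up name, not part of the patch:

```python
import os

def hardlink_duplicate_blocks(node1_dir, peer_dirs):
    """Replace each peer datanode's copy of a block file with a hardlink to
    node-1's copy, mirroring the shell loop over nodes 2 and 3. Selects the
    same files as find's 'blk_*[0-9]' pattern."""
    for root, _dirs, files in os.walk(node1_dir):
        for name in files:
            if not (name.startswith("blk_") and name[-1].isdigit()):
                continue
            src = os.path.join(root, name)
            rel = os.path.relpath(src, node1_dir)
            for peer in peer_dirs:
                dup = os.path.join(peer, rel)
                if os.path.isfile(dup):
                    os.remove(dup)      # drop the duplicate copy
                    os.link(src, dup)   # hardlink back to node-1's block
```

As the original comment notes, this relies on an implementation detail of HDFS: the three datanodes store byte-identical block files, so hardlinking reclaims roughly two thirds of the space.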

[impala] 01/04: IMPALA-10840: Add support for "FOR SYSTEM_TIME AS OF" and "FOR SYSTEM_VERSION AS OF" for Iceberg tables

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 4f9f8c33cade430cec07974e00dc1d9031e9609f
Author: Zoltan Borok-Nagy <bo...@cloudera.com>
AuthorDate: Mon Aug 9 10:14:05 2021 +0200

    IMPALA-10840: Add support for "FOR SYSTEM_TIME AS OF" and "FOR SYSTEM_VERSION AS OF" for Iceberg tables
    
    This patch adds support for the "FOR SYSTEM_TIME AS OF" and
    "FOR SYSTEM_VERSION AS OF" clauses for Iceberg tables. The new
    clauses are part of the table ref. FOR SYSTEM_TIME AS OF conforms to the
    SQL:2011 standard:
    https://cs.ulb.ac.be/public/_media/teaching/infoh415/tempfeaturessql2011.pdf
    
    With FOR SYSTEM_TIME AS OF we can query a table as of a specific point in
    time, e.g. we can retrieve the table's contents as they were 1 day ago.
    
    The timestamp given to "FOR SYSTEM_TIME AS OF" is interpreted in the
    local timezone, which can be set via the query option TIMEZONE and
    defaults to the coordinator node's local timezone. The timestamp is
    translated to UTC because table snapshots are tagged with UTC
    timestamps.
    
    "FOR SYSTEM_VERSION AS OF" is a non-standard extension. It works
    similarly to FOR SYSTEM_TIME AS OF, but with this clause we can query
    a table via a snapshot ID instead of a timestamp.
    
    HIVE-25344 also added support for these clauses to Hive.
    
    Table snapshot IDs and timestamp information can be queried with the
    help of the DESCRIBE HISTORY command.
    
    Sample queries:
    
     SELECT * FROM t FOR SYSTEM_TIME AS OF now();
     SELECT * FROM t FOR SYSTEM_TIME AS OF '2021-08-10 11:02:34';
     SELECT * FROM t FOR SYSTEM_TIME AS OF now() - interval 10 days + interval 3 hours;
    
     SELECT * FROM t FOR SYSTEM_VERSION AS OF 7080861547601448759;
    
     SELECT * FROM t FOR SYSTEM_TIME AS OF now()
     MINUS
     SELECT * FROM t FOR SYSTEM_TIME AS OF now() - interval 1 days;
    
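A minimal sketch of the timestamp handling described above: interpret the literal in the query's timezone, translate it to UTC, and pick the snapshot that was current at that instant. This illustrates the concept only, not Impala's actual implementation — snapshot_as_of and the snapshot-list shape are invented:

```python
from datetime import datetime, timedelta, timezone

def snapshot_as_of(snapshots, local_ts, local_tz):
    """Pick the snapshot that was current at the given wall-clock time.

    snapshots: list of (snapshot_id, utc_commit_time) pairs, oldest first.
    local_ts:  naive datetime, interpreted in local_tz (standing in for the
               TIMEZONE query option / coordinator-local timezone).
    Returns the id of the latest snapshot committed at or before the
    UTC-translated instant, or None if the table did not exist yet.
    """
    as_of_utc = local_ts.replace(tzinfo=local_tz).astimezone(timezone.utc)
    chosen = None
    for snap_id, commit_utc in snapshots:
        if commit_utc <= as_of_utc:
            chosen = snap_id
    return chosen
```

FOR SYSTEM_VERSION AS OF skips the timestamp translation entirely and selects the snapshot by its ID.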
    This patch uses some parts of the in-progress
    IMPALA-9773 (https://gerrit.cloudera.org/#/c/13342/) developed by
    Todd Lipcon and Grant Henke. This patch also resolves some TODOs of
    IMPALA-9773; for example, after this patch it will be easier to add
    time travel for Kudu tables as well.
    
    Testing:
     * added parser tests (ParserTest.java)
     * added analyzer tests (AnalyzeStmtsTest.java)
     * added e2e tests (test_iceberg.py)
    
    Change-Id: Ib523c5e47b8d9c377bea39a82fe20249177cf824
    Reviewed-on: http://gerrit.cloudera.org:8080/17765
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 be/src/common/init.cc                              |   6 +-
 fe/src/main/cup/sql-parser.cup                     |  52 ++++---
 .../analysis/AlterTableSetTblProperties.java       |   2 +-
 .../org/apache/impala/analysis/BaseTableRef.java   |   6 +-
 .../java/org/apache/impala/analysis/ColumnDef.java |   3 +-
 .../org/apache/impala/analysis/RangePartition.java |   3 +-
 .../java/org/apache/impala/analysis/TableRef.java  |  40 +++++-
 .../org/apache/impala/analysis/TimeTravelSpec.java | 149 +++++++++++++++++++++
 .../org/apache/impala/catalog/FeIcebergTable.java  |   4 +-
 .../org/apache/impala/planner/IcebergScanNode.java |  61 +++++++--
 .../org/apache/impala/planner/KuduScanNode.java    |   6 +-
 .../main/java/org/apache/impala/util/ExprUtil.java |  68 ++++++++++
 .../java/org/apache/impala/util/IcebergUtil.java   |  26 +++-
 .../main/java/org/apache/impala/util/KuduUtil.java |  17 ---
 fe/src/main/jflex/sql-scanner.flex                 |   3 +
 .../apache/impala/analysis/AnalyzeStmtsTest.java   |  37 ++++-
 .../org/apache/impala/analysis/ParserTest.java     |  23 ++++
 .../queries/QueryTest/iceberg-negative.test        |  10 ++
 tests/query_test/test_iceberg.py                   | 124 +++++++++++++++++
 19 files changed, 576 insertions(+), 64 deletions(-)

diff --git a/be/src/common/init.cc b/be/src/common/init.cc
index 1bea9ad..d6ab972 100644
--- a/be/src/common/init.cc
+++ b/be/src/common/init.cc
@@ -420,9 +420,9 @@ void impala::InitCommonRuntime(int argc, char** argv, bool init_jvm,
     CLEAN_EXIT_WITH_ERROR(error_msg.str());
   }
 
-  if (external_fe) {
-    // Explicitly load the timezone database for external FEs. Impala daemons load it
-    // through ImpaladMain
+  if (external_fe || test_mode == TestInfo::FE_TEST) {
+    // Explicitly load the timezone database for external FEs and FE tests.
+    // Impala daemons load it through ImpaladMain
     ABORT_IF_ERROR(TimezoneDatabase::Initialize());
   }
 }
diff --git a/fe/src/main/cup/sql-parser.cup b/fe/src/main/cup/sql-parser.cup
index c2cc95d..ce849d0 100644
--- a/fe/src/main/cup/sql-parser.cup
+++ b/fe/src/main/cup/sql-parser.cup
@@ -40,6 +40,7 @@ import org.apache.impala.analysis.AlterTableAddDropRangePartitionStmt.Operation;
 import org.apache.impala.analysis.IcebergPartitionSpec;
 import org.apache.impala.analysis.IcebergPartitionField;
 import org.apache.impala.analysis.IcebergPartitionTransform;
+import org.apache.impala.analysis.TimeTravelSpec;
 import org.apache.impala.catalog.ArrayType;
 import org.apache.impala.catalog.MapType;
 import org.apache.impala.catalog.RowFormat;
@@ -299,7 +300,8 @@ terminal
   KW_IS, KW_JOIN, KW_JSONFILE, KW_KUDU, KW_LAST, KW_LEFT, KW_LEXICAL, KW_LIKE, KW_LIMIT, KW_LINES,
   KW_LOAD, KW_LOCATION, KW_LOGICAL_OR,
   KW_MANAGED_LOCATION, KW_MAP, KW_MERGE_FN, KW_METADATA, KW_MINUS, KW_NORELY, KW_NOT,
-  KW_NOVALIDATE, KW_NULL, KW_NULLS, KW_OFFSET, KW_ON, KW_OR, KW_ORC, KW_ORDER, KW_OUTER,
+  KW_NOVALIDATE, KW_NULL, KW_NULLS, KW_OF, KW_OFFSET, KW_ON, KW_OR,
+  KW_ORC, KW_ORDER, KW_OUTER,
   KW_OVER, KW_OVERWRITE, KW_PARQUET, KW_PARQUETFILE, KW_PARTITION, KW_PARTITIONED,
   KW_PARTITIONS, KW_PRECEDING, KW_PREPARE_FN, KW_PRIMARY, KW_PRODUCED, KW_PURGE,
   KW_RANGE, KW_RCFILE, KW_RECOVER, KW_REFERENCES, KW_REFRESH, KW_REGEXP, KW_RELY,
@@ -307,8 +309,8 @@ terminal
   KW_REVOKE, KW_RIGHT, KW_RLIKE, KW_ROLE, KW_ROLES, KW_ROLLUP, KW_ROW, KW_ROWS, KW_SCHEMA,
   KW_SCHEMAS, KW_SELECT, KW_SEMI, KW_SEQUENCEFILE, KW_SERDEPROPERTIES, KW_SERIALIZE_FN,
   KW_SET, KW_SHOW, KW_SMALLINT, KW_SETS, KW_SORT, KW_SPEC, KW_STORED, KW_STRAIGHT_JOIN,
-  KW_STRING,
-  KW_STRUCT, KW_SYMBOL, KW_TABLE, KW_TABLES, KW_TABLESAMPLE, KW_TBLPROPERTIES,
+  KW_STRING, KW_STRUCT, KW_SYMBOL, KW_SYSTEM_TIME, KW_SYSTEM_VERSION,
+  KW_TABLE, KW_TABLES, KW_TABLESAMPLE, KW_TBLPROPERTIES,
   KW_TERMINATED, KW_TEXTFILE, KW_THEN, KW_TIMESTAMP, KW_TINYINT, KW_TRUNCATE, KW_STATS,
   KW_TO, KW_TRUE, KW_UNBOUNDED, KW_UNCACHED, KW_UNION, KW_UNKNOWN, KW_UNSET, KW_UPDATE,
   KW_UPDATE_FN, KW_UPSERT, KW_USE, KW_USING, KW_VALIDATE, KW_VALUES, KW_VARCHAR, KW_VIEW,
@@ -422,6 +424,7 @@ nonterminal WithClause opt_with_clause;
 nonterminal List<View> with_view_def_list;
 nonterminal View with_view_def;
 nonterminal TableRef table_ref;
+nonterminal TimeTravelSpec opt_asof;
 nonterminal Subquery subquery;
 nonterminal JoinOperator join_operator;
 nonterminal opt_inner, opt_outer;
@@ -3054,14 +3057,33 @@ table_ref_list ::=
   ;
 
 table_ref ::=
-  dotted_path:path opt_tablesample:tblsmpl
-  {: RESULT = new TableRef(path, null, tblsmpl); :}
-  | dotted_path:path alias_clause:alias opt_tablesample:tblsmpl
-  {: RESULT = new TableRef(path, alias, tblsmpl); :}
+  dotted_path:path opt_asof:asof opt_tablesample:tblsmpl
+  {: RESULT = new TableRef(path, null, tblsmpl, asof); :}
+  | dotted_path:path opt_asof:asof alias_clause:alias opt_tablesample:tblsmpl
+  {: RESULT = new TableRef(path, alias, tblsmpl, asof); :}
   | LPAREN query_stmt:query RPAREN alias_clause:alias opt_tablesample:tblsmpl
   {: RESULT = new InlineViewRef(alias, query, tblsmpl); :}
   ;
 
+opt_asof ::=
+  KW_FOR KW_SYSTEM_TIME KW_AS KW_OF expr:expr
+  {: RESULT = new TimeTravelSpec(TimeTravelSpec.Kind.TIME_AS_OF, expr); :}
+  | KW_FOR KW_SYSTEM_VERSION KW_AS KW_OF numeric_literal:expr
+  {: RESULT = new TimeTravelSpec(TimeTravelSpec.Kind.VERSION_AS_OF, expr); :}
+  | /* empty */
+  {: RESULT = null; :}
+  ;
+
+opt_tablesample ::=
+  KW_TABLESAMPLE system_ident LPAREN INTEGER_LITERAL:p RPAREN
+  {: RESULT = new TableSampleClause(p.longValue(), null); :}
+  | KW_TABLESAMPLE system_ident LPAREN INTEGER_LITERAL:p RPAREN
+    KW_REPEATABLE LPAREN INTEGER_LITERAL:s RPAREN
+  {: RESULT = new TableSampleClause(p.longValue(), Long.valueOf(s.longValue())); :}
+  | /* empty */
+  {: RESULT = null; :}
+  ;
+
 join_operator ::=
   opt_inner KW_JOIN
   {: RESULT = JoinOperator.INNER_JOIN; :}
@@ -3142,16 +3164,6 @@ plan_hint_list ::=
   :}
   ;
 
-opt_tablesample ::=
-  KW_TABLESAMPLE system_ident LPAREN INTEGER_LITERAL:p RPAREN
-  {: RESULT = new TableSampleClause(p.longValue(), null); :}
-  | KW_TABLESAMPLE system_ident LPAREN INTEGER_LITERAL:p RPAREN
-    KW_REPEATABLE LPAREN INTEGER_LITERAL:s RPAREN
-  {: RESULT = new TableSampleClause(p.longValue(), Long.valueOf(s.longValue())); :}
-  | /* empty */
-  {: RESULT = null; :}
-  ;
-
 ident_list ::=
   ident_or_default:ident
   {:
@@ -4134,6 +4146,8 @@ word ::=
   {: RESULT = r.toString(); :}
   | KW_NULLS:r
   {: RESULT = r.toString(); :}
+  | KW_OF:r
+  {: RESULT = r.toString(); :}
   | KW_OFFSET:r
   {: RESULT = r.toString(); :}
   | KW_ON:r
@@ -4248,6 +4262,10 @@ word ::=
   {: RESULT = r.toString(); :}
   | KW_SYMBOL:r
   {: RESULT = r.toString(); :}
+  | KW_SYSTEM_TIME:r
+  {: RESULT = r.toString(); :}
+  | KW_SYSTEM_VERSION:r
+  {: RESULT = r.toString(); :}
   | KW_TABLE:r
   {: RESULT = r.toString(); :}
   | KW_TABLES:r
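The new opt_asof production above makes the clause an optional suffix of a table ref, keyed on FOR SYSTEM_TIME or FOR SYSTEM_VERSION. A toy recognizer (not the CUP grammar, just an illustration; parse_asof is a made-up name) might look like:

```python
def parse_asof(tokens):
    """Toy recognizer mirroring the three alternatives of opt_asof.
    tokens: list of uppercase keywords/words for one table ref suffix.
    Returns (kind, remaining_tokens); kind is None for the empty production,
    matching the grammar's /* empty */ alternative."""
    if tokens[:4] == ["FOR", "SYSTEM_TIME", "AS", "OF"]:
        return ("TIME_AS_OF", tokens[4:])
    if tokens[:4] == ["FOR", "SYSTEM_VERSION", "AS", "OF"]:
        return ("VERSION_AS_OF", tokens[4:])
    return (None, tokens)
```

In the real grammar the SYSTEM_TIME branch takes a full expr while the SYSTEM_VERSION branch only accepts a numeric_literal (the snapshot ID).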
diff --git a/fe/src/main/java/org/apache/impala/analysis/AlterTableSetTblProperties.java b/fe/src/main/java/org/apache/impala/analysis/AlterTableSetTblProperties.java
index 731ce7e..e11126f 100644
--- a/fe/src/main/java/org/apache/impala/analysis/AlterTableSetTblProperties.java
+++ b/fe/src/main/java/org/apache/impala/analysis/AlterTableSetTblProperties.java
@@ -182,7 +182,7 @@ public class AlterTableSetTblProperties extends AlterTableSetStmt {
     try {
       FeIcebergTable iceTable = (FeIcebergTable)getTargetTable();
       List<DataFile> dataFiles = IcebergUtil.getIcebergDataFiles(iceTable,
-          new ArrayList<>());
+          new ArrayList<>(), /*timeTravelSpec=*/null);
       if (dataFiles.isEmpty()) return;
       DataFile firstFile = dataFiles.get(0);
       String errorMsg = "Attempt to set Iceberg data file format to %s, but found data " +
diff --git a/fe/src/main/java/org/apache/impala/analysis/BaseTableRef.java b/fe/src/main/java/org/apache/impala/analysis/BaseTableRef.java
index 64d2799..2c76aa5 100644
--- a/fe/src/main/java/org/apache/impala/analysis/BaseTableRef.java
+++ b/fe/src/main/java/org/apache/impala/analysis/BaseTableRef.java
@@ -66,6 +66,7 @@ public class BaseTableRef extends TableRef {
     isAnalyzed_ = true;
     analyzer.checkTableCapability(getTable(), Analyzer.OperationType.ANY);
     analyzeTableSample(analyzer);
+    analyzeTimeTravel(analyzer);
     analyzeHints(analyzer);
     analyzeJoin(analyzer);
     analyzeSkipHeaderLineCount();
@@ -78,10 +79,13 @@ public class BaseTableRef extends TableRef {
     String aliasSql = "";
     String alias = getExplicitAlias();
     if (alias != null) aliasSql = " " + ToSqlUtils.getIdentSql(alias);
+    String timeTravelSql = "";
+    if (timeTravelSpec_ != null) timeTravelSql = " " + timeTravelSpec_.toSql();
     String tableSampleSql = "";
     if (sampleParams_ != null) tableSampleSql = " " + sampleParams_.toSql(options);
     String tableHintsSql = ToSqlUtils.getPlanHintsSql(options, tableHints_);
-    return getTable().getTableName().toSql() + aliasSql + tableSampleSql + tableHintsSql;
+    return getTable().getTableName().toSql() +
+        timeTravelSql + aliasSql + tableSampleSql + tableHintsSql;
   }
 
   @Override
diff --git a/fe/src/main/java/org/apache/impala/analysis/ColumnDef.java b/fe/src/main/java/org/apache/impala/analysis/ColumnDef.java
index 8b4d7f5..75b1319 100644
--- a/fe/src/main/java/org/apache/impala/analysis/ColumnDef.java
+++ b/fe/src/main/java/org/apache/impala/analysis/ColumnDef.java
@@ -32,6 +32,7 @@ import org.apache.impala.common.AnalysisException;
 import org.apache.impala.compat.MetastoreShim;
 import org.apache.impala.service.FeSupport;
 import org.apache.impala.thrift.TColumn;
+import org.apache.impala.util.ExprUtil;
 import org.apache.impala.util.KuduUtil;
 import org.apache.impala.util.MetaStoreUtil;
 import org.apache.kudu.ColumnSchema.CompressionAlgorithm;
@@ -302,7 +303,7 @@ public class ColumnDef {
       // TODO: Remove when Impala supports a 64-bit TIMESTAMP type.
       if (type_.isTimestamp()) {
         try {
-          long unixTimeMicros = KuduUtil.timestampToUnixTimeMicros(analyzer,
+          long unixTimeMicros = ExprUtil.utcTimestampToUnixTimeMicros(analyzer,
               defaultValLiteral);
           outputDefaultValue_ = new NumericLiteral(BigInteger.valueOf(unixTimeMicros),
               Type.BIGINT);
diff --git a/fe/src/main/java/org/apache/impala/analysis/RangePartition.java b/fe/src/main/java/org/apache/impala/analysis/RangePartition.java
index ac1972a..ace3b37 100644
--- a/fe/src/main/java/org/apache/impala/analysis/RangePartition.java
+++ b/fe/src/main/java/org/apache/impala/analysis/RangePartition.java
@@ -28,6 +28,7 @@ import org.apache.impala.common.InternalException;
 import org.apache.impala.common.Pair;
 import org.apache.impala.service.FeSupport;
 import org.apache.impala.thrift.TRangePartition;
+import org.apache.impala.util.ExprUtil;
 import org.apache.impala.util.KuduUtil;
 
 import com.google.common.base.Preconditions;
@@ -210,7 +211,7 @@ public class RangePartition extends StmtNode {
     // TODO: Remove when Impala supports a 64-bit TIMESTAMP type.
     if (colType.isTimestamp()) {
       try {
-        long unixTimeMicros = KuduUtil.timestampToUnixTimeMicros(analyzer, literal);
+        long unixTimeMicros = ExprUtil.utcTimestampToUnixTimeMicros(analyzer, literal);
         literal = new NumericLiteral(BigInteger.valueOf(unixTimeMicros), Type.BIGINT);
       } catch (InternalException e) {
         throw new AnalysisException(
diff --git a/fe/src/main/java/org/apache/impala/analysis/TableRef.java b/fe/src/main/java/org/apache/impala/analysis/TableRef.java
index d6b312c..4f008b1 100644
--- a/fe/src/main/java/org/apache/impala/analysis/TableRef.java
+++ b/fe/src/main/java/org/apache/impala/analysis/TableRef.java
@@ -27,9 +27,11 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import org.apache.impala.analysis.TimeTravelSpec.Kind;
 import org.apache.impala.authorization.Privilege;
 import org.apache.impala.catalog.Column;
 import org.apache.impala.catalog.FeFsTable;
+import org.apache.impala.catalog.FeIcebergTable;
 import org.apache.impala.catalog.FeTable;
 import org.apache.impala.common.AnalysisException;
 import org.apache.impala.planner.JoinNode.DistributionMode;
@@ -147,6 +149,10 @@ public class TableRef extends StmtNode {
   // Scalar columns referenced in the query. Used in resolving column mask.
   protected Map<String, Column> scalarColumns_ = new LinkedHashMap<>();
 
+  // Time travel spec of this table ref. It contains the information given in the
+  // FOR SYSTEM_TIME AS OF <timestamp> or FOR SYSTEM_VERSION AS OF <version> clause.
+  protected TimeTravelSpec timeTravelSpec_;
+
   // END: Members that need to be reset()
   /////////////////////////////////////////
 
@@ -170,20 +176,25 @@ public class TableRef extends StmtNode {
   }
 
   public TableRef(List<String> path, String alias, TableSampleClause tableSample) {
-    this(path, alias, tableSample, Privilege.SELECT, false);
+    this(path, alias, tableSample, null);
+  }
+
+  public TableRef(List<String> path, String alias, TableSampleClause tableSample,
+      TimeTravelSpec timeTravel) {
+    this(path, alias, tableSample, timeTravel, Privilege.SELECT, false);
   }
 
   public TableRef(List<String> path, String alias, Privilege priv) {
-    this(path, alias, null, priv, false);
+    this(path, alias, null, null, priv, false);
   }
 
   public TableRef(List<String> path, String alias, Privilege priv,
       boolean requireGrantOption) {
-    this(path, alias, null, priv, requireGrantOption);
+    this(path, alias, null, null, priv, requireGrantOption);
   }
 
   public TableRef(List<String> path, String alias, TableSampleClause sampleParams,
-      Privilege priv, boolean requireGrantOption) {
+      TimeTravelSpec timeTravel, Privilege priv, boolean requireGrantOption) {
     rawPath_ = path;
     if (alias != null) {
       aliases_ = new String[] { alias.toLowerCase() };
@@ -198,6 +209,7 @@ public class TableRef extends StmtNode {
     replicaPreference_ = null;
     randomReplica_ = false;
     convertLimitToSampleHintPercent_ = -1;
+    timeTravelSpec_ = timeTravel;
   }
 
   /**
@@ -209,6 +221,8 @@ public class TableRef extends StmtNode {
     aliases_ = other.aliases_;
     hasExplicitAlias_ = other.hasExplicitAlias_;
     sampleParams_ = other.sampleParams_;
+    timeTravelSpec_ = other.timeTravelSpec_ != null ?
+                      other.timeTravelSpec_.clone() : null;
     priv_ = other.priv_;
     requireGrantOption_ = other.requireGrantOption_;
     joinOp_ = other.joinOp_;
@@ -331,6 +345,7 @@ public class TableRef extends StmtNode {
     return resolvedPath_.getRootTable();
   }
   public TableSampleClause getSampleParams() { return sampleParams_; }
+  public TimeTravelSpec getTimeTravelSpec() { return timeTravelSpec_; }
   public Privilege getPrivilege() { return priv_; }
   public boolean requireGrantOption() { return requireGrantOption_; }
   public List<PlanHint> getJoinHints() { return joinHints_; }
@@ -431,6 +446,20 @@ public class TableRef extends StmtNode {
     }
   }
 
+  protected void analyzeTimeTravel(Analyzer analyzer) throws AnalysisException {
+    if (timeTravelSpec_ != null) {
+      if (!(getTable() instanceof FeIcebergTable)) {
+        throw new AnalysisException(String.format(
+            "FOR %s AS OF clause is only supported for Iceberg tables. " +
+            "%s is not an Iceberg table.",
+            timeTravelSpec_.getKind() == Kind.TIME_AS_OF ? "SYSTEM_TIME" :
+                                                           "SYSTEM_VERSION",
+            getTable().getFullName()));
+      }
+      timeTravelSpec_.analyze(analyzer);
+    }
+  }
+
   protected void analyzeHints(Analyzer analyzer) throws AnalysisException {
     // We prefer adding warnings over throwing exceptions here to maintain view
     // compatibility with Hive.
@@ -717,6 +746,7 @@ public class TableRef extends StmtNode {
     allMaterializedTupleIds_.clear();
     correlatedTupleIds_.clear();
     desc_ = null;
+    if (timeTravelSpec_ != null) timeTravelSpec_.reset();
   }
 
   public boolean isTableMaskingView() { return false; }
@@ -736,6 +766,7 @@ public class TableRef extends StmtNode {
     other.joinOp_ = joinOp_;
     other.joinHints_ = joinHints_;
     other.tableHints_ = tableHints_;
+    other.timeTravelSpec_ = timeTravelSpec_;
     // Clear properties. Don't clear aliases_ since it's still used in resolving slots
     // in the query block of 'other'.
     onClause_ = null;
@@ -743,5 +774,6 @@ public class TableRef extends StmtNode {
     joinOp_ = null;
     joinHints_ = new ArrayList<>();
     tableHints_ = new ArrayList<>();
+    timeTravelSpec_ = null;
   }
 }
diff --git a/fe/src/main/java/org/apache/impala/analysis/TimeTravelSpec.java b/fe/src/main/java/org/apache/impala/analysis/TimeTravelSpec.java
new file mode 100644
index 0000000..7c42dbc
--- /dev/null
+++ b/fe/src/main/java/org/apache/impala/analysis/TimeTravelSpec.java
@@ -0,0 +1,149 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.impala.analysis;
+
+import static org.apache.impala.analysis.ToSqlOptions.DEFAULT;
+
+import org.apache.impala.catalog.Type;
+import org.apache.impala.common.AnalysisException;
+import org.apache.impala.common.InternalException;
+import org.apache.impala.util.ExprUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Preconditions;
+
+// Contains the information from a 'FOR SYSTEM_TIME AS OF' or 'FOR SYSTEM_VERSION AS OF'
+// clause. Based on this information we can support time travel for table formats that
+// support it, e.g. Iceberg.
+// TODO(IMPALA-9773): add support for Kudu as well
+public class TimeTravelSpec extends StmtNode {
+  private final static Logger LOG = LoggerFactory.getLogger(TimeTravelSpec.class);
+
+  public enum Kind {
+    TIME_AS_OF,
+    VERSION_AS_OF
+  }
+
+  // Time travel can be time-based or version-based.
+  private Kind kind_;
+
+  // Expression used in the 'FOR SYSTEM_* AS OF' clause.
+  private Expr asOfExpr_;
+
+  // For Iceberg tables this is the snapshot id.
+  private long asOfVersion_ = -1;
+
+  // Iceberg uses millis, Kudu uses micros for time travel, so using micros here.
+  private long asOfMicros_ = -1;
+
+  public Kind getKind() { return kind_; }
+
+  public long getAsOfVersion() { return asOfVersion_; }
+
+  public long getAsOfMillis() { return asOfMicros_ == -1 ? -1 : asOfMicros_ / 1000; }
+
+  public long getAsOfMicros() { return asOfMicros_; }
+
+  public TimeTravelSpec(Kind kind, Expr asOfExpr) {
+    Preconditions.checkNotNull(asOfExpr);
+    kind_ = kind;
+    asOfExpr_ = asOfExpr;
+  }
+
+  protected TimeTravelSpec(TimeTravelSpec other) {
+    kind_ = other.kind_;
+    asOfExpr_ = other.asOfExpr_.clone();
+    asOfVersion_ = other.asOfVersion_;
+    asOfMicros_ = other.asOfMicros_;
+  }
+
+  @Override
+  public TimeTravelSpec clone() { return new TimeTravelSpec(this); }
+
+  @Override
+  public void analyze(Analyzer analyzer) throws AnalysisException {
+    switch (kind_) {
+      case TIME_AS_OF: analyzeTimeBased(analyzer); break;
+      case VERSION_AS_OF: analyzeVersionBased(analyzer); break;
+    }
+  }
+
+  private void analyzeTimeBased(Analyzer analyzer) throws AnalysisException {
+    Preconditions.checkNotNull(asOfExpr_);
+    asOfExpr_.analyze(analyzer);
+    if (!asOfExpr_.isConstant()) {
+      throw new AnalysisException(
+          "FOR SYSTEM_TIME AS OF <expression> must be a constant expression: " + toSql());
+    }
+    if (asOfExpr_.getType().isStringType()) {
+      asOfExpr_ = new CastExpr(Type.TIMESTAMP, asOfExpr_);
+    }
+    if (!asOfExpr_.getType().isTimestamp()) {
+      throw new AnalysisException(
+          "FOR SYSTEM_TIME AS OF <expression> must be a timestamp type but is '" +
+              asOfExpr_.getType() + "': " + asOfExpr_.toSql());
+    }
+    try {
+      asOfMicros_ = ExprUtil.localTimestampToUnixTimeMicros(analyzer, asOfExpr_);
+      LOG.debug("FOR SYSTEM_TIME AS OF micros: " + String.valueOf(asOfMicros_));
+    } catch (InternalException ie) {
+      throw new AnalysisException(
+          "Invalid TIMESTAMP expression: " + ie.getMessage(), ie);
+    }
+  }
+
+  private void analyzeVersionBased(Analyzer analyzer) throws AnalysisException {
+    Preconditions.checkNotNull(asOfExpr_);
+    asOfExpr_.analyze(analyzer);
+    if (!(asOfExpr_ instanceof LiteralExpr)) {
+      throw new AnalysisException(
+          "FOR SYSTEM_VERSION AS OF <expression> must be an integer literal: "
+          + toSql());
+    }
+    if (!asOfExpr_.getType().isIntegerType()) {
+      throw new AnalysisException(
+          "FOR SYSTEM_VERSION AS OF <expression> must be an integer type but is '" +
+              asOfExpr_.getType() + "': " + asOfExpr_.toSql());
+    }
+    asOfVersion_ = asOfExpr_.evalToInteger(analyzer, "SYSTEM_VERSION AS OF");
+    if (asOfVersion_ < 0) {
+      throw new AnalysisException(
+          "Invalid version number has been given to SYSTEM_VERSION AS OF: " +
+          String.valueOf(asOfVersion_));
+    }
+    LOG.debug("FOR SYSTEM_VERSION AS OF version: " + String.valueOf(asOfVersion_));
+  }
+
+  public void reset() {
+    asOfVersion_ = -1;
+    asOfMicros_ = -1;
+  }
+
+  @Override
+  public String toSql(ToSqlOptions options) {
+    return String.format("FOR %s AS OF %s",
+        kind_ == Kind.TIME_AS_OF ? "SYSTEM_TIME" : "SYSTEM_VERSION",
+        asOfExpr_.toSql());
+  }
+
+  @Override
+  public final String toSql() {
+    return toSql(DEFAULT);
+  }
+}
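The two analyze paths above enforce different rules: SYSTEM_TIME accepts any constant expression that is (or casts to) a TIMESTAMP, while SYSTEM_VERSION accepts only a non-negative integer literal, and getAsOfMillis() derives milliseconds from the stored micros with -1 as the "unset" sentinel. A minimal Python sketch of those rules (hypothetical function names, not part of the patch):

```python
def validate_system_version(value):
    """Mirrors analyzeVersionBased(): SYSTEM_VERSION AS OF accepts only a
    non-negative integer literal (e.g. an Iceberg snapshot id)."""
    if not isinstance(value, int) or isinstance(value, bool):
        raise ValueError(
            "FOR SYSTEM_VERSION AS OF <expression> must be an integer literal")
    if value < 0:
        raise ValueError(
            "Invalid version number has been given to SYSTEM_VERSION AS OF: %d" % value)
    return value

def as_of_millis(as_of_micros):
    """Mirrors getAsOfMillis(): -1 stays the 'unset' sentinel,
    otherwise microseconds are truncated to milliseconds."""
    return -1 if as_of_micros == -1 else as_of_micros // 1000
```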
diff --git a/fe/src/main/java/org/apache/impala/catalog/FeIcebergTable.java b/fe/src/main/java/org/apache/impala/catalog/FeIcebergTable.java
index d05a4fc..d2fb5fd 100644
--- a/fe/src/main/java/org/apache/impala/catalog/FeIcebergTable.java
+++ b/fe/src/main/java/org/apache/impala/catalog/FeIcebergTable.java
@@ -419,7 +419,7 @@ public interface FeIcebergTable extends FeFsTable {
     /**
      * Get FileDescriptor by data file location
      */
-    private static HdfsPartition.FileDescriptor getFileDescriptor(Path fileLoc,
+    public static HdfsPartition.FileDescriptor getFileDescriptor(Path fileLoc,
         Path tableLoc, ListMap<TNetworkAddress> hostIndex) throws IOException {
       FileSystem fs = FileSystemUtil.getFileSystemForPath(tableLoc);
       FileStatus fileStatus = fs.getFileStatus(fileLoc);
@@ -454,7 +454,7 @@ public interface FeIcebergTable extends FeFsTable {
         FeIcebergTable table) throws IOException, TableLoadingException {
       // Empty predicates
       List<DataFile> dataFileList = IcebergUtil.getIcebergDataFiles(table,
-          new ArrayList<>());
+          new ArrayList<>(), /*timeTravelSpec=*/null);
 
       Map<String, HdfsPartition.FileDescriptor> fileDescMap = new HashMap<>();
       for (DataFile file : dataFileList) {
diff --git a/fe/src/main/java/org/apache/impala/planner/IcebergScanNode.java b/fe/src/main/java/org/apache/impala/planner/IcebergScanNode.java
index f719735..fb17d8d 100644
--- a/fe/src/main/java/org/apache/impala/planner/IcebergScanNode.java
+++ b/fe/src/main/java/org/apache/impala/planner/IcebergScanNode.java
@@ -17,12 +17,14 @@
 
 package org.apache.impala.planner;
 
+import java.io.IOException;
 import java.math.BigDecimal;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.List;
 import java.util.ListIterator;
 
+import org.apache.hadoop.fs.Path;
 import org.apache.iceberg.DataFile;
 import org.apache.iceberg.expressions.Expressions;
 import org.apache.iceberg.expressions.Expression.Operation;
@@ -38,6 +40,7 @@ import org.apache.impala.analysis.NumericLiteral;
 import org.apache.impala.analysis.SlotRef;
 import org.apache.impala.analysis.StringLiteral;
 import org.apache.impala.analysis.TableRef;
+import org.apache.impala.analysis.TimeTravelSpec;
 import org.apache.impala.analysis.TupleDescriptor;
 import org.apache.impala.catalog.FeCatalogUtils;
 import org.apache.impala.catalog.FeFsPartition;
@@ -51,22 +54,30 @@ import org.apache.impala.common.ImpalaRuntimeException;
 import org.apache.impala.util.IcebergUtil;
 
 import com.google.common.base.Preconditions;
-import org.apache.impala.util.KuduUtil;
+import org.apache.impala.util.ExprUtil;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Scan of a single iceberg table
  */
 public class IcebergScanNode extends HdfsScanNode {
+  private final static Logger LOG = LoggerFactory.getLogger(IcebergScanNode.class);
+
   private final FeIcebergTable icebergTable_;
 
   // Exprs in icebergConjuncts_ converted to UnboundPredicate.
   private final List<UnboundPredicate> icebergPredicates_ = new ArrayList<>();
 
+  private TimeTravelSpec timeTravelSpec_;
+
   public IcebergScanNode(PlanNodeId id, TupleDescriptor desc, List<Expr> conjuncts,
-      TableRef hdfsTblRef, FeFsTable feFsTable, MultiAggregateInfo aggInfo) {
-    super(id, desc, conjuncts, getIcebergPartition(feFsTable), hdfsTblRef, aggInfo,
+      TableRef tblRef, FeFsTable feFsTable, MultiAggregateInfo aggInfo) {
+    super(id, desc, conjuncts, getIcebergPartition(feFsTable), tblRef, aggInfo,
         null, false);
     icebergTable_ = (FeIcebergTable) desc_.getTable();
+    timeTravelSpec_ = tblRef.getTimeTravelSpec();
     // Hdfs table transformed from iceberg table only has one partition
     Preconditions.checkState(partitions_.size() == 1);
   }
@@ -91,27 +102,54 @@ public class IcebergScanNode extends HdfsScanNode {
    * We need prune hdfs partition FileDescriptor by iceberg predicates
    */
   public List<FileDescriptor> getFileDescriptorByIcebergPredicates()
-      throws ImpalaRuntimeException{
+      throws ImpalaRuntimeException {
     List<DataFile> dataFileList;
     try {
-      dataFileList = IcebergUtil.getIcebergDataFiles(icebergTable_, icebergPredicates_);
+      dataFileList = IcebergUtil.getIcebergDataFiles(icebergTable_, icebergPredicates_,
+          timeTravelSpec_);
     } catch (TableLoadingException e) {
       throw new ImpalaRuntimeException(String.format(
           "Failed to load data files for Iceberg table: %s", icebergTable_.getFullName()),
           e);
     }
-
+    long dataFilesCacheMisses = 0;
     List<FileDescriptor> fileDescList = new ArrayList<>();
     for (DataFile dataFile : dataFileList) {
       FileDescriptor fileDesc = icebergTable_.getPathHashToFileDescMap()
           .get(IcebergUtil.getDataFilePathHash(dataFile));
-      fileDescList.add(fileDesc);
-      //Todo: how to deal with iceberg metadata update, we need to invalidate manually now
       if (fileDesc == null) {
-        throw new ImpalaRuntimeException("Cannot find file in cache: " + dataFile.path()
-            + " with snapshot id: " + String.valueOf(icebergTable_.snapshotId()));
+        if (timeTravelSpec_ == null) {
+          // We should always find the data files in the cache when not doing time travel.
+          throw new ImpalaRuntimeException("Cannot find file in cache: " + dataFile.path()
+              + " with snapshot id: " + String.valueOf(icebergTable_.snapshotId()));
+        }
+        ++dataFilesCacheMisses;
+        try {
+          fileDesc = FeIcebergTable.Utils.getFileDescriptor(
+              new Path(dataFile.path().toString()),
+              new Path(icebergTable_.getIcebergTableLocation()),
+              icebergTable_.getHostIndex());
+        } catch (IOException ex) {
+          throw new ImpalaRuntimeException(
+              "Cannot load file descriptor for " + dataFile.path(), ex);
+        }
+        if (fileDesc == null) {
+          throw new ImpalaRuntimeException(
+              "Cannot load file descriptor for: " + dataFile.path());
+        }
+        // Add file descriptor to the cache.
+        icebergTable_.getPathHashToFileDescMap().put(
+            IcebergUtil.getDataFilePathHash(dataFile), fileDesc);
       }
+      fileDescList.add(fileDesc);
     }
+
+    if (dataFilesCacheMisses > 0) {
+      Preconditions.checkState(timeTravelSpec_ != null);
+      LOG.info("File descriptors had to be loaded on demand during time travel: " +
+          String.valueOf(dataFilesCacheMisses));
+    }
+
     return fileDescList;
   }
 
@@ -194,7 +232,8 @@ public class IcebergScanNode extends HdfsScanNode {
         break;
       }
       case TIMESTAMP: {
-        long unixMicros = KuduUtil.timestampToUnixTimeMicros(analyzer, literal);
+        // TODO(IMPALA-10850): interpret timestamps in local timezone.
+        long unixMicros = ExprUtil.utcTimestampToUnixTimeMicros(analyzer, literal);
         unboundPredicate = Expressions.predicate(op, colName, unixMicros);
         break;
       }
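The cache-miss branch added to getFileDescriptorByIcebergPredicates() above only tolerates misses during time travel: an older snapshot can reference data files that are no longer in the path-hash cache, so their descriptors are loaded on demand and inserted back into the cache. The same pattern as a small Python sketch, with hypothetical helper names:

```python
def get_file_descs(data_files, cache, load_desc, time_travel=False):
    """Cache-with-fallback lookup: 'data_files' is an iterable of path-hash
    keys, 'cache' maps key -> descriptor, 'load_desc' loads one on demand."""
    misses = 0
    file_descs = []
    for key in data_files:
        desc = cache.get(key)
        if desc is None:
            if not time_travel:
                # Outside of time travel every data file must already be cached.
                raise LookupError("Cannot find file in cache: " + key)
            misses += 1
            desc = load_desc(key)
            if desc is None:
                raise LookupError("Cannot load file descriptor for: " + key)
            cache[key] = desc  # add the on-demand descriptor to the cache
        file_descs.append(desc)
    return file_descs, misses
```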
diff --git a/fe/src/main/java/org/apache/impala/planner/KuduScanNode.java b/fe/src/main/java/org/apache/impala/planner/KuduScanNode.java
index 3ab8ded..3eb881a 100644
--- a/fe/src/main/java/org/apache/impala/planner/KuduScanNode.java
+++ b/fe/src/main/java/org/apache/impala/planner/KuduScanNode.java
@@ -56,6 +56,7 @@ import org.apache.impala.thrift.TScanRange;
 import org.apache.impala.thrift.TScanRangeLocation;
 import org.apache.impala.thrift.TScanRangeLocationList;
 import org.apache.impala.thrift.TScanRangeSpec;
+import org.apache.impala.util.ExprUtil;
 import org.apache.impala.util.KuduUtil;
 import org.apache.impala.util.ExecutorMembershipSnapshot;
 import org.apache.kudu.ColumnSchema;
@@ -555,8 +556,9 @@ public class KuduScanNode extends ScanNode {
       case TIMESTAMP: {
         try {
           // TODO: Simplify when Impala supports a 64-bit TIMESTAMP type.
+          // TODO(IMPALA-10850): interpret timestamps in local timezone.
           kuduPredicate = KuduPredicate.newComparisonPredicate(column, op,
-              KuduUtil.timestampToUnixTimeMicros(analyzer, literal));
+              ExprUtil.utcTimestampToUnixTimeMicros(analyzer, literal));
         } catch (Exception e) {
           LOG.info("Exception converting Kudu timestamp predicate: " + expr.toSql(), e);
           return false;
@@ -667,7 +669,7 @@ public class KuduScanNode extends ScanNode {
       case TIMESTAMP: {
         try {
           // TODO: Simplify when Impala supports a 64-bit TIMESTAMP type.
-          return KuduUtil.timestampToUnixTimeMicros(analyzer, e);
+          return ExprUtil.utcTimestampToUnixTimeMicros(analyzer, e);
         } catch (Exception ex) {
           LOG.info("Exception converting Kudu timestamp expr: " + e.toSql(), ex);
         }
diff --git a/fe/src/main/java/org/apache/impala/util/ExprUtil.java b/fe/src/main/java/org/apache/impala/util/ExprUtil.java
new file mode 100644
index 0000000..b5fc972
--- /dev/null
+++ b/fe/src/main/java/org/apache/impala/util/ExprUtil.java
@@ -0,0 +1,68 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.impala.util;
+
+import com.google.common.base.Preconditions;
+
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.impala.analysis.Analyzer;
+import org.apache.impala.analysis.Expr;
+import org.apache.impala.analysis.FunctionCallExpr;
+import org.apache.impala.analysis.StringLiteral;
+import org.apache.impala.catalog.Type;
+import org.apache.impala.common.AnalysisException;
+import org.apache.impala.common.InternalException;
+import org.apache.impala.service.FeSupport;
+import org.apache.impala.thrift.TColumnValue;
+
+public class ExprUtil {
+  /**
+   * Converts a UTC timestamp to UNIX microseconds.
+   */
+  public static long utcTimestampToUnixTimeMicros(Analyzer analyzer, Expr timestampExpr)
+      throws AnalysisException, InternalException {
+    Preconditions.checkArgument(timestampExpr.isAnalyzed());
+    Preconditions.checkArgument(timestampExpr.isConstant());
+    Preconditions.checkArgument(timestampExpr.getType() == Type.TIMESTAMP);
+    Expr toUnixTimeExpr = new FunctionCallExpr("utc_to_unix_micros",
+        Lists.newArrayList(timestampExpr));
+    toUnixTimeExpr.analyze(analyzer);
+    TColumnValue result = FeSupport.EvalExprWithoutRow(toUnixTimeExpr,
+        analyzer.getQueryCtx());
+    if (!result.isSetLong_val()) {
+      throw new InternalException("Error converting timestamp expression: " +
+          timestampExpr.debugString());
+    }
+    return result.getLong_val();
+  }
+
+  /**
+   * Converts a timestamp in local timezone to UTC, then to UNIX microseconds.
+   */
+  public static long localTimestampToUnixTimeMicros(Analyzer analyzer, Expr timestampExpr)
+      throws AnalysisException, InternalException {
+    Preconditions.checkArgument(timestampExpr.isAnalyzed());
+    Preconditions.checkArgument(timestampExpr.isConstant());
+    Preconditions.checkArgument(timestampExpr.getType() == Type.TIMESTAMP);
+    Expr toUtcTimestamp = new FunctionCallExpr("to_utc_timestamp",
+        Lists.newArrayList(timestampExpr,
+        new StringLiteral(analyzer.getQueryCtx().getLocal_time_zone())));
+    toUtcTimestamp.analyze(analyzer);
+    return utcTimestampToUnixTimeMicros(analyzer, toUtcTimestamp);
+  }
+}
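localTimestampToUnixTimeMicros() composes two builtin functions: to_utc_timestamp() reinterprets the session-local timestamp in UTC, and utc_to_unix_micros() converts the result to microseconds since the Unix epoch. A rough Python equivalent of that semantics (illustrative only; uses a fixed-offset timezone rather than Impala's timezone database):

```python
from datetime import datetime, timezone, timedelta

def local_timestamp_to_unix_micros(naive_ts, local_tz):
    """Interpret a naive timestamp in the session's local timezone,
    convert to UTC, then to microseconds since the Unix epoch."""
    aware = naive_ts.replace(tzinfo=local_tz)          # ~ to_utc_timestamp
    utc = aware.astimezone(timezone.utc)
    return int(utc.timestamp() * 1_000_000)            # ~ utc_to_unix_micros
```

For example, a session timezone of UTC+2 yields a unix time two hours earlier than interpreting the same wall-clock value as UTC.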
diff --git a/fe/src/main/java/org/apache/impala/util/IcebergUtil.java b/fe/src/main/java/org/apache/impala/util/IcebergUtil.java
index 6829e0e..27dd9ce 100644
--- a/fe/src/main/java/org/apache/impala/util/IcebergUtil.java
+++ b/fe/src/main/java/org/apache/impala/util/IcebergUtil.java
@@ -62,6 +62,8 @@ import org.apache.iceberg.types.Types;
 import org.apache.impala.analysis.IcebergPartitionField;
 import org.apache.impala.analysis.IcebergPartitionSpec;
 import org.apache.impala.analysis.IcebergPartitionTransform;
+import org.apache.impala.analysis.TimeTravelSpec;
+import org.apache.impala.analysis.TimeTravelSpec.Kind;
 import org.apache.impala.catalog.Catalog;
 import org.apache.impala.catalog.FeIcebergTable;
 import org.apache.impala.catalog.HdfsFileFormat;
@@ -506,10 +508,11 @@ public class IcebergUtil {
    * Get iceberg data file by file system table location and iceberg predicates
    */
   public static List<DataFile> getIcebergDataFiles(FeIcebergTable table,
-      List<UnboundPredicate> predicates) throws TableLoadingException {
+      List<UnboundPredicate> predicates, TimeTravelSpec timeTravelSpec)
+        throws TableLoadingException {
     if (table.snapshotId() == -1) return Collections.emptyList();
-    BaseTable baseTable =  (BaseTable)IcebergUtil.loadTable(table);
-    TableScan scan = baseTable.newScan().useSnapshot(table.snapshotId());
+
+    TableScan scan = createScanAsOf(table, timeTravelSpec);
     for (UnboundPredicate predicate : predicates) {
       scan = scan.filter(predicate);
     }
@@ -521,6 +524,23 @@ public class IcebergUtil {
     return dataFileList;
   }
 
+  private static TableScan createScanAsOf(FeIcebergTable table,
+      TimeTravelSpec timeTravelSpec) throws TableLoadingException {
+    BaseTable baseTable = (BaseTable)IcebergUtil.loadTable(table);
+    TableScan scan = baseTable.newScan();
+    if (timeTravelSpec == null) {
+      scan = scan.useSnapshot(table.snapshotId());
+    } else {
+      if (timeTravelSpec.getKind() == Kind.TIME_AS_OF) {
+        scan = scan.asOfTime(timeTravelSpec.getAsOfMillis());
+      } else {
+        Preconditions.checkState(timeTravelSpec.getKind() == Kind.VERSION_AS_OF);
+        scan = scan.useSnapshot(timeTravelSpec.getAsOfVersion());
+      }
+    }
+    return scan;
+  }
+
   /**
    * Use DataFile path to generate 128-bit Murmur3 hash as map key, cached in memory
    */
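createScanAsOf() above delegates snapshot selection to Iceberg: useSnapshot() takes an exact snapshot id, while asOfTime() resolves to the most recent snapshot committed at or before the given millisecond timestamp. A hedged Python model of that selection (hypothetical names; the real logic lives inside Iceberg's TableScan):

```python
def select_snapshot(snapshots, as_of_millis=None, as_of_version=None,
                    current=None):
    """snapshots: list of (snapshot_id, commit_millis) in commit order."""
    if as_of_version is not None:
        # FOR SYSTEM_VERSION AS OF: exact snapshot id (like useSnapshot()).
        for sid, _ in snapshots:
            if sid == as_of_version:
                return sid
        raise LookupError("Cannot find snapshot with ID %d" % as_of_version)
    if as_of_millis is not None:
        # FOR SYSTEM_TIME AS OF: latest snapshot committed at or before the
        # timestamp (like asOfTime()).
        candidates = [sid for sid, ts in snapshots if ts <= as_of_millis]
        if not candidates:
            raise LookupError("Cannot find a snapshot as of %d" % as_of_millis)
        return candidates[-1]
    return current  # no time travel: scan the current snapshot
```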
diff --git a/fe/src/main/java/org/apache/impala/util/KuduUtil.java b/fe/src/main/java/org/apache/impala/util/KuduUtil.java
index 4ddb734..7037c92 100644
--- a/fe/src/main/java/org/apache/impala/util/KuduUtil.java
+++ b/fe/src/main/java/org/apache/impala/util/KuduUtil.java
@@ -256,23 +256,6 @@ public class KuduUtil {
     }
   }
 
-  public static long timestampToUnixTimeMicros(Analyzer analyzer, Expr timestampExpr)
-      throws AnalysisException, InternalException {
-    Preconditions.checkArgument(timestampExpr.isAnalyzed());
-    Preconditions.checkArgument(timestampExpr.isConstant());
-    Preconditions.checkArgument(timestampExpr.getType() == Type.TIMESTAMP);
-    Expr toUnixTimeExpr = new FunctionCallExpr("utc_to_unix_micros",
-        Lists.newArrayList(timestampExpr));
-    toUnixTimeExpr.analyze(analyzer);
-    TColumnValue result = FeSupport.EvalExprWithoutRow(toUnixTimeExpr,
-        analyzer.getQueryCtx());
-    if (!result.isSetLong_val()) {
-      throw new InternalException("Error converting timestamp expression: " +
-          timestampExpr.debugString());
-    }
-    return result.getLong_val();
-  }
-
   public static Encoding fromThrift(TColumnEncoding encoding)
       throws ImpalaRuntimeException {
     switch (encoding) {
diff --git a/fe/src/main/jflex/sql-scanner.flex b/fe/src/main/jflex/sql-scanner.flex
index d65e298..6778352 100644
--- a/fe/src/main/jflex/sql-scanner.flex
+++ b/fe/src/main/jflex/sql-scanner.flex
@@ -190,6 +190,7 @@ import org.apache.impala.thrift.TReservedWordsVersion;
     keywordMap.put("novalidate", SqlParserSymbols.KW_NOVALIDATE);
     keywordMap.put("null", SqlParserSymbols.KW_NULL);
     keywordMap.put("nulls", SqlParserSymbols.KW_NULLS);
+    keywordMap.put("of", SqlParserSymbols.KW_OF);
     keywordMap.put("offset", SqlParserSymbols.KW_OFFSET);
     keywordMap.put("on", SqlParserSymbols.KW_ON);
     keywordMap.put("or", SqlParserSymbols.KW_OR);
@@ -250,6 +251,8 @@ import org.apache.impala.thrift.TReservedWordsVersion;
     keywordMap.put("string", SqlParserSymbols.KW_STRING);
     keywordMap.put("struct", SqlParserSymbols.KW_STRUCT);
     keywordMap.put("symbol", SqlParserSymbols.KW_SYMBOL);
+    keywordMap.put("system_time", SqlParserSymbols.KW_SYSTEM_TIME);
+    keywordMap.put("system_version", SqlParserSymbols.KW_SYSTEM_VERSION);
     keywordMap.put("table", SqlParserSymbols.KW_TABLE);
     keywordMap.put("tables", SqlParserSymbols.KW_TABLES);
     keywordMap.put("tablesample", SqlParserSymbols.KW_TABLESAMPLE);
diff --git a/fe/src/test/java/org/apache/impala/analysis/AnalyzeStmtsTest.java b/fe/src/test/java/org/apache/impala/analysis/AnalyzeStmtsTest.java
index 9f03c13..8955b3d 100644
--- a/fe/src/test/java/org/apache/impala/analysis/AnalyzeStmtsTest.java
+++ b/fe/src/test/java/org/apache/impala/analysis/AnalyzeStmtsTest.java
@@ -38,6 +38,7 @@ import org.apache.impala.common.AnalysisException;
 import org.apache.impala.common.FileSystemUtil;
 import org.apache.impala.common.ImpalaException;
 import org.apache.impala.service.BackendConfig;
+import org.apache.impala.service.FeSupport;
 import org.apache.impala.thrift.TFunctionCategory;
 import org.junit.Assert;
 import org.junit.Test;
@@ -50,6 +51,10 @@ import com.google.common.collect.Sets;
 
 public class AnalyzeStmtsTest extends AnalyzerTest {
 
+  static {
+    FeSupport.loadLibrary();
+  }
+
   /**
    * Tests analyzing the given collection table reference and field assumed to be in
    * functional.allcomplextypes, including different combinations of
@@ -4554,7 +4559,7 @@ public class AnalyzeStmtsTest extends AnalyzerTest {
     testNumberOfMembers(ValuesStmt.class, 0);
 
     // Also check TableRefs.
-    testNumberOfMembers(TableRef.class, 26);
+    testNumberOfMembers(TableRef.class, 27);
     testNumberOfMembers(BaseTableRef.class, 0);
     testNumberOfMembers(InlineViewRef.class, 10);
   }
@@ -4855,6 +4860,36 @@ public class AnalyzeStmtsTest extends AnalyzerTest {
   }
 
   @Test
+  public void testIcebergTimeTravel() throws ImpalaException {
+    TableName iceT = new TableName("functional_parquet", "iceberg_non_partitioned");
+    TableName nonIceT = new TableName("functional", "allcomplextypes");
+
+    TblsAnalyzeOk("select * from $TBL for system_time as of now()", iceT);
+    TblsAnalyzeOk("select * from $TBL for system_time as of '2021-08-09 15:52:45'", iceT);
+    TblsAnalyzeOk("select * from $TBL for system_time as of " +
+        "cast('2021-08-09 15:52:45' as timestamp) - interval 2 days + interval 3 hours",
+        iceT);
+    TblsAnalyzeOk("select * from $TBL for system_time as of now() + interval 3 days",
+        iceT);
+    TblsAnalyzeOk("select * from $TBL for system_version as of 123456", iceT);
+
+    TblsAnalysisError("select * from $TBL for system_time as of 42", iceT,
+        "FOR SYSTEM_TIME AS OF <expression> must be a timestamp type");
+    TblsAnalysisError("select * from $TBL for system_time as of id", iceT,
+        "FOR SYSTEM_TIME AS OF <expression> must be a constant expression");
+    TblsAnalysisError("select * from $TBL for system_time as of '2021-02-32 15:52:45'",
+        iceT, "Invalid TIMESTAMP expression");
+
+    TblsAnalysisError("select * from $TBL for system_version as of 3.14",
+        iceT, "FOR SYSTEM_VERSION AS OF <expression> must be an integer type but is");
+
+    TblsAnalysisError("select * from $TBL for system_time as of now()", nonIceT,
+        "FOR SYSTEM_TIME AS OF clause is only supported for Iceberg tables.");
+    TblsAnalysisError("select * from $TBL for system_version as of 123", nonIceT,
+        "FOR SYSTEM_VERSION AS OF clause is only supported for Iceberg tables.");
+  }
+
+  @Test
   public void testCreatePartitionedIcebergTable() throws ImpalaException {
     String tblProperties = " TBLPROPERTIES ('iceberg.catalog'='hadoop.tables')";
     AnalyzesOk("CREATE TABLE tbl1 (i int, p1 int, p2 timestamp) " +
diff --git a/fe/src/test/java/org/apache/impala/analysis/ParserTest.java b/fe/src/test/java/org/apache/impala/analysis/ParserTest.java
index 866570d..ba4de8c 100644
--- a/fe/src/test/java/org/apache/impala/analysis/ParserTest.java
+++ b/fe/src/test/java/org/apache/impala/analysis/ParserTest.java
@@ -659,6 +659,29 @@ public class ParserTest extends FrontendTestBase {
   }
 
   @Test
+  public void TestTimeTravel() {
+    String timeAsOf = "select * from a for system_time as of";
+    String aliases[] = new String[] {"", " a_snapshot", " as a_snapshot"};
+    for (String alias : aliases) {
+      ParsesOk(timeAsOf + " '2021-08-09 15:14:40'" + alias);
+      ParsesOk(timeAsOf + " days_sub('2021-08-09 15:14:40', 3)" + alias);
+      ParsesOk(timeAsOf + " now()" + alias);
+      ParsesOk(timeAsOf + " days_sub(now(), 12)" + alias);
+      ParsesOk(timeAsOf + " now() - interval 100 days" + alias);
+      // 'system_version as of' only takes numeric literals
+      ParsesOk("select * from a for system_version as of 12345" + alias);
+      ParserError("select * from a for system_version as of -12345" + alias);
+      ParserError("select * from a for system_version as of \"12345\"" + alias);
+      ParserError("select * from a for system_version as of 34 + 34" + alias);
+      ParserError("select * from a for system_version as of b" + alias);
+      ParserError("select * from a for system_version as of b + 4" + alias);
+    }
+
+    ParserError("select * from t for system_time as of");
+    ParserError("select * from t for system_version as of");
+  }
+
+  @Test
   public void TestTableSampleClause() {
     String tblRefs[] = new String[] { "tbl", "db.tbl", "db.tbl.col", "db.tbl.col.fld" };
     String tblAliases[] = new String[] { "", "t" };
diff --git a/testdata/workloads/functional-query/queries/QueryTest/iceberg-negative.test b/testdata/workloads/functional-query/queries/QueryTest/iceberg-negative.test
index 126710e..9d4aa90 100644
--- a/testdata/workloads/functional-query/queries/QueryTest/iceberg-negative.test
+++ b/testdata/workloads/functional-query/queries/QueryTest/iceberg-negative.test
@@ -566,6 +566,16 @@ ALTER TABLE non_iceberg_table SET PARTITION SPEC (i);
 AnalysisException: ALTER TABLE SET PARTITION SPEC is only supported for Iceberg tables: $DATABASE.non_iceberg_table
 ====
 ---- QUERY
+SELECT * FROM non_iceberg_table FOR SYSTEM_TIME AS OF now();
+---- CATCH
+AS OF clause is only supported for Iceberg tables.
+====
+---- QUERY
+SELECT * FROM non_iceberg_table FOR SYSTEM_VERSION AS OF 42;
+---- CATCH
+AS OF clause is only supported for Iceberg tables.
+====
+---- QUERY
 CREATE TABLE iceberg_alter_part ( i int, d DATE, s STRUCT<f1:BIGINT,f2:BIGINT>)
 STORED AS ICEBERG;
 ALTER TABLE iceberg_alter_part SET PARTITION SPEC (g);
diff --git a/tests/query_test/test_iceberg.py b/tests/query_test/test_iceberg.py
index 08b0601..987dd42 100644
--- a/tests/query_test/test_iceberg.py
+++ b/tests/query_test/test_iceberg.py
@@ -15,8 +15,10 @@
 # specific language governing permissions and limitations
 # under the License.
 
+import datetime
 import os
 import random
+import time
 
 from subprocess import check_call
 from parquet.ttypes import ConvertedType
@@ -111,6 +113,128 @@ class TestIcebergTable(ImpalaTestSuite):
     # Check "is_current_ancestor" column.
     assert(first_snapshot[3] == "TRUE" and second_snapshot[3] == "TRUE")
 
+  def test_time_travel(self, vector, unique_database):
+    tbl_name = unique_database + ".time_travel"
+
+    def execute_query_ts(query):
+      self.execute_query(query)
+      return str(datetime.datetime.now())
+
+    def expect_results(query, expected_results):
+      data = self.execute_query(query)
+      assert len(data.data) == len(expected_results)
+      for r in expected_results:
+        assert r in data.data
+
+    def expect_results_t(ts, expected_results):
+      expect_results(
+          "select * from {0} for system_time as of {1}".format(tbl_name, ts),
+          expected_results)
+
+    def expect_results_v(snapshot_id, expected_results):
+      expect_results(
+          "select * from {0} for system_version as of {1}".format(tbl_name, snapshot_id),
+          expected_results)
+
+    def quote(s):
+      return "'{0}'".format(s)
+
+    def cast_ts(ts):
+      return "CAST({0} as timestamp)".format(quote(ts))
+
+    def get_snapshots():
+      data = self.execute_query("describe history {0}".format(tbl_name))
+      ret = list()
+      for row in data.data:
+        fields = row.split('\t')
+        ret.append(fields[1])
+      return ret
+
+    def impala_now():
+      now_data = self.execute_query("select now()")
+      return now_data.data[0]
+
+    # Iceberg doesn't create a snapshot entry for the initial empty table
+    self.execute_query("create table {0} (i int) stored as iceberg".format(tbl_name))
+    ts_1 = execute_query_ts("insert into {0} values (1)".format(tbl_name))
+    ts_2 = execute_query_ts("insert into {0} values (2)".format(tbl_name))
+    ts_3 = execute_query_ts("truncate table {0}".format(tbl_name))
+    time.sleep(1)
+    ts_4 = execute_query_ts("insert into {0} values (100)".format(tbl_name))
+    # Query table as of timestamps.
+    expect_results_t("now()", ['100'])
+    expect_results_t(quote(ts_1), ['1'])
+    expect_results_t(quote(ts_2), ['1', '2'])
+    expect_results_t(quote(ts_3), [])
+    expect_results_t(quote(ts_4), ['100'])
+    expect_results_t(cast_ts(ts_4) + " - interval 1 seconds", [])
+    # Future queries return the current snapshot.
+    expect_results_t(cast_ts(ts_4) + " + interval 1 hours", ['100'])
+    # Query table as of snapshot IDs.
+    snapshots = get_snapshots()
+    expect_results_v(snapshots[0], ['1'])
+    expect_results_v(snapshots[1], ['1', '2'])
+    expect_results_v(snapshots[2], [])
+    expect_results_v(snapshots[3], ['100'])
+
+    # SELECT diff
+    expect_results("""SELECT * FROM {tbl} FOR SYSTEM_TIME AS OF '{ts_new}'
+                      MINUS
+                      SELECT * FROM {tbl} FOR SYSTEM_TIME AS OF '{ts_old}'""".format(
+                   tbl=tbl_name, ts_new=ts_2, ts_old=ts_1),
+                   ['2'])
+    expect_results("""SELECT * FROM {tbl} FOR SYSTEM_VERSION AS OF {v_new}
+                      MINUS
+                      SELECT * FROM {tbl} FOR SYSTEM_VERSION AS OF {v_old}""".format(
+                   tbl=tbl_name, v_new=snapshots[1], v_old=snapshots[0]),
+                   ['2'])
+    # Mix SYSTEM_TIME and SYSTEM_VERSION
+    expect_results("""SELECT * FROM {tbl} FOR SYSTEM_VERSION AS OF {v_new}
+                      MINUS
+                      SELECT * FROM {tbl} FOR SYSTEM_TIME AS OF '{ts_old}'""".format(
+                   tbl=tbl_name, v_new=snapshots[1], ts_old=ts_1),
+                   ['2'])
+    expect_results("""SELECT * FROM {tbl} FOR SYSTEM_TIME AS OF '{ts_new}'
+                      MINUS
+                      SELECT * FROM {tbl} FOR SYSTEM_VERSION AS OF {v_old}""".format(
+                   tbl=tbl_name, ts_new=ts_2, v_old=snapshots[0]),
+                   ['2'])
+
+    # Query old snapshot
+    try:
+      self.execute_query("SELECT * FROM {0} FOR SYSTEM_TIME AS OF {1}".format(
+          tbl_name, "now() - interval 2 years"))
+      assert False  # Exception must be thrown
+    except Exception as e:
+      assert "Cannot find a snapshot older than" in str(e)
+    # Query invalid snapshot
+    try:
+      self.execute_query("SELECT * FROM {0} FOR SYSTEM_VERSION AS OF 42".format(tbl_name))
+      assert False  # Exception must be thrown
+    except Exception as e:
+      assert "Cannot find snapshot with ID 42" in str(e)
+
+    # Check that the timestamp is interpreted in the local timezone controlled by the
+    # query option TIMEZONE
+    self.execute_query("truncate table {0}".format(tbl_name))
+    self.execute_query("insert into {0} values (1111)".format(tbl_name))
+    self.execute_query("SET TIMEZONE='Europe/Budapest'")
+    now_budapest = impala_now()
+    expect_results_t(quote(now_budapest), ['1111'])
+
+    # Switch to Tokyo time. Tokyo time is always ahead of Budapest time.
+    self.execute_query("SET TIMEZONE='Asia/Tokyo'")
+    now_tokyo = impala_now()
+    expect_results_t(quote(now_tokyo), ['1111'])
+    try:
+      # Interpreting Budapest time in Tokyo time points to the past when the table
+      # didn't exist.
+      expect_results_t(quote(now_budapest), [])
+      assert False
+    except Exception as e:
+      assert "Cannot find a snapshot older than" in str(e)
+
+
   @SkipIf.not_hdfs
   def test_strings_utf8(self, vector, unique_database):
     # Create table