Posted to notifications@superset.apache.org by GitBox <gi...@apache.org> on 2022/04/18 07:32:40 UTC

[GitHub] [superset] zhaoyongjie opened a new pull request, #19750: feat: query results accuracy test

zhaoyongjie opened a new pull request, #19750:
URL: https://github.com/apache/superset/pull/19750

   ### SUMMARY
   Currently, we don't have tests that ensure query accuracy. For example, when a user applies a time comparison, we should combine the database results with the post-processing calculation and then verify the actual results against the expected results.
   
   This PR introduces a pattern that lets a developer easily construct a **query context** and a **query object**, and then verify the query results against the expected results.
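   
   For illustration, a minimal sketch of the pattern (adapted from the time-comparison test added in this PR; `load_sales_dataset` is the pytest fixture introduced in `tests/conftest.py`, and the expected value below comes from the sales example data):
   
   ```python
   from datetime import datetime
   
   import pandas as pd
   
   from superset.common.chart_data import ChartDataResultFormat, ChartDataResultType
   from superset.common.query_context import QueryContext
   from superset.common.query_object import QueryObject
   
   
   def test_monthly_order_counts(load_sales_dataset) -> None:
       # Build the query context against the loaded example dataset.
       query_context = QueryContext(
           datasource=load_sales_dataset,
           queries=[],
           form_data={},
           result_type=ChartDataResultType.FULL,
           result_format=ChartDataResultFormat.JSON,
           force=True,
           cache_values={},
       )
       # Describe the query: monthly counts for the year 2004.
       query_object = QueryObject(
           metrics=["count"],
           columns=["order_date"],
           orderby=[("order_date", True)],
           granularity="order_date",
           extras={"time_grain_sqla": "P1M"},
           from_dttm=datetime(2004, 1, 1),
           to_dttm=datetime(2005, 1, 1),
           row_limit=100000,
       )
       result = query_context.get_df_payload(query_object)
       # Verify the first row of the query result against the expected value.
       expected = pd.DataFrame(
           {"order_date": pd.to_datetime(["2004-01-01"]), "count": [91]}
       )
       pd.testing.assert_frame_equal(result["df"].head(1), expected, check_dtype=False)
   ```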
   
   ### BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF
   <!--- Skip this if not applicable -->
   
   ### TESTING INSTRUCTIONS
   <!--- Required! What steps can be taken to manually verify the changes? -->
   
   ### ADDITIONAL INFORMATION
   <!--- Check any relevant boxes with "x" -->
   <!--- HINT: Include "Fixes #nnn" if you are fixing an existing issue -->
   - [ ] Has associated issue:
   - [ ] Required feature flags:
   - [ ] Changes UI
   - [ ] Includes DB Migration (follow approval process in [SIP-59](https://github.com/apache/superset/issues/13351))
     - [ ] Migration is atomic, supports rollback & is backwards-compatible
     - [ ] Confirm DB migration upgrade and downgrade tested
     - [ ] Runtime estimates and downtime expectations provided
   - [ ] Introduces new feature or API
   - [ ] Removes existing feature or API
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org
For additional commands, e-mail: notifications-help@superset.apache.org


[GitHub] [superset] zhaoyongjie commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
zhaoyongjie commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852542796


##########
tests/example_data/data_loading/csv_dataset_loader.py:
##########
@@ -0,0 +1,110 @@
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+from __future__ import annotations
+
+import os.path
+from pathlib import Path
+from typing import List, TYPE_CHECKING
+from urllib.parse import urlparse
+
+import pandas as pd
+
+from superset import config, db
+from superset.utils.database import get_example_database
+
+if TYPE_CHECKING:
+    from superset.connectors.sqla.models import SqlaTable
+
+
+class CsvDatasetLoader:
+    # A simple csvloader, should run in Superset AppContext
+    csv_path: str
+    df: pd.DataFrame
+    table_name: str
+    dataset: SqlaTable
+
+    def __init__(
+        self,
+        csv_path: str,
+        cache: bool = True,
+        parse_dates: List[str] = [],
+    ):
+        # read from http
+        if csv_path.startswith("http") and csv_path.endswith(".csv"):
+            filename = urlparse(csv_path).path.split("/")[-1]
+            filepath = os.path.join(config.DATA_DIR, filename)
+            if os.path.exists(filepath) and cache:
+                self.csv_path = filepath
+                self.df = pd.read_csv(filepath, parse_dates=parse_dates)
+                self.table_name = filename.replace(".csv", "")
+            else:
+                self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+                if cache:
+                    self.df.to_csv(filepath, index=False)
+                self.csv_path = filepath
+                self.table_name = filename.replace(".csv", "")
+
+        # read from fs
+        if os.path.exists(csv_path) and csv_path.endswith(".csv"):
+            self.csv_path = csv_path
+            self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+            self.table_name = Path(csv_path).name.replace(".csv", "")
+
+    def load_table(self) -> None:
+        # load table to the default schema
+        example_database = get_example_database()
+        self.df.to_sql(
+            name=self.table_name,
+            con=example_database.get_sqla_engine(),
+            index=False,
+            if_exists="replace",
+        )

Review Comment:
   Okay. I will try to extend `load_data` and reuse it.





[GitHub] [superset] codecov[bot] commented on pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
codecov[bot] commented on PR #19750:
URL: https://github.com/apache/superset/pull/19750#issuecomment-1101184082

   # [Codecov](https://codecov.io/gh/apache/superset/pull/19750?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#19750](https://codecov.io/gh/apache/superset/pull/19750?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (4652f65) into [master](https://codecov.io/gh/apache/superset/commit/94075983f8abfcc7749cede5af9e24d2a9f1abe0?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (9407598) will **decrease** coverage by `12.84%`.
   > The diff coverage is `n/a`.
   
   ```diff
   @@             Coverage Diff             @@
   ##           master   #19750       +/-   ##
   ===========================================
   - Coverage   66.51%   53.66%   -12.85%     
   ===========================================
     Files        1686     1686               
     Lines       64591    64589        -2     
     Branches     6636     6636               
   ===========================================
   - Hits        42961    34663     -8298     
   - Misses      19931    28227     +8296     
     Partials     1699     1699               
   ```
   
   | Flag | Coverage Δ | |
   |---|---|---|
   | hive | `?` | |
   | mysql | `?` | |
   | postgres | `?` | |
   | presto | `52.54% <ø> (+<0.01%)` | :arrow_up: |
   | python | `56.25% <ø> (-26.18%)` | :arrow_down: |
   | sqlite | `?` | |
   | unit | `47.76% <ø> (+<0.01%)` | :arrow_up: |
   
   Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
   
   | [Impacted Files](https://codecov.io/gh/apache/superset/pull/19750?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [superset/utils/dashboard\_import\_export.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvdXRpbHMvZGFzaGJvYXJkX2ltcG9ydF9leHBvcnQucHk=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [superset/key\_value/commands/upsert.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQva2V5X3ZhbHVlL2NvbW1hbmRzL3Vwc2VydC5weQ==) | `0.00% <0.00%> (-89.59%)` | :arrow_down: |
   | [superset/key\_value/commands/update.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQva2V5X3ZhbHVlL2NvbW1hbmRzL3VwZGF0ZS5weQ==) | `0.00% <0.00%> (-89.37%)` | :arrow_down: |
   | [superset/key\_value/commands/delete.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQva2V5X3ZhbHVlL2NvbW1hbmRzL2RlbGV0ZS5weQ==) | `0.00% <0.00%> (-85.30%)` | :arrow_down: |
   | [superset/db\_engines/hive.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvZGJfZW5naW5lcy9oaXZlLnB5) | `0.00% <0.00%> (-85.19%)` | :arrow_down: |
   | [superset/key\_value/commands/delete\_expired.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQva2V5X3ZhbHVlL2NvbW1hbmRzL2RlbGV0ZV9leHBpcmVkLnB5) | `0.00% <0.00%> (-80.77%)` | :arrow_down: |
   | [superset/dashboards/commands/importers/v0.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvZGFzaGJvYXJkcy9jb21tYW5kcy9pbXBvcnRlcnMvdjAucHk=) | `14.79% <0.00%> (-75.15%)` | :arrow_down: |
   | [superset/datasets/commands/importers/v0.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvZGF0YXNldHMvY29tbWFuZHMvaW1wb3J0ZXJzL3YwLnB5) | `24.82% <0.00%> (-68.80%)` | :arrow_down: |
   | [superset/databases/commands/test\_connection.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvZGF0YWJhc2VzL2NvbW1hbmRzL3Rlc3RfY29ubmVjdGlvbi5weQ==) | `31.42% <0.00%> (-68.58%)` | :arrow_down: |
   | [superset/datasets/commands/update.py](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvZGF0YXNldHMvY29tbWFuZHMvdXBkYXRlLnB5) | `25.88% <0.00%> (-68.24%)` | :arrow_down: |
   | ... and [268 more](https://codecov.io/gh/apache/superset/pull/19750/diff?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/superset/pull/19750?src=pr&el=continue&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/superset/pull/19750?src=pr&el=footer&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Last update [9407598...4652f65](https://codecov.io/gh/apache/superset/pull/19750?src=pr&el=lastupdated&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation).
   




[GitHub] [superset] zhaoyongjie commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
zhaoyongjie commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852528922


##########
tests/example_data/data_loading/csv_dataset_loader.py:
##########
@@ -0,0 +1,110 @@
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+from __future__ import annotations
+
+import os.path
+from pathlib import Path
+from typing import List, TYPE_CHECKING
+from urllib.parse import urlparse
+
+import pandas as pd
+
+from superset import config, db
+from superset.utils.database import get_example_database
+
+if TYPE_CHECKING:
+    from superset.connectors.sqla.models import SqlaTable
+
+
+class CsvDatasetLoader:
+    # A simple csvloader, should run in Superset AppContext
+    csv_path: str
+    df: pd.DataFrame
+    table_name: str
+    dataset: SqlaTable
+
+    def __init__(
+        self,
+        csv_path: str,
+        cache: bool = True,
+        parse_dates: List[str] = [],
+    ):
+        # read from http
+        if csv_path.startswith("http") and csv_path.endswith(".csv"):
+            filename = urlparse(csv_path).path.split("/")[-1]
+            filepath = os.path.join(config.DATA_DIR, filename)
+            if os.path.exists(filepath) and cache:
+                self.csv_path = filepath
+                self.df = pd.read_csv(filepath, parse_dates=parse_dates)
+                self.table_name = filename.replace(".csv", "")
+            else:
+                self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+                if cache:
+                    self.df.to_csv(filepath, index=False)
+                self.csv_path = filepath
+                self.table_name = filename.replace(".csv", "")
+
+        # read from fs
+        if os.path.exists(csv_path) and csv_path.endswith(".csv"):
+            self.csv_path = csv_path
+            self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+            self.table_name = Path(csv_path).name.replace(".csv", "")
+
+    def load_table(self) -> None:
+        # load table to the default schema
+        example_database = get_example_database()
+        self.df.to_sql(
+            name=self.table_name,
+            con=example_database.get_sqla_engine(),
+            index=False,
+            if_exists="replace",
+        )

Review Comment:
   Nice! I will do it.





[GitHub] [superset] zhaoyongjie commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
zhaoyongjie commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852525028


##########
tests/conftest.py:
##########
@@ -103,3 +105,23 @@ def data_loader(
     return PandasDataLoader(
         example_db_engine, pandas_loader_configuration, table_to_df_convertor
     )
+
+
+@fixture
+def superset_app_ctx():
+    with app.app_context() as ctx:
+        yield ctx
+
+
+@fixture
+def load_sales_dataset():
+    with app.app_context():
+        loader = CsvDatasetLoader(
+            "https://raw.githubusercontent.com/apache-superset/examples-data/lowercase_columns_examples/datasets/examples/sales.csv",
+            parse_dates=["order_date"],
+        )
+        loader.load_table()
+        dataset = loader.load_dataset()
+        yield dataset
+        loader.remove_dataset()
+        loader.remove_table()

Review Comment:
   Makes sense. I will do it.





[GitHub] [superset] zhaoyongjie commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
zhaoyongjie commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852524689


##########
superset/connectors/sqla/models.py:
##########
@@ -2042,6 +2042,10 @@ def after_delete(  # pylint: disable=unused-argument
         dataset = (
             session.query(NewDataset).filter_by(sqlatable_id=target.id).one_or_none()
         )
+        for tbl in dataset.tables:
+            if len(tbl.datasets) == 1 and tbl.datasets[0] == dataset:
+                session.delete(tbl)

Review Comment:
   This change is not needed for the PR itself; I just added it to pass the CI.





[GitHub] [superset] betodealmeida commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
betodealmeida commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852514225


##########
tests/example_data/data_loading/csv_dataset_loader.py:
##########
@@ -0,0 +1,110 @@
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+from __future__ import annotations
+
+import os.path
+from pathlib import Path
+from typing import List, TYPE_CHECKING
+from urllib.parse import urlparse
+
+import pandas as pd
+
+from superset import config, db
+from superset.utils.database import get_example_database
+
+if TYPE_CHECKING:
+    from superset.connectors.sqla.models import SqlaTable
+
+
+class CsvDatasetLoader:
+    # A simple csvloader, should run in Superset AppContext

Review Comment:
   Nit, it's better to make this a docstring so it shows up in code tools:
   
   ```suggestion
       """A simple csvloader, should run in Superset AppContext"""
   ```



##########
tests/conftest.py:
##########
@@ -103,3 +105,23 @@ def data_loader(
     return PandasDataLoader(
         example_db_engine, pandas_loader_configuration, table_to_df_convertor
     )
+
+
+@fixture
+def superset_app_ctx():
+    with app.app_context() as ctx:
+        yield ctx
+
+
+@fixture
+def load_sales_dataset():
+    with app.app_context():
+        loader = CsvDatasetLoader(
+            "https://raw.githubusercontent.com/apache-superset/examples-data/lowercase_columns_examples/datasets/examples/sales.csv",
+            parse_dates=["order_date"],
+        )
+        loader.load_table()
+        dataset = loader.load_dataset()
+        yield dataset
+        loader.remove_dataset()
+        loader.remove_table()

Review Comment:
   You can reuse the `superset_app_ctx` fixture:
   
   ```suggestion
   def load_sales_dataset(superset_app_ctx):
       loader = CsvDatasetLoader(
           "https://raw.githubusercontent.com/apache-superset/examples-data/lowercase_columns_examples/datasets/examples/sales.csv",
           parse_dates=["order_date"],
       )
       loader.load_table()
       dataset = loader.load_dataset()
       yield dataset
       loader.remove_dataset()
       loader.remove_table()
   ```



##########
tests/example_data/data_loading/csv_dataset_loader.py:
##########
@@ -0,0 +1,110 @@
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+from __future__ import annotations
+
+import os.path
+from pathlib import Path
+from typing import List, TYPE_CHECKING
+from urllib.parse import urlparse
+
+import pandas as pd
+
+from superset import config, db
+from superset.utils.database import get_example_database
+
+if TYPE_CHECKING:
+    from superset.connectors.sqla.models import SqlaTable
+
+
+class CsvDatasetLoader:
+    # A simple csvloader, should run in Superset AppContext
+    csv_path: str
+    df: pd.DataFrame
+    table_name: str
+    dataset: SqlaTable
+
+    def __init__(
+        self,
+        csv_path: str,
+        cache: bool = True,
+        parse_dates: List[str] = [],
+    ):
+        # read from http
+        if csv_path.startswith("http") and csv_path.endswith(".csv"):
+            filename = urlparse(csv_path).path.split("/")[-1]
+            filepath = os.path.join(config.DATA_DIR, filename)
+            if os.path.exists(filepath) and cache:
+                self.csv_path = filepath
+                self.df = pd.read_csv(filepath, parse_dates=parse_dates)
+                self.table_name = filename.replace(".csv", "")
+            else:
+                self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+                if cache:
+                    self.df.to_csv(filepath, index=False)
+                self.csv_path = filepath
+                self.table_name = filename.replace(".csv", "")
+
+        # read from fs
+        if os.path.exists(csv_path) and csv_path.endswith(".csv"):
+            self.csv_path = csv_path
+            self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+            self.table_name = Path(csv_path).name.replace(".csv", "")
+
+    def load_table(self) -> None:
+        # load table to the default schema
+        example_database = get_example_database()
+        self.df.to_sql(
+            name=self.table_name,
+            con=example_database.get_sqla_engine(),
+            index=False,
+            if_exists="replace",
+        )

Review Comment:
   We can probably reuse the logic from https://github.com/apache/superset/blob/a2d34ec4b8a89723e7468f194a98386699af0bd7/superset/datasets/commands/importers/v1/utils.py#L151-L182 here (you just need to modify it to load local files).



##########
superset/datasets/models.py:
##########
@@ -76,7 +76,11 @@ class Dataset(Model, AuditMixinNullable, ExtraJSONMixin, ImportExportMixin):
     expression = sa.Column(sa.Text)
 
     # n:n relationship
-    tables: List[Table] = relationship("Table", secondary=table_association_table)
+    tables: List[Table] = relationship(
+        "Table",
+        backref="datasets",
+        secondary=table_association_table,
+    )

Review Comment:
   Is this needed for this PR?



##########
superset/connectors/sqla/models.py:
##########
@@ -2042,6 +2042,10 @@ def after_delete(  # pylint: disable=unused-argument
         dataset = (
             session.query(NewDataset).filter_by(sqlatable_id=target.id).one_or_none()
         )
+        for tbl in dataset.tables:
+            if len(tbl.datasets) == 1 and tbl.datasets[0] == dataset:
+                session.delete(tbl)

Review Comment:
   Is this needed in this PR or just leftover?



##########
tests/integration_tests/query_context/test_time_comparion.py:
##########
@@ -0,0 +1,126 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from __future__ import annotations
+
+from datetime import datetime
+from typing import TYPE_CHECKING
+
+from superset.common.chart_data import ChartDataResultFormat, ChartDataResultType
+from superset.common.query_context import QueryContext
+from superset.common.query_object import QueryObject
+
+if TYPE_CHECKING:
+    from superset.connectors.sqla.models import SqlaTable
+
+
+def test_time_comparison(load_sales_dataset: SqlaTable) -> None:
+    query_context = QueryContext(
+        datasource=load_sales_dataset,
+        queries=[],
+        form_data={},
+        result_type=ChartDataResultType.FULL,
+        result_format=ChartDataResultFormat.JSON,
+        force=True,
+        cache_values={},
+    )
+    query_object = QueryObject(
+        metrics=["count"],
+        columns=["order_date"],
+        orderby=[
+            (
+                "order_date",
+                True,
+            )
+        ],
+        granularity="order_date",
+        extras={"time_grain_sqla": "P1M"},
+        from_dttm=datetime(2004, 1, 1),
+        to_dttm=datetime(2005, 1, 1),
+        row_limit=100000,
+    )
+    rv_2014 = query_context.get_df_payload(query_object, force_cached=True)
+    """
+    >>> rv_2014['df']
+           order_date  count
+    0  2004-01-01     91
+    1  2004-02-01     86
+    2  2004-03-01     56
+    3  2004-04-01     64
+    4  2004-05-01     74
+    5  2004-06-01     85
+    6  2004-07-01     91
+    7  2004-08-01    133
+    8  2004-09-01     95
+    9  2004-10-01    159
+    10 2004-11-01    301
+    11 2004-12-01    110
+    """

Review Comment:
   Does this get checked by `pytest`?





[GitHub] [superset] betodealmeida commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
betodealmeida commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852539980


##########
tests/integration_tests/query_context/test_time_comparion.py:
##########
@@ -0,0 +1,126 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from __future__ import annotations
+
+from datetime import datetime
+from typing import TYPE_CHECKING
+
+from superset.common.chart_data import ChartDataResultFormat, ChartDataResultType
+from superset.common.query_context import QueryContext
+from superset.common.query_object import QueryObject
+
+if TYPE_CHECKING:
+    from superset.connectors.sqla.models import SqlaTable
+
+
+def test_time_comparison(load_sales_dataset: SqlaTable) -> None:
+    query_context = QueryContext(
+        datasource=load_sales_dataset,
+        queries=[],
+        form_data={},
+        result_type=ChartDataResultType.FULL,
+        result_format=ChartDataResultFormat.JSON,
+        force=True,
+        cache_values={},
+    )
+    query_object = QueryObject(
+        metrics=["count"],
+        columns=["order_date"],
+        orderby=[
+            (
+                "order_date",
+                True,
+            )
+        ],
+        granularity="order_date",
+        extras={"time_grain_sqla": "P1M"},
+        from_dttm=datetime(2004, 1, 1),
+        to_dttm=datetime(2005, 1, 1),
+        row_limit=100000,
+    )
+    rv_2014 = query_context.get_df_payload(query_object, force_cached=True)
+    """
+    >>> rv_2014['df']
+           order_date  count
+    0  2004-01-01     91
+    1  2004-02-01     86
+    2  2004-03-01     56
+    3  2004-04-01     64
+    4  2004-05-01     74
+    5  2004-06-01     85
+    6  2004-07-01     91
+    7  2004-08-01    133
+    8  2004-09-01     95
+    9  2004-10-01    159
+    10 2004-11-01    301
+    11 2004-12-01    110
+    """

Review Comment:
   Ah, I thought pytest was doing something like doctest and checking that the output matches the string.
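   
   Side note, purely illustrative: pytest only compares such output when doctest collection is enabled (e.g. by running with `--doctest-modules`), and `monthly_total` below is a hypothetical helper, not code from this PR. A minimal doctest along those lines would look like:
   
   ```python
   def monthly_total(counts):
       """Sum monthly order counts.
   
       >>> monthly_total([91, 86, 56])
       233
       """
       return sum(counts)
   ```
   
   Since the suite here doesn't enable doctest collection, the string in the test is documentation only.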





[GitHub] [superset] zhaoyongjie closed pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
zhaoyongjie closed pull request #19750: feat: query results accuracy test
URL: https://github.com/apache/superset/pull/19750




[GitHub] [superset] zhaoyongjie commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
zhaoyongjie commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852525513


##########
superset/datasets/models.py:
##########
@@ -76,7 +76,11 @@ class Dataset(Model, AuditMixinNullable, ExtraJSONMixin, ImportExportMixin):
     expression = sa.Column(sa.Text)
 
     # n:n relationship
-    tables: List[Table] = relationship("Table", secondary=table_association_table)
+    tables: List[Table] = relationship(
+        "Table",
+        backref="datasets",
+        secondary=table_association_table,
+    )

Review Comment:
   No need; I just added it to pass the CI.





[GitHub] [superset] zhaoyongjie commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
zhaoyongjie commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r851947806


##########
tests/integration_tests/sqla_models_tests.py:
##########
@@ -659,51 +659,52 @@ def test_filter_on_text_column(text_column_table):
     assert result_object.df["count"][0] == 1
 
 
-def test_should_generate_closed_and_open_time_filter_range():
-    with app.app_context():
-        if backend() != "postgresql":
-            pytest.skip(f"{backend()} has different dialect for datetime column")
-
-        table = SqlaTable(
-            table_name="temporal_column_table",
-            sql=(
-                "SELECT '2021-12-31'::timestamp as datetime_col "
-                "UNION SELECT '2022-01-01'::timestamp "
-                "UNION SELECT '2022-03-10'::timestamp "
-                "UNION SELECT '2023-01-01'::timestamp "
-                "UNION SELECT '2023-03-10'::timestamp "
-            ),
-            database=get_example_database(),
-        )
-        TableColumn(
-            column_name="datetime_col",
-            type="TIMESTAMP",
-            table=table,
-            is_dttm=True,
-        )
-        SqlMetric(metric_name="count", expression="count(*)", table=table)
-        result_object = table.query(
-            {
-                "metrics": ["count"],
-                "is_timeseries": False,
-                "filter": [],
-                "from_dttm": datetime(2022, 1, 1),
-                "to_dttm": datetime(2023, 1, 1),
-                "granularity": "datetime_col",
-            }
-        )
-        """ >>> result_object.query
-                SELECT count(*) AS count
-                FROM
-                  (SELECT '2021-12-31'::timestamp as datetime_col
-                   UNION SELECT '2022-01-01'::timestamp
-                   UNION SELECT '2022-03-10'::timestamp
-                   UNION SELECT '2023-01-01'::timestamp
-                   UNION SELECT '2023-03-10'::timestamp) AS virtual_table
-                WHERE datetime_col >= TO_TIMESTAMP('2022-01-01 00:00:00.000000', 'YYYY-MM-DD HH24:MI:SS.US')
-                  AND datetime_col < TO_TIMESTAMP('2023-01-01 00:00:00.000000', 'YYYY-MM-DD HH24:MI:SS.US')
-        """
-        assert result_object.df.iloc[0]["count"] == 2
+def test_should_generate_closed_and_open_time_filter_range(
+    superset_app_ctx: AppContext,
+):
+    if backend() != "postgresql":
+        pytest.skip(f"{backend()} has different dialect for datetime column")

Review Comment:
   Drive-by change: use the `superset_app_ctx` fixture to replace the `context manager`.
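   
   Roughly, the before/after of this drive-by change (sketch only; `app` stands for the Flask app object the test suite already imports, and the test bodies are elided):
   
   ```python
   # Before: the test opened the application context itself.
   def test_should_generate_closed_and_open_time_filter_range_old():
       with app.app_context():
           ...  # test body ran inside the context manager
   
   # After: the shared `superset_app_ctx` fixture from tests/conftest.py
   # provides the context, removing one level of indentation.
   def test_should_generate_closed_and_open_time_filter_range(superset_app_ctx):
       ...  # test body runs inside the fixture-provided context
   ```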





[GitHub] [superset] zhaoyongjie commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
zhaoyongjie commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852526693


##########
tests/integration_tests/query_context/test_time_comparion.py:
##########
@@ -0,0 +1,126 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from __future__ import annotations
+
+from datetime import datetime
+from typing import TYPE_CHECKING
+
+from superset.common.chart_data import ChartDataResultFormat, ChartDataResultType
+from superset.common.query_context import QueryContext
+from superset.common.query_object import QueryObject
+
+if TYPE_CHECKING:
+    from superset.connectors.sqla.models import SqlaTable
+
+
+def test_time_comparison(load_sales_dataset: SqlaTable) -> None:
+    query_context = QueryContext(
+        datasource=load_sales_dataset,
+        queries=[],
+        form_data={},
+        result_type=ChartDataResultType.FULL,
+        result_format=ChartDataResultFormat.JSON,
+        force=True,
+        cache_values={},
+    )
+    query_object = QueryObject(
+        metrics=["count"],
+        columns=["order_date"],
+        orderby=[
+            (
+                "order_date",
+                True,
+            )
+        ],
+        granularity="order_date",
+        extras={"time_grain_sqla": "P1M"},
+        from_dttm=datetime(2004, 1, 1),
+        to_dttm=datetime(2005, 1, 1),
+        row_limit=100000,
+    )
+    rv_2014 = query_context.get_df_payload(query_object, force_cached=True)
+    """
+    >>> rv_2014['df']
+           order_date  count
+    0  2004-01-01     91
+    1  2004-02-01     86
+    2  2004-03-01     56
+    3  2004-04-01     64
+    4  2004-05-01     74
+    5  2004-06-01     85
+    6  2004-07-01     91
+    7  2004-08-01    133
+    8  2004-09-01     95
+    9  2004-10-01    159
+    10 2004-11-01    301
+    11 2004-12-01    110
+    """

Review Comment:
   It isn't checked by `pytest`. The purpose here is just to make the test readable.
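   
   If we ever want it asserted rather than just readable, a rough sketch using `pandas.testing.assert_frame_equal` with the values from the docstring:
   
   ```python
   import pandas as pd
   
   expected_2004 = pd.DataFrame(
       {
           "order_date": pd.to_datetime(
               [
                   "2004-01-01", "2004-02-01", "2004-03-01", "2004-04-01",
                   "2004-05-01", "2004-06-01", "2004-07-01", "2004-08-01",
                   "2004-09-01", "2004-10-01", "2004-11-01", "2004-12-01",
               ]
           ),
           "count": [91, 86, 56, 64, 74, 85, 91, 133, 95, 159, 301, 110],
       }
   )
   # check_dtype=False because the database backend may return a different
   # integer dtype for the count column.
   pd.testing.assert_frame_equal(rv_2014["df"], expected_2004, check_dtype=False)
   ```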





[GitHub] [superset] zhaoyongjie commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
zhaoyongjie commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852539299


##########
tests/example_data/data_loading/csv_dataset_loader.py:
##########
@@ -0,0 +1,110 @@
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+from __future__ import annotations
+
+import os.path
+from pathlib import Path
+from typing import List, TYPE_CHECKING
+from urllib.parse import urlparse
+
+import pandas as pd
+
+from superset import config, db
+from superset.utils.database import get_example_database
+
+if TYPE_CHECKING:
+    from superset.connectors.sqla.models import SqlaTable
+
+
+class CsvDatasetLoader:
+    # A simple csvloader, should run in Superset AppContext
+    csv_path: str
+    df: pd.DataFrame
+    table_name: str
+    dataset: SqlaTable
+
+    def __init__(
+        self,
+        csv_path: str,
+        cache: bool = True,
+        parse_dates: List[str] = [],
+    ):
+        # read from http
+        if csv_path.startswith("http") and csv_path.endswith(".csv"):
+            filename = urlparse(csv_path).path.split("/")[-1]
+            filepath = os.path.join(config.DATA_DIR, filename)
+            if os.path.exists(filepath) and cache:
+                self.csv_path = filepath
+                self.df = pd.read_csv(filepath, parse_dates=parse_dates)
+                self.table_name = filename.replace(".csv", "")
+            else:
+                self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+                if cache:
+                    self.df.to_csv(filepath, index=False)
+                self.csv_path = filepath
+                self.table_name = filename.replace(".csv", "")
+
+        # read from fs
+        if os.path.exists(csv_path) and csv_path.endswith(".csv"):
+            self.csv_path = csv_path
+            self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+            self.table_name = Path(csv_path).name.replace(".csv", "")
+
+    def load_table(self) -> None:
+        # load table to the default schema
+        example_database = get_example_database()
+        self.df.to_sql(
+            name=self.table_name,
+            con=example_database.get_sqla_engine(),
+            index=False,
+            if_exists="replace",
+        )

Review Comment:
   These two functions look different once I dug into them. We don't have a **Dataset** before calling `CsvDatasetLoader`: the loader loads a CSV into the **example database** and then infers a **Dataset** from the loaded table, whereas `load_data` requires an existing **Dataset**.
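   
   For context, the "infer a Dataset from the loaded table" step looks roughly like the sketch below. This is not the exact code in this PR; it assumes `SqlaTable.fetch_metadata()` is used to derive the columns from the physical table:
   
   ```python
   from superset import db
   from superset.connectors.sqla.models import SqlaTable
   from superset.utils.database import get_example_database
   
   
   def load_dataset(table_name: str) -> SqlaTable:
       """Register the table just written to the example database as a dataset."""
       dataset = SqlaTable(table_name=table_name, database=get_example_database())
       dataset.fetch_metadata()  # infer columns/types from the loaded table
       db.session.add(dataset)
       db.session.commit()
       return dataset
   ```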





[GitHub] [superset] betodealmeida commented on a diff in pull request #19750: feat: query results accuracy test

Posted by GitBox <gi...@apache.org>.
betodealmeida commented on code in PR #19750:
URL: https://github.com/apache/superset/pull/19750#discussion_r852541785


##########
tests/example_data/data_loading/csv_dataset_loader.py:
##########
@@ -0,0 +1,110 @@
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing,
+#  software distributed under the License is distributed on an
+#  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#  KIND, either express or implied.  See the License for the
+#  specific language governing permissions and limitations
+#  under the License.
+from __future__ import annotations
+
+import os.path
+from pathlib import Path
+from typing import List, TYPE_CHECKING
+from urllib.parse import urlparse
+
+import pandas as pd
+
+from superset import config, db
+from superset.utils.database import get_example_database
+
+if TYPE_CHECKING:
+    from superset.connectors.sqla.models import SqlaTable
+
+
+class CsvDatasetLoader:
+    # A simple csvloader, should run in Superset AppContext
+    csv_path: str
+    df: pd.DataFrame
+    table_name: str
+    dataset: SqlaTable
+
+    def __init__(
+        self,
+        csv_path: str,
+        cache: bool = True,
+        parse_dates: List[str] = [],
+    ):
+        # read from http
+        if csv_path.startswith("http") and csv_path.endswith(".csv"):
+            filename = urlparse(csv_path).path.split("/")[-1]
+            filepath = os.path.join(config.DATA_DIR, filename)
+            if os.path.exists(filepath) and cache:
+                self.csv_path = filepath
+                self.df = pd.read_csv(filepath, parse_dates=parse_dates)
+                self.table_name = filename.replace(".csv", "")
+            else:
+                self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+                if cache:
+                    self.df.to_csv(filepath, index=False)
+                self.csv_path = filepath
+                self.table_name = filename.replace(".csv", "")
+
+        # read from fs
+        if os.path.exists(csv_path) and csv_path.endswith(".csv"):
+            self.csv_path = csv_path
+            self.df = pd.read_csv(csv_path, parse_dates=parse_dates)
+            self.table_name = Path(csv_path).name.replace(".csv", "")
+
+    def load_table(self) -> None:
+        # load table to the default schema
+        example_database = get_example_database()
+        self.df.to_sql(
+            name=self.table_name,
+            con=example_database.get_sqla_engine(),
+            index=False,
+            if_exists="replace",
+        )

Review Comment:
   You're right, but it might not be hard to modify it so it doesn't need the dataset (it's using it just to infer types). But maybe later we can consolidate these two functions into one (and maybe also `superset.examples.helpers.get_example_data`).


