Posted to notifications@superset.apache.org by "hughhhh (via GitHub)" <gi...@apache.org> on 2023/01/25 17:55:27 UTC

[GitHub] [superset] hughhhh opened a new pull request, #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

hughhhh opened a new pull request, #22853:
URL: https://github.com/apache/superset/pull/22853

   <!---
   Please write the PR title following the conventions at https://www.conventionalcommits.org/en/v1.0.0/
   Example:
   fix(dashboard): load charts correctly
   -->
   
   ### SUMMARY
   <!--- Describe the change below, including rationale and design decisions -->
   
   ### BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF
   <!--- Skip this if not applicable -->
   
   ### TESTING INSTRUCTIONS
   <!--- Required! What steps can be taken to manually verify the changes? -->
   
   ### ADDITIONAL INFORMATION
   <!--- Check any relevant boxes with "x" -->
   <!--- HINT: Include "Fixes #nnn" if you are fixing an existing issue -->
   - [ ] Has associated issue:
   - [ ] Required feature flags:
   - [ ] Changes UI
   - [ ] Includes DB Migration (follow approval process in [SIP-59](https://github.com/apache/superset/issues/13351))
     - [ ] Migration is atomic, supports rollback & is backwards-compatible
     - [ ] Confirm DB migration upgrade and downgrade tested
     - [ ] Runtime estimates and downtime expectations provided
   - [ ] Introduces new feature or API
   - [ ] Removes existing feature or API
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org
For additional commands, e-mail: notifications-help@superset.apache.org


[GitHub] [superset] Antonio-RiveroMartnez commented on a diff in pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "Antonio-RiveroMartnez (via GitHub)" <gi...@apache.org>.
Antonio-RiveroMartnez commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1113444910


##########
superset/models/sql_lab.py:
##########
@@ -183,47 +184,33 @@ def sql_tables(self) -> List[Table]:
         return list(ParsedQuery(self.sql).tables)
 
     @property
-    def columns(self) -> List[Dict[str, Any]]:
-        bool_types = ("BOOL",)
-        num_types = (
-            "DOUBLE",
-            "FLOAT",
-            "INT",
-            "BIGINT",
-            "NUMBER",
-            "LONG",
-            "REAL",
-            "NUMERIC",
-            "DECIMAL",
-            "MONEY",
+    def columns(self) -> List["TableColumn"]:
+        from superset.connectors.sqla.models import (  # pylint: disable=import-outside-toplevel
+            TableColumn,
         )
-        date_types = ("DATE", "TIME")
-        str_types = ("VARCHAR", "STRING", "CHAR")
+
         columns = []
-        col_type = ""
         for col in self.extra.get("columns", []):
-            computed_column = {**col}
-            col_type = col.get("type")
-
-            if col_type and any(map(lambda t: t in col_type.upper(), str_types)):
-                computed_column["type_generic"] = GenericDataType.STRING
-            if col_type and any(map(lambda t: t in col_type.upper(), bool_types)):
-                computed_column["type_generic"] = GenericDataType.BOOLEAN
-            if col_type and any(map(lambda t: t in col_type.upper(), num_types)):
-                computed_column["type_generic"] = GenericDataType.NUMERIC
-            if col_type and any(map(lambda t: t in col_type.upper(), date_types)):
-                computed_column["type_generic"] = GenericDataType.TEMPORAL
-
-            computed_column["column_name"] = col.get("name")
-            computed_column["groupby"] = True
-            columns.append(computed_column)
+            columns.append(
+                TableColumn(
+                    column_name=col["name"],
+                    type=col["type"],
+                    is_dttm=col["is_dttm"],
+                    groupby=True,

Review Comment:
   I know this is not new in this PR, but I'm wondering: why do we set `groupby=True` here? Shouldn't it be set based on the `col` properties?



##########
superset/models/sql_lab.py:
##########
@@ -183,47 +184,33 @@ def sql_tables(self) -> List[Table]:
         return list(ParsedQuery(self.sql).tables)
 
     @property
-    def columns(self) -> List[Dict[str, Any]]:
-        bool_types = ("BOOL",)
-        num_types = (
-            "DOUBLE",
-            "FLOAT",
-            "INT",
-            "BIGINT",
-            "NUMBER",
-            "LONG",
-            "REAL",
-            "NUMERIC",
-            "DECIMAL",
-            "MONEY",
+    def columns(self) -> List["TableColumn"]:
+        from superset.connectors.sqla.models import (  # pylint: disable=import-outside-toplevel
+            TableColumn,
         )
-        date_types = ("DATE", "TIME")
-        str_types = ("VARCHAR", "STRING", "CHAR")
+
         columns = []
-        col_type = ""
         for col in self.extra.get("columns", []):
-            computed_column = {**col}
-            col_type = col.get("type")
-
-            if col_type and any(map(lambda t: t in col_type.upper(), str_types)):
-                computed_column["type_generic"] = GenericDataType.STRING
-            if col_type and any(map(lambda t: t in col_type.upper(), bool_types)):
-                computed_column["type_generic"] = GenericDataType.BOOLEAN
-            if col_type and any(map(lambda t: t in col_type.upper(), num_types)):
-                computed_column["type_generic"] = GenericDataType.NUMERIC
-            if col_type and any(map(lambda t: t in col_type.upper(), date_types)):
-                computed_column["type_generic"] = GenericDataType.TEMPORAL
-
-            computed_column["column_name"] = col.get("name")
-            computed_column["groupby"] = True
-            columns.append(computed_column)
+            columns.append(
+                TableColumn(
+                    column_name=col["name"],
+                    type=col["type"],
+                    is_dttm=col["is_dttm"],
+                    groupby=True,
+                    filterable=True,

Review Comment:
   Previously we were not setting `filterable` to `True`, but now we do. Is this intentional?





[GitHub] [superset] github-actions[bot] commented on pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1462902689

   @jinghua-qa Ephemeral environment spinning up at http://34.219.214.33:8080. Credentials are `admin`/`admin`. Please allow several minutes for bootstrapping and startup.




[GitHub] [superset] eschutho commented on a diff in pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "eschutho (via GitHub)" <gi...@apache.org>.
eschutho commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1114900544


##########
tests/integration_tests/sqllab_tests.py:
##########
@@ -523,8 +531,16 @@ def test_sqllab_viz_bad_payload(self):
             "chartType": "dist_bar",
             "schema": "superset",
             "columns": [
-                {"is_dttm": False, "type": "STRING", "name": f"viz_type_{random()}"},
-                {"is_dttm": False, "type": "OBJECT", "name": f"ccount_{random()}"},
+                {
+                    "is_dttm": False,
+                    "type": "STRING",
+                    "column_name": f"viz_type_{random()}",

Review Comment:
   It's great that we're aligning the schemas here, but I would also consider this a breaking change, so let's hold off on merging this until 3.0.





[GitHub] [superset] jinghua-qa commented on pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "jinghua-qa (via GitHub)" <gi...@apache.org>.
jinghua-qa commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1462898052

   /testenv up
   




[GitHub] [superset] rusackas commented on pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "rusackas (via GitHub)" <gi...@apache.org>.
rusackas commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1404014203

   Pinging @villebro since I know he's keen on this. Might wanna fill out the PR description though, for the rest of the world ;)




[GitHub] [superset] jinghua-qa commented on pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "jinghua-qa (via GitHub)" <gi...@apache.org>.
jinghua-qa commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1464113250

   Tested in the new ephemeral env, found some issues:
   Still see issue 5:
   the aggregate field is not pre-filled in the metric control when dragging and dropping any value from the list to the Metric section.
   
   https://user-images.githubusercontent.com/81597121/224245810-785ab086-3fc1-4e5a-b432-f5520999d8cc.mov
   
   New issue:
   user cannot save a dataset in SQL Lab
   
   https://user-images.githubusercontent.com/81597121/224245933-726d5884-456d-4347-bcf4-3c0b50bffbb4.mov
   
   
   




[GitHub] [superset] betodealmeida commented on a diff in pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "betodealmeida (via GitHub)" <gi...@apache.org>.
betodealmeida commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1114887671


##########
superset-frontend/src/SqlLab/components/SaveDatasetModal/index.tsx:
##########
@@ -61,7 +61,7 @@ export type ExploreQuery = QueryResponse & {
 };
 
 export interface ISimpleColumn {
-  name?: string | null;
+  column_name?: string | null;

Review Comment:
   Any reason why we're changing this? `column.column_name` is redundant.



##########
superset/models/sql_lab.py:
##########
@@ -183,47 +184,33 @@ def sql_tables(self) -> List[Table]:
         return list(ParsedQuery(self.sql).tables)
 
     @property
-    def columns(self) -> List[Dict[str, Any]]:
-        bool_types = ("BOOL",)
-        num_types = (
-            "DOUBLE",
-            "FLOAT",
-            "INT",
-            "BIGINT",
-            "NUMBER",
-            "LONG",
-            "REAL",
-            "NUMERIC",
-            "DECIMAL",
-            "MONEY",
+    def columns(self) -> List["TableColumn"]:
+        from superset.connectors.sqla.models import (  # pylint: disable=import-outside-toplevel
+            TableColumn,
         )
-        date_types = ("DATE", "TIME")
-        str_types = ("VARCHAR", "STRING", "CHAR")
+
         columns = []
-        col_type = ""
         for col in self.extra.get("columns", []):
-            computed_column = {**col}
-            col_type = col.get("type")
-
-            if col_type and any(map(lambda t: t in col_type.upper(), str_types)):
-                computed_column["type_generic"] = GenericDataType.STRING
-            if col_type and any(map(lambda t: t in col_type.upper(), bool_types)):
-                computed_column["type_generic"] = GenericDataType.BOOLEAN
-            if col_type and any(map(lambda t: t in col_type.upper(), num_types)):
-                computed_column["type_generic"] = GenericDataType.NUMERIC
-            if col_type and any(map(lambda t: t in col_type.upper(), date_types)):
-                computed_column["type_generic"] = GenericDataType.TEMPORAL
-
-            computed_column["column_name"] = col.get("name")
-            computed_column["groupby"] = True
-            columns.append(computed_column)
+            columns.append(
+                TableColumn(
+                    column_name=col["name"],
+                    type=col["type"],
+                    is_dttm=col["is_dttm"],
+                    groupby=True,
+                    filterable=True,
+                )
+            )
         return columns
 
+    @property
+    def db_extra(self) -> Optional[Dict[str, Any]]:
+        return None
+
     @property
     def data(self) -> Dict[str, Any]:
         order_by_choices = []
         for col in self.columns:
-            column_name = str(col.get("column_name") or "")
+            column_name = str(col.column_name or "")

Review Comment:
   We don't need `str()` here, do we?



##########
superset/models/helpers.py:
##########
@@ -682,7 +690,18 @@ class ExploreMixin:  # pylint: disable=too-many-public-methods
     }
 
     @property
-    def query(self) -> str:
+    def fetch_value_predicate(self) -> str:
+        return "fix this!"

Review Comment:
   ```suggestion
           raise NotImplementedError()
   ```



##########
tests/integration_tests/charts/data/api_tests.py:
##########
@@ -1203,8 +1203,8 @@ def test_chart_cache_timeout_chart_not_found(
     [
         (200, {"where": "1 = 1"}),
         (200, {"having": "count(*) > 0"}),
-        (400, {"where": "col1 in (select distinct col1 from physical_dataset)"}),
-        (400, {"having": "count(*) > (select count(*) from physical_dataset)"}),
+        (403, {"where": "col1 in (select distinct col1 from physical_dataset)"}),
+        (403, {"having": "count(*) > (select count(*) from physical_dataset)"}),

Review Comment:
   Why are we getting 403s now?



##########
superset/models/helpers.py:
##########
@@ -1465,14 +1552,20 @@ def get_sqla_query(  # pylint: disable=too-many-arguments,too-many-locals,too-ma
                     col = metrics_exprs_by_expr.get(str(col), col)
                     need_groupby = True
             elif col in columns_by_name:
-                gb_column_obj = columns_by_name[col]
-                if isinstance(gb_column_obj, dict):
-                    col = self.get_sqla_col(gb_column_obj)
-                else:
-                    col = gb_column_obj.get_sqla_col()
+                col = self.convert_tbl_column_to_sqla_col(
+                    columns_by_name[col], template_processor=template_processor
+                )
+                # col = columns_by_name[col].get_sqla_col(
+                #     template_processor=template_processor
+                # )

Review Comment:
   ```suggestion
   ```



##########
superset/connectors/sqla/models.py:
##########
@@ -304,6 +282,35 @@ def db_extra(self) -> Dict[str, Any]:
     def type_generic(self) -> Optional[utils.GenericDataType]:
         if self.is_dttm:
             return GenericDataType.TEMPORAL
+
+        bool_types = ("BOOL",)
+        num_types = (
+            "DOUBLE",
+            "FLOAT",
+            "INT",
+            "BIGINT",
+            "NUMBER",
+            "LONG",
+            "REAL",
+            "NUMERIC",
+            "DECIMAL",
+            "MONEY",
+        )
+        date_types = ("DATE", "TIME")
+        str_types = ("VARCHAR", "STRING", "CHAR")
+
+        if self.table is None:
+            # Query.TableColumns don't have a reference to a table.db_engine_spec
+            # reference so this logic will manage rendering types
+            if self.type and any(map(lambda t: t in self.type.upper(), str_types)):

Review Comment:
   Nit: I think you just moved this old code around, but today it would be better written without `map` or `lambda`:
   
   ```python
   if self.type and any(t in self.type.upper() for t in str_types):
   ```
   
   Same below.
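   
   For reference, a self-contained sketch of that generator-expression style applied to the same type groups from this hunk (illustrative only; the helper name and the early-return ordering are mine, not the PR's code):
   
   ```python
   # Classify a raw column type string into a generic bucket using
   # `any(... for ...)` instead of `map`/`lambda`.
   from typing import Optional
   
   BOOL_TYPES = ("BOOL",)
   NUM_TYPES = ("DOUBLE", "FLOAT", "INT", "BIGINT", "NUMBER", "LONG",
                "REAL", "NUMERIC", "DECIMAL", "MONEY")
   DATE_TYPES = ("DATE", "TIME")
   STR_TYPES = ("VARCHAR", "STRING", "CHAR")
   
   
   def classify(native_type: Optional[str]) -> Optional[str]:
       """Return a generic type label for a native column type, if any."""
       if not native_type:
           return None
       upper = native_type.upper()
       if any(t in upper for t in STR_TYPES):
           return "STRING"
       if any(t in upper for t in BOOL_TYPES):
           return "BOOLEAN"
       if any(t in upper for t in NUM_TYPES):
           return "NUMERIC"
       if any(t in upper for t in DATE_TYPES):
           return "TEMPORAL"
       return None
   
   
   print(classify("BIGINT UNSIGNED"))  # NUMERIC
   print(classify("TIMESTAMP"))        # TEMPORAL
   ```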



##########
superset/models/helpers.py:
##########
@@ -1179,26 +1241,27 @@ def _get_series_orderby(
 
     def adhoc_column_to_sqla(
         self,
-        col: Type["AdhocColumn"],  # type: ignore
+        col: "AdhocColumn",  # type: ignore
         template_processor: Optional[BaseTemplateProcessor] = None,
     ) -> ColumnElement:
-        """
-        Turn an adhoc column into a sqlalchemy column.
-
-        :param col: Adhoc column definition
-        :param template_processor: template_processor instance
-        :returns: The metric defined as a sqlalchemy column
-        :rtype: sqlalchemy.sql.column
-        """
-        label = utils.get_column_name(col)  # type: ignore
-        expression = self._process_sql_expression(
-            expression=col["sqlExpression"],
-            database_id=self.database_id,
-            schema=self.schema,
-            template_processor=template_processor,
-        )
-        sqla_column = literal_column(expression)
-        return self.make_sqla_column_compatible(sqla_column, label)
+        raise NotImplementedError()
+        # """
+        # Turn an adhoc column into a sqlalchemy column.
+
+        # :param col: Adhoc column definition
+        # :param template_processor: template_processor instance
+        # :returns: The metric defined as a sqlalchemy column
+        # :rtype: sqlalchemy.sql.column
+        # """
+        # label = utils.get_column_name(col)  # type: ignore
+        # expression = self._process_sql_expression(
+        #     expression=col["sqlExpression"],
+        #     database_id=self.database_id,
+        #     schema=self.schema,
+        #     template_processor=template_processor,
+        # )
+        # sqla_column = literal_column(expression)
+        # return self.make_sqla_column_compatible(sqla_column, label)

Review Comment:
   If we're not using code it's better to remove it:
   
   ```suggestion
   ```



##########
superset/models/helpers.py:
##########
@@ -1498,33 +1591,27 @@ def get_sqla_query(  # pylint: disable=too-many-arguments,too-many-locals,too-ma
                     # if groupby field/expr equals granularity field/expr
                     if selected == granularity:
                         table_col = columns_by_name[selected]
-                        if isinstance(table_col, dict):
-                            outer = self.get_timestamp_expression(
-                                column=table_col,
-                                time_grain=time_grain,
-                                label=selected,
-                                template_processor=template_processor,
-                            )
-                        else:
-                            outer = table_col.get_timestamp_expression(
-                                time_grain=time_grain,
-                                label=selected,
-                                template_processor=template_processor,
-                            )
+                        outer = table_col.get_timestamp_expression(
+                            time_grain=time_grain,
+                            label=selected,
+                            template_processor=template_processor,
+                        )
                     # if groupby field equals a selected column
                     elif selected in columns_by_name:
-                        if isinstance(columns_by_name[selected], dict):
-                            outer = sa.column(f"{selected}")
-                            outer = self.make_sqla_column_compatible(outer, selected)
-                        else:
-                            outer = columns_by_name[selected].get_sqla_col()
+                        outer = self.convert_tbl_column_to_sqla_col(
+                            columns_by_name[selected],
+                            template_processor=template_processor,
+                        )
+                    # outer = columns_by_name[selected].get_sqla_col(
+                    #     template_processor=template_processor
+                    # )

Review Comment:
   ```suggestion
   ```



##########
superset/models/helpers.py:
##########
@@ -1643,67 +1725,57 @@ def get_sqla_query(  # pylint: disable=too-many-arguments,too-many-locals,too-ma
             sqla_col: Optional[Column] = None
             if flt_col == utils.DTTM_ALIAS and is_timeseries and dttm_col:
                 col_obj = dttm_col
-            elif utils.is_adhoc_column(flt_col):
-                sqla_col = self.adhoc_column_to_sqla(flt_col)  # type: ignore
+            elif is_adhoc_column(flt_col):
+                sqla_col = self.adhoc_column_to_sqla(
+                    col=flt_col, template_processor=template_processor
+                )
             else:
                 col_obj = columns_by_name.get(flt_col)
             filter_grain = flt.get("grain")
 
             if is_feature_enabled("ENABLE_TEMPLATE_REMOVE_FILTERS"):
-                if utils.get_column_name(flt_col) in removed_filters:
+                if get_column_name(flt_col) in removed_filters:
                     # Skip generating SQLA filter when the jinja template handles it.
                     continue
 
             if col_obj or sqla_col is not None:
                 if sqla_col is not None:
                     pass
                 elif col_obj and filter_grain:
-                    if isinstance(col_obj, dict):
-                        sqla_col = self.get_timestamp_expression(
-                            col_obj, time_grain, template_processor=template_processor
-                        )
-                    else:
-                        sqla_col = col_obj.get_timestamp_expression(
-                            time_grain=filter_grain,
-                            template_processor=template_processor,
-                        )
-                elif col_obj and isinstance(col_obj, dict):
-                    sqla_col = sa.column(col_obj.get("column_name"))
+                    sqla_col = col_obj.get_timestamp_expression(
+                        time_grain=filter_grain, template_processor=template_processor
+                    )
                 elif col_obj:
-                    sqla_col = col_obj.get_sqla_col()
-
-                if col_obj and isinstance(col_obj, dict):
-                    col_type = col_obj.get("type")
-                else:
-                    col_type = col_obj.type if col_obj else None
+                    sqla_col = self.convert_tbl_column_to_sqla_col(
+                        tbl_column=col_obj, template_processor=template_processor
+                    )
+                    # sqla_col = col_obj.get_sqla_col(
+                    #     template_processor=template_processor
+                    # )
+                col_type = col_obj.type if col_obj else None
                 col_spec = db_engine_spec.get_column_spec(
                     native_type=col_type,
-                    db_extra=self.database.get_extra(),  # type: ignore
+                    #                    db_extra=self.database.get_extra(),

Review Comment:
   ```suggestion
   ```



##########
superset/models/helpers.py:
##########
@@ -1797,41 +1876,63 @@ def get_sqla_query(  # pylint: disable=too-many-arguments,too-many-locals,too-ma
                                 time_col=col_obj,
                                 start_dttm=_since,
                                 end_dttm=_until,
+                                label=sqla_col.key,
+                                template_processor=template_processor,
                             )
+                            # col_obj.get_time_filter(
+                            #     start_dttm=_since,
+                            #     end_dttm=_until,
+                            #     label=sqla_col.key,
+                            #     template_processor=template_processor,
+                            # )

Review Comment:
   ```suggestion
   ```



##########
superset/models/helpers.py:
##########
@@ -1538,19 +1625,28 @@ def get_sqla_query(  # pylint: disable=too-many-arguments,too-many-locals,too-ma
                 select_exprs.append(outer)
         elif columns:
             for selected in columns:
-                selected = self.validate_adhoc_subquery(
-                    selected,
+                if is_adhoc_column(selected):
+                    _sql = selected["sqlExpression"]
+                    _column_label = selected["label"]

Review Comment:
   By convention variables with a leading underscore are "private"... can we give these a better name? Like `column_sql` or something?
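   
   A tiny, self-contained sketch of the suggested naming (no leading underscores); the adhoc-column dict shape mirrors the hunk above, the values are made up:
   
   ```python
   # Hypothetical adhoc column payload, shaped like the one handled above.
   selected = {"sqlExpression": "col1 || col2", "label": "combined_cols"}
   
   # Descriptive names instead of the "private-looking" _sql / _column_label.
   column_sql = selected["sqlExpression"]
   column_label = selected["label"]
   print(column_sql, column_label)
   ```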



##########
superset/models/helpers.py:
##########
@@ -1881,26 +1982,30 @@ def get_sqla_query(  # pylint: disable=too-many-arguments,too-many-locals,too-ma
                 inner_time_filter = []
 
                 if dttm_col and not db_engine_spec.time_groupby_inline:
-                    if isinstance(dttm_col, dict):
-                        inner_time_filter = [
-                            self.get_time_filter(
-                                dttm_col,
-                                inner_from_dttm or from_dttm,
-                                inner_to_dttm or to_dttm,
-                            )
-                        ]
-                    else:
-                        inner_time_filter = [
-                            dttm_col.get_time_filter(
-                                inner_from_dttm or from_dttm,
-                                inner_to_dttm or to_dttm,
-                            )
-                        ]
-
+                    inner_time_filter = [
+                        self.get_time_filter(
+                            time_col=dttm_col,
+                            start_dttm=inner_from_dttm or from_dttm,
+                            end_dttm=inner_to_dttm or to_dttm,
+                            template_processor=template_processor,
+                        )
+                        # dttm_col.get_time_filter(
+                        #     start_dttm=inner_from_dttm or from_dttm,
+                        #     end_dttm=inner_to_dttm or to_dttm,
+                        #     template_processor=template_processor,
+                        # )

Review Comment:
   ```suggestion
   ```



##########
superset/models/sql_lab.py:
##########
@@ -183,47 +184,33 @@ def sql_tables(self) -> List[Table]:
         return list(ParsedQuery(self.sql).tables)
 
     @property
-    def columns(self) -> List[Dict[str, Any]]:
-        bool_types = ("BOOL",)
-        num_types = (
-            "DOUBLE",
-            "FLOAT",
-            "INT",
-            "BIGINT",
-            "NUMBER",
-            "LONG",
-            "REAL",
-            "NUMERIC",
-            "DECIMAL",
-            "MONEY",
+    def columns(self) -> List["TableColumn"]:
+        from superset.connectors.sqla.models import (  # pylint: disable=import-outside-toplevel
+            TableColumn,
         )
-        date_types = ("DATE", "TIME")
-        str_types = ("VARCHAR", "STRING", "CHAR")
+
         columns = []
-        col_type = ""
         for col in self.extra.get("columns", []):
-            computed_column = {**col}
-            col_type = col.get("type")
-
-            if col_type and any(map(lambda t: t in col_type.upper(), str_types)):
-                computed_column["type_generic"] = GenericDataType.STRING
-            if col_type and any(map(lambda t: t in col_type.upper(), bool_types)):
-                computed_column["type_generic"] = GenericDataType.BOOLEAN
-            if col_type and any(map(lambda t: t in col_type.upper(), num_types)):
-                computed_column["type_generic"] = GenericDataType.NUMERIC
-            if col_type and any(map(lambda t: t in col_type.upper(), date_types)):
-                computed_column["type_generic"] = GenericDataType.TEMPORAL
-
-            computed_column["column_name"] = col.get("name")
-            computed_column["groupby"] = True
-            columns.append(computed_column)
+            columns.append(
+                TableColumn(
+                    column_name=col["name"],
+                    type=col["type"],
+                    is_dttm=col["is_dttm"],
+                    groupby=True,
+                    filterable=True,

Review Comment:
   Are we setting `filterable` to true in both datasets and queries, or just queries? For datasets we want to respect the configuration, so that if a user has marked a column as not filterable it shouldn't be filterable. (For queries I understand we need all columns to be filterable and groupable.)



##########
superset/models/helpers.py:
##########
@@ -1643,67 +1725,57 @@ def get_sqla_query(  # pylint: disable=too-many-arguments,too-many-locals,too-ma
             sqla_col: Optional[Column] = None
             if flt_col == utils.DTTM_ALIAS and is_timeseries and dttm_col:
                 col_obj = dttm_col
-            elif utils.is_adhoc_column(flt_col):
-                sqla_col = self.adhoc_column_to_sqla(flt_col)  # type: ignore
+            elif is_adhoc_column(flt_col):
+                sqla_col = self.adhoc_column_to_sqla(
+                    col=flt_col, template_processor=template_processor
+                )
             else:
                 col_obj = columns_by_name.get(flt_col)
             filter_grain = flt.get("grain")
 
             if is_feature_enabled("ENABLE_TEMPLATE_REMOVE_FILTERS"):
-                if utils.get_column_name(flt_col) in removed_filters:
+                if get_column_name(flt_col) in removed_filters:
                     # Skip generating SQLA filter when the jinja template handles it.
                     continue
 
             if col_obj or sqla_col is not None:
                 if sqla_col is not None:
                     pass
                 elif col_obj and filter_grain:
-                    if isinstance(col_obj, dict):
-                        sqla_col = self.get_timestamp_expression(
-                            col_obj, time_grain, template_processor=template_processor
-                        )
-                    else:
-                        sqla_col = col_obj.get_timestamp_expression(
-                            time_grain=filter_grain,
-                            template_processor=template_processor,
-                        )
-                elif col_obj and isinstance(col_obj, dict):
-                    sqla_col = sa.column(col_obj.get("column_name"))
+                    sqla_col = col_obj.get_timestamp_expression(
+                        time_grain=filter_grain, template_processor=template_processor
+                    )
                 elif col_obj:
-                    sqla_col = col_obj.get_sqla_col()
-
-                if col_obj and isinstance(col_obj, dict):
-                    col_type = col_obj.get("type")
-                else:
-                    col_type = col_obj.type if col_obj else None
+                    sqla_col = self.convert_tbl_column_to_sqla_col(
+                        tbl_column=col_obj, template_processor=template_processor
+                    )
+                    # sqla_col = col_obj.get_sqla_col(
+                    #     template_processor=template_processor
+                    # )
+                col_type = col_obj.type if col_obj else None
                 col_spec = db_engine_spec.get_column_spec(
                     native_type=col_type,
-                    db_extra=self.database.get_extra(),  # type: ignore
+                    #                    db_extra=self.database.get_extra(),
                 )
                 is_list_target = op in (
                     utils.FilterOperator.IN.value,
                     utils.FilterOperator.NOT_IN.value,
                 )
 
-                if col_obj and isinstance(col_obj, dict):
-                    col_advanced_data_type = ""
-                else:
-                    col_advanced_data_type = (
-                        col_obj.advanced_data_type if col_obj else ""
-                    )
+                col_advanced_data_type = col_obj.advanced_data_type if col_obj else ""
 
                 if col_spec and not col_advanced_data_type:
                     target_generic_type = col_spec.generic_type
                 else:
-                    target_generic_type = utils.GenericDataType.STRING
+                    target_generic_type = GenericDataType.STRING
                 eq = self.filter_values_handler(
                     values=val,
                     operator=op,
                     target_generic_type=target_generic_type,
                     target_native_type=col_type,
                     is_list_target=is_list_target,
                     db_engine_spec=db_engine_spec,
-                    db_extra=self.database.get_extra(),  # type: ignore
+                    #                     db_extra=self.database.get_extra(),

Review Comment:
   ```suggestion
   ```



##########
superset-frontend/src/explore/components/controls/MetricControl/AdhocMetricOption.jsx:
##########
@@ -48,7 +48,7 @@ class AdhocMetricOption extends React.PureComponent {
   }
 
   onRemoveMetric(e) {
-    e.stopPropagation();
+    if (e !== undefined) e.stopPropagation();

Review Comment:
   `?.` works as a guard as well.



##########
superset/models/helpers.py:
##########
@@ -1643,67 +1725,57 @@ def get_sqla_query(  # pylint: disable=too-many-arguments,too-many-locals,too-ma
             sqla_col: Optional[Column] = None
             if flt_col == utils.DTTM_ALIAS and is_timeseries and dttm_col:
                 col_obj = dttm_col
-            elif utils.is_adhoc_column(flt_col):
-                sqla_col = self.adhoc_column_to_sqla(flt_col)  # type: ignore
+            elif is_adhoc_column(flt_col):
+                sqla_col = self.adhoc_column_to_sqla(
+                    col=flt_col, template_processor=template_processor
+                )
             else:
                 col_obj = columns_by_name.get(flt_col)
             filter_grain = flt.get("grain")
 
             if is_feature_enabled("ENABLE_TEMPLATE_REMOVE_FILTERS"):
-                if utils.get_column_name(flt_col) in removed_filters:
+                if get_column_name(flt_col) in removed_filters:
                     # Skip generating SQLA filter when the jinja template handles it.
                     continue
 
             if col_obj or sqla_col is not None:
                 if sqla_col is not None:
                     pass
                 elif col_obj and filter_grain:
-                    if isinstance(col_obj, dict):
-                        sqla_col = self.get_timestamp_expression(
-                            col_obj, time_grain, template_processor=template_processor
-                        )
-                    else:
-                        sqla_col = col_obj.get_timestamp_expression(
-                            time_grain=filter_grain,
-                            template_processor=template_processor,
-                        )
-                elif col_obj and isinstance(col_obj, dict):
-                    sqla_col = sa.column(col_obj.get("column_name"))
+                    sqla_col = col_obj.get_timestamp_expression(
+                        time_grain=filter_grain, template_processor=template_processor
+                    )
                 elif col_obj:
-                    sqla_col = col_obj.get_sqla_col()
-
-                if col_obj and isinstance(col_obj, dict):
-                    col_type = col_obj.get("type")
-                else:
-                    col_type = col_obj.type if col_obj else None
+                    sqla_col = self.convert_tbl_column_to_sqla_col(
+                        tbl_column=col_obj, template_processor=template_processor
+                    )
+                    # sqla_col = col_obj.get_sqla_col(
+                    #     template_processor=template_processor
+                    # )

Review Comment:
   ```suggestion
   ```





[GitHub] [superset] hughhhh commented on a diff in pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "hughhhh (via GitHub)" <gi...@apache.org>.
hughhhh commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1114912927


##########
superset-frontend/src/SqlLab/components/SaveDatasetModal/index.tsx:
##########
@@ -61,7 +61,7 @@ export type ExploreQuery = QueryResponse & {
 };
 
 export interface ISimpleColumn {
-  name?: string | null;
+  column_name?: string | null;

Review Comment:
   Since we are condensing `Query` to use the `TableColumn` model object, we need to make the names match:
   https://github.com/apache/superset/blob/f2b512f0dc8514509953e67fb70faacf92868a92/superset/connectors/base/models.py#L602
   
   Before, we were managing our own dict inside `extra`, and I made it `name` originally.





[GitHub] [superset] john-bodley commented on a diff in pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "john-bodley (via GitHub)" <gi...@apache.org>.
john-bodley commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1198093934


##########
superset/models/helpers.py:
##########
@@ -1577,57 +1669,43 @@ def get_sqla_query(  # pylint: disable=too-many-arguments,too-many-locals,too-ma
                         col=granularity,
                     )
                 )
-            time_filters: List[Any] = []
+            time_filters = []
 
             if is_timeseries:
-                if isinstance(dttm_col, dict):
-                    timestamp = self.get_timestamp_expression(
-                        dttm_col, time_grain, template_processor=template_processor
-                    )
-                else:
-                    timestamp = dttm_col.get_timestamp_expression(
-                        time_grain=time_grain, template_processor=template_processor
-                    )
+                timestamp = dttm_col.get_timestamp_expression(

Review Comment:
   @hughhhh this logic causes an error if the `dttm_col` is associated with a query as opposed to a table: `dttm_col`, which has been coerced to type `TableColumn`, has no table associated with it, i.e. `dttm_col.table` is `None`, and thus [this](https://github.com/apache/superset/blob/2222073778b0cee193f34c2500f2c489bb2a4bbe/superset/connectors/sqla/models.py#L265-L267) throws an exception.
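   
   A minimal, self-contained sketch of the failure mode being described (simplified stand-ins, not the real models): a column coerced from `Query.extra` has no parent table, so anything that dereferences `column.table` (e.g. `table.db_engine_spec`) raises.
   
   ```python
   from dataclasses import dataclass
   from typing import Optional
   
   
   @dataclass
   class FakeTable:
       db_engine_spec: str = "postgres"
   
   
   @dataclass
   class FakeColumn:
       column_name: str
       table: Optional[FakeTable] = None  # None for query-backed columns
   
       def get_timestamp_expression(self) -> str:
           # Mirrors the dereference that raises when there is no table.
           if self.table is None:
               raise AttributeError("column has no table / db_engine_spec")
           return f"{self.column_name} ({self.table.db_engine_spec})"
   
   
   col = FakeColumn("ds")  # coerced from Query.extra, so table is None
   try:
       col.get_timestamp_expression()
   except AttributeError as exc:
       print(exc)  # the exception this comment is pointing at
   ```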





[GitHub] [superset] github-actions[bot] commented on pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by github-actions.
github-actions[bot] commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1405382629

   @hughhhh Ephemeral environment spinning up at http://34.221.152.250:8080. Credentials are `admin`/`admin`. Please allow several minutes for bootstrapping and startup.




[GitHub] [superset] villebro commented on pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "villebro (via GitHub)" <gi...@apache.org>.
villebro commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1491586919

   @hughhhh is there an ETA for this? I think this is a pretty big issue on master right now, as divergence keeps getting worse + the SIP-68 models seem to be working quite badly right now in Explore. I'm happy to review and help push this through, but in the interest of saving time it would be nice if we could have CI passing first + the requested changes addressed before reviewing this.




[GitHub] [superset] hughhhh commented on a diff in pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "hughhhh (via GitHub)" <gi...@apache.org>.
hughhhh commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1113782733


##########
superset-frontend/src/explore/components/controls/MetricControl/AdhocMetricOption.jsx:
##########
@@ -48,7 +48,7 @@ class AdhocMetricOption extends React.PureComponent {
   }
 
   onRemoveMetric(e) {
-    e.stopPropagation();
+    if (e !== undefined) e.stopPropagation();

Review Comment:
   Using this as a defensive check; in some cases `e` would come back null/undefined.





[GitHub] [superset] Antonio-RiveroMartnez commented on a diff in pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "Antonio-RiveroMartnez (via GitHub)" <gi...@apache.org>.
Antonio-RiveroMartnez commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1113436929


##########
superset-frontend/src/explore/components/controls/MetricControl/AdhocMetricOption.jsx:
##########
@@ -48,7 +48,7 @@ class AdhocMetricOption extends React.PureComponent {
   }
 
   onRemoveMetric(e) {
-    e.stopPropagation();
+    if (e !== undefined) e.stopPropagation();

Review Comment:
   Same as `e?.stopPropagation()` ?



##########
superset-frontend/src/explore/components/controls/MetricControl/AdhocMetricOption.jsx:
##########
@@ -67,7 +67,7 @@ class AdhocMetricOption extends React.PureComponent {
       multi,
       datasourceWarningMessage,
     } = this.props;
-
+    console.log('hello');

Review Comment:
   leftover





[GitHub] [superset] andrey-zayats commented on pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "andrey-zayats (via GitHub)" <gi...@apache.org>.
andrey-zayats commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1408777962

   issue 4:
   a chart in Explore returns a syntax error when the user applies a time range (can't repro with ALL datasets).
   Datasets on which I was able to repro (maybe they are broken):
   - video_game_sales
   - FCC 2018 Survey
   - birth_france_by_region
   ![image](https://user-images.githubusercontent.com/90777564/215512116-16ee140a-b8f7-4161-88c8-95a26d57a86e.png)
   




[GitHub] [superset] hughhhh commented on a diff in pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "hughhhh (via GitHub)" <gi...@apache.org>.
hughhhh commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1113784906


##########
superset/models/sql_lab.py:
##########
@@ -183,47 +184,33 @@ def sql_tables(self) -> List[Table]:
         return list(ParsedQuery(self.sql).tables)
 
     @property
-    def columns(self) -> List[Dict[str, Any]]:
-        bool_types = ("BOOL",)
-        num_types = (
-            "DOUBLE",
-            "FLOAT",
-            "INT",
-            "BIGINT",
-            "NUMBER",
-            "LONG",
-            "REAL",
-            "NUMERIC",
-            "DECIMAL",
-            "MONEY",
+    def columns(self) -> List["TableColumn"]:
+        from superset.connectors.sqla.models import (  # pylint: disable=import-outside-toplevel
+            TableColumn,
         )
-        date_types = ("DATE", "TIME")
-        str_types = ("VARCHAR", "STRING", "CHAR")
+
         columns = []
-        col_type = ""
         for col in self.extra.get("columns", []):
-            computed_column = {**col}
-            col_type = col.get("type")
-
-            if col_type and any(map(lambda t: t in col_type.upper(), str_types)):
-                computed_column["type_generic"] = GenericDataType.STRING
-            if col_type and any(map(lambda t: t in col_type.upper(), bool_types)):
-                computed_column["type_generic"] = GenericDataType.BOOLEAN
-            if col_type and any(map(lambda t: t in col_type.upper(), num_types)):
-                computed_column["type_generic"] = GenericDataType.NUMERIC
-            if col_type and any(map(lambda t: t in col_type.upper(), date_types)):
-                computed_column["type_generic"] = GenericDataType.TEMPORAL
-
-            computed_column["column_name"] = col.get("name")
-            computed_column["groupby"] = True
-            columns.append(computed_column)
+            columns.append(
+                TableColumn(
+                    column_name=col["name"],
+                    type=col["type"],
+                    is_dttm=col["is_dttm"],
+                    groupby=True,
+                    filterable=True,

Review Comment:
   Yeah, we need to set this to allow metric items for `Query.columns` to be usable in the dropdown.





[GitHub] [superset] hughhhh commented on a diff in pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "hughhhh (via GitHub)" <gi...@apache.org>.
hughhhh commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1115648077


##########
tests/integration_tests/charts/data/api_tests.py:
##########
@@ -1203,8 +1203,8 @@ def test_chart_cache_timeout_chart_not_found(
     [
         (200, {"where": "1 = 1"}),
         (200, {"having": "count(*) > 0"}),
-        (400, {"where": "col1 in (select distinct col1 from physical_dataset)"}),
-        (400, {"having": "count(*) > (select count(*) from physical_dataset)"}),
+        (403, {"where": "col1 in (select distinct col1 from physical_dataset)"}),
+        (403, {"having": "count(*) > (select count(*) from physical_dataset)"}),

Review Comment:
   Not 100% sure why this started firing, but if you follow the codepath for this test, specifically for `subquery_not_allowed`, we should have been returning 403 from the start.
   
   [where the check is happening](https://sourcegraph.com/github.com/apache/superset/-/blob/superset/models/helpers.py?L124)
   
   [exception definition](https://sourcegraph.com/github.com/apache/superset/-/blob/superset/exceptions.py#L165)
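   
   For context, a minimal, hypothetical sketch of the kind of exception-to-status mapping being described (the real classes live in `superset/exceptions.py` and the chart data API; these names are stand-ins, not the actual implementation):
   
   ```python
   # Security-type failures (e.g. a disallowed subquery in WHERE/HAVING) carry
   # a 403 status, while generic validation failures carry a 400.
   class FakeSupersetException(Exception):
       status = 400
   
   
   class FakeSecurityException(FakeSupersetException):
       status = 403
   
   
   def status_for(exc: FakeSupersetException) -> int:
       return exc.status
   
   
   print(status_for(FakeSecurityException()))  # 403, which is what the test now expects
   ```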





[GitHub] [superset] hughhhh commented on pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "hughhhh (via GitHub)" <gi...@apache.org>.
hughhhh commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1405379401

   /test_env up




[GitHub] [superset] jinghua-qa commented on pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "jinghua-qa (via GitHub)" <gi...@apache.org>.
jinghua-qa commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1406779701

   issue 1:
   When power=query, the data type is missing in the left data panel in Explore
   <img width="1792" alt="Screen Shot 2023-01-27 at 8 45 56 AM" src="https://user-images.githubusercontent.com/81597121/215145270-268be423-2456-4793-b82a-5c59b3b147ae.png">
   issue 2:
   Column dropdown is empty in Metric when power=query
   
   https://user-images.githubusercontent.com/81597121/215145817-3d8ea88a-3cad-4fc6-8fcc-18c50aa2964d.mov
   
   issue 3:
   Not able to remove metrics in explore when power = dataset
   
   https://user-images.githubusercontent.com/81597121/215146205-c9f67d24-ffae-4135-893a-6f6a5b389fb0.mov
   
   
   




[GitHub] [superset] eschutho commented on a diff in pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "eschutho (via GitHub)" <gi...@apache.org>.
eschutho commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1114901525


##########
superset-frontend/src/SqlLab/components/SaveDatasetModal/index.tsx:
##########
@@ -61,7 +61,7 @@ export type ExploreQuery = QueryResponse & {
 };
 
 export interface ISimpleColumn {
-  name?: string | null;
+  column_name?: string | null;

Review Comment:
   This is great to see. IIRC there were quite a few places in the React codebase where we had to look up either `column_name` or `name`. I found one here: superset-frontend/packages/superset-ui-chart-controls/src/utils/getTemporalColumns.ts. Should we do a sweep through and look for more?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org
For additional commands, e-mail: notifications-help@superset.apache.org


[GitHub] [superset] codecov[bot] commented on pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "codecov[bot] (via GitHub)" <gi...@apache.org>.
codecov[bot] commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1404018335

   # [Codecov](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) Report
   > Merging [#22853](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (516dde4) into [master](https://codecov.io/gh/apache/superset/commit/3e07de7f39a1e6ee9d0db43435ce4a7285003a79?el=desc&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) (3e07de7) will **decrease** coverage by `14.33%`.
   > The diff coverage is `42.97%`.
   
   ```diff
   @@             Coverage Diff             @@
   ##           master   #22853       +/-   ##
   ===========================================
   - Coverage   67.28%   52.95%   -14.33%     
   ===========================================
     Files        1877     1879        +2     
     Lines       72023    72698      +675     
     Branches     7897     7897               
   ===========================================
   - Hits        48461    38498     -9963     
   - Misses      21534    32172    +10638     
     Partials     2028     2028               
   ```
   
   | Flag | Coverage Δ | |
   |---|---|---|
   | hive | `?` | |
   | mysql | `?` | |
   | postgres | `?` | |
   | presto | `?` | |
   | python | `51.98% <42.97%> (-29.91%)` | :arrow_down: |
   | sqlite | `?` | |
   | unit | `51.98% <42.97%> (+0.43%)` | :arrow_up: |
   
   Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#carryforward-flags-in-the-pull-request-comment) to find out more.
   
   | [Impacted Files](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | Coverage Δ | |
   |---|---|---|
   | [superset/models/sql\_lab.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvbW9kZWxzL3NxbF9sYWIucHk=) | `67.69% <28.57%> (-7.12%)` | :arrow_down: |
   | [superset/models/helpers.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvbW9kZWxzL2hlbHBlcnMucHk=) | `44.76% <35.86%> (+6.66%)` | :arrow_up: |
   | [superset/connectors/sqla/models.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvY29ubmVjdG9ycy9zcWxhL21vZGVscy5weQ==) | `60.74% <100.00%> (-28.63%)` | :arrow_down: |
   | [superset/result\_set.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvcmVzdWx0X3NldC5weQ==) | `66.66% <100.00%> (-31.21%)` | :arrow_down: |
   | [superset/utils/core.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvdXRpbHMvY29yZS5weQ==) | `57.71% <100.00%> (-33.18%)` | :arrow_down: |
   | [superset/utils/pandas\_postprocessing/boxplot.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvdXRpbHMvcGFuZGFzX3Bvc3Rwcm9jZXNzaW5nL2JveHBsb3QucHk=) | `100.00% <100.00%> (ø)` | |
   | [superset/utils/pandas\_postprocessing/flatten.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvdXRpbHMvcGFuZGFzX3Bvc3Rwcm9jZXNzaW5nL2ZsYXR0ZW4ucHk=) | `100.00% <100.00%> (ø)` | |
   | [superset/examples/css\_templates.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvZXhhbXBsZXMvY3NzX3RlbXBsYXRlcy5weQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [superset/utils/dashboard\_import\_export.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvdXRpbHMvZGFzaGJvYXJkX2ltcG9ydF9leHBvcnQucHk=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [superset/migrations/shared/migrate\_viz/\_\_init\_\_.py](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation#diff-c3VwZXJzZXQvbWlncmF0aW9ucy9zaGFyZWQvbWlncmF0ZV92aXovX19pbml0X18ucHk=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | ... and [325 more](https://codecov.io/gh/apache/superset/pull/22853?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation) | |
   
   :mega: We’re building smart automated test selection to slash your CI/CD build times. [Learn more](https://about.codecov.io/iterative-testing/?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Software+Foundation)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org
For additional commands, e-mail: notifications-help@superset.apache.org


[GitHub] [superset] github-actions[bot] commented on pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1502285397

   Ephemeral environment shutdown and build artifacts deleted.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org
For additional commands, e-mail: notifications-help@superset.apache.org


[GitHub] [superset] hughhhh merged pull request #22853: chore: Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "hughhhh (via GitHub)" <gi...@apache.org>.
hughhhh merged PR #22853:
URL: https://github.com/apache/superset/pull/22853


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org
For additional commands, e-mail: notifications-help@superset.apache.org


[GitHub] [superset] hughhhh commented on pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "hughhhh (via GitHub)" <gi...@apache.org>.
hughhhh commented on PR #22853:
URL: https://github.com/apache/superset/pull/22853#issuecomment-1405379552

   /testenv up


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org
For additional commands, e-mail: notifications-help@superset.apache.org


[GitHub] [superset] hughhhh commented on a diff in pull request #22853: chore(WIP): Refactor ExploreMixin to power both Datasets (SqlaTable) and Query models

Posted by "hughhhh (via GitHub)" <gi...@apache.org>.
hughhhh commented on code in PR #22853:
URL: https://github.com/apache/superset/pull/22853#discussion_r1113785182


##########
superset/models/sql_lab.py:
##########
@@ -183,47 +184,33 @@ def sql_tables(self) -> List[Table]:
         return list(ParsedQuery(self.sql).tables)
 
     @property
-    def columns(self) -> List[Dict[str, Any]]:
-        bool_types = ("BOOL",)
-        num_types = (
-            "DOUBLE",
-            "FLOAT",
-            "INT",
-            "BIGINT",
-            "NUMBER",
-            "LONG",
-            "REAL",
-            "NUMERIC",
-            "DECIMAL",
-            "MONEY",
+    def columns(self) -> List["TableColumn"]:
+        from superset.connectors.sqla.models import (  # pylint: disable=import-outside-toplevel
+            TableColumn,
         )
-        date_types = ("DATE", "TIME")
-        str_types = ("VARCHAR", "STRING", "CHAR")
+
         columns = []
-        col_type = ""
         for col in self.extra.get("columns", []):
-            computed_column = {**col}
-            col_type = col.get("type")
-
-            if col_type and any(map(lambda t: t in col_type.upper(), str_types)):
-                computed_column["type_generic"] = GenericDataType.STRING
-            if col_type and any(map(lambda t: t in col_type.upper(), bool_types)):
-                computed_column["type_generic"] = GenericDataType.BOOLEAN
-            if col_type and any(map(lambda t: t in col_type.upper(), num_types)):
-                computed_column["type_generic"] = GenericDataType.NUMERIC
-            if col_type and any(map(lambda t: t in col_type.upper(), date_types)):
-                computed_column["type_generic"] = GenericDataType.TEMPORAL
-
-            computed_column["column_name"] = col.get("name")
-            computed_column["groupby"] = True
-            columns.append(computed_column)
+            columns.append(
+                TableColumn(
+                    column_name=col["name"],
+                    type=col["type"],
+                    is_dttm=col["is_dttm"],
+                    groupby=True,

Review Comment:
   Same as the `filterable` comment: this needs to be set so the UI allows users to drag and drop the column.
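To spell out the intent, here is a small sketch (the metadata keys and helper name are assumptions for illustration): both `groupby` and `filterable` have to be truthy on the resulting `TableColumn` objects before Explore treats the columns as drag-and-droppable in its controls.

```python
# Illustrative sketch only; `columns_from_extra` and the metadata keys are
# assumptions, not the exact property body in sql_lab.py.
from superset.connectors.sqla.models import TableColumn


def columns_from_extra(extra_columns):
    """Map raw result-set column metadata dicts to TableColumn objects."""
    return [
        TableColumn(
            column_name=col["name"],  # Explore keys off column_name, not name
            type=col["type"],
            is_dttm=col.get("is_dttm", False),
            groupby=True,  # allow drag & drop into dimension/group-by controls
            filterable=True,  # allow drag & drop into filter controls
        )
        for col in extra_columns
    ]
```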



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@superset.apache.org
For additional commands, e-mail: notifications-help@superset.apache.org