Posted to issues@spark.apache.org by "Michael Armbrust (JIRA)" <ji...@apache.org> on 2015/09/15 23:27:46 UTC
[jira] [Resolved] (SPARK-4794) Wrong parse of GROUP BY query
[ https://issues.apache.org/jira/browse/SPARK-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Michael Armbrust resolved SPARK-4794.
-------------------------------------
Resolution: Cannot Reproduce
I think we have fixed our resolution logic here, but please reopen if you can still reproduce.
> Wrong parse of GROUP BY query
> -----------------------------
>
> Key: SPARK-4794
> URL: https://issues.apache.org/jira/browse/SPARK-4794
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.2.0
> Reporter: Damien Carol
>
> Spark fails to parse this query:
> {code:sql}
> select
> `cf_encaissement_fact_pq`.`annee` as `Annee`,
> `cf_encaissement_fact_pq`.`mois` as `Mois`,
> `cf_encaissement_fact_pq`.`jour` as `Jour`,
> `cf_encaissement_fact_pq`.`heure` as `Heure`,
> `cf_encaissement_fact_pq`.`nom_societe` as `Societe`,
> `cf_encaissement_fact_pq`.`id_magasin` as `Magasin`,
> `cf_encaissement_fact_pq`.`CarteFidelitePresentee` as `CF_Presentee`,
> `cf_encaissement_fact_pq`.`CompteCarteFidelite` as `CompteCarteFidelite`,
> `cf_encaissement_fact_pq`.`NbCompteCarteFidelite` as `NbCompteCarteFidelite`,
> `cf_encaissement_fact_pq`.`DetentionCF` as `DetentionCF`,
> `cf_encaissement_fact_pq`.`NbCarteFidelite` as `NbCarteFidelite`,
> `cf_encaissement_fact_pq`.`Id_CF_Dim_DUCB` as `Plage_DUCB`,
> `cf_encaissement_fact_pq`.`NbCheque` as `NbCheque`,
> `cf_encaissement_fact_pq`.`CACheque` as `CACheque`,
> `cf_encaissement_fact_pq`.`NbImpaye` as `NbImpaye`,
> `cf_encaissement_fact_pq`.`Id_Ensemble` as `NbEnsemble`,
> `cf_encaissement_fact_pq`.`ZIBZIN` as `NbCompte`,
> `cf_encaissement_fact_pq`.`ResteDuImpaye` as `ResteDuImpaye`
> from
> `testsimon3`.`cf_encaissement_fact_pq` as `cf_encaissement_fact_pq`
> where
> `cf_encaissement_fact_pq`.`annee` = 2013
> and
> `cf_encaissement_fact_pq`.`mois` = 7
> and
> `cf_encaissement_fact_pq`.`jour` = 12
> order by
> `cf_encaissement_fact_pq`.`annee` ASC,
> `cf_encaissement_fact_pq`.`mois` ASC,
> `cf_encaissement_fact_pq`.`jour` ASC,
> `cf_encaissement_fact_pq`.`heure` ASC,
> `cf_encaissement_fact_pq`.`nom_societe` ASC,
> `cf_encaissement_fact_pq`.`id_magasin` ASC,
> `cf_encaissement_fact_pq`.`CarteFidelitePresentee` ASC,
> `cf_encaissement_fact_pq`.`CompteCarteFidelite` ASC,
> `cf_encaissement_fact_pq`.`NbCompteCarteFidelite` ASC,
> `cf_encaissement_fact_pq`.`DetentionCF` ASC,
> `cf_encaissement_fact_pq`.`NbCarteFidelite` ASC,
> `cf_encaissement_fact_pq`.`Id_CF_Dim_DUCB` ASC
> {code}
> If I remove the table qualifier from the columns in the ORDER BY clause, Spark handles the query.
> {code:sql}
> select
> `cf_encaissement_fact_pq`.`annee` as `Annee`,
> `cf_encaissement_fact_pq`.`mois` as `Mois`,
> `cf_encaissement_fact_pq`.`jour` as `Jour`,
> `cf_encaissement_fact_pq`.`heure` as `Heure`,
> `cf_encaissement_fact_pq`.`nom_societe` as `Societe`,
> `cf_encaissement_fact_pq`.`id_magasin` as `Magasin`,
> `cf_encaissement_fact_pq`.`CarteFidelitePresentee` as `CFPresentee`,
> `cf_encaissement_fact_pq`.`CompteCarteFidelite` as `CompteCarteFidelite`,
> `cf_encaissement_fact_pq`.`NbCompteCarteFidelite` as `NbCompteCarteFidelite`,
> `cf_encaissement_fact_pq`.`DetentionCF` as `DetentionCF`,
> `cf_encaissement_fact_pq`.`NbCarteFidelite` as `NbCarteFidelite`,
> `cf_encaissement_fact_pq`.`Id_CF_Dim_DUCB` as `PlageDUCB`,
> `cf_encaissement_fact_pq`.`NbCheque` as `NbCheque`,
> `cf_encaissement_fact_pq`.`CACheque` as `CACheque`,
> `cf_encaissement_fact_pq`.`NbImpaye` as `NbImpaye`,
> `cf_encaissement_fact_pq`.`Id_Ensemble` as `NbEnsemble`,
> `cf_encaissement_fact_pq`.`ZIBZIN` as `NbCompte`,
> `cf_encaissement_fact_pq`.`ResteDuImpaye` as `ResteDuImpaye`
> from
> `testsimon3`.`cf_encaissement_fact_pq` as `cf_encaissement_fact_pq`
> where
> `cf_encaissement_fact_pq`.`annee` = 2013
> and
> `cf_encaissement_fact_pq`.`mois` = 7
> and
> `cf_encaissement_fact_pq`.`jour` = 12
> order by
> `annee` ASC,
> `mois` ASC,
> `jour` ASC,
> `heure` ASC,
> `nom_societe` ASC,
> `id_magasin` ASC,
> `CarteFidelitePresentee` ASC,
> `CompteCarteFidelite` ASC,
> `NbCompteCarteFidelite` ASC,
> `DetentionCF` ASC,
> `NbCarteFidelite` ASC,
> `Id_CF_Dim_DUCB` ASC
> {code}
> I'm running Spark master with the Thrift server (Hive 0.12).
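> The difference between the two queries can be distilled into a minimal pair (the `db`, `tbl`, and `c` names below are hypothetical; this is a sketch of the reported behaviour, not a verified reproduction):
> {code:sql}
> -- Reportedly fails to parse in Spark SQL 1.2.0: ORDER BY references the
> -- column with its table qualifier while the SELECT list aliases it.
> SELECT `t`.`c` AS `C` FROM `db`.`tbl` AS `t` ORDER BY `t`.`c` ASC;
>
> -- Workaround per the report: drop the table qualifier in ORDER BY.
> SELECT `t`.`c` AS `C` FROM `db`.`tbl` AS `t` ORDER BY `c` ASC;
> {code}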
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)