Posted to issues@spark.apache.org by "pankhuri (JIRA)" <ji...@apache.org> on 2015/05/19 15:26:59 UTC
[jira] [Created] (SPARK-7730) Complex Teradata queries throwing Analysis Exception when running on Spark
pankhuri created SPARK-7730:
-------------------------------
Summary: Complex Teradata queries throwing Analysis Exception when running on spark
Key: SPARK-7730
URL: https://issues.apache.org/jira/browse/SPARK-7730
Project: Spark
Issue Type: Bug
Components: Spark Shell
Affects Versions: 1.3.1
Environment: development
Reporter: pankhuri
Connected Spark with Teradata. When running the Teradata query below on spark-shell:
select substr(w_warehouse_name, 1, 20) as xx, sm_type, cc_name
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk <= 30) then 1 else 0 end) as days
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 30)
                 and (cs_ship_date_sk - cs_sold_date_sk <= 60) then 1 else 0 end) as sdays
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 60)
                 and (cs_ship_date_sk - cs_sold_date_sk <= 90) then 1 else 0 end) as rdays
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 90)
                 and (cs_ship_date_sk - cs_sold_date_sk <= 120) then 1 else 0 end) as ndays
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 120) then 1 else 0 end) as dfdays
from test
where d_month_seq between 1193 and 1193 + 11
  and cs_ship_date_sk = d_date_sk
  and cs_warehouse_sk = w_warehouse_sk
  and cs_ship_mode_sk = sm_ship_mode_sk
  and cs_call_center_sk = cc_call_center_sk
group by xx, sm_type, cc_name
order by xx, sm_type, cc_name
the shell throws:

org.apache.spark.sql.AnalysisException: cannot resolve 'xx' given input columns cc_name, sdays, days, sm_type, rdays, xx, ndays, dfdays;
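The failure appears to be that Spark SQL 1.3.x does not resolve a select-list alias (xx) when it is referenced in GROUP BY / ORDER BY. A likely workaround, not verified against this exact schema, is to repeat the underlying expression instead of the alias:

```sql
-- Hypothetical rewrite: replace the alias xx with the substr(...) expression
-- in GROUP BY and ORDER BY, so no alias resolution is needed there.
select substr(w_warehouse_name, 1, 20) as xx, sm_type, cc_name
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk <= 30) then 1 else 0 end) as days
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 30)
                 and (cs_ship_date_sk - cs_sold_date_sk <= 60) then 1 else 0 end) as sdays
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 60)
                 and (cs_ship_date_sk - cs_sold_date_sk <= 90) then 1 else 0 end) as rdays
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 90)
                 and (cs_ship_date_sk - cs_sold_date_sk <= 120) then 1 else 0 end) as ndays
      ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 120) then 1 else 0 end) as dfdays
from test
where d_month_seq between 1193 and 1193 + 11
  and cs_ship_date_sk = d_date_sk
  and cs_warehouse_sk = w_warehouse_sk
  and cs_ship_mode_sk = sm_ship_mode_sk
  and cs_call_center_sk = cc_call_center_sk
group by substr(w_warehouse_name, 1, 20), sm_type, cc_name
order by substr(w_warehouse_name, 1, 20), sm_type, cc_name
```

The select-list alias still produces the xx column in the output; only the grouping and ordering clauses avoid referring to it by name.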
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)