Posted to user@spark.apache.org by "Lalwani, Jayesh" <Ja...@capitalone.com> on 2017/12/29 21:02:27 UTC

Subqueries

I have a table, and I want to find the latest records in it. The table has a column called instnc_id that is incremented every day, so I want to find the records that have the max instnc_id.

I am trying to do this using subqueries, but it gives me an error. For example, when I try this

select ACCT_ID, CR_RVKD_STAT_CD, ACCT_SFX_NUM, SCURT_FRD_STAT_CD, CLSD_REAS_CD from (select *, max(instnc_id) as max_inst_id FROM Stat_hist) where instnc_id=max_inst_id

the error I get is

Caused by: org.apache.spark.sql.AnalysisException: cannot resolve '`max_inst_id`' given input columns: [CR_RVKD_STAT_CD, ACCT_SFX_NUM, CLSD_REAS_CD, ACCT_ID, instnc_id, SCURT_FRD_STAT_CD]; line 1 pos 172;
'Project ['ACCT_ID, 'CR_RVKD_STAT_CD, 'ACCT_SFX_NUM, 'SCURT_FRD_STAT_CD, CLSD_REAS_CD, scalar-subquery#298 [] AS max_inst_id#299]
:  +- 'Project [unresolvedalias('max('instnc_id), None)]
:     +- 'UnresolvedRelation `Stat_hist`
+- 'Filter (instnc_id#92 = 'max_inst_id)
   +- SubqueryAlias stat_hist
      +- Project [ACCT_ID#0, ACCT_SFX_NUM#1, CR_RVKD_STAT_CD#23, SCURT_FRD_STAT_CD#34, CLSD_REAS_CD#19, instnc_id#92]

I have tried various combinations, but I keep running into the same problem: Spark doesn’t recognize max_inst_id as a column.

The only thing that works is if I compute max_inst_id in a separate dataframe and then inner join it with the original table.
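For illustration, a minimal sketch of that join approach in plain Spark SQL (assuming Stat_hist is registered as a table or temp view; the aliases s and m are just illustrative):

select s.ACCT_ID, s.CR_RVKD_STAT_CD, s.ACCT_SFX_NUM, s.SCURT_FRD_STAT_CD, s.CLSD_REAS_CD
from Stat_hist s
inner join (select max(instnc_id) as max_inst_id from Stat_hist) m
  on s.instnc_id = m.max_inst_id

The subquery returns a single row, so the join effectively filters the table down to the latest instnc_id.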

Re: Subqueries

Posted by "Lalwani, Jayesh" <Ja...@capitalone.com>.
Thanks. You are right on both counts:


  1.  Doing max(instnc_id) over () works. I had thought Spark would automatically treat max(instnc_id) as max(instnc_id) over ().
  2.  Spark tries to compute the max in a single task, and that task runs out of memory.

I’ll go back to the join approach. Thanks again.
From: Nicholas Hakobian <ni...@rallyhealth.com>
Date: Friday, December 29, 2017 at 8:10 PM
To: "Lalwani, Jayesh" <Ja...@capitalone.com>
Cc: "user@spark.apache.org" <us...@spark.apache.org>
Subject: Re: Subqueries

This sounds like a perfect example of using windowing functions. Have you tried something like the following:

select ACCT_ID, CR_RVKD_STAT_CD, ACCT_SFX_NUM, SCURT_FRD_STAT_CD, CLSD_REAS_CD from (select *, max(instnc_id) over () as max_inst_id FROM Stat_hist) where instnc_id=max_inst_id

However, I have seen instances where a window function without a partitioning clause causes all partitions to be processed in a single task (Spark usually warns about this condition), which is very slow. It might actually be more performant to use the inner join, which, even though it scans the raw data twice, is more parallelizable.

If your data is stored in a compressed columnar format like Parquet or ORC, the query on the right side of the join only needs to read a single column, so its I/O would be significantly less than scanning the full table; you might even be able to squeeze some more performance out of it (depending on the size of the table) by caching that result beforehand.
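For example, a rough sketch of that caching idea in Spark SQL (CACHE TABLE ... AS SELECT caches a query result under a name; max_inst is an illustrative name, not anything from the original thread):

CACHE TABLE max_inst AS select max(instnc_id) as max_inst_id from Stat_hist;

select s.* from Stat_hist s inner join max_inst m on s.instnc_id = m.max_inst_id

Since the cached side holds a single aggregated value, the second query re-reads only the main table.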

Nicholas Szandor Hakobian, Ph.D.
Staff Data Scientist
Rally Health
nicholas.hakobian@rallyhealth.com

