Posted to user@spark.apache.org by Ashok Kumar <as...@yahoo.com.INVALID> on 2016/02/24 10:16:15 UTC

Execution plan in spark

 Gurus,
Is there anything like explain in Spark to see the execution plan in functional programming?
warm regards

Re: Execution plan in spark

Posted by Sabarish Sasidharan <sa...@manthan.com>.
There is no execution plan for FP. Execution plans exist for SQL.

Regards
Sab
On 24-Feb-2016 2:46 pm, "Ashok Kumar" <as...@yahoo.com.invalid> wrote:

> Gurus,
>
> Is there anything like explain in Spark to see the execution plan in
> functional programming?
>
> warm regards
>
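That said, the RDD (functional) API does let you inspect its lineage: the rdd.toDebugString method prints the chain of transformations that produced an RDD. A minimal spark-shell sketch (assuming a SparkContext sc is available, as in the shell):

    scala> val rdd = sc.parallelize(1 to 100).map(_ * 2).filter(_ > 10)
    scala> println(rdd.toDebugString)

This prints the RDD's lineage (the DAG of transformations), which is the closest analogue of an execution plan on the RDD side.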

Re: Execution plan in spark

Posted by Mich Talebzadeh <mi...@cloudtechnologypartners.co.uk>.
 

Also bear in mind that the explain() method call works on transformations (transformations are just manipulations of the data), for example filter, map, orderBy etc.

scala> var y = HiveContext.table("sales").select("time_id").agg(max("time_id")).explain(true)

== Parsed Logical Plan == 

'Aggregate [max('time_id) AS max(time_id)#359]
 Project [time_id#354]
 MetastoreRelation oraclehadoop, sales, None 

== Analyzed Logical Plan ==
max(time_id): timestamp
Aggregate [max(time_id#354) AS max(time_id)#359]
 Project [time_id#354]
 MetastoreRelation oraclehadoop, sales, None 

== Optimized Logical Plan ==
Aggregate [max(time_id#354) AS max(time_id)#359]
 Project [time_id#354]
 MetastoreRelation oraclehadoop, sales, None 

== Physical Plan ==
TungstenAggregate(key=[],
functions=[(max(time_id#354),mode=Final,isDistinct=false)],
output=[max(time_id)#359])
 TungstenExchange SinglePartition
 TungstenAggregate(key=[],
functions=[(max(time_id#354),mode=Partial,isDistinct=false)],
output=[max#363])
 HiveTableScan [time_id#354], (MetastoreRelation oraclehadoop, sales,
None) 

Code Generation: true
y: Unit = () 
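Note that explain() prints the plan to the console and returns Unit, which is why y ends up as Unit = () above. To get the plan as a String instead (for example to log it), the DataFrame's queryExecution can be used; a sketch, assuming the same HiveContext as above:

    scala> val df = HiveContext.table("sales").select("time_id").agg(max("time_id"))
    scala> val planText = df.queryExecution.toString   // all plan stages as a String
    scala> df.queryExecution.executedPlan              // just the physical plan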

On 24/02/2016 09:49, Ashok Kumar wrote: 

> looks useful thanks 
> 
> On Wednesday, 24 February 2016, 9:42, Yin Yang <yy...@gmail.com> wrote:
> 
> Is the following what you were looking for ? 
> 
> sqlContext.sql(""" 
> CREATE TEMPORARY TABLE partitionedParquet 
> USING org.apache.spark.sql.parquet 
> OPTIONS ( 
> path '/tmp/partitioned' 
> )""") 
> 
> table("partitionedParquet").explain(true) 
> 
> On Wed, Feb 24, 2016 at 1:16 AM, Ashok Kumar <as...@yahoo.com.invalid> wrote:
> 
>> Gurus, 
>> 
>> Is there anything like explain in Spark to see the execution plan in functional programming? 
>> 
>> warm regards

-- 

Dr Mich Talebzadeh

LinkedIn
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com


 

Re: Execution plan in spark

Posted by Ashok Kumar <as...@yahoo.com.INVALID>.
looks useful thanks 

    On Wednesday, 24 February 2016, 9:42, Yin Yang <yy...@gmail.com> wrote:
 

Is the following what you were looking for ?

    sqlContext.sql("""
    CREATE TEMPORARY TABLE partitionedParquet
    USING org.apache.spark.sql.parquet
    OPTIONS (
      path '/tmp/partitioned'
    )""")

    table("partitionedParquet").explain(true)
On Wed, Feb 24, 2016 at 1:16 AM, Ashok Kumar <as...@yahoo.com.invalid> wrote:

 Gurus,
Is there anything like explain in Spark to see the execution plan in functional programming?
warm regards



  

Re: Execution plan in spark

Posted by Yin Yang <yy...@gmail.com>.
Is the following what you were looking for ?

    sqlContext.sql("""
    CREATE TEMPORARY TABLE partitionedParquet
    USING org.apache.spark.sql.parquet
    OPTIONS (
      path '/tmp/partitioned'
    )""")

    table("partitionedParquet").explain(true)
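Note that explain(true) prints all four stages (parsed, analyzed, optimized and physical); the no-argument form prints only the physical plan:

    scala> table("partitionedParquet").explain()      // physical plan only
    scala> table("partitionedParquet").explain(true)  // all plan stages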

On Wed, Feb 24, 2016 at 1:16 AM, Ashok Kumar <as...@yahoo.com.invalid>
wrote:

> Gurus,
>
> Is there anything like explain in Spark to see the execution plan in
> functional programming?
>
> warm regards
>