Posted to user@spark.apache.org by buntu <bu...@gmail.com> on 2014/07/31 21:27:24 UTC

SchemaRDD select expression

I'm looking to write a select expression that gets a distinct count of userId
grouped by the keyword column, on a SchemaRDD backed by a Parquet file; the
equivalent of:
  SELECT keyword, count(distinct(userId)) from table group by keyword

How do I write this using chained select().groupBy() operations?

Thanks!



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/SchemaRDD-select-expression-tp11069.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

Re: SchemaRDD select expression

Posted by Buntu Dev <bu...@gmail.com>.
Thanks Michael for confirming!


On Thu, Jul 31, 2014 at 2:43 PM, Michael Armbrust <mi...@databricks.com>
wrote:

> The performance should be the same using the DSL or SQL strings.
>
>
> On Thu, Jul 31, 2014 at 2:36 PM, Buntu Dev <bu...@gmail.com> wrote:
>
>> I was not sure if registerAsTable() and then query against that table
>> have additional performance impact and if DSL eliminates that.
>>
>>
>> On Thu, Jul 31, 2014 at 2:33 PM, Zongheng Yang <zo...@gmail.com>
>> wrote:
>>
>>> Looking at what this patch [1] has to do to achieve it, I am not sure
>>> if you can do the same thing in 1.0.0 using DSL only. Just curious,
>>> why don't you use the hql() / sql() methods and pass a query string
>>> in?
>>>
>>> [1] https://github.com/apache/spark/pull/1211/files
>>>
>>> On Thu, Jul 31, 2014 at 2:20 PM, Buntu Dev <bu...@gmail.com> wrote:
>>> > Thanks Zongheng for the pointer. Is there a way to achieve the same in
>>> 1.0.0
>>> > ?
>>> >
>>> >
>>> > On Thu, Jul 31, 2014 at 1:43 PM, Zongheng Yang <zo...@gmail.com>
>>> wrote:
>>> >>
>>> >> countDistinct is recently added and is in 1.0.2. If you are using that
>>> >> or the master branch, you could try something like:
>>> >>
>>> >>     r.select('keyword, countDistinct('userId)).groupBy('keyword)
>>> >>
>>> >> On Thu, Jul 31, 2014 at 12:27 PM, buntu <bu...@gmail.com> wrote:
>>> >> > I'm looking to write a select statement to get a distinct count on
>>> >> > userId
>>> >> > grouped by keyword column on a parquet file SchemaRDD equivalent of:
>>> >> >   SELECT keyword, count(distinct(userId)) from table group by
>>> keyword
>>> >> >
>>> >> > How to write it using the chained select().groupBy() operations?
>>> >> >
>>> >> > Thanks!
>>> >> >
>>> >
>>> >
>>>
>>
>>
>

Re: SchemaRDD select expression

Posted by Michael Armbrust <mi...@databricks.com>.
The performance should be the same using the DSL or SQL strings.
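
For context, the two formulations under discussion would look roughly like the
sketch below; both go through the same Catalyst analyzer/optimizer, which is why
the performance is expected to match. The table name "events" and the SchemaRDD
variable r are placeholders, not taken from the thread, and the exact DSL
chaining is an assumption (see the suggestion further down the thread):

    // SQL-string form, against a table registered via registerAsTable("events")
    val viaSql = sqlContext.sql(
      "SELECT keyword, COUNT(DISTINCT userId) FROM events GROUP BY keyword")

    // DSL form on the SchemaRDD itself (1.0.2+, where countDistinct exists in the DSL)
    val viaDsl = r.groupBy('keyword)('keyword, countDistinct('userId))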


On Thu, Jul 31, 2014 at 2:36 PM, Buntu Dev <bu...@gmail.com> wrote:

> I was not sure if registerAsTable() and then query against that table have
> additional performance impact and if DSL eliminates that.
>
>
> On Thu, Jul 31, 2014 at 2:33 PM, Zongheng Yang <zo...@gmail.com>
> wrote:
>
>> Looking at what this patch [1] has to do to achieve it, I am not sure
>> if you can do the same thing in 1.0.0 using DSL only. Just curious,
>> why don't you use the hql() / sql() methods and pass a query string
>> in?
>>
>> [1] https://github.com/apache/spark/pull/1211/files
>>
>> On Thu, Jul 31, 2014 at 2:20 PM, Buntu Dev <bu...@gmail.com> wrote:
>> > Thanks Zongheng for the pointer. Is there a way to achieve the same in
>> 1.0.0
>> > ?
>> >
>> >
>> > On Thu, Jul 31, 2014 at 1:43 PM, Zongheng Yang <zo...@gmail.com>
>> wrote:
>> >>
>> >> countDistinct is recently added and is in 1.0.2. If you are using that
>> >> or the master branch, you could try something like:
>> >>
>> >>     r.select('keyword, countDistinct('userId)).groupBy('keyword)
>> >>
>> >> On Thu, Jul 31, 2014 at 12:27 PM, buntu <bu...@gmail.com> wrote:
>> >> > I'm looking to write a select statement to get a distinct count on
>> >> > userId
>> >> > grouped by keyword column on a parquet file SchemaRDD equivalent of:
>> >> >   SELECT keyword, count(distinct(userId)) from table group by keyword
>> >> >
>> >> > How to write it using the chained select().groupBy() operations?
>> >> >
>> >> > Thanks!
>> >> >
>> >
>> >
>>
>
>

Re: SchemaRDD select expression

Posted by Buntu Dev <bu...@gmail.com>.
I was not sure whether calling registerAsTable() and then querying against that
table adds any performance overhead, and whether the DSL avoids it.


On Thu, Jul 31, 2014 at 2:33 PM, Zongheng Yang <zo...@gmail.com> wrote:

> Looking at what this patch [1] has to do to achieve it, I am not sure
> if you can do the same thing in 1.0.0 using DSL only. Just curious,
> why don't you use the hql() / sql() methods and pass a query string
> in?
>
> [1] https://github.com/apache/spark/pull/1211/files
>
> On Thu, Jul 31, 2014 at 2:20 PM, Buntu Dev <bu...@gmail.com> wrote:
> > Thanks Zongheng for the pointer. Is there a way to achieve the same in
> 1.0.0
> > ?
> >
> >
> > On Thu, Jul 31, 2014 at 1:43 PM, Zongheng Yang <zo...@gmail.com>
> wrote:
> >>
> >> countDistinct is recently added and is in 1.0.2. If you are using that
> >> or the master branch, you could try something like:
> >>
> >>     r.select('keyword, countDistinct('userId)).groupBy('keyword)
> >>
> >> On Thu, Jul 31, 2014 at 12:27 PM, buntu <bu...@gmail.com> wrote:
> >> > I'm looking to write a select statement to get a distinct count on
> >> > userId
> >> > grouped by keyword column on a parquet file SchemaRDD equivalent of:
> >> >   SELECT keyword, count(distinct(userId)) from table group by keyword
> >> >
> >> > How to write it using the chained select().groupBy() operations?
> >> >
> >> > Thanks!
> >> >
> >
> >
>

Re: SchemaRDD select expression

Posted by Zongheng Yang <zo...@gmail.com>.
Looking at what this patch [1] has to do to achieve it, I am not sure whether
you can do the same thing in 1.0.0 using the DSL alone. Just curious: why not
use the hql() / sql() methods and pass a query string in?

[1] https://github.com/apache/spark/pull/1211/files
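
A minimal sketch of the query-string route, for reference. The Parquet path, the
variable names, and the registered table name "events" are placeholders, not
taken from the thread; sc is assumed to be an existing SparkContext:

    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    // load the Parquet file as a SchemaRDD and register it so it can be queried by name
    val events = sqlContext.parquetFile("/path/to/events.parquet")
    events.registerAsTable("events")   // 1.0.x API name
    val counts = sqlContext.sql(
      "SELECT keyword, COUNT(DISTINCT userId) FROM events GROUP BY keyword")
    // if the basic sql() parser on 1.0.0 rejects the DISTINCT aggregate,
    // the hql() route via HiveContext mentioned above is the fallback
    counts.collect().foreach(println)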

On Thu, Jul 31, 2014 at 2:20 PM, Buntu Dev <bu...@gmail.com> wrote:
> Thanks Zongheng for the pointer. Is there a way to achieve the same in 1.0.0
> ?
>
>
> On Thu, Jul 31, 2014 at 1:43 PM, Zongheng Yang <zo...@gmail.com> wrote:
>>
>> countDistinct is recently added and is in 1.0.2. If you are using that
>> or the master branch, you could try something like:
>>
>>     r.select('keyword, countDistinct('userId)).groupBy('keyword)
>>
>> On Thu, Jul 31, 2014 at 12:27 PM, buntu <bu...@gmail.com> wrote:
>> > I'm looking to write a select statement to get a distinct count on
>> > userId
>> > grouped by keyword column on a parquet file SchemaRDD equivalent of:
>> >   SELECT keyword, count(distinct(userId)) from table group by keyword
>> >
>> > How to write it using the chained select().groupBy() operations?
>> >
>> > Thanks!
>> >
>
>

Re: SchemaRDD select expression

Posted by Buntu Dev <bu...@gmail.com>.
Thanks, Zongheng, for the pointer. Is there a way to achieve the same in 1.0.0?


On Thu, Jul 31, 2014 at 1:43 PM, Zongheng Yang <zo...@gmail.com> wrote:

> countDistinct is recently added and is in 1.0.2. If you are using that
> or the master branch, you could try something like:
>
>     r.select('keyword, countDistinct('userId)).groupBy('keyword)
>
> On Thu, Jul 31, 2014 at 12:27 PM, buntu <bu...@gmail.com> wrote:
> > I'm looking to write a select statement to get a distinct count on userId
> > grouped by keyword column on a parquet file SchemaRDD equivalent of:
> >   SELECT keyword, count(distinct(userId)) from table group by keyword
> >
> > How to write it using the chained select().groupBy() operations?
> >
> > Thanks!
> >
>

Re: SchemaRDD select expression

Posted by Zongheng Yang <zo...@gmail.com>.
countDistinct was recently added and is in 1.0.2. If you are using that version
or the master branch, you could try something like:

    r.select('keyword, countDistinct('userId)).groupBy('keyword)
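
A slightly fuller sketch of that DSL route, for context. The imports, the Parquet
path, and the exact chaining are assumptions (on the 1.0.x SchemaRDD API, groupBy
takes the grouping expressions first and the aggregate expressions in a second
parameter list), so treat it as a starting point rather than a verified recipe:

    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext._   // assumed to bring the Symbol-based DSL implicits into scope

    val r = sqlContext.parquetFile("/path/to/events.parquet")   // hypothetical path
    // grouping expression(s) first, then the aggregates to compute per group
    val counts = r.groupBy('keyword)('keyword, countDistinct('userId))
    counts.collect().foreach(println)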

On Thu, Jul 31, 2014 at 12:27 PM, buntu <bu...@gmail.com> wrote:
> I'm looking to write a select statement to get a distinct count on userId
> grouped by keyword column on a parquet file SchemaRDD equivalent of:
>   SELECT keyword, count(distinct(userId)) from table group by keyword
>
> How to write it using the chained select().groupBy() operations?
>
> Thanks!
>