Posted to user@spark.apache.org by Veeranagouda Mukkanagoudar <ve...@gmail.com> on 2014/09/05 01:36:54 UTC

spark RDD join Error

I am planning to use the RDD join operation. To test it out, I tried to
compile some test code, but I am getting the following compilation error:

value join is not a member of org.apache.spark.rdd.RDD[(String, Int)]
[error]     rddA.join(rddB).map { case (k, (a, b)) => (k, a+b) }

Code:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

def joinTest(rddA: RDD[(String, Int)], rddB: RDD[(String, Int)]): RDD[(String, Int)] = {
    rddA.join(rddB).map { case (k, (a, b)) => (k, a+b) }
}

Any help would be great.

Veera

Re: spark RDD join Error

Posted by Chris Fregly <ch...@fregly.com>.
specifically, you're picking up the following implicit:

import org.apache.spark.SparkContext.rddToPairRDDFunctions

(in case you're a wildcard-phobe like me)
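For reference, a minimal sketch of the fixed function, assuming a Spark 1.x-era build where the pair-RDD implicits live on the SparkContext companion object:

```scala
import org.apache.spark.rdd.RDD
// Brings rddToPairRDDFunctions into scope, which is the implicit
// conversion that adds join() to any RDD[(K, V)].
import org.apache.spark.SparkContext.rddToPairRDDFunctions

// Inner-joins the two RDDs on their String keys and sums the paired values.
def joinTest(rddA: RDD[(String, Int)], rddB: RDD[(String, Int)]): RDD[(String, Int)] = {
  rddA.join(rddB).map { case (k, (a, b)) => (k, a + b) }
}
```

This only compiles with the Spark jars on the classpath; the wildcard form `import org.apache.spark.SparkContext._` works equally well.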



Re: spark RDD join Error

Posted by Veeranagouda Mukkanagoudar <ve...@gmail.com>.
Thanks a lot, that fixed the issue :)



Re: spark RDD join Error

Posted by Zhan Zhang <zz...@hortonworks.com>.
Try this:
import org.apache.spark.SparkContext._

Thanks.

Zhan Zhang
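To illustrate what the join actually returns once the import is in place, here is a hypothetical local-mode driver (the app name and sample data are made up for the example):

```scala
import org.apache.spark.{SparkConf, SparkContext}
// Pulls in the implicit conversion that gives RDD[(K, V)] its join() method.
import org.apache.spark.SparkContext._

object JoinDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("joinDemo"))

    val rddA = sc.parallelize(Seq(("x", 1), ("y", 2)))
    val rddB = sc.parallelize(Seq(("x", 10), ("z", 30)))

    // join() is an inner join on keys: only keys present in BOTH RDDs
    // survive, paired as (key, (leftValue, rightValue)).
    val joined = rddA.join(rddB).map { case (k, (a, b)) => (k, a + b) }
    // joined.collect() would contain ("x", 11); "y" and "z" are dropped.

    sc.stop()
  }
}
```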



-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.