Posted to dev@spark.apache.org by Russell Spitzer <ru...@gmail.com> on 2016/09/04 16:31:24 UTC

Re: spark cassandra issue

https://github.com/datastax/spark-cassandra-connector/blob/v1.3.1/doc/14_data_frames.md
In Spark 1.3 it was illegal to use "table" as a key in Spark SQL, so in that
version of Spark the connector needed to use the option "c_table".

val df = sqlContext.read.
     | format("org.apache.spark.sql.cassandra").
     | options(Map( "c_table" -> "****", "keyspace" -> "***")).
     | load()
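
In Spark 1.4 and later that restriction was lifted and the plain "table" key
works again. Roughly, the same read would look like this (the keyspace and
table names below are just placeholders):

val df = sqlContext.read.
     | format("org.apache.spark.sql.cassandra").
     | options(Map( "table" -> "my_table", "keyspace" -> "my_keyspace")).
     | load()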


On Sun, Sep 4, 2016 at 8:32 AM Mich Talebzadeh <mi...@gmail.com>
wrote:

> and your Cassandra table is there etc?
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 4 September 2016 at 16:20, Selvam Raman <se...@gmail.com> wrote:
>
>> Hey Mich,
>>
>> I am using the same one right now. Thanks for the reply.
>> import org.apache.spark.sql.cassandra._
>> import com.datastax.spark.connector._ //Loads implicit functions
>> sc.cassandraTable("keyspace name", "table name")
>>
>>
>> On Sun, Sep 4, 2016 at 8:48 PM, Mich Talebzadeh <
>> mich.talebzadeh@gmail.com> wrote:
>>
>>> Hi Selvam.
>>>
>>> I don't deal with Cassandra but have you tried other options as
>>> described here
>>>
>>>
>>> https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md
>>>
>>> To get a Spark RDD that represents a Cassandra table, call the
>>> cassandraTable method on the SparkContext object.
>>>
>>> import com.datastax.spark.connector._ //Loads implicit functions
>>> sc.cassandraTable("keyspace name", "table name")
>>>
>>>
>>>
>>> HTH
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>> On 4 September 2016 at 15:52, Selvam Raman <se...@gmail.com> wrote:
>>>
>>>> It's very urgent. Please help me, guys.
>>>>
>>>> On Sun, Sep 4, 2016 at 8:05 PM, Selvam Raman <se...@gmail.com> wrote:
>>>>
>>>>> Please help me to solve the issue.
>>>>>
>>>>> spark-shell --packages
>>>>> com.datastax.spark:spark-cassandra-connector_2.10:1.3.0 --conf
>>>>> spark.cassandra.connection.host=******
>>>>>
>>>>> val df = sqlContext.read.
>>>>>      | format("org.apache.spark.sql.cassandra").
>>>>>      | options(Map( "table" -> "****", "keyspace" -> "***")).
>>>>>      | load()
>>>>> java.util.NoSuchElementException: key not found: c_table
>>>>>         at scala.collection.MapLike$class.default(MapLike.scala:228)
>>>>>         at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.default(ddl.scala:151)
>>>>>         at scala.collection.MapLike$class.apply(MapLike.scala:141)
>>>>>         at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.apply(ddl.scala:151)
>>>>>         at org.apache.spark.sql.cassandra.DefaultSource$.TableRefAndOptions(DefaultSource.scala:120)
>>>>>         at org.apache.spark.sql.cassandra.DefaultSource.createRelation(DefaultSource.scala:56)
>>>>>         at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
>>>>>         a
>>>>>
>>>>> --
>>>>> Selvam Raman
>>>>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Selvam Raman
>>>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>>>
>>>
>>>
>>
>>
>> --
>> Selvam Raman
>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>
>
>

Re: spark cassandra issue

Posted by Selvam Raman <se...@gmail.com>.
Hi Russell.

If possible, please help me to solve the issue below.

val df = sqlContext.read.
format("org.apache.spark.sql.cassandra").
options(Map("c_table"->"restt","keyspace"->"sss")).
load()


com.datastax.driver.core.TransportException: [/192.23.2.100:9042] Cannot connect
        at com.datastax.driver.core.Connection.<init>(Connection.java:109)
        at com.datastax.driver.core.PooledConnection.<init>(PooledConnection.java:32)
        at com.datastax.driver.core.Connection$Factory.open(Connection.java:586)
        at com.datastax.driver.core.DynamicConnectionPool.<init>(DynamicConnectionPool.java:74)
        at com.datastax.driver.core.HostConnectionPool.newInstance(HostConnectionPool.java:33)
        at com.datastax.driver.core.SessionManager.replacePool(SessionManager.java:271)
        at com.datastax.driver.core.SessionManager.access$400(SessionManager.java:40)
        at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:308)
        at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:300)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /192.168.2.100:9042
        at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137)
        at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
        at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
        ... 3 more
16/09/04 18:37:35 ERROR core.Session: Error creating pool to /192.168.2.74:9042
com.datastax.driver.core.TransportException: [/192.28.2.74:9042] Cannot connect
        at com.datastax.driver.core.Connection.<init>(Connection.java:109)
        at com.datastax.driver.core.PooledConnection.<init>(PooledConnection.java:32)
        at com.datastax.driver.core.Connection$Factory.open(Connection.java:586)
        at com.datastax.driver.core.DynamicConnectionPool.<init>(DynamicConnectionPool.java:74)
        at com.datastax.driver.core.HostConnectionPool.newInstance(HostConnectionPool.java:33)
        at com.datastax.driver.core.SessionManager.replacePool(SessionManager.java:271)
        at com.datastax.driver.core.SessionManager.access$400(SessionManager.java:40)
        at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:308)
        at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:300)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.jboss.netty.channel.ConnectTimeoutException: connection timed out: /192.168.2.74:9042
        at org.jboss.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137)
        at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
        at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
        ... 3 more
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/types/PrimitiveType
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at org.apache.spark.sql.cassandra.DataTypeConverter$.<init>(DataTypeConverter.scala:32)
        at org.apache.spark.sql.cassandra.DataTypeConverter$.<clinit>(DataTypeConverter.scala)
        at org.apache.spark.sql.cassandra.CassandraSourceRelation$$anonfun$schema$1$$anonfun$apply$1.apply(CassandraSourceRelation.scala:58)
        at org.apache.spark.sql.cassandra.CassandraSourceRelation$$anonfun$schema$1$$anonfun$apply$1.apply(CassandraSourceRelation.scala:58)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.sql.cassandra.CassandraSourceRelation$$anonfun$schema$1.apply(CassandraSourceRelation.scala:58)
        at org.apache.spark.sql.cassandra.CassandraSourceRelation$$anonfun$schema$1.apply(CassandraSourceRelation.scala:58)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.sql.cassandra.CassandraSourceRelation.schema(CassandraSourceRelation.scala:58)
        at org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:37)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:120)
        at com.zebra.avp.oracle11i.OracleRepairData$.main(OracleRepairData.scala:298)
        at com.zebra.avp.oracle11i.OracleRepairData.main(OracleRepairData.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.types.PrimitiveType
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
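
Reading the trace again, there seem to be two separate problems. The
TransportException looks like the driver simply cannot reach the nodes on
port 9042 at all (firewall, node down, or wrong rpc_address?). The
NoClassDefFoundError on org.apache.spark.sql.types.PrimitiveType also makes
me suspect a version mismatch, since that class was removed in newer Spark
releases while the connector here is still 1.3.0. If that is right, a launch
whose connector line matches the cluster's Spark version should help; a rough
sketch, where the 1.6.0 coordinate is only a guess (the compatibility table
in the connector README gives the exact mapping):

spark-shell --packages
com.datastax.spark:spark-cassandra-connector_2.10:1.6.0 --conf
spark.cassandra.connection.host=******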

On Sun, Sep 4, 2016 at 10:04 PM, Russell Spitzer <ru...@gmail.com>
wrote:

> This would also be a better question for the SCC user list :)
> https://groups.google.com/a/lists.datastax.com/forum/#!forum/spark-connector-user
>
> On Sun, Sep 4, 2016 at 9:31 AM Russell Spitzer <ru...@gmail.com>
> wrote:
>
>> https://github.com/datastax/spark-cassandra-connector/blob/v1.3.1/doc/14_data_frames.md
>> In Spark 1.3 it was illegal to use "table" as a key in Spark SQL, so in
>> that version of Spark the connector needed to use the option "c_table".
>>
>>
>> val df = sqlContext.read.
>>      | format("org.apache.spark.sql.cassandra").
>>      | options(Map( "c_table" -> "****", "keyspace" -> "***")).
>>      | load()
>>
>>
>> On Sun, Sep 4, 2016 at 8:32 AM Mich Talebzadeh <mi...@gmail.com>
>> wrote:
>>
>>> and your Cassandra table is there etc?
>>>
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>> On 4 September 2016 at 16:20, Selvam Raman <se...@gmail.com> wrote:
>>>
>>>> Hey Mich,
>>>>
>>>> I am using the same one right now. Thanks for the reply.
>>>> import org.apache.spark.sql.cassandra._
>>>> import com.datastax.spark.connector._ //Loads implicit functions
>>>> sc.cassandraTable("keyspace name", "table name")
>>>>
>>>>
>>>> On Sun, Sep 4, 2016 at 8:48 PM, Mich Talebzadeh <
>>>> mich.talebzadeh@gmail.com> wrote:
>>>>
>>>>> Hi Selvam.
>>>>>
>>>>> I don't deal with Cassandra but have you tried other options as
>>>>> described here
>>>>>
>>>>> https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md
>>>>>
>>>>> To get a Spark RDD that represents a Cassandra table, call the
>>>>> cassandraTable method on the SparkContext object.
>>>>>
>>>>> import com.datastax.spark.connector._ //Loads implicit functions
>>>>> sc.cassandraTable("keyspace name", "table name")
>>>>>
>>>>>
>>>>>
>>>>> HTH
>>>>>
>>>>>
>>>>> Dr Mich Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>>
>>>>>
>>>>>
>>>>> http://talebzadehmich.wordpress.com
>>>>>
>>>>>
>>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>>> any loss, damage or destruction of data or any other property which may
>>>>> arise from relying on this email's technical content is explicitly
>>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>>> arising from such loss, damage or destruction.
>>>>>
>>>>>
>>>>>
>>>>> On 4 September 2016 at 15:52, Selvam Raman <se...@gmail.com> wrote:
>>>>>
>>>>>> It's very urgent. Please help me, guys.
>>>>>>
>>>>>> On Sun, Sep 4, 2016 at 8:05 PM, Selvam Raman <se...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Please help me to solve the issue.
>>>>>>>
>>>>>>> spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.10:1.3.0
>>>>>>> --conf spark.cassandra.connection.host=******
>>>>>>>
>>>>>>> val df = sqlContext.read.
>>>>>>>      | format("org.apache.spark.sql.cassandra").
>>>>>>>      | options(Map( "table" -> "****", "keyspace" -> "***")).
>>>>>>>      | load()
>>>>>>> java.util.NoSuchElementException: key not found: c_table
>>>>>>>         at scala.collection.MapLike$class.default(MapLike.scala:228)
>>>>>>>         at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.default(ddl.scala:151)
>>>>>>>         at scala.collection.MapLike$class.apply(MapLike.scala:141)
>>>>>>>         at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.apply(ddl.scala:151)
>>>>>>>         at org.apache.spark.sql.cassandra.DefaultSource$.TableRefAndOptions(DefaultSource.scala:120)
>>>>>>>         at org.apache.spark.sql.cassandra.DefaultSource.createRelation(DefaultSource.scala:56)
>>>>>>>         at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
>>>>>>>         a
>>>>>>>
>>>>>>> --
>>>>>>> Selvam Raman
>>>>>>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Selvam Raman
>>>>>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Selvam Raman
>>>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>>>
>>>
>>>


-- 
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"

Re: spark cassandra issue

Posted by Russell Spitzer <ru...@gmail.com>.
This would also be a better question for the SCC user list :)
https://groups.google.com/a/lists.datastax.com/forum/#!forum/spark-connector-user

On Sun, Sep 4, 2016 at 9:31 AM Russell Spitzer <ru...@gmail.com>
wrote:

>
> https://github.com/datastax/spark-cassandra-connector/blob/v1.3.1/doc/14_data_frames.md
> In Spark 1.3 it was illegal to use "table" as a key in Spark SQL, so in
> that version of Spark the connector needed to use the option "c_table".
>
>
> val df = sqlContext.read.
>      | format("org.apache.spark.sql.cassandra").
>      | options(Map( "c_table" -> "****", "keyspace" -> "***")).
>      | load()
>
>
> On Sun, Sep 4, 2016 at 8:32 AM Mich Talebzadeh <mi...@gmail.com>
> wrote:
>
>> and your Cassandra table is there etc?
>>
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>> On 4 September 2016 at 16:20, Selvam Raman <se...@gmail.com> wrote:
>>
>>> Hey Mich,
>>>
>>> I am using the same one right now. Thanks for the reply.
>>> import org.apache.spark.sql.cassandra._
>>> import com.datastax.spark.connector._ //Loads implicit functions
>>> sc.cassandraTable("keyspace name", "table name")
>>>
>>>
>>> On Sun, Sep 4, 2016 at 8:48 PM, Mich Talebzadeh <
>>> mich.talebzadeh@gmail.com> wrote:
>>>
>>>> Hi Selvam.
>>>>
>>>> I don't deal with Cassandra but have you tried other options as
>>>> described here
>>>>
>>>>
>>>> https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md
>>>>
>>>> To get a Spark RDD that represents a Cassandra table, call the
>>>> cassandraTable method on the SparkContext object.
>>>>
>>>> import com.datastax.spark.connector._ //Loads implicit functions
>>>> sc.cassandraTable("keyspace name", "table name")
>>>>
>>>>
>>>>
>>>> HTH
>>>>
>>>>
>>>> Dr Mich Talebzadeh
>>>>
>>>>
>>>>
>>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>
>>>>
>>>>
>>>> http://talebzadehmich.wordpress.com
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>> On 4 September 2016 at 15:52, Selvam Raman <se...@gmail.com> wrote:
>>>>
>>>>> It's very urgent. Please help me, guys.
>>>>>
>>>>> On Sun, Sep 4, 2016 at 8:05 PM, Selvam Raman <se...@gmail.com> wrote:
>>>>>
>>>>>> Please help me to solve the issue.
>>>>>>
>>>>>> spark-shell --packages
>>>>>> com.datastax.spark:spark-cassandra-connector_2.10:1.3.0 --conf
>>>>>> spark.cassandra.connection.host=******
>>>>>>
>>>>>> val df = sqlContext.read.
>>>>>>      | format("org.apache.spark.sql.cassandra").
>>>>>>      | options(Map( "table" -> "****", "keyspace" -> "***")).
>>>>>>      | load()
>>>>>> java.util.NoSuchElementException: key not found: c_table
>>>>>>         at scala.collection.MapLike$class.default(MapLike.scala:228)
>>>>>>         at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.default(ddl.scala:151)
>>>>>>         at scala.collection.MapLike$class.apply(MapLike.scala:141)
>>>>>>         at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.apply(ddl.scala:151)
>>>>>>         at org.apache.spark.sql.cassandra.DefaultSource$.TableRefAndOptions(DefaultSource.scala:120)
>>>>>>         at org.apache.spark.sql.cassandra.DefaultSource.createRelation(DefaultSource.scala:56)
>>>>>>         at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
>>>>>>         a
>>>>>>
>>>>>> --
>>>>>> Selvam Raman
>>>>>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Selvam Raman
>>>>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Selvam Raman
>>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>>
>>
>>
