Posted to user@cassandra.apache.org by "Tiwari, Tarun" <Ta...@Kronos.com> on 2015/04/03 05:16:33 UTC

RE: Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper

Sorry, I was unable to reply for a couple of days.
I checked the error again and can’t see any other initial cause. Here is the full error:

Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/mapper/ColumnMapper
        at ldCassandraTable.main(ld_Cassandra_tbl_Job.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:329)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.datastax.spark.connector.mapper.ColumnMapper
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)



From: Dave Brosius [mailto:dbrosius@mebigfatguy.com]
Sent: Tuesday, March 31, 2015 8:46 PM
To: user@cassandra.apache.org
Subject: Re: Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper




Is there an 'initial cause' listed under that exception you gave? NoClassDefFoundError is not exactly the same as ClassNotFoundException: it means that ColumnMapper's static initializer couldn't run, either because some other class couldn't be found or because of some other error unrelated to class loading.
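A minimal sketch of the distinction, for reference (the missing class name below is made up for illustration):

```scala
object LoaderDemo {
  // ClassNotFoundException: a class loader is asked by name for a class it
  // cannot find. NoClassDefFoundError: the JVM needs a class that was
  // available at compile time but is missing, or whose static initialization
  // failed, at run time; the underlying failure then shows up as the
  // "initial cause" in the stack trace.
  def tryLoad(name: String): String =
    try { Class.forName(name); "loaded" }
    catch {
      case _: ClassNotFoundException => "ClassNotFoundException"
      case _: NoClassDefFoundError   => "NoClassDefFoundError"
    }

  def main(args: Array[String]): Unit = {
    println(tryLoad("com.example.DoesNotExist")) // prints "ClassNotFoundException"
    println(tryLoad("java.lang.String"))         // prints "loaded"
  }
}
```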



On 2015-03-31 10:42, Tiwari, Tarun wrote:
Hi Experts,

I am getting java.lang.NoClassDefFoundError: com/datastax/spark/connector/mapper/ColumnMapper while running an app that loads data into a Cassandra table using the DataStax Spark connector.

Is there something else I need to import in the program, or a dependency I am missing?

RUNTIME ERROR:  Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/mapper/ColumnMapper
at ldCassandraTable.main(ld_Cassandra_tbl_Job.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Below is my scala program

/*** ld_Cassandra_Table.scala ***/
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import com.datastax.spark.connector
import com.datastax.spark.connector._

object ldCassandraTable {
  def main(args: Array[String]) {
    val fileName = args(0)
    val tblName = args(1)
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "<MASTER HOST>")
      .setMaster("<MASTER URL>")
      .setAppName("LoadCassandraTableApp")
    val sc = new SparkContext(conf)
    sc.addJar("/home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar")
    val normalfill = sc.textFile(fileName).map(line => line.split('|'))
    // NOTE: `keyspace` is not defined anywhere in this snippet; it must be in scope.
    normalfill.map(line => (line(0), line(1), line(2), line(3), line(4), line(5),
        line(6), line(7), line(8), line(9), line(10), line(11), line(12), line(13),
        line(14), line(15), line(16), line(17), line(18), line(19), line(20), line(21)))
      .saveToCassandra(keyspace, tblName, SomeColumns("wfctotalid", "timesheetitemid",
        "employeeid", "durationsecsqty", "wageamt", "moneyamt", "applydtm", "laboracctid",
        "paycodeid", "startdtm", "stimezoneid", "adjstartdtm", "adjapplydtm", "enddtm",
        "homeaccountsw", "notpaidsw", "wfcjoborgid", "unapprovedsw", "durationdaysqty",
        "updatedtm", "totaledversion", "acctapprovalnum"))
    println("Records Loaded to %s".format(tblName))
    Thread.sleep(500)
    sc.stop()
  }
}

Below is the sbt file:

name := "POC"
version := "0.0.1"

scalaVersion := "2.10.4"

// additional libraries
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.1.1" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.1.1" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.1" % "provided"
)
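One thing worth checking in this build file: the connector is marked "provided", which tells sbt to leave it out of the packaged jar, so something else has to put it on the runtime classpath. A sketch of the alternative, bundling the connector into the application jar (this assumes the sbt-assembly plugin is used to build a fat jar):

```scala
// build.sbt sketch: package the connector with the application.
// Spark itself stays "provided" because spark-submit supplies it at run time.
name := "POC"

version := "0.0.1"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.1.1" % "provided",
  "org.apache.spark" %% "spark-sql"  % "1.1.1" % "provided",
  // no "provided" scope: the connector gets bundled into the assembly jar
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.1"
)
```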

Regards,
Tarun Tiwari | Workforce Analytics-ETL | Kronos India
M: +91 9540 28 27 77 | Tel: +91 120 4015200
Kronos | Time & Attendance • Scheduling • Absence Management • HR & Payroll • Hiring • Labor Analytics
Join Kronos on: kronos.com<http://www.kronos.com/> | Facebook<http://www.kronos.com/facebook>|Twitter<http://www.kronos.com/twitter>|LinkedIn<http://www.kronos.com/linkedin> |YouTube<http://www.kronos.com/youtube>


RE: Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper

Posted by "Tiwari, Tarun" <Ta...@Kronos.com>.
Yes, it seems it was not picking up the classpath for the Cassandra connector. I added it to the driver class path argument but ran into another error.

I used the command below:

spark-submit --class ldCassandraTable ./target/scala-2.10/merlin-spark-cassandra-poc_2.10-0.0.1.jar "/home/analytics/Documents/test_wfctotal.dat" test_wfctotal --driver-class-path /home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar

and am now getting a new error:

Spark assembly has been built with Hive, including Datanucleus jars on classpath
15/04/03 13:46:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
:/home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar:/home/analytics/Installers/spark-1.1.1/conf:/home/analytics/Installers/spark-1.1.1/assembly/target/scala-2.10/spark-assembly-1.1.1-hadoop1.0.4.jar:/home/analytics/Installers/spark-1.1.1/lib_managed/jars/datanucleus-rdbms-3.2.1.jar:/home/analytics/Installers/spark-1.1.1/lib_managed/jars/datanucleus-core-3.2.2.jar:/home/analytics/Installers/spark-1.1.1/lib_managed/jars/datanucleus-api-jdo-3.2.1.jar
15/04/03 13:46:46 WARN LoadSnappy: Snappy native library not loaded
Records Loaded to
15/04/03 13:46:54 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(NODE02.int.kronos.com,60755) not found
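A note on the command: spark-submit conventionally takes its options before the application jar, and --driver-class-path only affects the driver JVM; extra jars reach the executors via --jars. A sketch of the same command in that form (same paths assumed, not verified against this setup):

```shell
spark-submit \
  --class ldCassandraTable \
  --driver-class-path /home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar \
  --jars /home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar \
  ./target/scala-2.10/merlin-spark-cassandra-poc_2.10-0.0.1.jar \
  "/home/analytics/Documents/test_wfctotal.dat" test_wfctotal
```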



From: Dave Brosius [mailto:dbrosius@mebigfatguy.com]
Sent: Friday, April 03, 2015 9:15 AM
To: user@cassandra.apache.org
Subject: Re: Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper

This is what I meant by 'initial cause':

Caused by: java.lang.ClassNotFoundException: com.datastax.spark.connector.mapper.ColumnMapper

So it is in fact a classpath problem.

Here is the class in question: https://github.com/datastax/spark-cassandra-connector/blob/master/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector/mapper/ColumnMapper.scala

Maybe it would be worthwhile to put this at the top of your main method:

System.out.println(System.getProperty("java.class.path"));

and show what that prints.

What version of Cassandra and what version of the cassandra-spark connector are you using, btw?







Re: Getting NoClassDefFoundError for com/datastax/spark/connector/mapper/ColumnMapper

Posted by Dave Brosius <db...@mebigfatguy.com>.
This is what I meant by 'initial cause':

Caused by: java.lang.ClassNotFoundException:
com.datastax.spark.connector.mapper.ColumnMapper

So it is in fact a classpath problem.

Here is the class in question:
https://github.com/datastax/spark-cassandra-connector/blob/master/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector/mapper/ColumnMapper.scala

Maybe it would be worthwhile to put this at the top of your main method:

System.out.println(System.getProperty("java.class.path"));

and show what that prints.

What version of Cassandra and what version of the cassandra-spark
connector are you using, btw?




