Posted to users@jena.apache.org by Daniel Maatari Okouya <ok...@yahoo.fr.INVALID> on 2018/09/30 17:21:40 UTC
Jena on Spark - org.apache.jena.shared.NoReaderForLangException: Reader not found: JSON-LD
I am using Jena in Spark, and I am facing a weird issue when I deploy on the cluster (it does not happen in local dev, for which I do not need to build an uber jar).
When I deploy on the cluster I get the following exception:
Caused by: org.apache.jena.shared.NoReaderForLangException: Reader not found: JSON-LD
    at org.apache.jena.rdf.model.impl.RDFReaderFImpl.getReader(RDFReaderFImpl.java:61)
    at org.apache.jena.rdf.model.impl.ModelCom.read(ModelCom.java:305)
1 - Generally speaking, where can an error like org.apache.jena.shared.NoReaderForLangException: Reader not found: JSON-LD originate from? Again, this code works perfectly in local mode.
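For reference, my understanding is that ModelCom.read goes through the legacy RDFReaderF table, which only knows about JSON-LD after Jena's ServiceLoader-based initialization (JenaSystem.init) has registered the RIOT readers; if the META-INF/services entries are lost when building the uber jar, the table stays at its core-only defaults and the lookup throws NoReaderForLangException. A minimal sketch of forcing initialization explicitly before reading (assuming Jena 3.x, where JenaSystem lives in org.apache.jena.sys):

```scala
import java.io.StringReader

import org.apache.jena.rdf.model.ModelFactory
import org.apache.jena.sys.JenaSystem

object JsonLdSmokeTest {
  def main(args: Array[String]): Unit = {
    // Force Jena's subsystem initialization explicitly, instead of relying on
    // ServiceLoader discovery that a shaded/uber jar may have broken.
    JenaSystem.init()

    val jsonld = """{"@id": "http://example.org/a", "http://example.org/p": "v"}"""
    val model = ModelFactory.createDefaultModel()
    // Same call path that fails on the cluster: ModelCom.read with lang "JSON-LD"
    model.read(new StringReader(jsonld), null, "JSON-LD")
    println(model.size()) // should print 1 if the JSON-LD reader was found
  }
}
```

If this sketch still throws on the cluster, that would point at the services files being dropped during assembly rather than at the reading code itself.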
2 - My build.sbt assembly strategy:
lazy val entellectextractorsmappers = project
  .settings(
    commonSettings,
    mainClass in assembly := Some("entellect.extractors.mappers.NormalizedDataMapper"),
    assemblyMergeStrategy in assembly := {
      case "application.conf" => MergeStrategy.concat
      case "reference.conf"   => MergeStrategy.concat
      case PathList("META-INF", "services", "org.apache.jena.system.JenaSubsystemLifecycle") => MergeStrategy.concat
      case PathList("META-INF", "services", "org.apache.spark.sql.sources.DataSourceRegister") => MergeStrategy.concat
      case PathList("META-INF", xs @ _*) => MergeStrategy.discard
      case x => MergeStrategy.first
    },
    dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-core" % "2.9.5",
    dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.9.5",
    dependencyOverrides += "com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.9.5",
    dependencyOverrides += "org.apache.jena" % "apache-jena" % "3.8.0",
    libraryDependencies ++= Seq(
      "org.apache.jena" % "apache-jena" % "3.8.0",
      "edu.isi" % "karma-offline" % "0.0.1-SNAPSHOT",
      "org.apache.spark" % "spark-core_2.11" % "2.3.1" % "provided",
      "org.apache.spark" % "spark-sql_2.11" % "2.3.1" % "provided",
      "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.3.1"
      //"com.datastax.cassandra" % "cassandra-driver-core" % "3.5.1"
    ))
  .dependsOn(entellectextractorscommon)
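One suspicion about the merge strategy above: since Jena 3.6 the initialization classes moved from org.apache.jena.system to org.apache.jena.sys, so the service file I concat by name may no longer match the one jena-core 3.8.0 actually ships, and the catch-all META-INF discard rule would then silently drop it. A sketch that keeps every ServiceLoader registration file instead of naming them one by one (sbt-assembly's filterDistinctLines is the usual strategy for services files; this is an assumption I have not yet verified on the cluster):

```scala
assemblyMergeStrategy in assembly := {
  case "application.conf" => MergeStrategy.concat
  case "reference.conf"   => MergeStrategy.concat
  // Keep *all* ServiceLoader registration files, whatever their exact names,
  // so Jena's JenaSubsystemLifecycle entries survive the shading step.
  case PathList("META-INF", "services", xs @ _*) => MergeStrategy.filterDistinctLines
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
```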
If anyone has a hint about using Jena with Spark, awesome; otherwise, any hint about what could cause Reader not found: JSON-LD, when it can happen, or what it means from the library's standpoint, would help me trace back what in my packaging is causing it.
Re: Jena on Spark - org.apache.jena.shared.NoReaderForLangException: Reader not found: JSON-LD
Posted by Andy Seaborne <an...@apache.org>.
https://stackoverflow.com/questions/52578547/spark-org-apache-jena-shared-noreaderforlangexception-reader-not-found-json-ld
On 30/09/18 18:21, Daniel Maatari Okouya wrote:
> Caused by: org.apache.jena.shared.NoReaderForLangException: Reader not found: JSON-LD
> at org.apache.jena.rdf.model.impl.RDFReaderFImpl.getReader(RDFReaderFImpl.java:61)
> at org.apache.jena.rdf.model.impl.ModelCom.read(ModelCom.java:305)