Posted to users@zeppelin.apache.org by Paul Buster <bu...@yahoo.com> on 2016/04/27 21:18:14 UTC

Failed to find data source: com.databricks.spark.avro.

What am I doing wrong here? Single-instance EC2 with Spark and Zeppelin. It works using spark/bin/pyspark, but in a Zeppelin notebook it fails to find com.databricks.spark.avro. Thanks.

from the zeppelin notebook:

%pyspark
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId","xxx")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey","xxx")
df = sqlContext.read.format("com.databricks.spark.avro").load("s3n://mybucket/myfile.avro")

Py4JJavaError: An error occurred while calling o44.load.
java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.avro. Please use Spark package http://spark-packages.org/package/databricks/spark-avro

in conf/zeppelin-env.sh:

export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-avro_2.10:2.0.1,com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.6.0 --driver-memory 16g"

in conf/spark-defaults.conf:

spark.jars.packages     com.databricks:spark-avro_2.10:2.0.1
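For comparison, the standalone shell run that works would presumably look something like this (a sketch only; the pyspark path and package coordinates are copied from the config above, and the exact invocation is an assumption, not taken from the thread):

```shell
# Hypothetical standalone invocation (outside Zeppelin); path, versions,
# and options copied from the SPARK_SUBMIT_OPTIONS line above.
/usr/local/spark/bin/pyspark \
  --packages com.databricks:spark-avro_2.10:2.0.1,com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.6.0 \
  --driver-memory 16g
```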

ps -ef | grep zeppelin   (shows "--packages com.databricks:spark-avro_2.10:2.0.1...")
ec2-user 32718 32706 99 18:58 pts/2    00:00:55 /usr/lib/jvm/jre/bin/java -cp /usr/local/zeppelin/interpreter/spark/zeppelin-spark-0.5.6-incubating.jar:/usr/local/spark/conf/:/usr/local/spark/lib/spark-assembly-1.6.1-hadoop2.6.0.jar:/usr/local/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/local/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark/lib/datanucleus-core-3.2.10.jar -Xms16g -Xmx16g -Dfile.encoding=UTF-8 -Dfile.encoding=UTF-8 -Dzeppelin.log.file=/usr/local/zeppelin/logs/zeppelin-interpreter-spark-ec2-user-ip-10-248-5-9.log -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --conf spark.driver.extraClassPath=::/usr/local/zeppelin/interpreter/spark/zeppelin-spark-0.5.6-incubating.jar --conf spark.driver.memory=16g --conf spark.driver.extraJavaOptions=  -Dfile.encoding=UTF-8  -Dfile.encoding=UTF-8 -Dzeppelin.log.file=/usr/local/zeppelin/logs/zeppelin-interpreter-spark-ec2-user-ip-10-248-5-9.log --class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer --packages com.databricks:spark-avro_2.10:2.0.1,com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.6.0 /usr/local/zeppelin/interpreter/spark/zeppelin-spark-0.5.6-incubating.jar 45438

Re: Failed to find data source: com.databricks.spark.avro.

Posted by Paul Buster <bu...@yahoo.com>.
this works

%dep
z.reset()
z.addRepo("Spark Packages Repo").url("http://dl.bintray.com/spark-packages/maven")
z.load("com.databricks:spark-avro_2.10:2.0.1")


but what did I miss when trying to load the dependency in the conf file(s)?

