Posted to users@zeppelin.apache.org by David Klim <da...@hotmail.com> on 2016/01/04 16:13:05 UTC

Providing jars in HDFS

Hello,
I have been running Zeppelin in yarn-client mode, and so far I have been copying the required jars to the folder specified by spark.home (/opt/zeppelin/interpreter/spark/) on each cluster node. Is it possible to specify an HDFS location and load the jars from there instead? How can I configure that?
Thanks!
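One approach worth trying (a sketch, not confirmed in this thread): Spark's --jars option accepts hdfs:// URLs, and Zeppelin passes SPARK_SUBMIT_OPTIONS from conf/zeppelin-env.sh to spark-submit. The jar paths below are placeholders:

```shell
# conf/zeppelin-env.sh -- sketch; the hdfs:// paths are placeholders.
# spark-submit's --jars flag accepts hdfs:// URLs, so in yarn-client
# mode YARN localizes the jars per application instead of requiring
# a copy on every node.
export SPARK_SUBMIT_OPTIONS="--jars hdfs:///user/zeppelin/libs/mylib.jar,hdfs:///user/zeppelin/libs/other.jar"
```

Restarting the Zeppelin daemon after editing zeppelin-env.sh should make the Spark interpreter pick up the new options.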

RE: Providing jars in HDFS

Posted by Mu...@cognizant.com.
Hi David,

Since you are working with v0.6.0 of Zeppelin, which is still in beta, it might take some time for all the required jars to be included in the master setup. I don't know which HDFS location you could use; sorry about that, I'm a noob like you..:)

Why don't you try v0.5.5 of Zeppelin?

Thanks,
Snehit
________________________________
From: David Klim [davidklmlg@hotmail.com]
Sent: 04 January 2016 20:43:05
To: users@zeppelin.incubator.apache.org
Subject: Providing jars in HDFS

Hello,

I have been running Zeppelin in yarn-client mode, and so far I have been copying the required jars to the folder specified by spark.home (/opt/zeppelin/interpreter/spark/) on each cluster node. Is it possible to specify an HDFS location and load the jars from there instead? How can I configure that?

Thanks!
