Posted to users@zeppelin.apache.org by vincent gromakowski <vi...@gmail.com> on 2016/02/23 15:15:34 UTC

how to automatically load jars

Hi,
I am trying to automatically add jars to the Spark interpreter with several
methods, but I cannot achieve it.
I am currently generating an interpreter.json file from Ansible templates
before launching Zeppelin in Marathon.
1.  spark.jars
2.  spark.driver.extraClassPath
3.  groupArtifactVersion  (dependency loading)

In all cases I get a class-not-found exception for the Spark Cassandra
connector. The only way to make it work is to go to interpreter settings,
edit the Spark settings, then save and restart the interpreter, but that is
not automatic at all, as we need to do it each time Zeppelin is started.

Is the interpreter.json file automatically loaded at the start of Zeppelin?
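Of the three approaches above, the first two are plain interpreter properties; as a sketch, a pre-generated interpreter.json entry might carry them like this (the /opt/jars path is a placeholder, not taken from this thread):

```json
"properties": {
  "spark.jars": "/opt/jars/spark-cassandra-connector-1.5.0_2.10.jar",
  "spark.driver.extraClassPath": "/opt/jars/spark-cassandra-connector-1.5.0_2.10.jar"
}
```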

Re: how to automatically load jars

Posted by vincent gromakowski <vi...@gmail.com>.
Yes, I can see it. It's weird because I rebuilt from master yesterday.
On Feb 23, 2016, 11:51 PM, "moon soo Lee" <mo...@apache.org> wrote:

> I tried with the current master branch, but I couldn't reproduce it.
> After "3. Start zeppelin", before "4. Run a paragraph", if you go to the
> interpreter menu, can you see the dependency you have added in the GUI?
>
> Thanks,
> moon
>
> On Tue, Feb 23, 2016 at 11:41 AM vincent gromakowski <
> vincent.gromakowski@gmail.com> wrote:
>
>> 1. Stop zeppelin
>> 2. Add a dependency in interpreter.json
>> "dependencies": [
>>         {
>>           "groupArtifactVersion": "/........./spark-cassandra-connector-1.5.0_2.10.jar",
>>           "local": false,
>>           "exclusions": []
>>         }
>>       ]
>> 3. Start zeppelin
>> 4. Run a paragraph with
>> import com.datastax.spark.connector._
>> => error: object datastax is not a member of package com
>> 5. Edit Spark interpreter settings, save it and restart it
>> 6. Re-run the paragraph
>> => no error
>>
>> 2016-02-23 19:25 GMT+01:00 moon soo Lee <mo...@apache.org>:
>>
>>> interpreter.json is supposed to be loaded on launch.
>>> Could you double-check that interpreter.json is not read at Zeppelin launch?
>>> Or if it keeps happening, could you let me know how to reproduce it?
>>>
>>> Thanks,
>>> moon
>>>
>>>
>>> On Tue, Feb 23, 2016 at 8:22 AM vincent gromakowski <
>>> vincent.gromakowski@gmail.com> wrote:
>>>
>>>> What is the best way to configure the Spark interpreter?
>>>>
>>>> Should I use zeppelin-env.sh with a very long "export
>>>> SPARK_SUBMIT_OPTIONS" line,
>>>>
>>>> or configure interpreter.json before launching the Zeppelin daemon?
>>>>
>>>> It seems interpreter.json is not read at Zeppelin launch; I need to
>>>> manually go to the settings web UI, edit the Spark interpreter, and restart it...
>>>>
>>>> 2016-02-23 15:15 GMT+01:00 vincent gromakowski <
>>>> vincent.gromakowski@gmail.com>:
>>>>
>>>>> Hi,
>>>>> I am trying to automatically add jars to the Spark interpreter with several
>>>>> methods, but I cannot achieve it.
>>>>> I am currently generating an interpreter.json file from Ansible
>>>>> templates before launching Zeppelin in Marathon.
>>>>> 1.  spark.jars
>>>>> 2.  spark.driver.extraClassPath
>>>>> 3.  groupArtifactVersion  (dependency loading)
>>>>>
>>>>> In all cases I get a class-not-found exception for the Spark Cassandra
>>>>> connector. The only way to make it work is to go to interpreter settings,
>>>>> edit the Spark settings, then save and restart the interpreter, but that
>>>>> is not automatic at all, as we need to do it each time Zeppelin is started.
>>>>>
>>>>> Is the interpreter.json file automatically loaded at the start of
>>>>> Zeppelin?
>>>>>
>>>>
>>>>
>>

Re: how to automatically load jars

Posted by moon soo Lee <mo...@apache.org>.
I tried with the current master branch, but I couldn't reproduce it.
After "3. Start zeppelin", before "4. Run a paragraph", if you go to the
interpreter menu, can you see the dependency you have added in the GUI?

Thanks,
moon

On Tue, Feb 23, 2016 at 11:41 AM vincent gromakowski <
vincent.gromakowski@gmail.com> wrote:

> 1. Stop zeppelin
> 2. Add a dependency in interpreter.json
> "dependencies": [
>         {
>           "groupArtifactVersion": "/........./spark-cassandra-connector-1.5.0_2.10.jar",
>           "local": false,
>           "exclusions": []
>         }
>       ]
> 3. Start zeppelin
> 4. Run a paragraph with
> import com.datastax.spark.connector._
> => error: object datastax is not a member of package com
> 5. Edit Spark interpreter settings, save it and restart it
> 6. Re-run the paragraph
> => no error
>
> 2016-02-23 19:25 GMT+01:00 moon soo Lee <mo...@apache.org>:
>
>> interpreter.json is supposed to be loaded on launch.
>> Could you double-check that interpreter.json is not read at Zeppelin launch?
>> Or if it keeps happening, could you let me know how to reproduce it?
>>
>> Thanks,
>> moon
>>
>>
>> On Tue, Feb 23, 2016 at 8:22 AM vincent gromakowski <
>> vincent.gromakowski@gmail.com> wrote:
>>
>>> What is the best way to configure the Spark interpreter?
>>>
>>> Should I use zeppelin-env.sh with a very long "export
>>> SPARK_SUBMIT_OPTIONS" line,
>>>
>>> or configure interpreter.json before launching the Zeppelin daemon?
>>>
>>> It seems interpreter.json is not read at Zeppelin launch; I need to
>>> manually go to the settings web UI, edit the Spark interpreter, and restart it...
>>>
>>> 2016-02-23 15:15 GMT+01:00 vincent gromakowski <
>>> vincent.gromakowski@gmail.com>:
>>>
>>>> Hi,
>>>> I am trying to automatically add jars to the Spark interpreter with several
>>>> methods, but I cannot achieve it.
>>>> I am currently generating an interpreter.json file from Ansible
>>>> templates before launching Zeppelin in Marathon.
>>>> 1.  spark.jars
>>>> 2.  spark.driver.extraClassPath
>>>> 3.  groupArtifactVersion  (dependency loading)
>>>>
>>>> In all cases I get a class-not-found exception for the Spark Cassandra
>>>> connector. The only way to make it work is to go to interpreter settings,
>>>> edit the Spark settings, then save and restart the interpreter, but that
>>>> is not automatic at all, as we need to do it each time Zeppelin is started.
>>>>
>>>> Is the interpreter.json file automatically loaded at the start of
>>>> Zeppelin?
>>>>
>>>
>>>
>

Re: how to automatically load jars

Posted by vincent gromakowski <vi...@gmail.com>.
1. Stop zeppelin
2. Add a dependency in interpreter.json
"dependencies": [
        {
          "groupArtifactVersion": "/........./spark-cassandra-connector-1.5.0_2.10.jar",
          "local": false,
          "exclusions": []
        }
      ]
3. Start zeppelin
4. Run a paragraph with
import com.datastax.spark.connector._
=> error: object datastax is not a member of package com
5. Edit Spark interpreter settings, save it and restart it
6. Re-run the paragraph
=> no error
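A note on the groupArtifactVersion field used in step 2: besides a local file path, it also accepts Maven coordinates. As a sketch (assuming the artifact is resolvable from a configured repository; the coordinates are inferred from the jar name above and are not confirmed by this thread), the same dependency could be declared as:

```json
"dependencies": [
  {
    "groupArtifactVersion": "com.datastax.spark:spark-cassandra-connector_2.10:1.5.0",
    "local": false,
    "exclusions": []
  }
]
```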

2016-02-23 19:25 GMT+01:00 moon soo Lee <mo...@apache.org>:

> interpreter.json is supposed to be loaded on launch.
> Could you double-check that interpreter.json is not read at Zeppelin launch?
> Or if it keeps happening, could you let me know how to reproduce it?
>
> Thanks,
> moon
>
>
> On Tue, Feb 23, 2016 at 8:22 AM vincent gromakowski <
> vincent.gromakowski@gmail.com> wrote:
>
>> What is the best way to configure the Spark interpreter?
>>
>> Should I use zeppelin-env.sh with a very long "export
>> SPARK_SUBMIT_OPTIONS" line,
>>
>> or configure interpreter.json before launching the Zeppelin daemon?
>>
>> It seems interpreter.json is not read at Zeppelin launch; I need to
>> manually go to the settings web UI, edit the Spark interpreter, and restart it...
>>
>> 2016-02-23 15:15 GMT+01:00 vincent gromakowski <
>> vincent.gromakowski@gmail.com>:
>>
>>> Hi,
>>> I am trying to automatically add jars to the Spark interpreter with several
>>> methods, but I cannot achieve it.
>>> I am currently generating an interpreter.json file from Ansible
>>> templates before launching Zeppelin in Marathon.
>>> 1.  spark.jars
>>> 2.  spark.driver.extraClassPath
>>> 3.  groupArtifactVersion  (dependency loading)
>>>
>>> In all cases I get a class-not-found exception for the Spark Cassandra
>>> connector. The only way to make it work is to go to interpreter settings,
>>> edit the Spark settings, then save and restart the interpreter, but that
>>> is not automatic at all, as we need to do it each time Zeppelin is started.
>>>
>>> Is the interpreter.json file automatically loaded at the start of
>>> Zeppelin?
>>>
>>
>>

Re: how to automatically load jars

Posted by moon soo Lee <mo...@apache.org>.
interpreter.json is supposed to be loaded on launch.
Could you double-check that interpreter.json is not read at Zeppelin launch?
Or if it keeps happening, could you let me know how to reproduce it?

Thanks,
moon


On Tue, Feb 23, 2016 at 8:22 AM vincent gromakowski <
vincent.gromakowski@gmail.com> wrote:

> What is the best way to configure the Spark interpreter?
>
> Should I use zeppelin-env.sh with a very long "export
> SPARK_SUBMIT_OPTIONS" line,
>
> or configure interpreter.json before launching the Zeppelin daemon?
>
> It seems interpreter.json is not read at Zeppelin launch; I need to
> manually go to the settings web UI, edit the Spark interpreter, and restart it...
>
> 2016-02-23 15:15 GMT+01:00 vincent gromakowski <
> vincent.gromakowski@gmail.com>:
>
>> Hi,
>> I am trying to automatically add jars to the Spark interpreter with several
>> methods, but I cannot achieve it.
>> I am currently generating an interpreter.json file from Ansible templates
>> before launching Zeppelin in Marathon.
>> 1.  spark.jars
>> 2.  spark.driver.extraClassPath
>> 3.  groupArtifactVersion  (dependency loading)
>>
>> In all cases I get a class-not-found exception for the Spark Cassandra
>> connector. The only way to make it work is to go to interpreter settings,
>> edit the Spark settings, then save and restart the interpreter, but that
>> is not automatic at all, as we need to do it each time Zeppelin is started.
>>
>> Is the interpreter.json file automatically loaded at the start of
>> Zeppelin?
>>
>
>

Re: how to automatically load jars

Posted by vincent gromakowski <vi...@gmail.com>.
What is the best way to configure the Spark interpreter?

Should I use zeppelin-env.sh with a very long "export
SPARK_SUBMIT_OPTIONS" line,

or configure interpreter.json before launching the Zeppelin daemon?

It seems interpreter.json is not read at Zeppelin launch; I need to manually
go to the settings web UI, edit the Spark interpreter, and restart it...
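If the zeppelin-env.sh route is taken, the long export line can at least be kept readable by assembling it from variables. A sketch, with a placeholder jar path (/opt/jars/... is not from this thread):

```shell
# conf/zeppelin-env.sh (sketch): pass extra jars to spark-submit.
# The jar path below is a placeholder; point it at your connector jar.
JARS="/opt/jars/spark-cassandra-connector-1.5.0_2.10.jar"
export SPARK_SUBMIT_OPTIONS="--jars ${JARS}"
# Sanity check when sourcing the file manually:
echo "${SPARK_SUBMIT_OPTIONS}"
```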

2016-02-23 15:15 GMT+01:00 vincent gromakowski <
vincent.gromakowski@gmail.com>:

> Hi,
> I am trying to automatically add jars to the Spark interpreter with several
> methods, but I cannot achieve it.
> I am currently generating an interpreter.json file from Ansible templates
> before launching Zeppelin in Marathon.
> 1.  spark.jars
> 2.  spark.driver.extraClassPath
> 3.  groupArtifactVersion  (dependency loading)
>
> In all cases I get a class-not-found exception for the Spark Cassandra
> connector. The only way to make it work is to go to interpreter settings,
> edit the Spark settings, then save and restart the interpreter, but that is
> not automatic at all, as we need to do it each time Zeppelin is started.
>
> Is the interpreter.json file automatically loaded at the start of Zeppelin?
>