Posted to users@zeppelin.apache.org by Windy Qin <wi...@163.com> on 2017/02/07 05:22:49 UTC

why not provide 'test' function in the interpreter

hi,
  why not provide a 'test' function in the interpreter?
  After I configure the interpreter, I want to test whether it is OK now, and I can't find where to test it on the interpreter page.
  How about adding a 'test' function on the interpreter configuration page?

Re: why not provide 'test' function in the interpreter

Posted by Rick Moritz <ra...@gmail.com>.
Having gone through configuring Spark 1.6 for Zeppelin 0.6.2 without being
able to use the installer, and using "provided" Spark and Hadoop, I do
understand the appeal of a test functionality for an interpreter.

The challenge of scoping the test functionality is evident, but I think not
insurmountable.

In particular with the Spark interpreter, I mostly wanted to know whether I
would be able to instantiate a SparkContext at all. That kind of test is
probably applicable to all interpreter types, be it a connection test for
JDBC or another basic functionality check. This should be part of the
interpreter API and - perhaps most importantly - be triggered right when you
restart an interpreter. The current procedure of switching back and forth
between a failing notebook and the interpreter settings is quite clunky.
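
To make that concrete, here is a rough sketch of what such a hook might look
like. To be clear: a selfTest() method does not exist in Zeppelin's
Interpreter API today; the class outline below is purely an assumption to
illustrate the idea.

    // Sketch only: a selfTest() hook like this is NOT part of the current
    // org.apache.zeppelin.interpreter.Interpreter API.
    package org.apache.zeppelin.interpreter;

    public abstract class Interpreter {

      public abstract void open();
      public abstract void close();
      // ... the existing interpret/cancel/getProgress methods ...

      // Hypothetical hook: run a cheap health check and report the result.
      // Default: nothing to test beyond a successful open().
      public InterpreterResult selfTest() {
        return new InterpreterResult(InterpreterResult.Code.SUCCESS,
            "no dedicated self-test implemented");
      }
    }

The UI could call this right after a restart and show the result next to the
setting, instead of making the user bounce between a notebook and the
settings page.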

In the end, each interpreter should implement its own testing logic, and
focus on core functionality. As an example for Spark: Provide a
SparkContext, and a HiveContext, if requested/enabled. This checks whether
we can fit the interpreter into Yarn, and the basic classpath/dependency
requirements.
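
A minimal sketch of what that might look like for the Spark interpreter,
building on the hypothetical selfTest() hook above. The helper names
getSparkContext(), getSQLContext() and hiveEnabled() are assumptions for
illustration, not necessarily the real methods of SparkInterpreter:

    // Inside a hypothetical SparkInterpreter extending the sketch above;
    // the helper methods used here are assumed, not the real API.
    @Override
    public InterpreterResult selfTest() {
      try {
        // Getting the context is the real test: it only succeeds if YARN
        // handed out containers and the classpath/dependencies are sane.
        org.apache.spark.SparkContext sc = getSparkContext();
        String appId = sc.applicationId();

        if (hiveEnabled()) {
          // Cheap probe that the Hive/SQL context is usable.
          getSQLContext().sql("SELECT 1");
        }
        return new InterpreterResult(InterpreterResult.Code.SUCCESS,
            "SparkContext up, application " + appId);
      } catch (Exception e) {
        return new InterpreterResult(InterpreterResult.Code.ERROR,
            "Spark self-test failed: " + e.getMessage());
      }
    }

A JDBC interpreter could do the same with a simple getConnection() plus a
"SELECT 1", and a shell interpreter with an "echo" - each interpreter keeps
its own notion of "healthy".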

For a start, that kind of functionality would be sufficient; further
requirements can be added later.

This would also make the per-notebook interpreter settings more concise,
since non-functioning interpreters could be hidden. A next step would be
per-user testing: for example, if a resource manager with quotas is used,
large Spark interpreters should not appear for users who can't allocate
that many resources.

Having diagnostic features inside Zeppelin should be part of the next push
to make it more end-user friendly, and could even be offered as a service
to a monitoring tool: for example, the Ambari page could show currently
failing interpreters as a warning/error.

Best regards,

Rick


On 7 Feb 2017 06:47, "Jeff Zhang" <zj...@gmail.com> wrote:

It is hard to figure out what users want to test.
Do they want to test whether the interpreter works, or whether a changed
interpreter setting has taken effect? That makes the test function hard to
implement.


Windy Qin <wi...@163.com> wrote on Tue, Feb 7, 2017 at 1:22 PM:

> hi,
>   why not provide a 'test' function in the interpreter?
>   After I configure the interpreter, I want to test whether it is OK now,
> and I can't find where to test it on the interpreter page.
>   How about adding a 'test' function on the interpreter configuration
> page?
>

Re: why not provide 'test' function in the interpreter

Posted by Jeff Zhang <zj...@gmail.com>.
It is hard to figure out what users want to test.
Do they want to test whether the interpreter works, or whether a changed
interpreter setting has taken effect? That makes the test function hard to
implement.


Windy Qin <wi...@163.com> wrote on Tue, Feb 7, 2017 at 1:22 PM:

> hi,
>   why not provide a 'test' function in the interpreter?
>   After I configure the interpreter, I want to test whether it is OK now,
> and I can't find where to test it on the interpreter page.
>   How about adding a 'test' function on the interpreter configuration
> page?
>