Posted to dev@kylin.apache.org by 崔苗 <cu...@danale.com> on 2017/07/21 08:11:42 UTC

the api of kylin

We have some problems using the Kylin API:

1. The wipe cache doesn't seem to work. When we try to drop the cube 'kylin_sales_cube', the response status code is 200, but the cube is still in READY state and has not been deleted.

2. We want to build streaming cubes incrementally every hour, and a few dozen cubes need this, so many jobs will be building in parallel. Will this cause OOM or leave jobs pending? How can we manage and monitor all these jobs through the API, like the Monitor page in the web UI? Is there an API that returns the status of all jobs? Currently there only seems to be an API that returns the status of a single specified job.
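For reference, the drop call in question 1 can be sketched as below. This is a minimal sketch, assuming the `DELETE /kylin/api/cubes/{cubeName}` endpoint of Kylin's REST API; the host, port, and cube name are placeholders taken from this thread, and authentication is omitted.

```python
# Minimal sketch of the cube-drop call described above.
# Assumes Kylin's REST endpoint DELETE /kylin/api/cubes/{cubeName};
# host and port are placeholders. A cube must usually be disabled
# before it can be dropped.
import urllib.request

KYLIN_BASE = "http://host:7070/kylin/api"  # placeholder host

def drop_cube_request(cube_name):
    """Build (but do not send) the DELETE request that drops a cube."""
    return urllib.request.Request(
        KYLIN_BASE + "/cubes/" + cube_name, method="DELETE")

# To actually send it, add Basic-auth headers and call
# urllib.request.urlopen(drop_cube_request("kylin_sales_cube")).
```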

Re: the api of kylin

Posted by ShaoFeng Shi <sh...@apache.org>.
Hi Miao,

For question 1, we haven't received a similar issue report. Did you properly
set "kylin.server.cluster-servers" in all Kylin instances? Could you please
provide more information?
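For reference, a minimal sketch of that setting in kylin.properties (the host names below are placeholders; the property name is the one mentioned above):

```
# kylin.properties, on EVERY Kylin instance (query and job nodes alike).
# List all instances so cache-invalidation broadcasts reach each of them;
# otherwise a drop on one node may leave stale metadata cached on another.
kylin.server.cluster-servers=kylin-node1:7070,kylin-node2:7070
```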

For question 2, try allocating more resources to the job engine node,
which builds the dictionaries and needs more CPU and memory. Regarding the
API: yes, there is an API to get jobs by status. You can open the browser's
developer tools (F12 in Chrome) and, in the "Network" tab, check how the
Kylin web UI communicates with the server. For example, the following
request returns the error jobs of a project:

http://host:7070/kylin/api/jobs?limit=15&offset=0&projectName=streaming&status=8&timeFilter=1
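The request above can also be scripted. A minimal Python sketch using only the standard library follows; the host, credentials, and the status code 8 (error jobs) come from the URL above or are placeholder assumptions, so verify them against your Kylin version.

```python
# Minimal sketch: query Kylin's jobs API for the jobs of a project.
# Host, port, and credentials are placeholders; status=8 is the
# error-jobs filter used in the example URL above.
import base64
import json
import urllib.parse
import urllib.request

KYLIN_BASE = "http://host:7070/kylin/api"  # placeholder host

def jobs_url(project, status=None, limit=15, offset=0, time_filter=1):
    """Build the jobs-listing URL; omit `status` to list jobs of any status."""
    params = {"limit": limit, "offset": offset,
              "projectName": project, "timeFilter": time_filter}
    if status is not None:
        params["status"] = status  # e.g. 8 = error jobs, per the URL above
    return KYLIN_BASE + "/jobs?" + urllib.parse.urlencode(params)

def fetch_jobs(project, user="ADMIN", password="KYLIN", **kw):
    """Fetch and decode the job list (needs a reachable Kylin server)."""
    req = urllib.request.Request(jobs_url(project, **kw))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `jobs_url("streaming")` without a status filter lists jobs of every status, which covers the "monitor all jobs like the web UI" use case; paginate with `limit`/`offset` when there are many jobs.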




-- 
Best regards,

Shaofeng Shi 史少锋