Posted to issues@flink.apache.org by "lun zhang (Jira)" <ji...@apache.org> on 2020/04/24 08:15:00 UTC

[jira] [Comment Edited] (FLINK-17312) Support sql client start with savepoint

    [ https://issues.apache.org/jira/browse/FLINK-17312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091310#comment-17091310 ] 

lun zhang edited comment on FLINK-17312 at 4/24/20, 8:14 AM:
-------------------------------------------------------------

Thanks for your reply. I have built a *sql client* *platform* where you can write, manage and debug your `flink sql`. You can see more in my GitHub project [fsqlfly|https://github.com/mrzhangboss/fsqlfly]. Now I'm ready to use it in the real world.

But I found that a significant feature is missing from the *sql client*. You can stop a job with a savepoint by running *_flink stop -s savepoint jobid_*, but you can't use that *savepoint* from the *sql client* command line. So I opened a pull request that adds savepoint support when starting a Flink SQL job from the environment YAML file. You can use it like this (see the workflow sketch after the config snippet below):

 

1. First, stop your *insert sql job* to get the savepoint directory.

2. Then start your SQL job again; you only need to add one line to your *environment.yml*:

 

 

{{execution:}}
{{  planner: blink}}
{{  type: streaming}}
{{  savepoint-path: hdfs:///tmp/savepoints/jasdf   # the savepoint path from your latest stop}}
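
For reference, here is a minimal sketch of the whole stop-and-resume workflow; the job id, savepoint directory and environment file name below are placeholders, and the exact CLI flags may differ between Flink versions:

{code:bash}
# 1. Stop the running insert job and take a savepoint (as described above)
bin/flink stop -s hdfs:///tmp/savepoints <job-id>
# Flink prints the concrete savepoint directory, e.g. hdfs:///tmp/savepoints/jasdf

# 2. Put that directory into execution.savepoint-path in your environment file,
#    then start the SQL client again with that file
bin/sql-client.sh embedded -e environment.yml
{code}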

 

Supporting this feature will help the sql client build highly available SQL jobs. I've already tested my code on Flink 1.10. It's very helpful to be able to stop and restart your `sql job` from a savepoint.

[~ykt836]


was (Author: zhanglun):
Thanks for your reply. I have built a *sql client* *platform* where you can write and manage your `flink sql`. You can see more in my GitHub project [fsqlfly|https://github.com/mrzhangboss/fsqlfly]. Now I'm ready to use it in the real world.

But I found that a significant feature is missing from the *sql client*. You can stop a job with a savepoint by running *_flink stop -s savepoint jobid_*, but you can't use that *savepoint* from the *sql client* command line. So I opened a pull request that adds savepoint support when starting a Flink SQL job from the environment YAML file. You can use it like this:

 

1. First, stop your *insert sql job* to get the savepoint directory.

2. Then start your SQL job again; you only need to add one line to your *environment.yml*:

 

 

{{execution:}}
{{  planner: blink}}
{{  type: streaming}}
{{  savepoint-path: hdfs:///tmp/savepoints/jasdf   # the savepoint path from your latest stop}}

 

Supporting this feature will help the sql client build highly available SQL jobs. I've already tested my code on Flink 1.10. It's very helpful to be able to stop and restart your `sql job` from a savepoint.

[~ykt836]

> Support sql client start with savepoint
> ---------------------------------------
>
>                 Key: FLINK-17312
>                 URL: https://issues.apache.org/jira/browse/FLINK-17312
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / Client
>    Affects Versions: 1.10.0, 1.11.0
>            Reporter: lun zhang
>            Priority: Major
>              Labels: pull-request-available
>
> The SQL client does not currently support restarting an *insert sql job* with a *savepoint*. It's very helpful to be able to stop your Flink *insert sql job* and restart it from a savepoint.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)