Posted to user@flink.apache.org by 大森林 <ap...@foxmail.com> on 2020/10/02 14:45:26 UTC

Re: need help about "incremental checkpoint", Thanks

Thanks for your replies~!


My English is poor, so let me check my understanding of your replies:

Write in RocksDbStateBackend.
Read in FsStateBackend.
It's NOT a match.

So I was wrong in step 5?
Is my understanding above correct?


Thanks for your help.


------------------ Original Message ------------------
From: "David Anderson" <danderson@apache.org>
Date: Friday, October 2, 2020, 10:35 PM
To: "大森林" <appleyuchi@foxmail.com>
Cc: "user" <user@flink.apache.org>
Subject: Re: need help about "incremental checkpoint", Thanks



It looks like you were trying to resume from a checkpoint taken with the FsStateBackend into a revised version of the job that uses the RocksDbStateBackend. Switching state backends in this way is not supported: checkpoints and savepoints are written in a state-backend-specific format, and can only be read by the same backend that wrote them.

It is possible, however, to migrate between state backends using the State Processor API [1].


[1] https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/state_processor_api.html
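A rough sketch of such a migration with the Flink 1.11 State Processor API might look like the following. This is only an outline under stated assumptions: the `<job-id>` placeholder, the paths, and the omitted reader/bootstrap functions are hypothetical, not taken from the job in this thread.

```java
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;

public class BackendMigrationSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // 1. Load the old checkpoint with the SAME backend that wrote it.
        ExistingSavepoint old = Savepoint.load(
                env,
                "hdfs://Desktop:9000/tmp/flinkck/<job-id>/chk-22",
                new FsStateBackend("hdfs://Desktop:9000/tmp/flinkck"));

        // 2. Read the keyed state out with a KeyedStateReaderFunction
        //    (omitted here), then bootstrap and write a NEW savepoint via
        //    Savepoint.create(...) using the target RocksDB backend.
        //    The revised job can then be started with: flink run -s <new-savepoint>
    }
}
```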


Best,
David


On Fri, Oct 2, 2020 at 4:07 PM 大森林 <appleyuchi@foxmail.com> wrote:


I want to do an experiment of "incremental checkpoint".
 
my code is:
 
https://paste.ubuntu.com/p/DpTyQKq6Vk/
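The pasted code is not reproduced here, but for context, enabling incremental checkpointing with RocksDB in Flink 1.11 typically looks like the following sketch. The HDFS path matches the one used in step 5 below; the checkpoint interval and the rest are illustrative assumptions, not the actual job.

```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The second argument 'true' enables incremental checkpoints,
        // which only the RocksDB backend supports.
        env.setStateBackend(new RocksDBStateBackend("hdfs://Desktop:9000/tmp/flinkck", true));

        // Take a checkpoint every 5 seconds (illustrative value).
        env.enableCheckpointing(5000);

        // ... word-count pipeline over socketTextStream("localhost", 9999) ...
        // env.execute("wordcount_increstate");
    }
}
```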
 
pom.xml is:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>example</groupId>
    <artifactId>datastream_api</artifactId>
    <version>1.0-SNAPSHOT</version>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-scala -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.11</artifactId>
            <version>1.11.1</version>
            <!-- <scope>provided</scope> -->
        </dependency>

        <!--
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_2.12</artifactId>
            <version>1.11.1</version>
            <scope>compile</scope>
        </dependency>
        -->

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>1.11.1</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
            <version>1.11.2</version>
            <!-- <scope>test</scope> -->
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.3.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-core</artifactId>
            <version>1.11.1</version>
        </dependency>

        <!--
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.7.25</version>
            <scope>compile</scope>
        </dependency>
        -->

        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-cep -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-cep_2.11</artifactId>
            <version>1.11.1</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-cep-scala_2.11</artifactId>
            <version>1.11.1</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_2.11</artifactId>
            <version>1.11.1</version>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.4</version>
            <!-- <scope>provided</scope> -->
        </dependency>
    </dependencies>
</project>
 
the error I got is:

https://paste.ubuntu.com/p/49HRYXFzR2/

some of the above error is:

Caused by: java.lang.IllegalStateException: Unexpected state handle type, expected: class org.apache.flink.runtime.state.KeyGroupsStateHandle, but found: class org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle
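The mismatch behind this message can be modeled in plain Java: a restore path that accepts only one concrete state-handle class throws IllegalStateException when handed any other. The classes below are local stand-ins named after Flink's, not Flink APIs:

```java
// Hypothetical model of the failing check: an FsStateBackend-style restore
// only accepts KeyGroupsStateHandle, so a handle written by an incremental
// RocksDB checkpoint triggers the IllegalStateException seen above.
public class HandleCheck {
    interface StateHandle {}
    static class KeyGroupsStateHandle implements StateHandle {}
    static class IncrementalRemoteKeyedStateHandle implements StateHandle {}

    // Restore path that insists on one concrete handle type.
    static KeyGroupsStateHandle restoreFromFsBackend(StateHandle handle) {
        if (!(handle instanceof KeyGroupsStateHandle)) {
            throw new IllegalStateException(
                "Unexpected state handle type, expected: "
                    + KeyGroupsStateHandle.class + ", but found: " + handle.getClass());
        }
        return (KeyGroupsStateHandle) handle;
    }

    public static void main(String[] args) {
        // A handle as written by an incremental RocksDB checkpoint ...
        StateHandle rocksDbHandle = new IncrementalRemoteKeyedStateHandle();
        try {
            restoreFromFsBackend(rocksDbHandle); // ... cannot be restored here.
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```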
 
 
The steps are:

1. mvn clean scala:compile compile package

2. nc -lk 9999

3. flink run -c wordcount_increstate datastream_api-1.0-SNAPSHOT.jar
   Job has been submitted with JobID df6d62a43620f258155b8538ef0ddf1b

4. input the following contents in nc -lk 9999

before
error
error
error
error

5. flink run -s hdfs://Desktop:9000/tmp/flinkck/df6d62a43620f258155b8538ef0ddf1b/chk-22 -c StateWordCount datastream_api-1.0-SNAPSHOT.jar

Then the above error happens.
 
Please help, Thanks~!




I have tried to subscribe to user@flink.apache.org, but got no replies. If possible, send your valuable replies to appleyuchi@foxmail.com, thanks.

Re: need help about "incremental checkpoint",Thanks

Posted by David Anderson <da...@apache.org>.
If hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3
was written by the RocksDbStateBackend, then you can use it to recover if
the new job is also using the RocksDbStateBackend. The command would be

$ bin/flink run -s hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3 <jar file> [args]

The ":" character is meant to indicate that you should not use the literal
string "checkpointMetaDataPath", but rather replace that with the actual
path. Do not include the : character.
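So the documented template and a filled-in command relate as in this small illustration (plain Java; the checkpoint path is the one from this thread, the helper is hypothetical):

```java
// Trivial illustration: ":checkpointMetaDataPath" in the docs is a placeholder,
// colon included, to be replaced wholesale by the real checkpoint path.
public class RunCommand {
    static String fillTemplate(String template, String checkpointPath) {
        return template.replace(":checkpointMetaDataPath", checkpointPath)
                       .replace(" [:runArgs]", ""); // drop the optional args too
    }

    public static void main(String[] args) {
        String template = "bin/flink run -s :checkpointMetaDataPath [:runArgs]";
        String path = "hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3";
        System.out.println(fillTemplate(template, path));
        // bin/flink run -s hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3
    }
}
```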

David

On Fri, Oct 2, 2020 at 5:58 PM 大森林 <75...@qq.com> wrote:
>
> I have read the official document
> https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/state/checkpoints.html#directory-structure
>
> At the end of the above link, it says:
>
> $ bin/flink run -s :checkpointMetaDataPath [:runArgs]
>
> I have tried the above command in the previous experiment, but still no luck.
> And why does the above official command have ":" after "run -s"?
> I guess the ":" is not necessary.
>
> Could you tell me what the right command is to recover (resume) from an incremental checkpoint (RocksdbStateBackEnd)?
>
> Much Thanks~!

Re: need help about "incremental checkpoint", Thanks

Posted by 大森林 <ap...@foxmail.com>.
Thanks for your replies~!

Could you tell me what the right command is to recover from a checkpoint manually using the RocksDB files?

I understand that checkpoints are for automatic recovery,
but in this experiment I stopped the job by force (by inputting "error" 4 times in nc -lk 9999).
Is there a way to recover from an incremental checkpoint manually (with the RocksdbStateBackend)?

I can only find hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3 in my WEB UI (I guess this is only used for the FsStateBackend).

Thanks for your help~!



Re: need help about "incremental checkpoint",Thanks

Posted by David Anderson <da...@apache.org>.
> Write in RocksDbStateBackend.
> Read in FsStateBackend.
> It's NOT a match.

Yes, that is right. Also, this does not work:

Write in FsStateBackend
Read in RocksDbStateBackend

For questions and support in Chinese, you can use the user-zh@flink.apache.org list. See the instructions at https://flink.apache.org/zh/community.html for how to join.

Best,
David
