Posted to user@flink.apache.org by Petter Arvidsson <pe...@relayr.io> on 2018/04/10 16:06:01 UTC

Recovering snapshot state saved with TypeInformation generated by the implicit macro from the scala API

Hello everyone,

We are trying to recover state from a snapshot that we can no longer load.
Attempting to load it produces the following exception:
java.lang.ClassNotFoundException: io.relayr.counter.FttCounter$$anon$71$$anon$33
This, via a couple more exceptions, leads to:
java.io.IOException: Unloadable class for type serializer.

The cause of this behavior is an implicit macro that is part of the Scala
API package:
https://github.com/apache/flink/blob/release-1.4/flink-scala/src/main/scala/org/apache/flink/api/scala/package.scala#L46
This macro
generates an anonymous class implementing TypeInformation for classes that
lack it. In our case it seems to have generated
"io.relayr.counter.FttCounter$$anon$71$$anon$33" which does not have a
stable name. When we change the class implementing the job, the name of
this anonymous class changes and we can no longer load the snapshot.
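For readers unfamiliar with the naming issue: it is not specific to the macro, it is simply how scalac names any anonymous class. A minimal sketch in plain Scala, with a hypothetical `TypeInfo` trait standing in for Flink's TypeInformation (none of these names are from the actual job):

```scala
// Hypothetical stand-in for Flink's TypeInformation, only to illustrate
// how anonymous classes are named; this is not Flink code.
trait TypeInfo[T]

object AnonNameDemo {
  // Each `new TypeInfo[...] {}` compiles to a class named
  // AnonNameDemo$$anon$N, where N depends on the order in which anonymous
  // classes appear in the compilation unit.
  val intInfo: TypeInfo[Int] = new TypeInfo[Int] {}
  val strInfo: TypeInfo[String] = new TypeInfo[String] {}

  def main(args: Array[String]): Unit = {
    // Inserting another anonymous class above these vals would renumber
    // them, which is exactly why the snapshot reference breaks when the
    // surrounding code changes.
    println(intInfo.getClass.getName)
    println(strInfo.getClass.getName)
  }
}
```

Any edit that adds, removes, or reorders anonymous classes in the enclosing scope shifts the `$$anon$N` numbering, so the name stored in the snapshot no longer resolves.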

To solve the problem we introduced an explicit TypeInformation instance
instead, which makes new instances of the job work properly. The problem is
that this new version is no longer compatible with the old state (loading
it produces the same exception), since the original TypeInformation class is
no longer generated: the explicitly provided instance prevents the macro
from being expanded at all.
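For anyone hitting the same issue, the mechanism behind this workaround can be sketched in plain Scala (hypothetical names, with a stub trait in place of Flink's TypeInformation): a monomorphic implicit value in scope is more specific than a generic implicit def, so it wins implicit resolution and ties the state to a stably named class.

```scala
// Stand-ins, not Flink code: TypeInfoLike mimics TypeInformation, and
// `macroLike` mimics the fallback implicit from org.apache.flink.api.scala.
trait TypeInfoLike[T]

// A named top-level class: its binary name does not depend on anything
// else in the compilation unit, so it is stable across job versions.
class StableCountInfo extends TypeInfoLike[Long]

object ExplicitInfoDemo {
  // Generic fallback, standing in for the implicit macro.
  implicit def macroLike[T]: TypeInfoLike[T] = new TypeInfoLike[T] {}

  // The explicit instance: being monomorphic, it is more specific than
  // `macroLike` and wins implicit resolution without ambiguity.
  implicit val countInfo: TypeInfoLike[Long] = new StableCountInfo

  val resolved: TypeInfoLike[Long] = implicitly[TypeInfoLike[Long]]

  def main(args: Array[String]): Unit =
    // Prints "StableCountInfo" rather than an ...$$anon$N name.
    println(resolved.getClass.getName)
}
```

The catch described above follows from this: once the explicit instance shadows the macro, the old anonymous class is never generated again, so nothing on the classpath matches the name recorded in the snapshot.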

Has anyone else experienced this or a similar problem? Is there a good way
out of this situation, i.e. how could we migrate the snapshot to one where
the state points to a TypeInformation instance with a stable class name
rather than the macro-generated one, without losing the state?

We are using Flink 1.4.2.

Regards,
Petter

Re: Recovering snapshot state saved with TypeInformation generated by the implicit macro from the scala API

Posted by Gábor Gévay <gg...@gmail.com>.
Hello,

A bit of an ugly hack, but maybe you could manually create a class
named exactly io.relayr.counter.FttCounter$$anon$71$$anon$33 and
copy-paste into it the code that the macro expands to [1]?
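This is feasible because `$` is a legal character in JVM class names, and Scala backticks let you spell the mangled name literally. A hypothetical sketch of the shim, with an empty stub body (the real fix would place the class in package io.relayr.counter and give it the macro's expanded TypeInformation code):

```scala
// Hypothetical shim: a class whose source-level name reproduces the
// mangled name the snapshot refers to. Backticks allow `$` in Scala
// identifiers. In the real fix this class would live in package
// io.relayr.counter and its body would be the expanded macro code,
// not an empty stub.
class `FttCounter$$anon$71$$anon$33`

object ShimDemo {
  def main(args: Array[String]): Unit = {
    // Restoring a snapshot resolves the serializer class by its exact
    // stored name, much like this reflective lookup.
    println(Class.forName("FttCounter$$anon$71$$anon$33").getName)
  }
}
```

Once the shim class resolves and the state is restored, a savepoint taken from the running job could then be migrated to the stably named TypeInformation.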

Best,
Gábor

[1] https://stackoverflow.com/questions/11677609/how-do-i-print-an-expanded-macro-in-scala